ID   Date   Author   Type   Category   Subject
14789   Sun Jul 21 12:54:18 2019   gautam   Update   Loss Measurement   MC2 loss map

Can you please be more specific about what the error is? Is this the usual instability with the camera server code? Or was it something new?

Quote:

The camera server is throwing an error and is not grabbing snapshots :(

14791   Sun Jul 21 17:17:03 2019   Kruthi   Update   Loss Measurement   MC2 loss map

The camera server keeps throwing the error: failed to grab frames. Milind suggested that it might be a problem with the ethernet cable, so I even unplugged it and connected it again; it still had the same issue. One more thing I noticed was that it does take snapshots sometimes with the terminal command caput C1:CAM-ETMX_SNAP 1, but produces a segmentation fault when ezca.Ezca().write(C1:CAM-ETMX_SNAP, 1) is used via ipython. When the terminal command also fails to take snapshots, I noticed that the SNAP button on the GigE medm screen remains on and doesn't switch back to OFF like it is supposed to.

Quote:

Can you please be more specific about what the error is? Is this the usual instability with the camera server code? Or was it something new?

Quote:

The camera server is throwing an error and is not grabbing snapshots :(

14792   Sun Jul 21 19:27:25 2019   Milind   Update   Loss Measurement   MC2 loss map

I think ezca.Ezca().write() takes the string "CAM-ETMX_SNAP" as an argument and not C1:CAM-ETMX_SNAP. See this, line 47. Are you sure this is not the problem?
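For reference, a minimal sketch of the two write paths being compared (this assumes the guardian-style ezca Python package and that the C1: prefix is picked up from the environment, as it normally is on the 40m workstations):

```python
import ezca

# Shell equivalent with the full channel name:
#   caput C1:CAM-ETMX_SNAP 1

ez = ezca.Ezca()                   # prefix (C1:) comes from the environment
ez.write('CAM-ETMX_SNAP', 1)       # channel name given relative to the prefix
# ez.write('C1:CAM-ETMX_SNAP', 1)  # repeats the prefix and may not point at
#                                  # the intended channel
```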

Quote:

The camera server keeps throwing the error: failed to grab frames. Milind suggested that it might a problem with the ethernet cable, so I even unplugged it and connected it again; it still had the same issue. One more thing I noticed was, it does take snapshots sometimes with the terminal command caput C1:CAM-ETMX_SNAP 1, but produces a segmentation fault when ezca.Ezca().write(C1:CAM-ETMX_SNAP, 1) is used via ipython. When the terminal command also fails to take snapshots, I noticed that the SNAP button on the GigE medm screen remains on and doesn't switch back to OFF like it is supposed to.

14796   Mon Jul 22 12:57:35 2019   Kruthi   Update   Loss Measurement   MC2 loss map

In my script I have used "CAM-ETMX_SNAP" only; while entering it in the elog I made a mistake, my bad!

Quote:

I think ezca.Ezca().write() takes the string "CAM-ETMX_SNAP" as an argument and not C1:CAM-ETMX_SNAP. See this, line 47. Are you sure this is not the problem?

Quote:

The camera server keeps throwing the error: failed to grab frames. Milind suggested that it might a problem with the ethernet cable, so I even unplugged it and connected it again; it still had the same issue. One more thing I noticed was, it does take snapshots sometimes with the terminal command caput C1:CAM-ETMX_SNAP 1, but produces a segmentation fault when ezca.Ezca().write(C1:CAM-ETMX_SNAP, 1) is used via ipython. When the terminal command also fails to take snapshots, I noticed that the SNAP button on the GigE medm screen remains on and doesn't switch back to OFF like it is supposed to.

14815   Mon Jul 29 13:32:56 2019   gautam   Update   Loss Measurement   Loss measurement PD installed in AS path

[yehonathan, gautam]

  • we placed a PDA520 photodiode in the AS beampath, so AS110 and AS55 no longer see any light.
  • ITMX and ETMX were misaligned (since the plan is to measure the Y arm loss).
  • The PDA520 and MC2 transmission are currently going to the Y arm ALS beat channels in the DAQ system. Unfortunately, we have no control over the whitening gains for these channels because of the c1iscaux2 situation.
14816   Mon Jul 29 19:08:55 2019   yehonathan   Update   Loss Measurement   Reviving loss measurement by reflection

1. X arm is totally misaligned in order to measure the Y arm loss using the reflection method. Each measurement round consists of measuring the reflected power when the Y arm is aligned and when it is misaligned.

2. The measurement script used is /scripts/lossmap_scripts/armLoss/measureArmLoss.py. It generates a log file in the /logs folder specifying the alignment and misalignment times.

3. The data extraction script dlData.py processes the raw data specified in the log file and creates an hdf5 file in the /Data folder containing the data of the measurement itself.

4. dlData.py labels the aligned and misaligned data incorrectly when the number of measurements is odd, so I use only an even number of measurements.

5. In order to clip the chaotic transition between the aligned and misaligned states, I use a tDur attribute smaller than the actual sleep time used in the measurement script itself (a minimal sketch of this trimming is given at the end of this entry).

6. plotData.py (written by Gautam) and AnalyzeLossData.ipynb (written by me) can be used to calculate the loss and to plot some analyses (see https://nodus.ligo.caltech.edu:8081/40m/14568). They give roughly the same answer. The discrepancy can be explained by the different modulation and ITM transmissions used.

7. I take a measurement of 8 repetitions. I plot the measured reflected power alternating between the aligned and misaligned states.

I find that the reflected power is repeatable to within 1%.

This is consistent with the transmission data plotted here which is also repeatable to within 1%.

8. I take an overnight measurement of 100 repetitions.
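As mentioned in item 5, here is a minimal sketch of the trimming step (this is not the actual dlData.py; the array names and epoch format are assumptions):

```python
import numpy as np

def segment_means(t, pd, epochs, t_dur):
    """Average the PD reading over each aligned/misaligned epoch.

    t, pd  : time stamps [s] and PD samples (same length)
    epochs : list of (t_start, t_stop, label) from the measurement log,
             label is 'aligned' or 'misaligned'
    t_dur  : duration [s] kept, centered in each epoch; choosing
             t_dur < (t_stop - t_start) clips the chaotic transitions
    """
    out = []
    for t_start, t_stop, label in epochs:
        mid = 0.5 * (t_start + t_stop)
        keep = (t > mid - t_dur / 2) & (t < mid + t_dur / 2)
        out.append((label, pd[keep].mean(), pd[keep].std()))
    return out
```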

14825   Fri Aug 2 17:07:33 2019   yehonathan, gautam   Update   Loss Measurement

We run a loss measurement on the Y arm with 50 repetitions.

14827   Mon Aug 5 14:47:36 2019   yehonathan   Update   Loss Measurement

Summary:

I analyze the 100 reps loss measurement of the Y arm using the AnalyzeLossData.ipynb notebook.

The mean of the measured loss is ~ 100 ppm and the variation between the repetitions is ~ 27%.

 

In Detail

In the real measurement the misaligned and locked states are repeatedly switched between each other. I plot the misaligned and locked PD readings separately over time.

There seems to be a drift that is correlated between the two readings. This is probably a drift in the power after MC2. To verify, I plot the ratio between those readings and find no apparent drift:

The variation in the ratio is less than 1%. The loss figure, which is computed as (1 minus this ratio) times a big number, gives a much worse variation. I plot the histogram of the loss figure at each repetition (excluding extremely bad measurements):

The mean is ~ 100ppm. And the variation is ~ 27%.

Draft   Mon Aug 5 16:28:41 2019   yehonathan   Update   Loss Measurement   what is going on with the loss measurements?

We hypothesize that the systematic error in the loss measurement can come from the fact that the requirement on the alignment of the cavity mirrors is not stringent enough.

We repeat the loss measurement with 50 measurements. This time we change the thresholds for the error signals of the dither-align in the measureArmLoss.py file from 0.5 to 0.3.

We repeat the analysis done before:

We plot the reflected power of the two states on top of each other:

This  time it appears there was no drift. The histogram of the loss measurement:

The mean is 104ppm and the variation is 27%.

What I notice this time is that the PD readings in the aligned and misaligned states are anti-correlated. This is also true in the previous run (where there was drift) when looking in the short time scales. I plot several time series to demonstrate:

I wonder what can cause this behaviour.

14830   Mon Aug 5 17:36:04 2019   yehonathan   Update   Loss Measurement

We check for unexpected drifts in the PD reading (clipping and such). We put a pickoff mirror where the PD used to be and placed the PD at the edge of the table such that the beam is focused on it (see attachment).

The arms are completely misaligned. We note the measurement start time: GPS 1249086917.

Attachment 1: 20190805_171511.jpg
14834   Tue Aug 6 16:44:50 2019   yehonathan   Update   Loss Measurement

I grab 2 hours of the PD measurements using dlData_simple.ipynb in the misaligned state.

I get pretty much a normally distributed reading without drifts (Attachments 1 and 2).

The error in the reading is ~ 0.5%.

 

I am pretty sure this amount of noise is enough to explain the big noise in the Loss figure measurement.

 

The reason is that the loss formula is proportional to the bracketed quantity (1 - P_Locked/P_Misaligned + T1 - T2), where T1 and T2 are the transmissions of the ITM and ETM.

The average of the ratio P_Locked/P_Misaligned is ~ 1.01 for a loss figure of ~ 100ppm.

The standard deviation of the ratio is ~ 1% which is also the standard deviation of the expression in the brackets.

The average of this expression, however, is ~ 0.01.

The reduction of the mean amplifies the error in the loss measurements by a factor of a few 10s!
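As a quick numerical illustration of that amplification (a back-of-the-envelope sketch using the numbers quoted above, not the analysis code):

```python
# Back-of-the-envelope error propagation for the DC reflection loss estimate.
ratio_mean   = 1.01   # mean of P_Locked / P_Misaligned (quoted above)
ratio_std    = 0.01   # ~1% spread of the ratio (quoted above)
bracket_mean = 0.01   # mean of the bracketed expression (quoted above)

# The absolute spread of the bracket equals the spread of the ratio,
# but the mean it is compared against is ~100x smaller:
rel_err_ratio = ratio_std / ratio_mean    # ~1%
rel_err_loss  = ratio_std / bracket_mean  # ~100% of the bracket's mean

print(rel_err_loss / rel_err_ratio)       # amplification of order 10-100
```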

Attachment 1: figure_1.png
Attachment 2: figure_1-1.png
15076   Thu Dec 5 08:44:44 2019   Gavin   Update   Loss Measurement   Q Measurements of Test Masses

[Yehonathan, Gavin]

Measuring POX11_Q_MON while injecting a signal into the ITMX_UL_IN port with the function generator, no signal could be seen. After debugging, the source of the issue was twofold:

  • When using the quadrant coil drives (UL, UR, etc.), the signal has to pass through a switch before reaching the driver. To resolve this, the signal input was switched to POS_IN (driving the entire coil at once rather than in quadrants), which has no switch to bypass.
  • The averaging on the Stanford SR785 was set too low. By increasing the averages from 10 to 25 the signal became more visible.

Unrelated to these issues, the signal input was switched to POY11_Q_MON and ITMY_POS_IN as part of the debugging process. The function generator used was switched from the Stanford to the Siglent SDG 1032X.

An unrelated but noteworthy issue: the Lenovo 40m laptop used to monitor the IFO state (locked or unlocked) ran out of battery in a very short timespan.

To gauge where the resonances of the test masses are, an FEA model of a simple 40m test mass was computed to estimate the frequencies at which the eigenmodes exist. For the first two modes the model gave resonances at 20.366 kHz (butterfly mode) and 28.820 kHz (drumhead mode). Then, by measuring with an acquisition time of 1 s at those frequencies on the SR785 and injecting broadband white noise with a mean of 0 V and a stdev of 2 V, small peaks were seen above the noise at 20.260 kHz and 28.846 kHz. By then injecting a sine wave at those frequencies with 9 Vpp, the peaks became clearly visible above the noise floor.

The last step is to measure the natural decay of these modes when the excitation is turned off. It is difficult to tell currently if these are indeed eigenmodes or just large cavity injections with an associated stabilisation time (what could appear as a ringdown decay). More investigation is required.
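For the planned ringdown, the Q would follow from the standard relation between mode frequency and amplitude decay time (a one-line sketch; the decay time used here is made up, only the frequencies come from the measurement above):

```python
import numpy as np

# Standard ringdown relation: an amplitude decay A(t) = A0*exp(-t/tau)
# at mode frequency f0 corresponds to Q = pi * f0 * tau.
def q_from_ringdown(f0, tau):
    """f0: mode frequency [Hz], tau: amplitude 1/e decay time [s]."""
    return np.pi * f0 * tau

# Example with the drumhead frequency quoted above and a hypothetical 10 s decay:
print(q_from_ringdown(28.846e3, 10.0))   # ~9e5
```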

 

Attachment 1: 20191205_132158.jpg
15264   Tue Mar 10 19:59:09 2020   Yehonathan   Update   Loss Measurement   Arm transfer function measurement

I want to measure the transfer function of the arm cavities to extract the pole frequencies and get more insight into what is going on with the DC loss measurements.

The idea is to modulate the light using the PSL AOM, measure the light transmitted from the arm cavities, and use the light transmitted through the IMC as a reference.

I tried to start measuring the X arm but the transmission PD is connected to the QPD whitening filter board with a 4 pin Lemo for which I couldn't find an adapter.

  • I switch to the Y arm where the transmission PD - Thorlabs PDA520 (250KHz Bandwidth) - is BNC all the way.
  • I lay an 82ft BNC cable from the Y Arm 1Y4 to 1Y1 where the BNC from the IMC Trans PD and an SR785 are found. 
  • I lock the Arm cavities.
  • I connect the AOM cable to the source, the TRY PD (Teed off from the QPD whitening filter) to CH1 and IMC_TRANS to CH2 and measure the transfer function using a swept sine with an offset of 300mV and amplitude of 100mV.
  • I fit it to a low pass filter function - see attachment 1 - but the fit seems to deviate from the data above 10 kHz. 

Could this be because of the PDA520's limited bandwidth? I tried playing with the PD gain/bandwidth switch but it seems like it was already set to high bandwidth/low gain.

In any case, the extracted pole frequency ~ 2.9kHz implies a finesse > 600 (assuming FSR = 3.9MHz) which is way above the maximal finesse (~ 450) for the arm cavities.
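For reference, the finesse quoted here follows from the cavity pole through the standard relation f_p = FSR/(2F); plugging in the numbers from this measurement:

\mathcal{F} = \frac{\mathrm{FSR}}{2 f_p} = \frac{3.9\ \mathrm{MHz}}{2 \times 2.9\ \mathrm{kHz}} \approx 670 \gg 450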

I disconnected the source from the AOM but left the other two BNCs connected to the SR785. Also, the TRY PD is still teed off, and the long BNC cable is still on the floor.

Attachment 1: YArmFrequencyResponse.pdf
15269   Thu Mar 12 10:43:50 2020   rana   Update   Loss Measurement   Arm transfer function measurement

                               when doing the AM sweeps of cavities

make sure to cross-calibrate the detectors

                       else you'll make of science much frivolities

            much like the U.S. elections electors

15277   Mon Mar 16 15:23:03 2020   Yehonathan   Update   Loss Measurement   Arm transfer function measurement

I measured the cross-calibration of the two PDs on the PSL table.

I used the existing flip mounted BS that routes the beam into a PDA255, the same as in the IMC transmission.

I placed a PDA520, the same as the one measuring TRY_OUT on the ETMY table,  on the transmission of the BS (Attachment 1).

I used the SR785 to measure the frequency response of PDA520 with reference to PDA255 (Attachment 2). Indeed, calibration is quite significant.

I calibrated the Y arm frequency response measurement.

However, the data seem to fit well to 1/sqrt(f^2+fp^2) - electric field response - but not to 1/(f^2+fp^2) - intensity response. (Attachment 3).

Also, the extracted fp in the good fit is 3.8 kHz (finesse ~ 500) -> still too small.

When I did this measurement for the IMC in the past I fitted the response to 1/sqrt(f^2+fp^2) by mistake but I didn't notice it because I got a pole frequency that was consistent with ringdown measurements.

I also cross-calibrated the PDs participating in the IMC measurement but found that the calibration accounted for distortions no bigger than 1 dB.
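A minimal sketch of the two fits being compared (the file name and column layout are assumptions; scipy does the least-squares fit):

```python
import numpy as np
from scipy.optimize import curve_fit

# Calibrated TF magnitude vs frequency; file name/columns are assumptions
# (the real data came from the SR785 swept sine).
f, mag = np.loadtxt('YArmFrequencyResponse.txt', unpack=True)

def field_response(f, a, fp):       # |TF| ~ 1/sqrt(f^2 + fp^2)
    return a / np.sqrt(f**2 + fp**2)

def intensity_response(f, a, fp):   # |TF| ~ 1/(f^2 + fp^2)
    return a / (f**2 + fp**2)

for model, a0 in [(field_response, mag[0] * 3e3),
                  (intensity_response, mag[0] * (3e3)**2)]:
    popt, _ = curve_fit(model, f, mag, p0=[a0, 3e3])
    print(model.__name__, ': fp =', popt[1], 'Hz')
```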

Attachment 1: Cross_calibration_setup.jpg
Attachment 2: PDA520overPDA255_Response.pdf
Attachment 3: YArmFrequencyResponse.pdf
15307   Sat Apr 18 14:57:44 2020   Yehonathan   Update   Loss Measurement   Arm transfer function measurement

Ok, now I understand my foolishness. It should definitely be 1/sqrt(f^2+fp^2) .

Quote:
However, the data seem to fit well to 1/sqrt(f^2+fp^2) - electric field response - but not to 1/(f^2+fp^2) - intensity response. (Attachment 3).
15323   Sat May 9 17:01:08 2020   Yehonathan   Update   Loss Measurement   40m Phase maps loss estimation
I took the phase maps of the 40m X arm mirrors and calculated the loss of a Gaussian beam due to a single bounce. I did this by simply calculating 1 - (overlap integral)^2, where the overlap is between an input Gaussian mode (calculated from the parameters of the cavity; waist ~ 3.1 mm) and the reflected beam (a Gaussian imprinted with the phase map). The phase maps were prepared using the PyKat surfacemap class to remove a flat surface, spherical surface, centering, etc. (Attachments 3, 4)
 
I calculated the loss map (Attachments 1, 2: ~ 4x4 mm for the ITM, 3x3 mm for the ETM) by shifting the beam around the phase map. It can be seen that there is a great variation in the loss: some areas are < 10 ppm, some are > 80 ppm.
 
For the ITM (where the beam waist is) the average loss is ~ 23 ppm, and for the ETM it's ~ 61 ppm due to the enlarged beam. The ETM case is less physical because it takes a pure Gaussian as an input, whereas in reality the beam first interacts with the ITM.
 
I plan to do some first-order perturbation theory to include the cavity effects. I expect that the losses will be slightly lower due to HOMs not being completely lost, but who knows.
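For concreteness, a minimal sketch of the single-bounce overlap calculation described above (the grid, the stand-in height map, and the 2kh reflection-phase convention are assumptions; the real maps come from the PyKat surfacemap objects):

```python
import numpy as np

lam = 1064e-9
k = 2 * np.pi / lam
w0 = 3.1e-3                      # beam waist at the ITM [m]

# Grid and a stand-in height map h(x, y) [m]; the real one is the phase map.
x = np.linspace(-10e-3, 10e-3, 512)
X, Y = np.meshgrid(x, x)
h = 1e-9 * np.sin(2 * np.pi * X / 3e-3) * np.cos(2 * np.pi * Y / 4e-3)

dA = (x[1] - x[0])**2
E0 = np.sqrt(2 / np.pi) / w0 * np.exp(-(X**2 + Y**2) / w0**2)   # normalized TEM00
Er = E0 * np.exp(2j * k * h)     # reflected field; 2kh phase convention assumed

overlap = np.abs(np.sum(np.conj(E0) * Er) * dA)**2
print((1 - overlap) * 1e6, 'ppm')
```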
 
Attachment 1: ITMX_Loss_Map.pdf
Attachment 2: ETMX_Loss_Map.pdf
Attachment 3: ITMX_Phase_Map_(nm).pdf
Attachment 4: ETMX_Phase_Map_(nm).pdf
15329   Wed May 13 15:13:11 2020   Yehonathan   Update   Loss Measurement   40m Phase maps loss estimation

Koji pointed out during the group meeting that I should compensate for local tilt when I move the beam around the mirror for calculating the loss map.

So I did.

Also, I made a mistake earlier by calculating the loss map for a much bigger area (7x larger) than I had thought.

Both these mistakes made it seem like the loss is very inhomogeneous across the mirror.

Attachment 1 and 2 show the corrected loss maps for ITMX and ETMX respectively.

The loss now seems much more reasonable and homogeneous and the average total arm loss sums up to ~ 22ppm which is consistent with the after-cleaning arm loss measurements.

Attachment 1: ITMX_Loss_Map.pdf
Attachment 2: ETMX_Loss_Map.pdf
15332   Thu May 14 12:21:56 2020   Yehonathan   Update   Loss Measurement   40m Phase maps loss estimation

I finished calculating the X Arm loss using first-order perturbation theory. I will post the details of the calculation later.

I calculated loss maps of ITM and ETM (attachments 1,2 respectively). It's a little different than previous calculation because now both mirrors are considered and total cavity loss is calculated. The map is calculated by fixing one mirror and shifting the other one around.

 

The total loss is pretty much the same as calculated before using a different method. At the center of the mirror, the loss is 21.8 ppm, which is very close to the previously calculated value. 

 

Next thing is to try SIS.

Attachment 1: ITMX_Loss_Map_Perturbation_Theory.pdf
Attachment 2: ETMX_Loss_Map_Perturbation_Theory(1).pdf
15333   Thu May 14 19:00:43 2020   Yehonathan   Update   Loss Measurement   40m Phase maps loss estimation

Perturbation theory:

The cavity modes |q\rangle_{mn}, where q is the complex beam parameter and m,n are the mode indices, are the eigenmodes of the cavity propagator. That is:

\hat{R}_{ITM}\hat{K}_L\hat{R}_{ETM}\hat{K}_L|q\rangle_{mn}=e^{i\phi_g}|q\rangle_{mn},

where \hat{R} is the mirror reflection matrix. At the 40m, ITM is flat, so \hat{R}_{ITM}=\mathbb{I}. ETM is curved, so \hat{R}_{ETM}=e^{-i\frac{kr^2}{2R}}, where R is the ETM's radius of curvature.

\phi_g is the Gouy phase.

\hat{K}_L=\frac{ik}{2\pi L}e^{\frac{ik}{2L}\left|\vec{r}-\vec{r}'\right|^2} is the free-space field propagator. When acting on a state it propagates the field a distance L.

 

The phase maps perturb the reflection matrices slightly so:

\hat{R}_{ITM}\rightarrow e^{ikh_1\left(x,y \right )}\approx 1+ikh_1\left(x,y \right )

\hat{R}_{ETM}\rightarrow e^{ikh_2\left(x,y \right )}e^{-i\frac{kr^2}{2R}}\approx\left[1+ikh_2\left(x,y \right )\right]e^{-i\frac{kr^2}{2R}},

where h_{1,2} are the height profiles of the ITM and ETM respectively. The new propagator is

H = H_0+V, where H_0 is the unperturbed propagator and

V=ikh_1\left(x,y \right )H_0+ik\hat{K}_Lh_2\left(x,y \right )e^{-i\frac{kr^2}{2R}}\hat{K}_L

To find the perturbed ground state mode we use first-order perturbation theory. The new ground state is then

|\psi\rangle=N\left[ |q\rangle_{00}+\sum_{m+n\geq 2}\frac{{}_{mn}\langle q|V|q\rangle_{00}}{1-e^{i\left(m+n \right )\phi_g}}|q\rangle_{mn}\right]

where N is the normalization factor. The (0,1) and (1,0) modes are omitted because they can be zeroed by tilting the mirrors. The Gouy phase of the TEM00 mode is taken to be 0.

Some simplification can be made here:

{}_{mn}\langle q|V|q\rangle_{00}={}_{mn}\langle q|ikh_1\left(x,y \right )|q\rangle_{00}+{}_{mn}\langle q|\hat{K}_Likh_2\left(x,y \right )e^{-i\frac{kr^2}{2R}}\hat{K}_L|q\rangle_{00}

{}_{mn}\langle q|\hat{K}_Likh_2\left(x,y \right )e^{-i\frac{kr^2}{2R}}\hat{K}_L|q\rangle_{00}={}_{mn}\langle q-L|ikh_2\left(x,y \right )e^{-i\frac{kr^2}{2R}}|q+L\rangle_{00}={}_{mn}\langle q+L|ikh_2\left(x,y \right )|q+L\rangle_{00}

The last step is possible since the beam parameter q matches the cavity.

 

The loss of the TEM00 mode is then:

L=1-\left|{}_{00}\langle q|\psi\rangle\right|^2
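A minimal numerical sketch of the first-order sum above, restricted to the ITM term only and to Hermite-Gauss modes at the waist (the grid, mode cutoff, arm length, and the stand-in height map are assumptions, not the real phase-map data):

```python
import numpy as np
from math import factorial
from scipy.special import eval_hermite

lam = 1064e-9
k = 2 * np.pi / lam
w0 = 3.1e-3                         # beam radius at the (flat) ITM [m]
L_arm = 37.8                        # arm length [m] (approximate)
zR = np.pi * w0**2 / lam
phi_g = 2 * np.arctan(L_arm / zR)   # round-trip Gouy phase from waist and length

x = np.linspace(-12e-3, 12e-3, 256)
X, Y = np.meshgrid(x, x)
dA = (x[1] - x[0])**2
h1 = 1e-9 * np.sin(2 * np.pi * X / 3e-3)   # stand-in ITM height map [m]

def u(n, xx):                       # normalized 1D Hermite-Gauss mode at the waist
    c = (2 / np.pi)**0.25 / np.sqrt(2.0**n * factorial(n) * w0)
    return c * eval_hermite(n, np.sqrt(2) * xx / w0) * np.exp(-xx**2 / w0**2)

mode00 = u(0, X) * u(0, Y)
loss = 0.0
for m in range(7):
    for n in range(7):
        if m + n < 2:               # TEM00 and the tilt modes (0,1), (1,0) excluded
            continue
        V_mn = np.sum(u(m, X) * u(n, Y) * 1j * k * h1 * mode00) * dA  # <mn|ikh1|00>
        c_mn = V_mn / (1 - np.exp(1j * (m + n) * phi_g))
        loss += np.abs(c_mn)**2     # first-order power scattered out of TEM00

print(loss * 1e6, 'ppm (ITM term only, illustrative height map)')
```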

 

 

 

 

15338   Tue May 19 15:39:04 2020   Yehonathan   Update   Loss Measurement   40m Phase maps loss estimation

I have a serious concern about this low angle scattering analysis:

Phase maps perturb the spatial mode of the steady state of the cavity, but how is this different from mode mismatch? The loss that I calculated is an overall loss, not a roundtrip loss.

The only way I can see this becoming a serious loss is if the HOMs themselves have very high roundtrip loss. Attached is the modal power fraction that I calculated.

 

Attachment 1: Mode_power_fraction1.pdf
16253   Wed Jul 21 18:08:35 2021   yehonathan   Update   Loss Measurement   Loss measurement

{Gautam, Yehonathan, Anchal, Paco}

We prepared for the loss measurement using DC reflection method. We did the following changes:

1. REFL55_Q was disconnected and replaced with MC_T cable coming from the PD on the MC2 table. The cable has a red tag on it. Consequently we lost the AS beam. We realigned the optics and regained arm locks. The spot on the AS QPD had to be corrected.

2. We tried using AS55 as the PD for the DC measurement but we got ratios of ~ 0.97, which imply losses of more than 100 ppm. We decided to go with the traditional PD520 used for these measurements in the past.

3. We placed the PD520 used for loss measurements in front of the AS55 PD and optimized its position.

4. AS110 cable was disconnected from the PD and connected to PD520 to be used as the loss measurement cable.

5. In 1Y2 rack, AS110 PD cable was disconnected, REFL55_I was disconnected and AS110 cable was connected to REFL55_I channel.

So for the test, the MC transmission was measured at REFL55_Q and the AS DC was measured at REFL55_I.

We used the scripts/lossmap_scripts/armLoss/measArmLoss.py script. Note that this script assumes that you begin with the arm locked.

We are leaving the IFO in the configuration described above overnight and we plan to measure the XARM loss early AM. After which we shall restore the affected electrical and optical paths.


We ran the /scripts/lossmap_scripts/armLoss/measureArmLoss.py script in pianosa with 25 repetitions and a 30 s "duty cycle" (wait time) for the Y arm. Preliminary results give an estimated individual arm loss of ~ 30 ppm (on both X/Y arms) but we will provide a better estimate with this measurement. 

16254   Thu Jul 22 16:06:10 2021   Paco   Update   Loss Measurement   Loss measurement

[yehonathan, anchal, paco, gautam]

We concluded estimating the XARM and YARM losses. The hardware configuration from yesterday remains, but we repeated the measurements because we realized our REFL55_I_ERR and REFL55_Q_ERR signals representing the PD520 and MC_TRANS were scaled, offset, and rotated in a way that wasn't trivially undone by our postprocessing scripts... Another caveat that we encountered today was the need to add a "macroscopic" misalignment to the ITMs when doing the measurement to avoid any accidental resonances.

The final measurements were done with 16 repetitions, 30 second duration, and the logfiles are under scripts/lossmap_scripts/armLoss/logs/20210722_1423.txt and scripts/lossmap_scripts/armLoss/logs/20210722_1513.txt

Finally, the estimated YARM loss is 39 ± 7 ppm, while the estimated XARM loss is 38 ± 8 ppm. This is consistent with the inferred PRC gain from Monday and a PRM loss of ~ 2%.


Future measurements may want to look into slow drift of the locked vs misaligned traces (systematic errors?) and a better way of estimating the statistical uncertainty (e.g. by splitting the raw time traces into short segments)

16256   Sun Jul 25 20:41:47 2021   rana   Update   Loss Measurement   Loss measurement

What are the quantitative root causes for why the statistical uncertainty is so large? It's larger than 1/sqrt(N).

16257   Mon Jul 26 17:34:23 2021   Paco   Update   Loss Measurement   Loss measurement

[gautam, yehonathan, paco]

We went back to the loss data from last week and more carefully estimated the ARM loss uncertainties.

Before, we simply stitched all N=16 repetitions into a single time series and computed the loss: e.g. see Attachment 1 for such YARM loss data. The mean and stdev for this long time series give the quoted loss from last time. We knew that the uncertainty was most certainly overestimated, as different realizations need not sample similar alignment conditions and are sensitive to different imperfections (e.g. beam angular motion, unnormalizable power fluctuations, etc...).


Today we analyzed each locked/misaligned cycle individually. From each cycle, it is possible to obtain a mean value of the loss as well as a std dev *across the duration of the trace*, but because we have a measurement ensemble, it is also possible to obtain an ensemble averaged mean and a statistical uncertainty estimate *across the independent cycle realizations*. While the mean values don't change much, in the latter estimate we find a much smaller statistical uncertainty. We obtain an XARM loss of 37.6 ± 2.6 ppm and a YARM loss of 38.9 ± 0.6 ppm. To make the distinction more clear, Attachments 2 and 3 show the YARM and XARM loss measurement ensembles respectively, with single realization (time-series) standard deviations as vertical error bars, and the 1 sigma statistical uncertainty estimate as a filled color band. Note that the XARM loss drifts across different realizations (which happen to be ordered in time), which we think arises from inconsistent ASS dither alignment convergence. This is yet to be tested.


For budgeting the excessive uncertainties from a single locked/misaligned cycle, we could look at beam pointing, angular drift, power, and systematic differences in the paths from both reflection signals. We should be able to estimate the power fluctuations by looking at the recorded arm transmissions, the recorded MC transmission, PD technical noise, etc... and we might be able to correlate recorded oplev signals with the reflection data to identify angular drift. We have not done this yet.
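A minimal sketch of the two uncertainty estimates being contrasted (the array layout is an assumption: one row of loss samples per locked/misaligned cycle, placeholder numbers):

```python
import numpy as np

# loss_cycles: shape (N_cycles, N_samples_per_cycle), in ppm
loss_cycles = np.random.normal(38, 3, size=(16, 500))   # placeholder data

# (a) stitch everything together: the std is inflated by slow drifts between cycles
stitched_mean = loss_cycles.mean()
stitched_std  = loss_cycles.std()

# (b) per-cycle means, then statistics across the ensemble of cycles
cycle_means = loss_cycles.mean(axis=1)
ens_mean = cycle_means.mean()
ens_err  = cycle_means.std(ddof=1) / np.sqrt(len(cycle_means))  # standard error

print(stitched_mean, stitched_std)
print(ens_mean, ens_err)
```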

Attachment 1: LossMeasurement_RawData.pdf
Attachment 2: YARM_loss_stats.pdf
Attachment 3: XARM_loss_stats.pdf
231   Thu Jan 10 00:12:01 2008   tobin   Summary   Locking   DR
[John, Tobin, Rana]

1. We found SUS_BS_SENSOR_UL to have a ratty signal and low DC value. Twiddling the cables at the BS satellite amplifier and vacuum feedthrough brought the signal back (to 0.667V), but it is still spiky, spiking up to a couple times per second. Rana suggested that these spikes might be scattered YAG laser light (as hypothesized in August). The spikes go away when we misalign the PRM or either ITM, and when we unlock the mode cleaner, lending credence to this theory. SUS_BS_SENSOR_UR also spikes, but much less frequently. We turned off C1:SUS-BS_ULSEN_SW2 and continued.

2. After dither alignment the oplev beams were centred and we were able to lock DRM plus either arm reliably (however, locking in this state broke ./drstep_bang at the first "Going DD"). We ran scripts/DRFPMI/bang/nospring/drdown_bang and were subsequently able to lock the DRFPMI (i.e., full IFO) a couple of times.

3. To do: Debug ./drstep_bang with just the DRM (no arms).
313   Tue Feb 12 16:39:52 2008   rob   Update   Locking   report

Did some locking work on DRFPMI on sunday and (with John) on monday nights. So far progress has not been terribly encouraging.

Problems include the DD_handoffs not working and the CARM->MCL handoff not working so well. To get around the DD signals trouble, I decided for now to just ignore 67% of the DD signals. We should be able to run with PRC & MICH on single demod signals, and SRC on a DD signal. This seems to work well in a DRMI state, and it also works well in a DRMI+2ARMs state.

The CARM->MCL handoff actually works, but it doesn't take kindly to the AO path and it doesn't work very stably. I guess this was always the most fragile part of the whole locking procedure, and its fragility is really coming to light now. Investigation continues.
362   Thu Mar 6 00:17:37 2008   rob   Update   Locking   DD handoff working
Got the DD (double demod) handoff scripts working tonight, with just the DRMI. So, now acquisition with the single demod signals is working well, and handoffs to all double demod signals using the input matrix ramping worked several times with the scripts. Up next will be more work with the DRM+ARMs.
366   Mon Mar 10 02:05:08 2008   rob   Update   Locking   DRMI+2ARMs working better

Some encouraging progress on the locking front tonight. After the work on the DRM loops last week and a review of the settings for initial lock acquisition (loop gains, tickle amplitude, filter states, so on), the DRMI+2ARMS locking is working pretty well. That's to say, it takes from 5-15 minutes generally for the IFO to lock in the offset CARM state, with the arm powers at 0.5. It's then possible to raise the arm powers slightly, and handing off control of CARM to MCL works at low power, but engaging the AO path (using PO_DC as an error signal) is not working so well. Taking swept sines indicates that the PO_DC should be a good error signal. The next good thing to try might be just using PO_DC as an error signal for the length path, without using the AO path at all, to see if it's something in the hardware.
442   Thu Apr 24 14:10:26 2008   rob   Update   Locking   locking work
Rob, Johnnie

We made some progress on locking last night (Wed night), namely that we were able to hand off (briefly) the CARM-MCL path to the REFL-DC error signal. We tried this because we suspect that the reason the PO-DC is not a good CARM error signal is because at low powers, the dc light level in the recycling cavity is dominated by the +f2 RF sideband. Thus, REFL-DC should work a bit better at low powers, which it did. It wasn't super stable, though, so this will require a bit of work to make the transition reliable & stable. The next things to work on include setting the AO path gain properly and possibly going to higher arm powers before handing off (thus increasing the discriminant).

Another thing we found is that the alignment scripts are not working in an ideal fashion. Running the alignment scripts for the two arms (XARM & YARM) leaves the Michelson badly misaligned, making it impossible to get good DRM alignment. This will have to be fixed.
531   Thu Jun 12 01:51:23 2008   rob   Update   Locking   report
rob, john

We've been working (nights) on getting the IFO locked this week. There's been fairly steady incremental progress each night, and tonight we managed to control CARM(MCL) using PO-DC, with the CARM(AO) path also on PO-DC. In the past, reaching this state has usually meant we're home free, as we could just crank the gain on the common mode servo and merrily reduce the CARM offset. Tonight, however, this state has been very twitchy, and efforts to ramp up the gain have been unsuccessful.

I've attached a diagram which I hope makes clear where we are in the stages of lock acquisition.
Attachment 1: lock_control_sequence.png
532   Thu Jun 12 15:09:33 2008   alan   Update   Locking   report
Rob: Awesome figure. As you can imagine, I have lots of questions, and hope that you will consider this figure to be the beginning, leading to ever-more detailed versions. But for now, I just want to ask whether you understand *what* is twitchy, and what the twitchiness does to prevent you from taking this further?
533   Thu Jun 12 15:55:15 2008   rob   Update   Locking   report

Quote:
Rob: Awesome figure. As you can imagine, I have lots of questions, and hope that you will consider this figure to be the beginning, leading to ever-more detailed versions. But for now, I just want to ask whether you understand *what* is twitchy, and what the twitchiness does to prevent you from taking this further?


I definitely don't understand what's twitchy, but I have suspicions. Tonight we'll try to start by revisiting the other loops (the non-CARM loops) and see how they're dealing with the changing power levels. It may be that the DARM loop is going unstable due to gain variations (due to either increasing power or to rotation of demod phase) or it could be the PODD (or SPOB) saturating with increased power in the recycling cavity. I just hope the glitchiness doesn't have a digital origin.
568   Wed Jun 25 13:56:56 2008   John   Summary   Locking   Tuesday night locking
Rob, John

Worked to try and reduce the CARM offset using the dc arm transmissions before changing to SPOB DC. This proved somewhat unsuccessful; the offset couldn't be reduced beyond arm powers of about five (arms storing 5x more power than in a single arm cavity lock).
573   Thu Jun 26 12:30:40 2008   John   Summary   Locking   CARM on REFL_DC
Idea:

Try REFL_DC as the error signal for CARM rather than PO_DC.

Reasoning:

The PO signal is dominated by sideband light when the arms are detuned so that any misalignment in the recycling cavity introduces spurious signals. Also, the transfer function from coupled cavity excitation to REFL signal is not so steep and hence REFL should give a little more phase. Finally, the slope of the REFL signal should make it easier to hand over to RF CARM.

Conclusion:


The REFL signal showed no clear improvement over PO signals. We've gone back to PO.


During the night we also discovered that the LO for the MC loop is low.
582   Fri Jun 27 14:36:39 2008   John   Summary   Locking
Rob, Yoichi, John

Some progress last night:

Switched back to SPOB_DC for CARM.

Shaped the MC LSC loop to reduce excess noise in the 20-30Hz band. Likely the most significant change.
Could this be due to fan noise from the laptop on the SPOB table?

Brought in the AO path earlier (at low gain).

Reduced offset to 6 and increased MC gain before handing off to SPOB. Ramped up AO and MC gain then switched off the
moving zero.

Looks like PD11 is the most promising candidate for RF CARM although the demod phase needs attention.
Attachment 1: gettingcloser.png
623   Wed Jul 2 13:56:10 2008   Rob, Yoichi, John   Update   Locking   24.5 Hz resonance
Work continues on trying to reduce the CARM offset using dc signals from PO_DC. Got up to arm powers of
~35 last night.

We found that progress was stymied by an oscillation around 24 Hz. This oscillation was clearly visible
in the intensity of the light at REFL, PO and TrX.

Initially we suspected that this oscillation was due to an instability in the CARM loop. We attempted to
solve the problem by tuning the crossover frequency of the AO and MC_L paths and shaping the MC_L loop to
reduce the impact of the 24 Hz noise.

After some quick tests we found that the 24 Hz signal was present even when dc CARM was used. It appears
that the peak is in fact due to a SOS mechanical resonance. We currently suspect a roll mode.

We're going to check that PRC, MICH and DARM have filters to attenuate the 24 Hz line. We'll also look at the
SUS_POS bandstop filters to see where they are centred.

The ISS was behaving strangely again. Constantly saturated at 5dB of gain. Someone needs to look at this.
Attachment 1: locking080702.png
630   Thu Jul 3 13:12:32 2008   Rob, Yoichi, John   Update   Locking   More oscillations
Bounce/ roll filters were added to the short degrees of freedom to reduce the effect of the 24Hz line seen on Tuesday night.

However, last night saw the arrival of a new oscillation at ~34Hz. This may be the second harmonic of the SOS roll mode. Reducing the arm offset would cause this oscillation to ring up and break the lock (first plot). This effect was repeatable.

No signal was seen in the oplevs or osems which leads us to rule out alignment problems, at least for now.

Although one can clearly see DARM_ERR increasing as arm power increases, adding a resonant gain in the DARM loop had no effect.

We also noticed that x arm transmission was significantly more noisy than the Y (second plot). And showed greater coherence with the increase in DARM noise. Investigations showed that the PD was not the source of the difference.

Turning up the MC gain seems to help a little.

We're now looking at POX as a candidate for RF_CARM (third plot).
Attachment 1: LOL.png
Attachment 2: NoisyX080702.png
Attachment 3: POXforCARM080702.png
632   Thu Jul 3 16:18:51 2008   rob   Summary   Locking   specgrams
I used ligoDV to make some spectrograms of DARM_ERR (1), QPDX (2), and QPDY (3). These show the massive instability from 30-40Hz growing in the XARM in the last two minutes of a reasonably high power lock (arm powers up to 30). It's strange that it only shows up in one arm.

CARM is on PO-DC, for both the MCL and the AO path.
DARM is on AS166Q.
Attachment 1: darm_specg.png
Attachment 2: qpdx_specg.png
Attachment 3: qpdy_specg.png
651   Wed Jul 9 12:42:14 2008   John   Update   Locking   Hand off to RF CARM
Rob, Yoichi, John

Last night we were able to reduce the CARM offset to around 80. This was achieved by increasing the DARM gain and
switching to AS_I when AS_Q went bad. This is probably a temporary solution, we will probably switch to DC readout
for DARM as we bring the arms on resonance.

Having reduced the arm offset enough to get us into the linear region of the RF_CARM signal (POX_I) we worked on
analogue conditioning of this signal to allow us to hand over. Lock was maintained for over 20 minutes as we did
this work.

We were able to partially switch over both the frequency and length paths to this new signal before losing lock.
Attachment 1: LongLock080709.png
655   Thu Jul 10 14:59:01 2008   rob   Update   Locking   RF common mode at zero offset
rob, john, yoichi

Last night we succeeded in reducing the CARM offset to zero.

We handed off control of the common mode servo from PO-DC to POX-I.

We pushed the common mode servo bandwidth to ~19kHz. Without the boosts, it had ~80 degs of phase margin. Didn't measure it after engaging the boosts (Boost + 1 superboost). Trying to engage the second superboost stage broke the lock.

The process is fully scripted, and the script worked all the way through several times.

The DARM ugf was ~200Hz. The RSE peak could clearly be seen. No optical spring, as expected (we're locking in anti-spring mode).

Engaging test mass de-whitening filters did not work (broke the lock).

I'm attaching a lock control sequence diagram and a trend of the arm power during a scripted up-sequence. I think the script can be sped up significantly (especially the long ramp period).

Up next:

Calibrated DARM spectrum
Noise hunting (start with dewhites)
DC - Readout
Lock to the springy side.
Attachment 1: lock_control_sequence_worked.png
Attachment 2: trendpowerbuild.png
661   Fri Jul 11 23:55:25 2008   alan   Update   Locking   RF common mode at zero offset

Quote:
rob, john, yoichi

Last night we succeeded in reducing the CARM offset to zero.



Congratulations! Well done! I look forward to hearing the details and further progress!
675   Tue Jul 15 12:44:08 2008   John   Summary   Locking   DRFPMI with DC readout
Rob, John

Last night, despite suspect alignment, we were again able to reduce the CARM offset to zero using
the RF signal.We were also able to transfer to dc readout taking calibrated spectra in both states.
DC readout shows a marked improvement over RF above ~1kHz but introduces some noise around 100Hz.
Broadband sensitivity appears to be more than ten times worse than previously. The calibration
being used remains to be confirmed.

Engaging the ETMY dewhitening caused lock to be lost. We'll check this today. The OMC alignment loops
may also need some attention.

We looked at REFL_166 as a candidate for CARM, at present POX still looks better.

The DARM filters were modified to reduce excess noise around 3Hz. Updating filter coefficients does
not cause loss of lock.
732   Thu Jul 24 03:08:20 2008   rob   Update   Locking   +f2 DRMI+2ARMS

rob, john, yoichi

Tonight we tried to move the 166MHz (f2) sideband frequency by changing the settings on the Marconi. Reducing the frequency by 4kHz reduced the amplitude of the 166MHz sidebands, but we were still able to lock the DRMI with the +-f2 sidebands by electronically compensating for the gain decrease, and also to lock the DRMI+2ARMs while resonating the -f2 sideband. No luck with the +f2.

Then we larkily tried increasing the frequency by 4kHz, which ~doubled the f2 sideband transmission through the MC. This means our frequencies/MC length have been mismatched for months. Apparently I had explained the level of the f2 sidebands by just imagining that I (or someone) had set the modulation depth at that level some time in the past.

It's a miracle any locking worked at all in this state. Once this was done and we worked out a few kinks in the script, adjusting some gains to compensate, we managed to get the DRMI+2ARMS to lock a couple of times while resonating the +f2 sideband. It takes a while, but at least it happens. Tomorrow we'll measure the length of the mode cleaner properly and then try again. No need to vent just yet.
848   Mon Aug 18 17:37:14 2008   rob   Update   Locking   recovery progress

I removed the beam block after the PSL periscope and opened the PSL shutter.

There was no MC Refl beam on the camera, so I decided to trust the PSL launch
and aligned the MC to the PSL beam. Here are the old and new values for
the MC angle biases:
 __Epics_Channel_Name______   __OLD_____    ___New___
 C1:SUS-MC1_PIT_COMM          4.490900        3.246900 
 C1:SUS-MC1_YAW_COMM          0.105500	      -0.912500
 C1:SUS-MC2_PIT_COMM          3.809700	      3.658600 
 C1:SUS-MC2_YAW_COMM          -1.837100	      -1.217100
 C1:SUS-MC3_PIT_COMM          -0.614200	      -0.812200
 C1:SUS-MC3_YAW_COMM          -3.696800	      -3.303800

After this, the beam looks a *little low* going into the Faraday Isolator.
Nonetheless, after turning on the IFO input steering PZTs, I was able to
quickly steer the PRM get a beam on the REFL camera and into the REFL OSA.
The PRM optical lever beam is also striking the quad.

I then used the ETMX optical lever as a reference for realigning. After
steering around the input PZTs and ITMX, I saw some flashes in Xarm trans, then got
it locked and ran the alignment script ~5 times. The arm power went
up to 0.9, so I tweaked the MC1 to put the MC refl beam back on MCWFS.
The XARM power then went up to .96. Good enough for now.

Then I started to try and re-align the YARM. Since the oplevs for both ITMY
and the BS are untrustworthy, I first tried to get the beam bouncing off ITMX
and the BS back into the AS OSA, to try and recover some BS alignment. This
didn't work, as the AS OSA may not be a good reference anyways. After
wandering around in the dark for a little while, I decided to try an automated
scan of the alignment space. I used the trianglewave script to scan
the angle biases of BS, ITMY, & ETMY, then looked at the trend of the transmitted
power to find the gps time when there were flashes. I then used
time_machine_conlog to restore the biases to that time. This was close
enough to easily recover the alignment. After several rounds of aligning &
centering oplevs, things look good.

Also locked a PRM. Will work on the DRM tomorrow.

I'm leaving the optics in their "aligned" states over night, so they can
start their "training."

Note: The MC is not staying locked. Needs investigation.

For tomorrow:

lock up the DRM
fix the mode cleaner
re-align mode cleaner to optimize beam through Faraday
re-align all optics again (will be much easier than today)
re-align beam onto all PDs after good alignment of suspended optics is established.
Attachment 1: flatlissa.png
862   Wed Aug 20 13:23:32 2008   rob   Update   Locking   DRMI locked

I was able to lock the DRMI this afternoon. All the optical levers have been centered.
953   Wed Sep 17 12:58:12 2008   rob   Update   Locking   bad

Locking was pretty unsuccessful last night. All the subparts were locked (ARMs, PRM, DRM) and
aligned, but no DRMI+2ARMs locks. The alignment may have drifted significantly by the time I
got around to working the full shebang, however.

We should get back into the habit of clicking the
yellow "Restore last auto-alignment" button when we finish using the interferometer.
985   Tue Sep 23 13:25:07 2008   rob   Update   Locking   a bit better
I've been spending time working on the short DOF loops (PRC,MICH,SRC) in an attempt to make the
initial stage of lock acquisition (the DRMI+2ARMs, no spring) better. This seems to have been
largely successful, as last night there were several locks of the DRMI+2ARMs with pretty short
wait times.

The output matrix for the short DOFs is a bit strange, though. The MICH->PRM element is about
3 times too small, which seems to indicate something broken in hardware. The MICH->SRM element
seems normal, though, which suggests the BS isn't broken--either the PRM has had a sudden
actuation increase or it's a problem with the sensing.
998   Fri Sep 26 16:08:39 2008   rob   Update   Locking   some progress
There's been good progress in locking the last couple of nights. A lot of time was wasted before I found that
all the SUS{POS,PIT,YAW} damping gains on the SRM were set to 0.1 for some reason, which let it get rung up
just a bit during bang locking. After setting these gains to 0.5 (similar to PRM and BS), the initial lock
acquisition of DRMI+2ARMs (nospring) got much quicker. Then more time was wasted by sticky sliders on the
transmon QPD whitening gain, causing the Schmitt triggered HI/LO gain PD switch not to happen. This meant
that the arm power was not reported properly when the CARM offset was reduced, and so loop gain normalizations
were not working properly. After all this, by the end of the night last night, reduced the CARM offset such
that stored power in the arms was about half of the max. Should be able to get to full power with another
good night, and then back to springy locking.
1009   Tue Sep 30 13:43:43 2008   rob   Update   Locking   last night
Steady progress again in locking again last night. Initial acquisition of DRMI+2ARMs was working well.
Short DOF handoff, CARM->MCL, AO on PO_DC, and power ramping all worked repeatedly, in the cm_step script.
This takes us to the point where the common mode servo is handed off to an RF signal and the CARM offset
is reduced to zero. This last step didn't work, but it should just require some tweaking of the gains
during the handoff.