ID Date Author Type Category Subject
  14458   Fri Feb 15 18:41:18 2019   rana   Update   VAC   Vac system is back up

If the acromags lock up whenever there is an electrical spike, shouldn't we have them on UPS to smooth out these ripples? And wasn't the idea to have some handshake/watchdog system to avoid silently dying computers?

Quote:

The problem encountered with the vac controls was indeed resolved via the recommendation I posted yesterday. The Acromags had gone into a protective state (likely caused by an electrical transient in one of the signals)

  14457   Fri Feb 15 15:22:08 2019   gautam   Update   CDS   c1rfm errors persist

I restarted c1scy and c1rfm (so both sender and receiver models were cycled) and power-cycled the c1iscey and c1sus machines. The TRY PD is certainly seeing light - it is just not getting piped over to c1rfm. dmesg doesn't give any clues. I'm out of ideas.

P.S. The new reality seems to be that getting ITMY stuck in the event of a c1susaux reboot is inevitable. As is the practice for ITMX, I tried slowly ramping the PIT and YAW biases to 0 - but in the process of ramping YAW to 0, the optic got stuck. I am ramping in steps of 0.1 (in units of the PIT/YAW sliders), waiting ~3 seconds between steps; I guess I can try ramping even more slowly.

Update: I power cycled the physical RFM switch. This necessitated reboot of all vertex FEs. But seems like things are back to normal now...

Note: to unstick ITMY, the best approach seems to be:

  1. Jiggle the bias until the SIDE shadow sensor is, on average, above its half-light level. This is the critical step. A bias of +20000 cts on the fast SIDE output seems to help.
  2. Set the YAW bias to -10, then ramp the BIAS down in steps of 0.1, watching the shadow sensor levels to ensure the optic doesn't get stuck again (a sketch of such a slow ramp follows this list).
  3. Hope for the best. Iterate if necessary.
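A minimal sketch of such a slow ramp, assuming pyepics is available on a control room machine; the channel name, step size, and dwell time are placeholders for the actual c1susaux bias channel and whatever values prove gentle enough:

# Minimal sketch of the slow bias ramp described above (placeholder channel name).
import time
import epics

CHAN = "C1:SUS-ITMY_YAW_OFFSET"   # hypothetical slow bias channel
STEP = 0.1                        # slider units per step, as in the note above
DWELL = 3.0                       # seconds to wait between steps

def ramp_bias_to(target, chan=CHAN, step=STEP, dwell=DWELL):
    """Slowly ramp a slow bias slider to `target` in small steps."""
    current = epics.caget(chan)
    sign = 1.0 if target > current else -1.0
    while abs(target - current) > step:
        current += sign * step
        epics.caput(chan, current)
        time.sleep(dwell)         # give the optic time to settle between steps
    epics.caput(chan, target)

# e.g. ramp_bias_to(0.0) to bring the YAW bias to zero slowly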
Quote:

The pressure is still 2e-4 torr according to CC1 so I thought I'd give ASS debugging a go tonight. But the arm transmission signal isn't coming through to the LSC model from the end PDs - so a resurfacing of this problem. Rebooting the sender model, c1scy, did not fix the problem. Moreover, c1susaux is dead. The last time I rebooted it, ITMY got stuck so I'm not going to attempt a revival tonight.

Attachment 1: Screenshot_from_2019-02-15_15-21-47.png
Screenshot_from_2019-02-15_15-21-47.png
  14456   Fri Feb 15 11:58:45 2019   Jon   Update   VAC   Vac system is back up

The problem encountered with the vac controls was indeed resolved via the recommendation I posted yesterday. The Acromags had gone into a protective state (likely caused by an electrical transient in one of the signals) that could only be cleared by power cycling the units. After resetting the system, the main volume pressure dropped quickly and is now < 2e-5 torr, so normal operations can resume. For future reference, below is the procedure to safely reset these units from a trouble state.

Vacromag Reset Procedure

  • TP2 and TP3 can be left running, but isolate them by closing valves V4 and V5.
  • TP1 can also be left running, but manually flip the operation mode on the front of the controller from REMOTE to LOCAL. This prevents the pump from receiving a "stop" command when its control Acromag shuts down.
  • Close all the pneumatic valves in the system (they'll otherwise close automatically when their control Acromags shut down).
  • On c1vac, stop the modbusIOC service. Sometimes this takes ~1 min to actually terminate.
  • Turn off the Acromags by flipping the "24 V" switch on the back of the chassis.
  • Wait ~10 sec, then turn them back on.
  • Start the modbusIOC service. It may take up to ~1 min for all the readings on the MEDM screen to initialize.
  • Ensure that the rotation speeds of TP1, TP2, and TP3 are all still nominal.
  • If pumps are OK, open V4, V5, and V7, then open V1. This restores the system to the "Maximum pumping speed" state.
  • Flip the TP1 controller operation state back to REMOTE.
  14455   Thu Feb 14 23:14:12 2019   gautam   Update   CDS   c1rfm errors

The pressure is still 2e-4 torr according to CC1 so I thought I'd give ASS debugging a go tonight. But the arm transmission signal isn't coming through to the LSC model from the end PDs - so a resurfacing of this problem. Rebooting the sender model, c1scy, did not fix the problem. Moreover, c1susaux is dead. The last time I rebooted it, ITMY got stuck so I'm not going to attempt a revival tonight.

  14454   Thu Feb 14 21:29:24 2019   gautam   Summary   Loss Measurement   Inferred Y arm loss

Summary:

From the measurements I have, the Y arm loss is estimated to be 58 +/- 12 ppm. The quoted values are the median (50th percentile) and the distance to the 25th and 75th quantiles. This is significantly worse than the ~25 ppm number Johannes had determined. The data quality is questionable, so I would want to get some better data and run it through this machinery and see what number that yields. I'll try and systematically fix the ASS tomorrow and give it another shot.

Model and analysis framework:

Johannes and I have cleaned up the equations used for this calculation - while we may make more edits, the v1 of the document lives here. The crux of it is that we would like to measure the quantity \kappa = \frac{P_L}{P_M}, where P_L is the power reflected from the locked, resonant cavity and P_M is the power reflected off just the ITM (ETM misaligned). This quantity can then be used to back out the round-trip loss in the resonant cavity, together with further model parameters, which are (a minimal sketch of the underlying reflectivity relation follows the list below):

  1. ITM and ETM power transmissivities
  2. Modulation depths and mode-matching efficiency into the cavity
  3. The statistical uncertainty on the measurement of the quantity \kappa, call it \sigma_{\kappa}
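As a minimal sketch of the underlying relation (neglecting the modulation sidebands, which the \beta_i account for in the full treatment, and assuming the non-mode-matched fraction of the light reflects promptly off the ITM in both states):

r_{\mathrm{cav}} = -r_1 + \frac{t_1^2\, r_2 \sqrt{1-\mathcal{L}_{\mathrm{RT}}}}{1 - r_1 r_2 \sqrt{1-\mathcal{L}_{\mathrm{RT}}}}, \qquad \kappa \approx \frac{\eta\, r_{\mathrm{cav}}^2 + (1-\eta)\, r_1^2}{r_1^2} = 1 - \eta \left(1 - \frac{r_{\mathrm{cav}}^2}{r_1^2}\right)

where r_1, t_1 (r_2) are the ITM (ETM) amplitude reflectivity and transmissivity, \mathcal{L}_{\mathrm{RT}} is the round-trip power loss, \eta is the mode-matching efficiency, and the carrier is assumed to be on resonance (the overall sign of r_{\mathrm{cav}} is convention-dependent, but only r_{\mathrm{cav}}^2 enters \kappa).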

If we ignore the 3rd for a start, we can calculate the "expected" value of \kappa as a function of the round-trip loss, for some assumed uncertainties on the above-mentioned model parameters. This is shown in the top plot of Attachment #1; while it was generated using emcee, it is consistent with the first-order uncertainty-propagation result I posted in my previous elog on this subject. The actual samples of the model parameters used to generate these curves are shown in the bottom plot. What this is telling us is that even if we have no measurement uncertainty on \kappa, the systematic uncertainties are of the order of 5 ppm, for the assumed variation in model parameters.

The same machinery can be run backwards - assuming we have multiple measurements of \kappa, we then also have a sample standard deviation, \sigma_{\kappa}. The uncertainty on this sample estimator is also known, and serves to quantify the prior distribution on the parameter \sigma_{\kappa} for our Monte-Carlo sampling. The parameter \sigma_{\kappa} itself is required to quantify the likelihood of a given set of model parameters, given our measurement. For the measurements I did this week, my best estimate is \kappa \pm \sigma_{\kappa} = 0.995 \pm 0.005. Plugging this in, and assuming uncorrelated Gaussian uncertainties on the model parameters, I can back out the posterior distributions.

For convenience, I separate the parameters into two groups - (i) All the model parameters excluding the RT loss, and (ii) the RT loss. Attachment #2 and Attachment #3 show the priors (orange) and posteriors (black) of these quantities. 

Interpretations:

  1. This particular technique only gives us information about the RT loss - much less so about the other model parameters. This can be seen from the fact that the posterior for the loss is significantly different from its prior, while those for the other parameters are not. Potentially, the power of the technique is improved if we throw other measurements at it, like ringdowns.
  2. If we want to reach the 5 ppm uncertainty target, we need to do better both on the measurement of the DC reflection signals, and also narrow down the uncertainties on the other model parameters.

Some assumptions:

Listed here so that the experts on MC analysis can correct me where I'm wrong.

  1. The prior distributions are truncated independent Gaussians - truncated to avoid sampling from unphysical regions (e.g. negative ITM transmission). I've not enforced the truncation analytically - i.e. I just assign a log-probability of -infinity to samples drawn from the unphysical regions - but to be completely sure, the actual cavity equations enforce physicality independently (i.e. the MC generates a set of parameters which is input to another function, which checks for feasibility before making an evaluation; see the sketch after this list). One could argue that the priors on some of these should be different - e.g. a uniform PDF for the loss between some bounds? A Jeffreys prior for \sigma_{\kappa}?
  2. How reasonable is it to assume the model parameter uncertainties are uncorrelated? For example, \eta, \beta_1, \beta_2 are all determined from the ALS-controlled cavity scan.
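A minimal emcee sketch of the sampling setup described above, assuming truncated independent Gaussian priors; model_kappa() is a simplified stand-in for the actual cavity equations (sidebands neglected), and all numbers are illustrative:

# Minimal emcee sketch of the sampling described above (not the actual analysis code).
import numpy as np
import emcee

kappa_meas, sigma_kappa = 0.995, 0.005        # measured kappa and its uncertainty

# prior means/widths for [T_ITM, T_ETM, eta, L_RT]; illustrative numbers only
mu  = np.array([1.384e-2, 13.7e-6, 0.92, 50e-6])
sig = np.array([1e-4,     3e-6,    0.05, 50e-6])

def model_kappa(theta):
    """Simplified stand-in for the cavity equations mapping parameters -> kappa."""
    T1, T2, eta, L = theta
    r1, r2 = np.sqrt(1 - T1), np.sqrt(1 - T2)
    a = np.sqrt(1 - L)
    r_cav = -r1 + T1 * r2 * a / (1 - r1 * r2 * a)
    return 1 - eta * (1 - r_cav**2 / r1**2)

def log_prob(theta):
    T1, T2, eta, L = theta
    # enforce physicality explicitly, as described in point 1 above
    if not (0 < T1 < 1 and 0 < T2 < 1 and 0 < eta <= 1 and 0 <= L < 1):
        return -np.inf
    log_prior = -0.5 * np.sum(((theta - mu) / sig) ** 2)
    log_like = -0.5 * ((model_kappa(theta) - kappa_meas) / sigma_kappa) ** 2
    return log_prior + log_like

ndim, nwalkers = 4, 32
p0 = mu + 1e-3 * sig * np.random.randn(nwalkers, ndim)
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, 5000, progress=True)
samples = sampler.get_chain(discard=1000, flat=True)   # posterior samples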
Attachment 1: modelPerturb.pdf
modelPerturb.pdf
Attachment 2: posterior_modelParams.pdf
posterior_modelParams.pdf
Attachment 3: posterior_Loss.pdf
posterior_Loss.pdf
  14453   Thu Feb 14 18:16:24 2019   Jon   Update   VAC   Vacromag failure

I sent Gautam instructions to first try stopping the modbus service, power cycling the Acromag chassis, then restarting the service. I've seen the Acromags go into an unresponsive state after a strong electrical transient or shorted signal wires, and the unit has to be power cycled to be reset.

If this doesn't resolve it, I'll come in tomorrow to help with the Acromag replacement. We have plenty of spares.

Quote:

[chub, gautam]

Summary:

One of the XT1111 units (XT1111a) in the new vacuum system has malfunctioned. So all valves are closed, PSL shutter is also closed, until this is resolved.

 

  14452   Thu Feb 14 15:37:35 2019   gautam   Update   VAC   Vacromag failure

[chub, gautam]

Summary:

One of the XT1111 units (XT1111a) in the new vacuum system has malfunctioned. So all valves are closed, PSL shutter is also closed, until this is resolved.

Details:

  1. Chub alerted me he had changed the main N2 line pressure, but this did not show up in the trend data. In fact, the trend data suggested that all 3 N2 gauges had stopped logging data (they just held the previous value) since sometime on Monday, see Attachment #1.
  2. We verified that the gauges were being powered, and that the analog voltage outputs of the gauges made sense in the drill press room ---> so this suggested something was wrong at the vacuum electronics rack.
  3. Went to the vacuum rack, saw no obvious indicator lights signalling a fault.
  4. So I restarted the modbus process on c1vac using sudo systemctl restart modbusIOC.service. The way Jon has this set up, this service controls all the sub-processes talking to gauges and TPs, so restarting this master process should have brought everything back.
  5. This tripped the interlock, and all valves got closed.
  6. Once the modbus service restarted, most things came back normally. However, V1, V3, V4 and V5 readbacks were listed as "UNDEF".
  7. The way the interlock code works, it checks a valve state change request against the monitor channel, so none of these valves could be opened (a minimal sketch of this kind of check follows the list).
  8. We confirmed that the valves themselves were operational, by bypassing the interlock logic and directly actuating on the valves - but this is not a safe way of running overnight, so we decided to shut everything down.
  9. We also confirmed that the problem is with one particular Acromag unit - switching the readback Dsub connector to another channel (e.g. V1 --> VM2) showed the expected readback.
  10. As a further check - I connected a Windows laptop with the Acromag software installed to the suspect XT1111 - it reported an error message saying "USB device may be damaged". Plugging into another XT1111 in the crate, I was able to access the unit in the normal way.
  11. The phoenix connector architecture of the Acromags makes it possible to replace this single unit (we have spare XT1111 units) without disturbing the whole system - so barring objections, we plan to do this at 9am tomorrow. The replacement plan is summarized in Attachment #2.
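A minimal sketch of the kind of readback check described in item 7 (the channel names are hypothetical; the real logic lives in the interlock code on c1vac):

# Sketch of a valve state-change request checked against its readback channel.
import epics

def request_valve_open(valve):
    """Refuse to act on a valve whose readback is missing or undefined."""
    readback = epics.caget("C1:Vac-%s_status" % valve, as_string=True)   # placeholder PV
    if readback is None or readback == "UNDEF":
        raise RuntimeError("%s: readback undefined, refusing state change" % valve)
    if readback == "OPEN":
        return                                    # already in the requested state
    epics.caput("C1:Vac-%s_pos" % valve, "OPEN")  # placeholder command PV

# e.g. request_valve_open("V1") would have raised for V1/V3/V4/V5 in the failure above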

Pressure of the main volume seems to have stabilized - see Attachment #3, so it should be fine to leave the IFO in this state overnight.

Questions:

  1. What caused the original failure of the writing to the ADC channels hooked up to the N2 gauges? There isn't any logging setup from the modbus processes afaik.
  2. What caused the failure of the XT1111? What is the failure mode even? Because some other channels on the same XT1111 are working...
  3. Was it user error? The only operation carried out by me was restarting the modbus services - how did this damage the readback channels for just four valves? I think Chub also re-arranged some wires at the end, but unplugging/re-connecting some cables shouldn't produce this kind of response...

The whole point of the upgrade was to move to a more reliable system - but it seems quite flaky already.

Attachment 1: Screenshot_from_2019-02-14_15-40-36.png
Screenshot_from_2019-02-14_15-40-36.png
Attachment 2: IMG_7320.JPG
IMG_7320.JPG
Attachment 3: Screenshot_from_2019-02-14_20-43-15.png
Screenshot_from_2019-02-14_20-43-15.png
  14451   Wed Feb 13 02:28:58 2019   gautam   Summary   Loss Measurement   Y arm loss

Attachment #1 shows estimated systematic uncertainty contributions due to 

  1. ITM transmission by +/- 0.01 % about the nominal value of 1.384 %
  2. ETM transmission of +/- 3 ppm about the nominal value of 13.7 ppm
  3. Mode matching efficiency into the cavity by +/- 5% about the nominal value of 92%.

In all the measurements so far, the ratio seems to be < 1, so this would seem to set a lower bound on the loss of ~35 ppm. The dominant source of systematic uncertainty is the 5% assumed fudge in the mode-matching efficiency.

To do: 

  1. Account for uncertainties on modulation depths
  2. To estimate whether the amount of fluctuation we are seeing in the reflected signal (even after normalizing by the MC transmission) is reasonable, get an estimate of the statistical uncertainty in the reflected power due to 
    • Pointing jitter - is there some spec for the damped angular displacement of the TT1/TT2?
    • Cavity length in-loop residual

Bottom line: I think we need to have other measurements and simultaneously analyse the data to get a more precise estimate of the loss.

Attachment 1: systUnc.pdf
systUnc.pdf
  14450   Tue Feb 12 22:59:17 2019   gautam   Summary   Loss Measurement   Y arm loss

Summary:

There are still several data quality issues that can be improved. I think there is little point in reading too much into this until some of the problems outlined below are fixed and we get a better measurement.

Details:

  1. Mainly, we are plagued by the inability of the ASS system to get back to the good transmission levels - I haven't done a careful diagnosis of the servo, but the ITM PIT output always seems to run away. As a result, the later measurements are poor, as can be seen in Attachment #2.
  2. For this reason, we can't easily sample different spot positions on the ETM.
  3. Data processing:
    • Download AS reflection and MC transmission DQ channels
    • Take their ratio
    • Downsample to 4 Hz by repeated application of scipy.signal.decimate by a factor of 8 each time, thrice, with the filtfilt option enabled (see the sketch after this list)
  4. Attachment #1 and #2 are basically showing the same data - the former collects all locked (top left) and misaligned (top right) data segments and plots them with the corresponding TRY values in the bottom row. The second plot shows a pseudo-continuous time series (pseudo because the segments transitioning from locked to misaligned states have been excised).
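A sketch of the downsampling step in item 3, assuming the DQ channels are sampled at 2048 Hz as set up in elog 14448:

# 2048 Hz -> 4 Hz via three successive decimations by 8, with zero-phase (filtfilt) filtering.
import numpy as np
from scipy import signal

def downsample_to_4hz(ratio_timeseries):
    """ratio_timeseries: AS reflection / MC transmission, sampled at 2048 Hz."""
    x = np.asarray(ratio_timeseries, dtype=float)
    for _ in range(3):                          # 8**3 = 512, so 2048 Hz / 512 = 4 Hz
        x = signal.decimate(x, 8, zero_phase=True)
    return x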

As an interim fix, I'm going to try and use the Oplevs as a DC reference, and run the dither alignment from zero each time, as this prevents the runaway problem at least. Data run started at 11:20 pm.

Attachment 1: segmented.pdf
segmented.pdf
Attachment 2: consolidated.pdf
consolidated.pdf
  14449   Tue Feb 12 18:00:32 2019   gautam   Summary   Loss Measurement   Loss measurement setup

Another arm loss measurement started at 6pm.

  14448   Mon Feb 11 19:53:59 2019   gautam   Summary   Loss Measurement   Loss measurement setup

To measure the Y-arm loss, I decided to start with the classic reflectivity method. To prepare for this measurement, I did the following:

  1. Placed a PDA 520 in the AS beam path on the AS table.
  2. Centered AS beam on above PDA 520.
  3. Monitored signal from PDA520 and the MC transmission simultaneously in the single-bounce from ITMY config (i.e. all other optics were misaligned). Convinced myself that variations in the two signals were correlated, thus ruling out in this rough test any interference from ghost beams from ITMX / PRM etc.
  4. For the DAQ, I decided to use the two ALS Y arm channels in 1Y4, mainly because we have some whitening electronics available there - the OMC model would've been ideal but we don't have free whitening channels available there. So I ran long BNCs to the rack, labelled them.
  5. It'd be nice to have these signals logged to frames, so I added DQ-channels for the IN1 points of the BEATY_FINE filters, recording at 2048 Hz for now. Of course this necessitated restart of the c1lsc model, which caused all the vertex FEs to crash, but the reboot script brought everything back smoothly.
  6. Not sure what to make of the shape of the spectrum of the AS photodiode, see Attachment #1 - looks like some kind of scattering shelf but I checked the centering on the PD itself, looks good. In any case, with the whitening gains I'm using, seems like both channels are measuring above ADC noise.
  7. Found that the existing misalignment to the ETMY does not eliminate signatures of cavity flash in the AS photodiode. So I increased the amount of misalignment until I saw no evidence of flashes in the reflected photodiode.
  8. Johannes' old scripts didn't work out of the box - so I massaged them into a form that works.
  9. Re-centered Oplevs to try and keep them as well centered in the linear range as possible, maybe the DC position info from the Oplevs is useful in the analysis.

I'm running a measurement tonight, starting now (~1130PM), should be done in ~1hour, may need to do more data-quality improvements to get a realistic loss number, but I figured I'd give this a whirl.

I'm rather pleased with an initial look at the first align/misalign cycle - at least there is discernible contrast between the two states (Attachment #2). The data is normalized by MC transmission, and then sig.decimated by x512 (8**3).

Attachment 1: DQcheck.pdf
DQcheck.pdf
Attachment 2: initialData.pdf
initialData.pdf
  14447   Mon Feb 11 16:38:34 2019   gautam   Update   LSC   ETMY OL calibration updated

Since we changed the HeNe, I updated the calibration factors, and accepted the changes in the SDF.

DOF     OLD [urad/ct]   NEW [urad/ct]
PITCH   140             176
YAW     143             193

Attachment 1: OL_calib_ETMY_PERROR.pdf
OL_calib_ETMY_PERROR.pdf
Attachment 2: OL_calib_ETMY_YERROR.pdf
OL_calib_ETMY_YERROR.pdf
  14446   Mon Feb 11 15:41:49 2019   gautam   Update   LSC   TRY 60 Hz solved - but clipping persists

Rich came by the 40m to photocopy some pages from Hobbs, and saw me working on the 60 Hz hunting. As I suspected, the problem was being generated in the D040060. This board receives the photodiode signal single-ended, but has a different power ground than the photodiode (even though the PD is plugged into a power strip that claims to come from 1Y4). The mechanism is not entirely clear - the presence of these 60 Hz features seemed to be dependent on the light level on the TRY photodiode (i.e. they were absent when the PSL shutter is closed, and were more prominent when TRY was 0.9 rather than 0.5) but the PD certainly wasn't saturated - the DC signal was only ~100 mV when viewed on a scope. In any case, Rich suggested the simplest test would be to ground the BNC shield bringing TRY to the rack, to the local ground on the board, which I did using a crocodile clip. This did the trick, the TRY signal RMS is now dominated by the ~1 Hz seismic-driven variation.

On a more pessimistic note - it looks like moving the elliptical reflector did not work, and the clipping in the Y arm still persists. I am able to recover TRY~1 with the yaw offset on the ETM (which is still lower than the 1.06-1.07 Koji reported in Aug 2018, but I can believe that is down to the MC transmission being a few % lower, at 15000 cts rather than 15500), while the maximum I see without it is ~0.9. This is puzzling, because when the chamber was open, we saw that there was ~1.5" clearance between the edge of the reflector and the beam on an IR card. I suppose the input pointing could have been off by a small amount. So one of the primary vent objectives wasn't achieved... But I will push ahead with the loss measurement.

  14445   Fri Feb 8 20:48:52 2019   gautam   Update   LSC   IFO recovery

Several housekeeping tasks were carried out today in preparation for the Y-arm loss measurement.

  1. The mess around the OMC rack was cleared a bit. The vertex laptop paola now lives there, instead of on the ITMY optical table.
  2. Centering of beam on AS photodiodes on AS table (starting from the first optic in this path at the exit point from the vacuum), adjusted AS camera to bring the spot roughly to the center.
  3. POX/POY locking was restored, GTRY/GTRX levels are healthy. TRY was centered on the Thorlabs PD by triggering the LSC lock on AS110.
  4. Oplevs on all four TMs and BS were centered for post-vent alignment.
  5. ETMY OL transfer function was checked since we have swapped the HeNe during the vent, 4.5 Hz UGF for both DoFs and ~30 deg phase margin. The calibration of the error point to urad needs to be double checked.
  6. There are some huge 60 Hz harmonics in the TRY signal - hunting down the source of this. The one thing I can think of that was changed is that we plugged the c1auxey eurocrate into the ethernet powerstrip, I wonder if this created some kind of ground loop.
    • I checked the signal from the PD with a battery powered scope, no evidence of any 60 Hz in the time domain or scope FFT (Attachment #1, FFT in red and time domain signal in green can be seen).
    • Restored the power of c1auxey eurocrate to its original socket in the back of 1Y4 - harmonics still present --> points to the problem being in the whitening board / ADC electronics?
    • The harmonics only seem to show up when TRY > ~0.5
    • Some elog hunting revealed that this signal is being digitized through a modified D990399. So somehow the signal pollution is happening inside this board? Because from the output of this board, the signal is going straight into the ADC.
    • To confirm, I will temporarily hijack another ADC channel and look at the spectrum. There is apparently some kind of daughter board (D040060), but how 60 Hz is coupling at this stage is unknown to me.
  7. The ASS system for both arms still isn't working properly, to be investigated. The dirty TRY signal probably isn't helping the situation.
Attachment 1: IMG_7307.JPG
IMG_7307.JPG
  14444   Fri Feb 8 20:35:57 2019   gautam   Summary   Tip-TIlt   Coating spec

[Attachment #1]: Computed spectral power transmissivity (according to my model) for the coating design at a few angles of incidence. Behavior lines up well with what FNO measured, although I get a transmission that is slightly lower than measured at 45 degrees. I suspect this is because of slight changes in the dispersion relation assumed and what was used for the coating in reality.

[Attachment #2]: Similar information as Attachment #1, but with the angle of incidence as the independent parameter in a continuous sweep. 

Conclusion: The coating behaves in a way that is in reasonable agreement with our model. At 41.1 degrees, which is what the PR3 angle of incidence will be, T<50 ppm, which was what we specified. The larger range of angles was included because originally, we thought of using this optic as a substitute for SR3 as well. But I claim that for the shorter SRC (signal recycling as opposed to RSE), we will not end up using the new optic, but rather go for the G&H mirror. In any case, as Koji pointed out, ~50 ppm extra loss in the RC will not severely limit the recycling gain. Such large variation was not seen in the MC analysis because we only varied the angle of incidence by +/- 0.5 degrees about the nominal design value of 41.1 degrees.
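For reference, a minimal characteristic-matrix (transfer-matrix) sketch of this kind of spectral transmissivity calculation; the layer recipe, indices, and design wavelength below describe a generic quarter-wave HR stack, not the actual FNO coating prescription:

# Power transmissivity of a thin-film stack vs angle of incidence (generic example).
import numpy as np

def stack_T(wl, aoi_deg, n_layers, d_layers, n_in=1.0, n_sub=1.45, pol="s"):
    """Power transmissivity of a dielectric stack at vacuum wavelength wl [m]."""
    th0 = np.deg2rad(aoi_deg)
    sin0 = n_in * np.sin(th0)
    M = np.eye(2, dtype=complex)
    for n, d in zip(n_layers, d_layers):
        cos_t = np.sqrt(1 - (sin0 / n) ** 2 + 0j)
        q = n * cos_t if pol == "s" else n / cos_t       # tilted admittance
        delta = 2 * np.pi * n * d * cos_t / wl           # layer phase thickness
        M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / q],
                          [1j * q * np.sin(delta), np.cos(delta)]])
    cos_sub = np.sqrt(1 - (sin0 / n_sub) ** 2 + 0j)
    q0 = n_in * np.cos(th0) if pol == "s" else n_in / np.cos(th0)
    qs = n_sub * cos_sub if pol == "s" else n_sub / cos_sub
    denom = q0 * M[0, 0] + q0 * qs * M[0, 1] + M[1, 0] + qs * M[1, 1]
    t = 2 * q0 / denom
    return float((qs.real / q0) * abs(t) ** 2)

# Illustrative 27-layer quarter-wave stack (H/L alternating), designed at 1064 nm
wl0, nH, nL = 1064e-9, 2.07, 1.45
ns = [nH if i % 2 == 0 else nL for i in range(27)]
ds = [wl0 / (4 * n) for n in ns]
for aoi in (0, 41.1, 45):
    print(aoi, stack_T(wl0, aoi, ns, ds))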

Attachment 1: specRefl.pdf
specRefl.pdf
Attachment 2: AoIscan.pdf
AoIscan.pdf
  14443   Fri Feb 8 02:00:34 2019   gautam   Update   SUS   ITMY has tendency of getting stuck

As it turns out, now ITMY has a tendency to get stuck. I found it MUCH more difficult to release the optic using the bias jiggling technique, it took me ~ 2 hours. Best to avoid c1susaux reboots, and if it has to be done, take precautions that were listed for ITMX - better yet, let's swap out the new Acromag chassis ASAP. I will do the arm locking tests tomorrow.

Attachment 1: Screenshot_from_2019-02-08_02-04-22.png
Screenshot_from_2019-02-08_02-04-22.png
  14442   Fri Feb 8 00:20:56 2019   gautam   Summary   Tip-TIlt   Five FiveNine Optics Optics delivered

They have been stored on the 3rd shelf from the top in the clean optics cabinet at the south end (EX).

Quote:

5 PR3/SR3 optics from FiveNine Optics were delivered. The data sheets were scanned and uploaded to the following wiki page. https://wiki-40m.ligo.caltech.edu/Aux_Optics

  14441   Thu Feb 7 19:34:18 2019   gautam   Update   SUS   ETMY suspension oddness

I did some tests of the electronics chain today.

  1. Drove a sine-wave using awggui to the UL-EXC channel, and monitored using an o'scope and a DB25 breakout board at J1 of the satellite box, with the flange cable disconnected - while driving 3000 cts amplitude signal, I saw a 2 Vpp signal on the scope, which is consistent with expectations.
  2. Checked resistances of the pin pairs corresponding to the OSEMs at the flange end using a breakout board - all 5 pairs read out ~16-17 ohms.
  3. Rana pointed out that the inductance is the unambiguous FoM here: all coils measured between 3.19 and 3.3 mH according to the LCR meter...

This led me to hypothesise a bad connection between the sat box output J1 and the flange connection cable. Indeed, measuring the OSEM inductance from the DSUB end at the coil-driver board, the UL coil pins showed no inductance reading on the LCR meter, whereas the other 4 coils showed numbers between 3.2-3.3 mH. Suspecting the satellite box, I swapped it out for the spare (S/N 100). This seemed to do the trick: all 5 coil channels read out ~3.3 mH on the LCR meter when measured from the coil driver board end.

What's more, the damping behavior seemed more predictable - in fact, Rana found that all the loops were heavily overdamped. For our suspensions, I guess we want the damping to be close to critical - overdamping imparts excess displacement noise to the optic, while underdamping doesn't work either. In past elogs, I've seen a directive to aim for Q~5 for the pendulum resonances, so when someone does a systematic investigation of the suspensions, this will be something to look out for.

These flaky connectors are proving pretty troublesome - let's start testing out some prototype new Sat Boxes with a better connector solution. I think it's equally important to have a properly thought-out monitoring connector scheme, so that we don't have to frequently plug/unplug connectors in the main electronics chain, which may lead to wear and tear.

The input and output matrices were reset to their "naive" values - unfortunately, two eigenmodes still seem to be degenerate to within 1 mHz, as can be seen from the below spectra (Attachment #1). Next step is to identify which modes these peaks actually correspond to, but if I can lock the arm cavities in a stable way and run the dither alignment, I may prioritize measurement of the loss. At least all the coils show the expected 1/f**2 response at the Oplev error point now. The coil output filter gains varied by ~ factor of 2 among the 4 coils, but after balancing the gains, they show identical responses in the Oplev - Attachment #2.
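For reference, a sketch of how an input matrix could be re-derived from free-swinging peak data; the DOF ordering, normalization, and phase handling here are assumptions, not necessarily what the existing diagonalization scripts do:

# Sketch: build the OSEM->DOF input matrix from free-swinging eigenmode peak data.
import numpy as np

def input_matrix(osem_peaks):
    """
    osem_peaks: (n_modes x n_osems) complex array; row i holds the demodulated
    amplitude of each OSEM (UL, UR, LR, LL, SD) at eigenmode i (e.g. POS, PIT, YAW, SIDE).
    Returns an (n_modes x n_osems) matrix M such that dof_amplitudes ~ M @ osem_signals.
    """
    A = np.asarray(osem_peaks, dtype=complex)
    A = np.real(A * np.exp(-1j * np.angle(A[:, :1])))     # fix overall phase per mode
    A /= np.max(np.abs(A), axis=1, keepdims=True)         # arbitrary per-mode scaling
    return np.linalg.pinv(A.T)                            # pseudo-inverse: OSEM -> DOF

# With only 3 resolvable peaks near 1 Hz (as in elog 14433), the face-OSEM block of A
# is rank-deficient, and no choice of input matrix can cleanly separate all four DOFs.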

Attachment 1: ETMY_sensors.pdf
ETMY_sensors.pdf
Attachment 2: postDiag.pdf
postDiag.pdf
  14440   Thu Feb 7 19:28:46 2019   gautam   Update   VAC   IFO recovery

[rana, gautam]

The full 1 W is again being sent into the IMC. We have left the PBS+HWP combo installed as Rana pointed out that it is good to have polarization control after the PMC but before the EOM. The G&H mirror setup used to route a pickoff of the post-EOM beam along the east edge of the PSL table to the AUX laser beat setup was deemed too flaky and has been bypassed. Centering on the steering mirror and subsequently the IMC REFL photodiode was done using an IR viewer - this technique allows one to geometrically center the beam on the steering mirror and PD, to the resolution of the eye, whereas the voltage maximization technique using the monitor port and an o'scope doesn't allow the former. Nominal IMC transmission of ~15,000 counts has been recovered, and the IMC REFL level is also around 0.12, consistent with the pre-vent levels.

  14439   Thu Feb 7 17:27:53 2019   Koji   Summary   Tip-TIlt   Five FiveNine Optics Optics delivered

5 PR3/SR3 optics from FiveNine Optics were delivered. The data sheets were scanned and uploaded to the following wiki page. https://wiki-40m.ligo.caltech.edu/Aux_Optics

  14438   Thu Feb 7 13:55:25 2019   gautam   Update   VAC   RGA turned on

[chub, steve, gautam]

Steve came by the lab today. He advised us to turn the RGA on again, now that the main volume pressure is < 20 uTorr. I did this by running the RGAset.py script on c0rga. The temperature of the unit was 22 C in the morning; after ~3 hours of the filament being turned on, it has already risen to 34 C. Steve says this is normal. We also opened VM1 (I had to edit interlocks.yaml to allow VM1 to open when CC1 < 20 uTorr instead of 10 uTorr), so that the RGA volume is exposed to the main volume. So the nightly scans should run now; Steve suggests ignoring the first few while the pumpdown is still reaching nominal pressure. Note that we probably want to migrate all the RGA stuff to the new c1vac machine.

Other notes from Steve:

  • RP1 and RP3 should have their oil fully changed (as opposed to just topped up)
  • VABSSCI and VABSSCO are NOT vent valves, they are isolating the annuli of the IOO and OMC chambers from the BS chamber annuli. So next time we vent, we should fix this!
  • Leak rate of 3-5 mTorr/day is "normal" once the system has been pumped for a few days. Steve agrees that our observations of the main volume pressure increase is expected, given that we were at atmosphere.
  • Regarding the upcoming CES construction
    • Steve recommends keeping the door along the east arm, as it is useful for bringing equipment into the lab (end door access is limited because of end optical tables)
    • Particle counter data logging should be resumed before the construction starts, so that we can monitor if the lab is getting dirtier
  • OSEM filters (new ones, i.e. made according to the specs in D000209) are in the Clean Cabinet (EX). They are individually packaged in little capsules, see Attachment #1. So the ones I installed were actually a 2002 vintage. We have 50 pcs, enough to install new ones on all the core optics + spares.
  14437   Wed Feb 6 10:07:23 2019   Chub   Update   pre-construction inspection

The Central Plant building will be undergoing seismic upgrades in the near future.  The adjoining north wall along the Y arm will be the first to have this work done, from inside the Central Plant.  Project manager Eugene Kim has explained the work to me and also noted our concerns.  He assured me that the seismic noise from the construction will be minimized and we will always be contacted when the heaviest construction is to be done.

Tomorrow at 11am, I will bring Mr. Kim and a few others from the construction team to look at the wall from inside the lab.  If you have any questions or concerns that you want to have addressed, please email them to me or contact Mr. Kim directly at x4860 or through email at eugene.kim@caltech.edu . 

  14436   Tue Feb 5 19:30:14 2019   gautam   Update   VAC   Main volume at 20 uTorr

Pumpdown looks healthy, so I'm leaving the TPs on overnight. At some point, we should probably get the RGA going again. I don't know that we have a "reference" RGA trace that we can compare the scan to, should check with Steve. The high power (1 W) beam has not yet been sent into the vacuum, we should probably add the interlock condition that shuts off the PSL shutter before that.

Attachment 1: PD83.png
PD83.png
  14435   Tue Feb 5 10:22:03 2019   chub   Update   oil added to RP-1 & 3

I added lubricating oil to roughing pumps RP1 and RP3 yesterday and this morning.  Also, I found a nearly full 5 gallon jug of grade 19 oil in the lab.  This should set us up for quite a while.  If you need to add oil to the roughing pumps, use the oil in the quart bottle in the flammables cabinet.  It is labeled as Leybold HE-175 Vacuum Pump Oil.  This bottle is small enough to fill the pumps in close quarters.

  14434   Tue Feb 5 10:11:30 2019   gautam   Update   VAC   leak tests complete, pumpdown 83 resumed

I guess we forgot to close V5, so we were indeed pumping on the ITMY and ETMY annuli, but the other three were isolated and suggest a leak rate of ~200-300 mtorr/day - see Attachment #1 (consistent with my earlier post).

As for the main volume - according to CC1, the pressure saturates at ~250 uTorr and is stable, while the Pirani P1a reports ~100x that pressure. I guess the cold-cathode gauge is supposed to be more accurate at low pressures, but how well do we believe the calibration on either gauge? Either way, based on last night's test (see Attachment #2), we can set an upper limit of 12 mtorr/day. This is 2-3x the number Steve said is normal, but perhaps this is down to the fact that the outgassing from the main volume is higher immediately after a vent and in-chamber work. It is also a 5x lower rate of pressure increase than what was observed on Feb 2.

I am resuming the pumping down with the turbo-pumps; let's see how long we take to get down to the nominal operating pressure of 8e-6 torr, it usually takes ~1 week. V1, VASV, VASE and VABS were opened at 1030am PST. Per Chub's request (see #14435), I ran RP1 and RP3 for ~30 seconds, he will check if the oil level has changed.

Quote:
 

Let's leave things in this state overnight - V1 and V5 closed so that neither the main volume nor the annuli are being pumped, and get some baseline numbers for what the outgassing rate is.

Attachment 1: Annuli.png
Annuli.png
Attachment 2: MainVol.png
MainVol.png
  14433   Mon Feb 4 20:13:39 2019   gautam   Update   SUS   ETMY suspension oddness

I looked at the free-swinging sensor data from two nights ago, and am struggling with the interpretation. 

[Attachment #1] - Fine resolution spectral densities of the 5 shadow sensor signals (y-axis assumes 1ct ~1um). The puzzling feature is that there are only 3 resonant peaks visible around the 1 Hz region, whereas we would expect 4 (PIT, YAW, POS and SIDE). afaik, Lydia looked into the ETMY suspension diagonalization last, in 2016. Compared to her plots (which are in the Euler basis while mine are in the OSEM basis), the ~0.73 Hz peak is nowhere to be seen. I also think the frequency resolution (<1 mHz) is good enough to be able to resolve two closely spaced peaks, so it looks like due to some reason (mechanical or otherwise), there are only 3 independent modes being sensed around 1 Hz.

[Attachment #2] - Koji arrived and we looked at some transfer functions to see if we could make sense of all this. During this investigation, we also think that the UL coil actuator electronics chain has some problem. This test was done by driving the individual coils and looking for the 1/f^2 pendulum transfer function shape in the Oplev error signals. The ~ 4dB difference between UR/LL and LR is due to a gain imbalance in the coil output filter bank, once we have solved the other problems, we can reset the individual coil balancing using this measurement technique.

[Attachment #3] - Downsampled time-series of the data used to make Attachment #1. The ringdown looks pretty clean, I don't see any evidence of any stuck magnets looking at these signals. The X-axis is in kilo-seconds.

We found that the POS and SIDE local damping loops do not result in instability building up. So one option is to use only Oplevs for angular control, while using shadow-sensor damping for POS and SIDE.

Attachment 1: ETMY_sensors_1_Feb_2019_2230_PST.pdf
ETMY_sensors_1_Feb_2019_2230_PST.pdf
Attachment 2: ETMY_UL.pdf
ETMY_UL.pdf
Attachment 3: ETMY_sensors_timeDomain.pdf
ETMY_sensors_timeDomain.pdf
  14432   Mon Feb 4 12:23:24 2019   gautam   Update   VAC   pumpdown 83 - leak tests

[koji, gautam]

As planned, we valved off the main volume and the annuli from the turbo-pumps at ~730 PM PST. At this time, the main volume pressure was 30 uTorr. It started rising at a rate of ~200 uTorr/hr, which translates to ~5 mtorr/day, which is in the ballpark of what Steve said is "normal". However, the calibration of the Hornet gauge seems to be piecewise-linear (see Attachment #1), so we will have to observe overnight to get a better handle on this number.

We decided to vent the IY and EY chamber annular volumes, and check if this made any dramatic change in the main volume pressure increase rate, which would presumably signal a leak from the outside. However, we saw no such increase - so right now, the working hypothesis is still that the main volume pressure increase is being driven by outgassing of something inside the vacuum system.

Let's leave things in this state overnight - V1 and V5 closed so that neither the main volume nor the annuli are being pumped, and get some baseline numbers for what the outgassing rate is.

Attachment 1: PD83.png
PD83.png
  14431   Sun Feb 3 20:52:34 2019   Koji   Update   VAC   overnight leak rate

We can pump down (or vent) annuli. If this is the leak between the main volume and the annuli, we will be able to see the effect on the leak rate. If this is the leak of an  outer o-ring, again pumping down (or venting) of the annuli should temporarily decrease (or increase) the leak rate..., I guess. If the leak rate is not dependent on the pressure of the annuli, we can conclude that it is internal outgassing.

  14430   Sun Feb 3 15:15:21 2019   gautam   Update   VAC   overnight leak rate

I looked into this a bit today. Did a walkthrough of the lab, didn't hear any obvious hissing (makes sense, that presumably would signal a much larger leak rate).

Attachment #1: Data from the 30 ksec we had the main vol valved off on Jan 10, but from the gauges we have running right now (the CC gauges have not had their HV enabled yet so we don't have that readback).

Attachment #2: Data from ~150 ksec from Friday night till now.

Interpretation: The number quoted from Jan 10 is from the cold-cathode gauge (~20 utorr increase). In the same period, the Pirani gauge reports an increase of ~5 mtorr (=250x the number reported by the cold-cathode gauge). So which gauge do we trust in this regime more? Additionally, the rate at which the annuli pressures are increasing seems consistent between Jan 10 and now, at ~100 mtorr every 30 ksec.

I don't think this is conclusive, but at least the leak rates between Jan 10 and now don't seem that different for the annuli pressures. Moreover, for the Jan 10 pumpdown, we had the IFO at low pressure for several days over the Christmas break, which presumably gave time for some outgassing which was cleaned up by the TPs on Jan 10, whereas for this current pumpdown, we don't have that luxury.

Do we want to do a systematic leak check before resuming the pumpdown on Monday? The main differences in vacuum I can think of are

  1. Two pieces of Kapton tape are now in the EY chamber.
  2. Possible residue from cleaning solvents in the IY and EY chambers is still outgassing.

This entry by Steve says that the "expected" outgassing rate is 3-5 mtorr per day, which doesn't match either the current observation or that from Jan 10.

Attachment 1: Jan10_data.png
Jan10_data.png
Attachment 2: Feb1_data.png
Feb1_data.png
  14429   Sat Feb 2 21:53:24 2019   Koji   Update   VAC   overnight leak rate

The pressure of the main volume increased from ~1mtorr to 50mtorr for the past 24 hours (86ksec). This rate is about x1000 of the reported number on Jan 10. Do we suspect vacuum leak?
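A rough cross-check of the factor-of-~1000 comparison, using the Jan 10 figure quoted below and the ~33,000 liter IFO volume assumed in that entry:

\left.\frac{dP}{dt}\right|_{\mathrm{Jan\,10}} \approx \frac{20\ \mu\mathrm{torr\,L/s}}{33{,}000\ \mathrm{L}} \times 86400\ \mathrm{s/day} \approx 0.05\ \mathrm{mtorr/day}, \qquad \left.\frac{dP}{dt}\right|_{\mathrm{now}} \approx \frac{50\ \mathrm{mtorr} - 1\ \mathrm{mtorr}}{86.4\ \mathrm{ks}} \approx 50\ \mathrm{mtorr/day},

i.e. a ratio of ~10^3, consistent with the statement above.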

Quote:

Overnight, the pressure increased from 247 uTorr to 264 uTorr over a period of 30000 seconds. Assuming an IFO volume of 33,000 liters, this corresponds to an average leak rate of ~20 uTorr L / s.

 

Attachment 1: Screen_Shot_2019-02-02_at_21.49.33.png
Screen_Shot_2019-02-02_at_21.49.33.png
  14428   Fri Feb 1 21:52:57 2019   gautam   Update   SUS   Pumpdown 83 underway

[jon, koji, gautam]

  1. IFO is at ~1 mtorr, but pressure is slowly rising because of outgassing presumably (we valved off the turbos from the main volume)
  2. Everything went smoothly -
    • 760 torr to 500 mtorr took ~7 hours (we deliberately kept a slow pump rate)
    • TP3 current was found to rise above 1 A easily as we opened RV2 during the turbo pumping phase, particularly in going from 500 mtorr to 10 mtorr, so we just ran TP2 more aggressively rather than change the interlock condition.
    • The pumpspool is isolated from the main volume - TP1-3 are running (TP2 and TP3 are on Standby mode) but are only exposed to the small pumpspool volume and RGA volume.
    • RP1 and RP3 were turned off, and the manual roughing line was disconnected.
    • We will resume the pumping on Monday.

I'm leaving all suspension watchdogs tripped over the weekend as part of the suspension diagonalization campaign...

  14427   Fri Feb 1 14:44:14 2019   gautam   Update   SUS   Y arm FC cleaning and reinstall

[Attachment #1]: ITMY HR face after cleaning. I determined this to be sufficiently clean and re-installed the optic.

[Attachment #2]: ETMY HR face after cleaning. This is what the HR face looks like after 3 rounds of First-Contact application. After the first round, we noticed some arc-shaped lines near the center of the optic's clear aperture. We were worried this was a scratch, but we now believe it to be First-Contact residue, because we were able to remove it after drag wiping with acetone and isopropanol. However, we mistrust the quality of the solvents used - they are not any special dehydrated kind, and we are looking into acquiring some dehydrated solvents for future cleaning efforts.

[Attachment #3]: Top view of ETMY cage meant to show increased clearance between the IFO axis and the elliptical reflector.

Many more photos (including table leveling checks) on the google-photos page for this vent. The estimated time between F.C. peeling and pumpdown is ~24 hours for ITMY and ~15 hours for ETMY, but for the former, the heavy doors were put on ~1 hour after the peeling.

The first task is to fix the damping of ETMY.

Attachment 1: IMG_5974.JPG
IMG_5974.JPG
Attachment 2: IMG_5986.JPG
IMG_5986.JPG
Attachment 3: IMG_5992.JPG
IMG_5992.JPG
  14426   Fri Feb 1 13:16:50 2019   gautam   Update   SUS   Pumpdown 83 underway

[chub, bob, gautam]

  1. Steps described in previous elog were carried out
  2. EY heavy door was put on at about 1130am.
  3. Pumpdown commenced at ~noon. We are going down at ~3 torr/min.
  14425   Fri Feb 1 01:24:06 2019   gautam   Update   SUS   Almost ready for pumpdown tomorrow

[koji, chub, jon, rana, gautam]

Full story tomorrow, but we went through most of the required pre close-up checks/tasks (i.e. both arms were locked, PRC and SRC cavity flashes were observed). Tomorrow, it remains to 

  1. Confirm clearance between elliptical reflector and ETMY
  2. Confirm leveling of ETMY table
  3. Take pics of ETMY table
  4. Put heavy door on ETMY chamber
  5. Pump down

The ETMY suspension chain needs to be re-characterized (neither the old settings, nor a +/- 1 gain setting worked well for us tonight), but this can be done once we are back under vacuum.

  14424   Wed Jan 30 19:25:40 2019   gautam   Update   SUS   Xarm cavity alignment

Squishing cables at the ITMX satellite box seems to have fixed the wandering ITM that I observed yesterday - the sooner we are rid of these evil connectors the better.

I had changed the input pointing of the green injection from EX to mark a "good" alignment of the cavity axis, so I used the green beam to try and recover the X arm alignment. After some tweaking of the ITM and ETM angle bias voltages, I was able to get good GTRX values [Attachment #1], and also see clear evidence of (admittedly weak) IR resonances in TRX [Attachment #2]. I can't see the reflection from ITMX on the AS camera, but I suspect this is because the ITMY cage is in the way. This will likely have to be redone tomorrow after setting the input pointing for the Y arm cavity axis, but hopefully things will converge faster and we can close up sooner. Closing the PSL shutter for now...


I also rebooted the unresponsive c1susaux to facilitate the alignment work tomorrow.

Attachment 1: Xarm.png
Xarm.png
Attachment 2: Xarm_IR.png
Xarm_IR.png
  14423   Wed Jan 30 11:54:24 2019   gautam   Update   SUS   More alignment prep

[chub, gautam]

  1. ETMY cage was wiped down
    • Targeted potential areas where dust could drift off from and get attracted to a charged HR surface
    • These areas were surprisingly dusty, even left a grey mark on the wipe [Attachment #1] - we think we did a sufficiently thorough job, but unclear if this helps the loss numbers
    • More pictures are on gPhoto
  2. Filters on SD and LR OSEMs were replaced - the open shadow sensor voltages with filters in/out are consistent with the T>95% coating spec.
  3. IPANG beam position was checked 
    • It is already too high, missing the first steering optic by ~0.5 inch, not the greatest photo but conclusion holds [Attachment #2].
    • I think we shouldn't worry about it for this pumpdown, we can fix it when we put in the new PR3.
  4. Cage wiping procedure was repeated on ITMY
    • The cage was much dustier than ETMY
    • However, the optic itself (barrel and edge of HR face) was cleaner
    • All accessible areas were wiped with isopropanol
    • Before/after pics are on gPhoto (even after cleaning, there are some marks on the suspension that looks like dust, but these are machining marks)

Procedure tomorrow [comments / suggestions welcome]:

  1. Start with IY chamber
    • Peel first contact with TopGun jet flowing
    • Inspect optic face with green flashlight to check for residual First Contact
    • Replace ITMY suspension cage in its position, clamp it down
    • Release ITMY from its EQ stops
    • Replace OSEMs in ITMY cage, best effort to recover previous alignment of OSEMs in their holders (I have a photo before removal of OSEMs), which supposedly minimized the coupling of the B-R modes into the shadow sensor signals
    • Best effort to have shadow sensor PD outputs at half their fully open voltages (with DC bias voltage applied)
    • Quick check that we are hitting the center of the ITM with the alignment tool
    • Check that the Oplev HeNe is reasonably centered on steering mirrors
    • Tie down OSEM cabling to the ITMY cage with clean copper wire
    • Replace the OSEM wiring tower
    • Release the SRM from its EQ stops
    • Check table leveling
    • Take pictures of everything, check that we have not left any tools inside the chamber
    • Heavy doors on
  2. Next, EY chamber
    • Repeat first seven bullets from the IY chamber, :%s/ITMY/ETMY/g
    • Confirm sufficient clearance between IFO beam axis and the elliptical reflector
    • Check Oplev beam path
    • Check table leveling
    • Take pictures of everything, check that we have not left any tools inside the chamber
    • Heavy doors on
  3. IFO alignment checks - basically follow the wiki, we want to be able to lock both arms (or at least see TEM00 resonances), and see that the PRC and SRC mode flashes look reasonable.
  4. Tighten all heavy doors up
  5. Pump down

All photos have been uploaded to google photos.

Attachment 1: IMG_5958.JPG
IMG_5958.JPG
Attachment 2: IMG_5962.JPG
IMG_5962.JPG
  14422   Tue Jan 29 22:12:40 2019   gautam   Update   SUS   Alignment prep

Since we may want to close up tomorrow, I did the following prep work:

  1. Cleaned up the Y-end suspension electronics setup, connected the Sat Box back to the flange
    • The OSEMs are just sitting on the table right now, so they are just seeing the fully open voltage
    • Post filter insertion, the four face OSEMs report ~3-4% lower open-voltage values compared to before, which is compatible with the transmission spec for the filters (T>95%)
    • The side OSEM is reporting ~10% lower - perhaps I just didn't put the filter on right, something to be looked at inside the chamber
  2. Suspension watchdog restoration
    • I'd shutdown all the watchdogs during the Satellite box debacle
    • However, I left ITMY, ETMY and SRM tripped as these optics are EQ-stopped / don't have the OSEMs inserted.
  3. Checked IMC alignment
    • After some hand-alignment of the IMC, it was locked, transmission is ~1200 counts which is what I remember it being
  4. Checked X-arm alignment
    • Strictly speaking, this has to be done after setting the Y-arm alignment as that dictates the input pointing of the IMC transmission to the IFO, but I decided to have a quick look nevertheless
    • Surprisingly, ITMX damping isn't working very well it seems - the optic is clearly swinging around a lot, and the shadow sensor RMS voltage is ~10s of mV, whereas for all the other optics, it is ~1mV.
    • I'll try the usual cable squishing voodoo

Rather than try and rush and close up tomorrow, I propose spending the day tomorrow cleaning the peripheral areas of the optic, suspension cage, and chamber. Then on Thursday morning, we can replace the Y-arm optics, try and recover the cavity alignment, and then aim for a Thursday afternoon pumpdown. The main motivation is to reduce the time the optics spend in air after F.C. peeling and going to vacuum.

  14421   Tue Jan 29 17:19:16 2019   gautam   Update   Electronics   Satellite box S/N 105 repaired

[chub, koji, gautam]

Attachment #1 shows the signal routing near the Satellite box. Somehow, the female 64 pin IDC connector that brings the signals from the coil driver board wasn't mating well with the male connector on the Satellite box front panel. This is a connector-specific problem - plugging the female end into one of the male connectors inside the Satellite box yielded signal continuity. The problem was resolved by re-making both connections - by driving the EPICS bias slider through its full range, we were able to see the full voltage swing at the DB connectors going to the flange.

This kind of flakiness could be all around the lab, and could be responsible for many of the suspension "mysteries". To re-iterate, the problem seems to be the way the female sockets of the connector mate with the male pins - while the actual crimping points may look secure, there may not be signal continuity.

Now that this problem is resolved, tomorrow we will recover the cavity alignment and possibly start a pumpdown.


Unrelated to this work - the spare satellite box (S/N #100), which had a note on it that said "low voltages", was tested. The "low voltages" referred to the OSEM shadow sensor voltages being low when the LED was completely unobscured. The reason was that the mod to increase the drive current to 25 mA had not yet been implemented on this unit. I added the appropriate 806 ohm resistors, and verified that the voltages were correct, so now we have a working spare. It is stored in the "photodiode" cabinet along the east arm, together with the tester boxes. 

Attachment 1: IMG_7301.JPG
IMG_7301.JPG
  14420   Tue Jan 29 16:12:21 2019   Chub   Update

The foam in the cable tray wall passage had been falling on the floor in little bite-sized pieces, so I investigated and found a fiber cable that had been chewed/clawed through.  I didn't find any droppings anywhere in the 40m, but I decided to bait an un-set trap and see if we'd find activity around it. There has been none so far.  If there is still none tomorrow, I will move the trap and keep looking for signs of rodentia.  At the moment, the trap is in a box in front of the double doors at the north end of the control room.  Next it will be placed in the IFO room, up in the cable tray. 

gautam: the fiber that was damaged was the one from the LSC rack FiBox to the control room FiBox. So no DAFI action for a bit...

  14419   Fri Jan 25 16:14:51 2019   gautam   Update   VAC   Vacuum interlock code, N2 warning

I reset the remote of this git repo to the 40m version instead of Jon's personal one, to ensure consistency between what's on the vacuum machine and in the git repo. There is now an N2 checker python mailer that will email the 40m list if all the tank pressures are below 600 PSI (>12 hours left for someone to react before the main N2 line pressure drops and the interlocks kick in). For now, the script just runs as a cron job every 3 hours, but perhaps we should integrate it with the interlock process?
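A minimal sketch of such a checker (the channel names and mail settings below are placeholders, not what is in the vacpython repo); it is meant to be run from cron every few hours:

# Sketch of an N2 tank pressure checker/mailer (placeholder PVs and addresses).
import smtplib
from email.message import EmailMessage
import epics

TANK_CHANNELS = ["C1:Vac-N2T1_pressure", "C1:Vac-N2T2_pressure"]   # hypothetical PVs
THRESHOLD_PSI = 600.0
MAILING_LIST = "40m@example.edu"                                    # placeholder address

def check_n2():
    pressures = [epics.caget(ch) for ch in TANK_CHANNELS]
    if all(p is not None and p < THRESHOLD_PSI for p in pressures):
        msg = EmailMessage()
        msg["Subject"] = "N2 warning: all tank pressures below %d PSI" % THRESHOLD_PSI
        msg["From"] = "c1vac@example.edu"                           # placeholder sender
        msg["To"] = MAILING_LIST
        msg.set_content("Tank pressures (PSI): %s\nSwap a tank soon." % pressures)
        with smtplib.SMTP("localhost") as s:                        # assumes a local MTA
            s.send_message(msg)

if __name__ == "__main__":
    check_n2()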

Quote:

All the python code running on c1vac is archived to the git repo: 

https://git.ligo.org/40m/vacpython

  14418   Fri Jan 25 12:49:53 2019   gautam   Update   Electronics   Ethernet Power Strip IP conflict resolved

To avoid the annoying exercise of having to manually toggle the illuminators, I solved the IP conflict. Made a wiki page for the ethernet power strips since the documentation was woeful (the way the power strips are mounted in the racks, you can't even see the manufacturer/model/make). All chamber illuminators can now be turned on/off by the MEDM scripts. Note that there is a web interface available too, which can be useful in case of some python socket issues. The main lesson is: avoid using the "reset" button on the power strips, it destroys the static IP config.


Unrelated to this work: The EY laptop, asia, won't boot up anymore, with a "Fan Error" message being the red flag. I've temporarily recommissioned the vacuum rack laptop, belladonna, to be the EY machine for this vent. Can we get 3 netbooks that actually work and don't need to be tethered to a power strip for the VEA?

  14417   Thu Jan 24 22:55:50 2019   gautam   Update   Electronics   Satellite box S/N 102 investigation

I had taken Satellite box S/N 102, from the SRM suspension, down to the Y-end as part of debugging. However, at some point, I stopped getting readbacks from the shadow sensor PDs, even with the Sat. Box tester hooked up (so as to rule out anything funky with the actual OSEMs). Today evening, I did a more systematic investigation. Schematic with component references is here.

  1. Used mini-grabbers and a bench power supply to connect +/-24V to C57 and C58.
  2. Checked that all ICs were getting +/- 15 V to the supply pins.
  3. Debugged individual channels, checking voltages at various nodes
    • Found that the "PD K" bias voltage was anomalously low.
    • Found that the inverting input of U3C wasn't at ground.
    • The above findings are summarized in Attachment #2.
    • This suggested something was wrong with the Quad OpAmp LT1125 IC, so I elected to switch it out.
    • During the desoldering process, the pads for the "NC" pins came off (Attachment #1) - this has happened to me before on these old boards. I don't think I applied excess heat during the desoldering (I used 650F).
    • Replaced the IC, and measured the expected 10V at the "PD K" node.
  4. I then connected the tester box and verified all the shadow sensor channels (LED + PD) work as expected, using the front panel J3 and the "octopus cable".
  5. It remains to verify that the coil driver signals get correctly routed through the Satellite box before giving this box a pass certification.

The question remains as to what caused this failure mode - I can't think of why that particular IC was damaged during the Satellite box swapping process - is this indicative of some problem elsewhere in the ETMY OSEM/coil driver electronics chain?

Attachment 1: IMG_7294.JPG
IMG_7294.JPG
Attachment 2: D961289-B2.pdf
D961289-B2.pdf
  14416   Thu Jan 24 15:32:31 2019   gautam   Update   SUS   Y arm cavity side first contact applied

EY:

  • A clean cart was setup adjacent to the HEPA-enclosed mini cleanroom area (it cannot be inside the mini cleanroom, because of lack of space). 
  • The FC tools (first contact, acetone, beakers, brushes, PEEK mesh, clean scissors, clean tweezers, Canon camera, green flashlight) were laid out on this cart for easy access.
  • I inspected the optic - the barrel had a few specks of dust, and the outer 1.5" annular region of the HR face looked to have some streak marks
    • I was advised not to pre-wipe the HR side with any solvents
    • The FC was only applied to the central ~1-1.5" of the optic
  • After applying the FC, I spent a few minutes inspecting the status of the OSEMs 
    • Three out of the four face OSEMs, as well as the side OSEM, did not have a filter in
    • I inserted filters into them.
  • Closed up the chamber with light door, left HEPA unit on and the mini cleanroom setup intact for now. We will dismantle everything after the pumpdown.

IY:

  • Similar setup to EY was implemented
  • Removed side OSEM from ITMY.
  • Double-checked that EQ stops were engaged.
  • Moved the OSEM cable tower to free up some space for accommodating ITMY.
  • Undid the clamps of ITMY, moved it to the NE corner of the IY table.
  • Inspected the optic - it was much cleaner than the 2016 inspection, although the barrel did have some specks of dust.
  • Once again, I applied first contact to the central ~1.5" of the HR surface.
  • Checked status of filters on OSEMs - this time, only the UL coil required a filter.
    • Attachment #3 shows the sensor voltage DC level before and after the insertion of the filter. There is ~0.1% change.
    • The filters were found in a box labeled as though they were made in 2002 - but Steve tells me they are just stored in a box with that old label, and since there are >100 filters inside, he thinks they are the new ones we procured in 2016. The coating specs and type of glass used are different between the two versions.

The attached photo shows the two optics with FC applied.

My original plan was to attempt to close up tomorrow. However, we are still struggling with Satellite box issues. So rather than rush it, we will attempt to recover the Y arm cavity alignment on Monday, satellite box permitting. The main motivation is to reduce the deadtime between peeling off the F.C and starting the pumpdown. We will start working on recovering the cavity alignment once the Sat box issues are solved.

Attachment 1: Yarm_FC.pdf
Attachment 2: OSEMfilter.png
  14415   Wed Jan 23 23:12:44 2019   gautam   Update   SUS   Prep for FC cleaning

In preparation for the FC cleaning, I did the following:

  1. Set up mini-cleanroom at EY - this consists of the mobile HEPA unit put up against the chamber door, with films draped around the setup.
  2. After double-checking the table leveling, I EQ-stopped ETMY and moved it to the NE corner of the EY table, where it will be cleaned.
  3. Checked leveling of IY table - see Attachment #1.
  4. Took pictures of IY table, OSEM arrangement on ITMY.
  5. EQ-stopped ITMY and SRM.
  6. Removed the face OSEMs from ITMY (this required clipping off the copper wire used to hold the OSEM wires against the suspension cage). The side OSEM has not yet been removed because I left the allen key that is compatible with that particular screw inside the EY chamber. 
  7. To position ITMY at the edge of the IY table where we can easily clean it, we will need to move the OSEM cabling tower as we did last time. I've taken photos of its current position for now.

Tomorrow, I will start with the cleaning of ETMY HR. While the FC is drying, I will position ITMY at the edge of the IY table for cleaning (Chub will set up the mini-cleanroom at the IY table). The plan is to clean both HR surfaces and have the optics back in place by tomorrow evening. By my count, we have done everything on the list for the IY and EY chambers. I'd like to minimize the time between cleaning and pumpdown, so if all goes well (Sat Box problems notwithstanding), we will check the table leveling on Friday morning, put on the heavy doors, and at least rough the main volume down to 1 torr on Friday.

Attachment 1: IY_level_before.pdf
  14414   Wed Jan 23 18:11:56 2019   gautam   Update   Electronics   Ethernet Power Strip IP conflict

For the last week, I noticed that I was unable to turn the EY chamber illuminator on using the remote python scripts. This was turning out to be really annoying, having to turn the light on/off manually. Today, I looked into the problem and found that there is a conflict between the IP addresses of the EY Ethernet power strip (to which Chas assigned a static IP, but without documenting a detailed procedure) and the vertex area laptop, paola. The failure of the python control of the power strip coincided exactly with when Chub and I turned on paola for working at the IY chamber - but how was I supposed to know these events were correlated? I tried shutting down paola, power cycling the Ethernet power strip, and restarting the bind9 services on chiara, but remote control of the ethernet power strip remains elusive. I suspect reconfiguring the static IP for the Ethernet power strip will require some serial-port-enabled device...
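For future debugging, a quick way to catch this kind of clash - at least when both assignments are visible to the martian DNS on chiara - is to resolve the relevant hostnames and flag any IP that shows up twice. A minimal sketch is below; the hostnames are placeholders. Note that a DHCP lease colliding with a statically configured address would not show up this way and would need an ARP scan instead.

    import socket
    from collections import defaultdict

    # Placeholder hostnames - the real names live in the martian DNS config on chiara.
    HOSTS = ["paola", "ey-powerstrip", "ex-powerstrip", "belladonna"]

    def find_ip_conflicts(hosts):
        """Resolve each hostname and report any IP address claimed by more than one."""
        by_ip = defaultdict(list)
        for host in hosts:
            try:
                by_ip[socket.gethostbyname(host)].append(host)
            except socket.gaierror:
                print("could not resolve %s" % host)
        return {ip: names for ip, names in by_ip.items() if len(names) > 1}

    if __name__ == "__main__":
        print(find_ip_conflicts(HOSTS))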

  14413   Wed Jan 23 12:39:18 2019   gautam   Update   SUS   EY chamber work

While Chub is making new cables for the EY satellite box...

  1. I removed the unused optic on the NW corner of the EY table. It is stored in a clean Al-foil lined plastic box, and will be moved to the clean hardware section of the lab (along the South arm, south of MC2 chamber).
  2. Checked table leveling - Attachment #1, looked good, and has been stable over the weekend.
  3. I moved the two oversized washers on the reflector, which I believe are only used because the screw is long and wouldn't go in all the way otherwise. As shown in Attachment #2, this reduces the risk of clipping the main IFO beam axis.
  4. Yesterday, I pulled up the 40m CAD drawing, and played around with a rectangular box that approximates the extents of the elliptical reflector, to see what would be a good place to put it. I chose to go ahead with Attachment #3. Also shown is the eventually realized layout. Note that we'd actually like the dimension marked ~7.6 inches to be more like 7.1 inches, so the optic is actually ~0.5 inch ahead of the second focus of the ellipse, but I think this is good enough. 
  5. Attachment #4 shows the view of the optic as seen from the aperture on the back of the elliptical reflector. Looks good to me.
  6. Having positioned the reflector, I then inserted the heater into the aperture such that it is ~2/3rds the way in, which was the best position found by Annalisa last summer. I then ran 0.9 A of current through the heater for ~ 5 minutes. Attachment #5 shows the optic as seen with the FLIR with no heating, and after 5 minutes of heating. I'd say this is pretty unambiguous evidence that we are indeed heating the mirror. The gradient shown is significantly less pronounced than in Annalisa's simulations (~3K as opposed to 10K), but maybe the FLIR calibration isn't so great.
  7. For completeness, Attachment #6 shows the leveling of the table after this work. Nothing has changed significantly.

While the position of the reflector could possibly be optimized further, since we are already seeing a temperature gradient on the optic, I propose pushing on with other vent activities. I'm almost certain the current positioning places the optic closer to the second focus, and we already saw shifts of the HOM resonances with the old configuration, so I'd say we run with this and revisit if needed.

If Chub gives the Sat. Box the green flag, we will work on F.C.ing the mirrors in the evening, with the aim of closing up tomorrow/Friday. 

All raw images in this elog have been uploaded to the 40m google photos.

Attachment 1: leveling.pdf
Attachment 2: IMG_5930.jpg
Attachment 3: Ellipse_layout.pdf
Attachment 4: IMG_5932.jpg
Attachment 5: hotMirror.pdf
Attachment 6: EY_leveling_after.pdf
  14412   Tue Jan 22 20:45:21 2019   gautam   Update   VAC   New N2 setup

The N2 ran out this weekend (again with no reminder email - I haven't found the time to set up the Python mailer yet). So all the valves Steve and I had opened were closed by the interlocks (rightly so, that's what they are supposed to do). Chub will post an elog about the new N2 valve setup in the Drill-press room, but we now have sufficient pressure in the N2 line again, so Chub and I re-opened the valves to keep pumping on the RGA.
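For when the mailer does get set up: a minimal sketch of what it could look like is below, assuming pyepics is available on the vac machine and the N2 line pressure is exposed as an EPICS channel. The channel name, threshold, and email addresses are placeholders, not the real ones.

    import smtplib
    from email.message import EmailMessage
    from epics import caget  # pyepics, assumed installed on the vac machine

    N2_CHANNEL = "C1:Vac-N2_pressure"   # placeholder channel name
    THRESHOLD = 65.0                    # placeholder low-pressure threshold [psi]
    RECIPIENTS = ["40m@example.edu"]    # placeholder mailing list

    def check_n2_and_mail():
        """Read the N2 line pressure and email a reminder if it has dropped below threshold."""
        pressure = caget(N2_CHANNEL)
        if pressure is None or pressure > THRESHOLD:
            return
        msg = EmailMessage()
        msg["Subject"] = "N2 line pressure low: %.1f" % pressure
        msg["From"] = "n2-watchdog@example.edu"
        msg["To"] = ", ".join(RECIPIENTS)
        msg.set_content("Time to swap the N2 cylinder before the interlocks close the valves.")
        with smtplib.SMTP("localhost") as server:   # assumes a local mail relay
            server.send_message(msg)

    if __name__ == "__main__":
        check_n2_and_mail()   # e.g. run from cron every ~10 minutes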

  14411   Tue Jan 22 20:36:53 2019   gautam   Update   SUS   ETMY OSEMs faulty

Short update on latest Satellite box woes.

  1. I checked the resistance of all 5 OSEM coils on ETMY using a DB25 breakout board and a multimeter - all were between 16 and 17 ohms (measured from the cable to the vacuum flange), which I think is consistent with the expected value.
  2. Checked the bias voltage (aka slow path) from the coil driver board was reaching the coils
    • The voltages were indeed being sent out of the coil driver board - I confirmed by driving a slow sine wave and measuring at the output of the coil driver board, with all the fast outputs disabled.
    • The voltage is arriving at the 64 pin IDC connector at the Satellite box - Chub and I verified this using some mini-grabbers and leads from wirewound resistors (we don't have a breakout board for this kind of connector, would be handy to get some!)
    • However, the voltages are not being sent out through the DB25 connectors on the side of the Satellite box, at least for the LL and UR channels. UL seems to work okay.
    • This behavior is consistent with the observation that we had to apply way larger bias voltages to get the cavity axis to line up than was the nominal values - if one or more coils weren't getting their signals, it would also explain the large PIT->YAW coupling I observed using the Oplev spot and the slow bias alignment EPICS sliders.
    • This behavior is puzzling - the Sat box is just supposed to be a feed-through for the coil driver signals. We measured the resistances between the 64 pin IDC connector and the corresponding DB25 pins and got 0.2-0.3 ohms, yet the voltages fail to make it through - not sure what's going on here. We will investigate further on the electronics bench.

What's more - I did some Sat box switcheroo, swapping the SRM and ETM boxes back and forth in combination with the tester box. In the process, I seem to have broken the SRM sat box - all the shadow sensors are reporting close to 0 volts, and using the tester box I confirmed this is an electronics problem rather than some magnet skullduggery. Once we get to the bottom of the ETMY sat box, we will look at the SRM one. This is more or less the last thing to look at for this vent - once we are happy that the cavity axis can be recovered reliably, we can freeze the position of the elliptical reflector and begin the F.C.ing.

  14410   Sun Jan 20 23:41:00 2019   Jon   Omnistructure   VAC   Notes on vac serial comm, adapter wiring

I've attached my handwritten notes covering all the serial communications in the vac system, and the relevant wiring for all the adapters, etc. I'll work with Chub to produce the final documentation, but in the meantime this may be a useful reference.
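As a companion to the notes, here is a minimal sketch of how one of the serial devices might be queried from c1vac with pyserial. The port name, baud rate, and query string are placeholders - the real, device-specific values are what the handwritten notes capture.

    import serial  # pyserial

    # Placeholder settings - the actual port, baud rate, and command syntax are
    # device-specific and documented in the wiring notes.
    PORT = "/dev/ttyUSB0"
    BAUD = 9600
    QUERY = b"#RD\r"   # hypothetical "read" command

    def query_device(port=PORT, baud=BAUD, cmd=QUERY):
        """Send a single query over the serial line and return the raw reply."""
        with serial.Serial(port, baudrate=baud, timeout=2) as ser:
            ser.write(cmd)
            return ser.readline().decode(errors="replace").strip()

    if __name__ == "__main__":
        print(query_device())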

Attachment 1: Jon_wiring_notes.tar.gz
  14409   Sat Jan 19 15:33:18 2019   gautam   Update   SUS   ETMY OSEMs faulty

As I suspected, after diagnosis with the tester box, the fully-open DC voltages on the two problematic channels, LL and UR, were restored once I replaced the LM6321 ICs in those two channel paths. However, I've been puzzled by the inability to turn on the Oplev loops on ETMY. Furthermore, the DC bias voltages required to get ETMY to line up with the cavity axis seemed excessively large, particularly since we seemed to have improved the table levelling.

I suspected that the problem with the OSEMs hasn't been fully resolved, so on Thursday night, I turned off the ETMY watchdog, kicked the optic, and let it ringdown. Then I looked at the time-series (Attachment #1) and spectra (Attachment #2) of the ringdowns. Clearly, the LL channel seems to saturate at the lower end at ~440 counts. Moreover, in the time domain, it looks like the other channels see the ringdown cleanly, but I don't see the various suspension eigenmodes in any of the sensor signals. I confirmed that all the magnets are still attached to the optic, and that the EQ stops are well clear of the optic, so I'm inclined to think that this behavior is due to an electrical fault rather than a mechanical one.

For now, I'll start by repeating the ringdown with the Satellite Box swapped out (for the SRM one) and see if that fixes the problem.
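The kind of saturation check and spectra shown in Attachments #1/#2 can be scripted along the lines of the sketch below, assuming the five sensor time series have already been pulled from frames into a numpy array (columns ordered UL, UR, LL, LR, SD). The sample rate is a placeholder, and the 440-count rail value is just the level seen in this test.

    import numpy as np
    from scipy.signal import welch

    FS = 2048.0   # placeholder sample rate [Hz] for the OSEM sensor channels
    SENSORS = ["UL", "UR", "LL", "LR", "SD"]

    def sensor_asds(data, fs=FS):
        """Amplitude spectral density of each OSEM sensor column of `data` [counts/rtHz]."""
        asds = {}
        for i, name in enumerate(SENSORS):
            f, pxx = welch(data[:, i], fs=fs, nperseg=int(16 * fs))
            asds[name] = (f, np.sqrt(pxx))
        return asds

    def check_lower_rail(data, rail=440, tol=2):
        """Flag channels with a suspicious pile-up of samples at the ~440-count lower rail."""
        flags = {}
        for i, name in enumerate(SENSORS):
            frac = np.mean(np.abs(data[:, i] - rail) < tol)
            flags[name] = frac > 0.01   # more than 1% of samples stuck at the rail
        return flags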

Quote:

While restoring OSEMs on ETMY, I noticed that the open voltages for the UR and LL OSEMs had significantly (>30%) changed from their values from ~2 years ago. The fact that it only occurred in 2 coils seemed to rule out gradual wear and tear, so I looked up the trends from Nov 25 - Nov 28 (Sundance visited on Nov 26 which is when we removed the cage). Not surprisingly, these are the exact two OSEMs that show a decrease in sensor voltage when the OSEMs were pulled out. I suspect that when I placed them in their little Al foil boats, I shorted out some contacts on the rear (this is reminiscent of the problem we had on PRM in 2016). I hope the problem is with the current buffer IC in the satellite box and not the physical diode, I'll test with the tester box and evaluate the problem further.

Attachment 1: Screen_Shot_2019-01-19_at_3.32.35_PM.png
Attachment 2: ETMY_sensors_1231832635.pdf