  16266   Thu Jul 29 14:51:39 2021   Paco | Update | Optical Levers | Recenter OpLevs

[yehonathan, anchal, paco]

Yesterday around 9:30 pm, we centered the BS, ITMY, ETMY, ITMX, and ETMX oplevs (in that order) on their respective QPDs by turning the last mirror before each QPD. We did this after running the ASS dither in the XARM/YARM configurations to use as the alignment reference, in preparation for PRFPMI lock acquisition, which we had to stop due to an earthquake around midnight.

  16265   Wed Jul 28 20:20:09 2021   Yehonathan | Update | General | The temperature sensors and function generator have arrived in the lab

I put the temperature sensors box on Anchal's table (attachment 1) and the function generator on the table in front of the c1auxey Acromag chassis (attachment 2).

 

Attachment 1: 20210728_201313.jpg
Attachment 2: 20210728_201607.jpg
  16264   Wed Jul 28 17:10:24 2021   Anchal | Update | LSC | Schnupp asymmetry

[Anchal, Paco]

I redid the measurement of Schnupp asymmetry today and found it to be 3.8 cm \pm 0.9 cm.


Method

  • One of the arms is misaligned at both the ITM and ETM.
  • The other arm is locked and aligned using ASS.
  • The SRCL oscillator's output is changed to the ETM of the chosen arm.
  • The AS55_Q channel in demodulation of SRCL oscillator is configured (phase corrected) so that all signal comes in C1:CAL-SENSMAT_SRCL_AS55_Q_DEMOD_I_OUT.
  • The rotation angle of AS55 RFPD is scanned and the C1:CAL-SENSMAT_SRCL_AS55_Q_DEMOD_I_OUT is averaged over 10s after waiting for 5s to let the transients pass.
  • This data is used to find the zero crossing of AS55_Q signal when light is coming from one particular arm only.
  • The same is repeated for the other arm.
  • The difference in the zero-crossing phase angles is twice the phase accumulated by a 55 MHz signal in travelling the length difference between the two Michelson arms (BS to ITM), i.e. the Schnupp asymmetry.

I measured a phase difference of 5 \pm 1 degrees between the two paths.

The uncertainty in this measurement is much larger than in gautam's measurement in 15956. I'm not sure yet why, but will look into it.
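For reference, here is a minimal Python sketch of the conversion from the measured demodulation-phase difference to the Schnupp asymmetry. It assumes the nominal 55 MHz sideband frequency and is not the analysis code we actually ran; the numbers are the values quoted above.

import numpy as np

c = 299792458.0      # speed of light [m/s]
f_mod = 55e6         # nominal 55 MHz sideband frequency [Hz] (assumed)

dphi = 5.0           # measured zero-crossing phase difference between the two arms [deg]
dphi_err = 1.0       # its uncertainty [deg]

# The measured phase difference is twice the one-way phase picked up over the
# Schnupp length difference, so L = (dphi/2) * c / (2*pi*f_mod).
L = np.deg2rad(dphi / 2.0) * c / (2 * np.pi * f_mod)
L_err = np.deg2rad(dphi_err / 2.0) * c / (2 * np.pi * f_mod)

print(f"Schnupp asymmetry ~ {L*100:.1f} +/- {L_err*100:.1f} cm")  # ~3.8 cm (simple error propagation)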

 

Quote:

I used the Valera technique to measure the Schnupp asymmetry to be \approx 3.5 \, \mathrm{cm}, see Attachment #1. The measured data are shown as points, and the zero crossing is estimated using a linear fit. I repeated the measurement 3 times for each arm to see if I get consistent results - seems like I do. Subtle effects like possible differential detuning of each arm cavity (since the measurement is done one arm at a time) are not included in the error analysis, but I think it's not controversial to say that our Schnupp asymmetry has not changed by a huge amount from past measurements. Jamie set a pretty high bar with his plot which I've tried to live up to.

 

Attachment 1: Lsch.pdf
  16263   Wed Jul 28 12:47:52 2021   Yehonathan | Update | CDS | Opto-isolator for c1auxey

To simulate a differential output I used two power supplies connected in series. The outer connectors were used as the outputs and the common connector was connected to the ground and used as a reference. I hooked these outputs to one of the differential analog channels and measured it over time using Striptool. The setup is shown in attachment 3.

I tested two cases: with the reference disconnected (attachment 1), and connected (attachment 2). Clearly, the unreferenced case is far too noisy.

Attachment 1: SUS-ETMY_SparePDMon0_NoRef.png
Attachment 2: SUS-ETMY_SparePDMon0_Ref_WithGND.png
Attachment 3: DifferentialOutputTest.png
  16262   Wed Jul 28 12:00:35 2021   Yehonathan | Update | BHD | SOS assembly

After receiving two new tubes of EP-30 I resumed the gluing activities. I made a spreadsheet to track the assemblies that have been made, their position on the metal sheet in the cleanroom, their magnetic field, and the batch number.

I made another batch of 6 magnets yesterday (4th batch); an assembly from the 2nd batch is currently being tested for bonding strength.

One thing that we overlooked in calculating the amount of glue needed is that, in addition to the minimum 8 g of EP-30 needed for every gluing session, about 4 g of EP-30 is wasted in the mixing tube. That means 12 g of EP-30 is used in every gluing session. We need 5 more batches, so at least 60 g of EP-30 is needed. Luckily, we bought two tubes of 50 g each.

  16261   Tue Jul 27 23:04:37 2021   Anchal | Update | LSC | 40 meter party

[ian, anchal, paco]

After our second attempt at locking PRFPMI tonight, we tried to restore the XARM and YARM locks to IR by clicking on IFO_CONFIGURE>Restore XARM (POX) and IFO_CONFIGURE>Restore YARM (POY), but the arms did not lock. The green lasers were locked to the arms at maximum power, so the relative alignment of each cavity was OK. We were also able to lock PRMI using IFO_CONFIGURE>Restore PRMI carrier.

This was very weird to us. We were pretty sure that the alignment was correct, so we decided to check the POX/POY signal chain. There was essentially no signal coming in at POX11, and there was a -100 offset on it. We could see some PDH signal on POY11, but not enough to catch lock.

We tried running IFO_CONFIGURE>LSC OFFSETS to cancel out any dark current DC offsets. The changes made by the script are shown in attachment 1.

We went to check the tables and found no light visible on beam finder cards at POX11 or POY11. We found that ITMX was stuck on one of the coils. We unstuck it using the shaking method. The OPLEV loops on ITMX could not be switched on after this, as the OPLEV servos were railing at their limits. But when we ran Restore XARM (POX) again, they started working fine. Something is done by this script that we are not aware of.

We're stopping here. We still can not lock any of the single arms.


Wed Jul 28 11:19:00 2021 Update:

[gautam, paco]

Gautam found that the restoring of POX/POY failed to restore the whitening filter gains on POX11 / POY11. These are meant to be restored to 30 dB and 18 dB for POX11 and POY11 respectively, but were set to 0 dB, to the detriment of any POX/POY triggering/locking. The reason these gains are lowered is to avoid saturation during lock acquisition. Yesterday, burt-restore didn't work because we restored c1lscepics.snap, but said gains actually live in c1lscaux.snap. After manually restoring the POX11 and POY11 whitening filter gains, gautam ran the LSCOffsets script. The XARM and YARM locked quickly after we restored these settings.

The root of our issue may be that we didn't run the CARM & DARM watch script (which can be accessed from the ALS/Watch Scripts in medm). Gautam added a line to the Transition_IR_ALS.py script so that the watch script is run automatically.

Attachment 1: Screenshot_2021-07-27_22-19-58.png
  16260   Tue Jul 27 20:12:53 2021   Koji | Update | BHD | SOS assembly

1 or 2. The stained ones are just fine. If you find the vented 1/4-20 screws in the clean room, you can use them.

For the 28 screws, yeah find some spares in the clean room (faster), otherwise just order.

  16259   Tue Jul 27 17:14:18 2021   Yehonathan | Update | BHD | SOS assembly

Jordan has made 1/4" tap holes in the lower EQ stop holders (attachment). The 1/4" stops (schematics) fit nicely in them. Also, they are about the same length as the small EQ stops, so they can be used.

However, counting all the 1/4"-3/4" vented screws we have shows that we are missing 2 screws to cover all the 7 SOSs. We can either:

1. Order new vented screws.

2. Use 2 old (stained but clean) EQ stops.

3. Drill vent holes into existing 1/4"-3/4" screws and clean them.

4. Use small EQ stops for one SOS.

etc.

Also, I found a mistake in the schematics of the SOS tower. The 4-40 screws used to hold the lower EQ stop holders should be SS and not silver plated as noted. I'll have to find some (28) spares in the cleanroom or order new ones.

 

Attachment 1: 20210727_154506.png
  16257   Mon Jul 26 17:34:23 2021   Paco | Update | Loss Measurement | Loss measurement

[gautam, yehonathan, paco]

We went back to the loss data from last week and more carefully estimated the ARM loss uncertainties.

Previously, we simply stitched all N=16 repetitions into a single time series and computed the loss: see Attachment 1 for one such YARM loss data set. The mean and stdev of this long time series give the loss quoted last time. We knew that the uncertainty was almost certainly overestimated, as different realizations need not sample similar alignment conditions and are sensitive to different imperfections (e.g. beam angular motion, unnormalizable power fluctuations, etc.).


Today we analyzed the locked/misaligned cycles individually. From each cycle it is possible to obtain a mean value of the loss as well as a std dev *across the duration of the trace*, but because we have a measurement ensemble, it is also possible to obtain an ensemble-averaged mean and a statistical uncertainty estimate *across the independent cycle realizations*. While the mean values don't change much, the latter estimate gives a much smaller statistical uncertainty. We obtain an XARM loss of 37.6 \pm 2.6 ppm and a YARM loss of 38.9 \pm 0.6 ppm. To make the distinction clearer, Attachment 2 and Attachment 3 show the YARM and XARM loss measurement ensembles respectively, with single-realization (time-series) standard deviations as vertical error bars and the 1-sigma statistical uncertainty estimate as a filled color band. Note that the XARM loss drifts across different realizations (which happen to be ordered in time), which we think arises from inconsistent ASS dither alignment convergence. This is yet to be tested.
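To make the distinction concrete, here is a minimal sketch (with placeholder data, not the actual analysis script) of how the cycle-to-cycle statistics differ from the single-trace spread:

import numpy as np

# losses[i] = loss time series (ppm) from the i-th locked/misaligned cycle;
# placeholder data standing in for the N = 16 measured realizations
rng = np.random.default_rng(0)
losses = [38.0 + rng.normal(0.0, 3.0, size=300) for _ in range(16)]

cycle_means = np.array([np.mean(l) for l in losses])  # one mean per realization
cycle_stds = np.array([np.std(l) for l in losses])    # spread *within* each trace

ensemble_mean = np.mean(cycle_means)
# statistical uncertainty *across* independent realizations (standard error of the mean)
ensemble_err = np.std(cycle_means, ddof=1) / np.sqrt(len(cycle_means))

print(f"loss = {ensemble_mean:.1f} +/- {ensemble_err:.1f} ppm "
      f"(typical within-trace std: {cycle_stds.mean():.1f} ppm)")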


For budgeting the excessive uncertainties from a single locked/misaligned cycle, we could look at beam pointing, angular drift, power, and systematic differences in the paths from both reflection signals. We should be able to estimate the power fluctuations by looking at the recorded arm transmissions, the recorded MC transmission, PD technical noise, etc... and we might be able to correlate recorded oplev signals with the reflection data to identify angular drift. We have not done this yet.

Attachment 1: LossMeasurement_RawData.pdf
Attachment 2: YARM_loss_stats.pdf
Attachment 3: XARM_loss_stats.pdf
  16256   Sun Jul 25 20:41:47 2021   rana | Update | Loss Measurement | Loss measurement

What are the quantitative root causes for why the statistical uncertainty is so large? It's larger than 1/sqrt(N).

  16255   Sun Jul 25 18:21:10 2021   Koji | Update | General | Canon camera / small silver tripod / macro zoom lens / LED ring light returned / Electronics borrowed

Camera and accessories returned

One HAM-A coildriver and one sat amp borrowed -> QIL

https://nodus.ligo.caltech.edu:8081/QIL/2616

 

  16254   Thu Jul 22 16:06:10 2021   Paco | Update | Loss Measurement | Loss measurement

[yehonathan, anchal, paco, gautam]

We concluded estimating the XARM and YARM losses. The hardware configuration from yesterday remains, but we repeated the measurements because we realized our REFL55_I_ERR and REFL55_Q_ERR signals representing the PD520 and MC_TRANS were scaled, offset, and rotated in a way that wasn't trivially undone by our postprocessing scripts... Another caveat that we encountered today was the need to add a "macroscopic" misalignment to the ITMs when doing the measurement to avoid any accidental resonances.

The final measurements were done with 16 repetitions, 30 second duration, and the logfiles are under scripts/lossmap_scripts/armLoss/logs/20210722_1423.txt and scripts/lossmap_scripts/armLoss/logs/20210722_1513.txt

Finally, the estimated YARM loss is 39\pm7 ppm, while the estimated XARM loss is 38\pm8 ppm. This is consistent with the inferred PRC gain from Monday and a PRM loss of ~ 2%.


Future measurements may want to look into slow drift of the locked vs misaligned traces (systematic errors?) and a better way of estimating the statistical uncertainty (e.g. by splitting the raw time traces into short segments)

  16253   Wed Jul 21 18:08:35 2021   yehonathan | Update | Loss Measurement | Loss measurement

{Gautam, Yehonathan, Anchal, Paco}

We prepared for the loss measurement using DC reflection method. We did the following changes:

1. REFL55_Q was disconnected and replaced with MC_T cable coming from the PD on the MC2 table. The cable has a red tag on it. Consequently we lost the AS beam. We realigned the optics and regained arm locks. The spot on the AS QPD had to be corrected.

2. We tried using AS55 as the PD for the DC measurement but we got ratios of ~ 0.97 which implies losses of more than 100 ppm. We decided to go with the traditional PD520 used for these measurements in the past.

3. We placed the PD520 used for loss measurements in front of the AS55 PD and optimized its position.

4. AS110 cable was disconnected from the PD and connected to PD520 to be used as the loss measurement cable.

5. In 1Y2 rack, AS110 PD cable was disconnected, REFL55_I was disconnected and AS110 cable was connected to REFL55_I channel.

So for the test, the MC transmission was measured at REFL55_Q and the AS DC was measured at REFL55_I.

We used the scripts/lossmap_scripts/armLoss/measArmLoss.py script. Note that this script assumes that you begin with the arm locked.

We are leaving the IFO in the configuration described above overnight and we plan to measure the XARM loss early AM. After which we shall restore the affected electrical and optical paths.


We ran the /scripts/lossmap_scripts/armLoss/measureArmLoss.py script in pianosa with 25 repetitions and a 30 s "duty cycle" (wait time) for the Y arm. Preliminary results give an estimated individual arm loss of ~ 30 ppm (on both X/Y arms) but we will provide a better estimate with this measurement. 

  16252   Wed Jul 21 14:50:23 2021   Koji | Update | SUS | New electronics

Received:

Jun 29, 2021 BIO I/F 6 units
Jul 19, 2021 PZT Drivers x2 / QPD Transimpedance amp x2

 

Attachment 1: P_20210629_183950.jpeg
Attachment 2: P_20210719_135938.jpeg
  16251   Mon Jul 19 22:16:08 2021   paco | Update | LSC | PRFPMI locking

[gautam, paco]

Gautam managed to lock PRFPMI a little before ~ 22:00 local time. The ALS to RF handoff logic was found to be repeatable, which enabled us to lock a total of 4 times this evening. Under this nominal state, we can work on PRFPMI to narrow down less known issues and carry out systematic optimization. The second time we achieved lock, we ran sensing lines before entering the ASC stage (which we knew would destroy the lock), and offline analysis of the sensing matrix is pending (gpstime = 1310792709 + 5 min).

Things to note:

(a) there is an unexpected offset suggesting that the ALS and RF disagreed on what the lock setpoint should be, and it is still unclear where the offset is coming from.

(b) the first time the lock was reached, the ASC up stage destroyed it, suggesting these loops need some care (we were able to engage the ASC loops at low gains (0.2 instead of 1), but as soon as we enabled some integrators, this consistently destroyed the lock).

(c) gautam had (burt) restored to the settings from back in March when the PRFPMI was last locked, suggesting there was a small but somehow significant difference in the IFO that helped today relative to last week


Take home message--> The mere fact that we were able to lock PRFPMI rules out the considerably more serious problems with the signal chain electronics or processing. This should also be a good starting point for further debugging and optimization.


gautam: the circulating power, when the ASC was tweaked, hit 400 (normalized to single arm locked with a misaligned PRM) suggesting a recycling gain of 22.5, and an average arm loss of ~30ppm round trip (assuming 2% loss in the PRC). 

  16250   Sat Jul 17 00:52:33 2021   Koji | Update | General | Canon camera / small silver tripod / macro zoom lens / LED ring light borrowed -> QIL

Canon camera / small silver tripod / macro zoom lens / LED ring light borrowed -> QIL

Attachment 1: P_20210716_213850.jpg
  16249   Fri Jul 16 16:26:50 2021   gautam | Update | Computers | Docker installed on nodus

I wanted to try hosting some docker images on a "private" server, so I installed Docker on nodus following the instructions here. The install seems to have succeeded, and as far as I can tell, none of the functionality of nodus has been disturbed (I can ssh in, access shared drive, elog seems to work fine etc). But if you find a problem, maybe this action is responsible. Note that nodus is running Scientific Linux 7.3 (Nitrogen).

  16248   Thu Jul 15 14:25:48 2021   Paco | Update | LSC | CM board

[gautam, paco]

We tested the CM board by implementing the high bandwidth IR lock (single arm). In preparation for this test we temporarily connected the POY11_Q_MON output to the CM board IN1 input and checked the YARM POY transfer function by running the AA_YARM_TEMPLATE under users/Templates/LSC/LSC_loops/YARM_POY/. We made sure the YARM dither optimized TRY so as to maximize the optical gain stage. Then we proceeded as follows:

  • From the LSC --> CM Servo screen, we controlled the REFL 1 Gain (dB) slider (nominal +25) and MC Servo IN2 Gain (dB) slider (nominal -32 dB) to transfer the low bandwidth (digital) control to the high bandwidth (analog) control of the YARM.
  • During this game, we monitored the C1:LSC-POY11_I_ERR_DQ & C1:LSC-CM_SLOW_OUT_DQ error signal channels for saturation, oscillations, or stability.
  • Once a set of gains was successful in maintaining a stable lock, we measured the OLTF using SR 785 to track the UGF as we mix the two paths.
  • Once the gains have increased, a boost and super-boost stages may be enabled as well.

Ultimately, our ability to progressively increase the control bandwidth of the YARM is a proxy for the CM board working properly. Attachment 1 shows the OLTF progression as we increased the loop's UGF. Note how, as we approached the maximum measured UGF of ~22 kHz, our phase margin decreased, signifying reduced stability margin.


At the end of this measurement, at about ~15:45, I restored the CM board IN1 input and disconnected the POY11_Q_MON.

gautam: the conclusion here is that the CM board seems to work as advertised, and it's not solely responsible for not being able to achieve the IR handoff. 

Attachment 1: high_BW_TFs.pdf
  16247   Wed Jul 14 20:42:04 2021   gautam | Update | LSC | Locking

[paco, gautam]

we decided to give the PRFPMI lock a go early-ish. Summary of findings today eve:

  1. Arms under ALS control display normal noise and loop UGFs.
  2. PRMI took longer than usual to lock (when the arms are held off resonance) - could be elevated seismic activity, but warrants measuring the PRMI loop TFs to rule out any funkiness. The MICH loop also displayed some saturation on acquisition, but after the boosts and other filters were turned on, the lock seemed robust and the in-loop noise was at the usual levels.
  3. We are gonna do the high bandwidth single arm locking experiments during daytime to rule out any issues with the CM board.

The ALS--> IR CARM handoff is the problematic step. In the past, getting over this hump has just required some systematic loop TF measurements / gain slider readjustments. We will do this in the next few days. I don't think the ALS noise is any higher than it used to be, and I could do the direct handoff as recently as March, so probably something minor has changed.

  16246   Wed Jul 14 19:21:44 2021   Koji | Update | General | Brrr

Jordan reported on Jun 18, 2021:
"HVAC tech came today, and replaced the thermostat and a coolant tube in the AC unit. It is working now and he left the thermostat set to 68F, which was what the old one was set to."

  16245   Wed Jul 14 16:19:44 2021   gautam | Update | General | Brrr

Since the repair work, the temperature is significantly cooler. Surprisingly, even at the vertex (to be more specific, inside the PSL enclosure, which for the time being is the only place where we have a logged temperature sensor, but this is not attributable to any change in the HEPA speed), the temperature is a good 3 deg C cooler than it was before the HVAC work (even though Koji's wind vane suggest the vents at the vertex were working). The setpoint for the entire lab was modified? What should the setpoint even be?

Quote:
 

- I went to the south arm. There are two big vent ducts for the outlets and intakes. Both are not flowing the air.
  The current temp at 7pm was ~30degC. Max and min were 31degC and 18degC.

- Then I went to the vertex and the east arm. The outlets and intakes are flowing.

Attachment 1: rmTemp.pdf
  16244   Mon Jul 12 18:06:25 2021   Yehonathan | Update | CDS | Opto-isolator for c1auxey

I edited /cvs/cds/caltech/target/c1auxey1/ETMYaux.db (after creating a backup) and added the spare coil driver channels.

I tested those channels using caget while fixing wiring issues. The tests were all successful. The digital output channels were tested using the Windows machine, since they are locked by some EPICS mechanism I don't yet understand.

One worrying point: I found the differential analog inputs to be unstable unless I connected the reference to some stable voltage source, unlike what previous tests showed. It was unstable (though less so) even when I connected the ref to the ground connectors on the power supplies on the workbench. This is really puzzling.

When I say unstable, I mean that most of the time the voltage reading shows the right value, but occasionally there is a transient sharp voltage drop of the order of 0.5 V. I will do a more quantitative analysis tomorrow.

 

  16243   Fri Jul 9 18:35:32 2021   Yehonathan | Update | CDS | Opto-isolator for c1auxey

Following Koji's channel list review, we made changes to the wiring spreadsheet.

Today, I implemented those changes in the Acromag chassis. I went through the channel list one by one and made sure everything is wired correctly. Additionally, since we now need all the channels the existing isolators have, I replaced the isolator that had the defective channel with a new one.

The things to do next:

1. Create entries for the spare coil driver and satellite box channels in the EPICS DB.

2. Test the spare channels.

  16242   Fri Jul 9 15:39:08 2021   Anchal | Summary | ALS | Single Arm Actuation Calibration with IR ALS Beat [Correction]

I did this analysis again by demodulating 5 s time segments of the 60 s excitation signal. The major difference is that previously I was not summing up the sine-cosine multiplied signals, so the associated error was a lot larger. If I simply multiply the whole beatnote signal with a digital LO created at the excitation frequency, divide it up into 12 segments of 5 s each, sum them up individually, and then take the mean and standard deviation, I get the answer as:
\frac{6.88 \pm 0.05}{f^2} nm/cts, as opposed to \frac{7.32 \pm 0.03}{f^2} nm/cts that was calculated using the MICH signal earlier by gautam in 13984.
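A minimal sketch of the segment-wise demodulation described above (synthetic placeholder data and an assumed sample rate; this is not the actual analysis code, and the Hz-to-nm calibration of the beatnote is taken as already applied):

import numpy as np

fs = 16384.0           # sample rate [Hz] (assumed)
f_exc = 619.0          # excitation frequency [Hz]
exc_amp = 600.0        # excitation amplitude [cts]

# placeholder for the (high-passed, nm-calibrated) beatnote signal over 60 s
rng = np.random.default_rng(0)
t = np.arange(int(60 * fs)) / fs
beat_nm = 1e-2 * np.cos(2 * np.pi * f_exc * t + 0.3) + 1e-3 * rng.standard_normal(t.size)

lo = np.exp(-2j * np.pi * f_exc * t)        # complex digital LO at the excitation frequency
demod = beat_nm * lo

seg_len = int(5 * fs)                       # 12 segments of 5 s each
segs = np.array([2 * demod[i * seg_len:(i + 1) * seg_len].mean() for i in range(12)])

# calibration factor: displacement amplitude per count of excitation, times f^2
# (so the printed number is the numerator A in A/f^2 nm/cts)
cal = np.abs(segs) / exc_amp * f_exc**2

print(f"{cal.mean():.2f} +/- {cal.std(ddof=1) / np.sqrt(cal.size):.2f}  (nm/cts x Hz^2)")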

Attachment 1 shows the scatter plot for the complex calibration factors found for the 12 segments.

My aim in the previous post was however to get a time series of the complex calibration factor from which I can take a noise spectral density measurement of the calibration. I'll still look into how I can do that. I'll have to add a low pass filter to integrate the signal. Then the noise spectrum up to the low pass pole frequency would be available. But what would this noise spectrum really mean? I still have to think a bit about it. I'll put another post soon.

Quote:

We attempted to simulate "oscillator based realtime calibration noise monitoring" in offline analysis with python. This helped us find a factor of sqrt(2) that we were missing earlier in 16171. We measured C1:ALS-BEATX_FINE_PHASE_OUT_HZ_DQ when the X-arm was locked to the main laser and the X-end green laser was locked to the XARM. An excitation signal of amplitude 600 cts was sent at 619 Hz to C1:ITMX_LSC_EXC.

Signal analysis flow:

  • The C1:ALS-BEATX_FINE_PHASE_OUT_HZ_DQ channel is calibrated to give the beatnote frequency in Hz. But we are interested in the fluctuations of this value at the excitation frequency, so the beatnote signal is first high-passed with a 50 Hz cut-off. This cut-off can be reduced a lot more in the realtime system; we only took 60 s of data and had to remove the first 2 seconds to get rid of transients, so we didn't reduce it further.
  • The I and Q demodulated beatnote signals are combined to get a complex beatnote signal amplitude at the excitation frequency.
  • This signal is divided by the excitation amplitude in cts and multiplied by the square of the excitation frequency to get the calibration factor for ITMX in units of nm/cts/Hz^2.
  • The noise spectrum of the absolute value of the calibration factor is plotted in attachment 1, along with its RMS. The calibration factor was detrended linearly so that the DC value was removed before taking the spectrum.
  • So Attachment 1 is the spectrum of the noise in the calibration factor when measured with this method. The shaded region is the 15.865% - 84.135% percentile region around the solid median curves.

We got a value of \frac{7.3 \pm 3.9}{f^2}\, \frac{nm}{cts}. The calibration factor currently in use is \frac{7.32}{f^2} nm/cts from 13984.

Next steps could be to budget this noise while we set up some way of having this calibration factor generated in realtime using oscillators in an FE model. Calibrating the actuation of a single optic in a single arm is easy, so this is a good test setup for getting a noise budget of this calibration method.

 

Attachment 1: ITMX_calibration_With_ALS_Beat.pdf
  16241   Thu Jul 8 11:20:38 2021   Anchal, Paco, Gautam | Summary | LSC | PRFPMI locking attempts

Last night Gautam walked us through the algorithm used to lock PRFPMI. We tried it several times, with the PSL HEPA filter off, between 10:00 pm July 7th and 1:00 am July 8th. None of our attempts were successful. In between, we tried to do the locking with the old IMC settings as well, but it did not change the result. In most attempts, the arms would start to resonate with PRMI at about 200 times the power compared to no power recycling, while the arms were still controlled by the ALS beatnote. The handover of lock control from "CARM+DARM locked to ALS beatnote" to "main laser + IMC locked to CARM+DARM" would always fail. More specifically, we saw that as soon as we handed over the DC control of CARM from the ALS beatnote to IR by feeding back to MC2, the lock would inevitably fail before the rest of the high-frequency control could be transferred over.

Nonetheless, Paco and I got a good demo of how to do PRFPMI locking if the need appears. With more practice and attempts, we should be able to achieve the lock at some point in the future. The issues in handover could be due to any of the following:

  • Although it seems like ALS beatnote fed control of arms keep them within the CARM IR linewidth as we see the IR resonating, there still could be some excess noise that needs to be dealt with.
  • Gautam conjectures, that the presence of high power in the arms connects the ITMs and the ETMs with an optical spring changing the transfer function of the pendula. This in turn changes the phase margin and possibly makes the CARM loop in IR PRFPMI unstable.
  • We should also investigate the loop transfer functions near the handover point for the ALS beatnote loop and the IR CARM loop and calculate the crossover frequency and gain/phase margins there.

More insights or suggestions are welcome.


Note: An earthquake came around lunch time and tripped all watchdogs. Most suspensions were recovered without issues, but ITMX appeared to be stuck. We tried the shaking procedure, but after this we couldn't restore the XARM lock. We tried optimizing TRX via alignment, but we only got up to ~0.5, and ASS wouldn't work as usual. In the end the issue was that we had forgotten to enable the LL coil output, so after we did this, we managed to recover the XARM.

  16240   Tue Jul 6 17:40:32 2021   Koji | Summary | General | Lab cleaning

We held the lab cleaning for the first time since the campus reopening (Attachment 1).
Now we can use some of the desks for the people to live! Thanks for the cooperation.

We relocated a lot of items into the lab.

  • The entrance area was cleaned up. We believe that there is no 40m lab stuff left.
    • BHD BS optics was moved to the south optics cabinet. (Attachment 2)
    • DSUB feedthrough flanges were moved to the vacuum area (Attachment 3)
  • Some instruments were moved into the lab.
    • The Zurich instrument box
    • KEPCO HV supplies
    • Matsusada HV supplies
  • We moved the large pile of SUPERMICROs in the lab. They are around MC2 while the PPE boxes there were moved behind the tube around MC2 area. (Attachment 4)
  • We have moved PPE boxes behind the beam tube on XARM behind the SUPERMICRO computer boxes. (Attachment 7)
  • ISC/WFS left over components were moved to the pile of the BHD electronics.
    • Front panels (Attachment 5)
    • Components in the boxes (Attachment 6)

We still want to do some more cleaning:

- Electronics workbenches
- Stray setup (cart/wagon in the lab)
- Some leftover on the desks
- Instruments scattered all over the lab
- Ewaste removal

Attachment 1: P_20210706_163456.jpg
Attachment 2: P_20210706_161725.jpg
Attachment 3: P_20210706_145210.jpg
Attachment 4: P_20210706_161255.jpg
Attachment 5: P_20210706_145815.jpg
Attachment 6: P_20210706_145805.jpg
Attachment 7: PXL_20210707_005717772.jpg
  16239   Tue Jul 6 16:35:04 2021   Anchal, Paco, Gautam | Update | IOO | Restored MC

We found that megatron is unable to properly run the scripts/MC/WFS/mcwfsoff and scripts/MC/WFS/mcwfson scripts. It fails on cdsutils commands due to a library conflict. This meant that the WFS loops were not turned off when the IMC got unlocked, and they would keep integrating noise into the offsets. The mcwfsoff script is also supposed to clear the WFS loop offsets, but that wasn't happening either. The mcwfson script was also not bringing the WFS loops back on.

Gautam fixed these scripts temporarily for running on megatron by using ezcawrite and ezcaswitch commands instead of cdsutils commands. Now these scripts are running normally. This could be the reason for the wildly fluctuating WFS offsets that we have seen in the past few months.

gautam: the problem here is that megatron is running Ubuntu18 - I'm not sure if there is any dedicated CDS group packaging for Ubuntu, and so we're using some shared install of the cdsutils (hosted on the shared chiara NFS drive), which is complaining about missing linked lib files. Depending on people's mood, it may be worth biting the bullet and make Megatron run Debian10, for which the CDS group maintains packages.

Quote:

MC was unlocked and struggling to recover this morning due to misguided WFS offsets. In order to recover from this kind of issue, we

  1. Cleared the bogus WFS offsets
  2. Used the MC alignment sliders to change MC1 YAW from -0.9860 to -0.8750 until we saw the lowest order mode transmission on the video monitor.
  3. With MC Trans sum at around ~ 500 counts, we lowered the C1:IOO-WFS_TRIGGER_THRESH_ON from 5000 to 500, and the C1:IOO-WFS_TRIGGER_MON from 3.0 to 0.0 seconds and let the WFS integrators work out some nonzero angular control offsets.
  4. Then, the MC Trans sum increased to about 2000 counts but started oscillating slowly, so we restored the delayed loop trigger from 0.0 to 3.0 seconds and saw the MC Trans sum reach its nominal value of ~ 14000 counts over a few minutes.

The MC is now restored and the plan is to let it run for a few hours so the offsets converge; then run the WFS relief script.

  16238   Tue Jul 6 10:47:07 2021   Paco, Anchal | Update | IOO | Restored MC

MC was unlocked and struggling to recover this morning due to misguided WFS offsets. In order to recover from this kind of issue, we

  1. Cleared the bogus WFS offsets
  2. Used the MC alignment sliders to change MC1 YAW from -0.9860 to -0.8750 until we saw the lowest order mode transmission on the video monitor.
  3. With MC Trans sum at around ~ 500 counts, we lowered the C1:IOO-WFS_TRIGGER_THRESH_ON from 5000 to 500, and the C1:IOO-WFS_TRIGGER_MON from 3.0 to 0.0 seconds and let the WFS integrators work out some nonzero angular control offsets.
  4. Then, the MC Trans sum increased to about 2000 counts but started oscillating slowly, so we restored the delayed loop trigger from 0.0 to 3.0 seconds and saw the MC Trans sum reach its nominal value of ~ 14000 counts over a few minutes.

The MC is now restored and the plan is to let it run for a few hours so the offsets converge; then run the WFS relief script.

  16237   Fri Jul 2 12:42:56 2021   Anchal, Paco, Gautam | Summary | LSC | snap file changed for MICH

We corrected the MICH locking snap file C1configure_MI.req and saved an updated C1configure_MI.snap. Now the 'Restore MICH' script in IFO_CONFIGURE>!MICH>Restore MICH works. The corrections included adding the correct rows of PD_DOF matrices to be at the right settings (use AS55 as error signal). The MICH_A_GAIN and MICH_B_GAIN needed to be saved as well.

We also were able to get to the PRMI SB resonance. PRM had earlier been misaligned from its optimal position, and after some manual alignment we were able to get it to lock just by hitting IFO_CONFIGURE>!PRMI>Restore PRMI SB (3f).

  16236   Thu Jul 1 16:55:21 2021   Anchal | Summary | Optical Levers | Fixed Centering optical levers PRM

This was a mistake. When the arms are locked, PRM is misaligned by setting a -800 offset in the PIT dof of PRM. The oplev is set to function in the normal state, not this misaligned configuration. I undid my changes today by switching off the offset, realigning the oplev to center, and then restoring the single-arm locked state. The PRM OpLev loops are off now.

Quote:

PRM because I've known that we have been keeping its OpLev off. The reason was clear once I opened the table: the oplev reflection beam was hitting the PD box instead of the PD. After correcting this, I was able to switch on the PRM OpLev loops and saw normal functioning.

 

  16235   Thu Jul 1 16:45:25 2021   Yehonathan | Update | BHD | SOS assembly

The bonding test passed - the weight still hangs from the dumbell. Unfortunately, I broke the bond trying to release the assembly from the bracket. I made another batch of 6 dumbell+magnet.

I used some of the leftover epoxy to bond an assembly from the previous batch to a bracket so I can test it.

  16234   Thu Jul 1 11:37:50 2021   Paco | Update | General | restarted c0rga

Physically rebooted the c0rga workstation after failing to ssh into it (even though it responded to ping...). The RGA seems to be off, though. The last log with data on it appears to date back to 2020 Nov 10, but reasonable spectra don't appear until before the 11-05 logs. Gautam verified that the RGA was intentionally turned off then.

  16233   Thu Jul 1 10:34:51 2021   Paco, Anchal | Summary | LSC | ETMY QPD fixed

Paco worked on aligning the beam splitter to get light on the ETMY QPD and was successful in centering it without any other changes in the settings.

  16232   Wed Jun 30 18:44:11 2021   Anchal | Summary | LSC | Tried fixing ETMY QPD

I worked at the Y-end station, trying to get the ETMY QPD to work properly. When I started, only one (quadrant #3) of the 4 quadrants was seeing any light. By just adjusting the beam splitter that reflects some light off to the QPD, I was able to get some amount of light onto quadrant #2. However, no amount of steering would show any light on the other quadrants.

The only reason I could think of is that the incoming beam gets partially clipped, as it seems to be hitting the beam splitter near its top edge. So for this to work properly, a mirror upstream needs to be adjusted, which would change the alignment onto the TRY photodiode. Without light on the TRY photodiode, there is no lock and there is no light. So one can't steer this beam without losing lock.

I tried one trick: I changed the YARM lock trigger to the POY DC signal. I got the lock to hold even when TRY was covered by a beam finder card. However, this lock was still a bit finicky and would lose lock very frequently. It didn't seem worth it to potentially break the YARM locking system for the ETMY QPD before running this by anyone, and this late in the evening. So I reset everything to how it was (except the beam splitter that reflects light to the ETMY QPD; that now has equal light falling on quadrants #2 and #3).

The settings I temporarily changed were:

  • C1:LSC-TRIG_MTRX_7_10 changed from 0 to -1 (uses POY DC as trigger)
  • C1:LSC-TRIG_MTRX_7_13 changed from 1 to 0 (stops using TRY DC as trigger)
  • C1:LSC-YARM_TRIG_THRESH_ON changed from 0.3 to -22
  • C1:LSC-YARM_TRIG_THRESH_OFF changed from 0.1 to -23.6
  • C1:LSC-YARM_FM_TRIG_THRESH_ON changed from 0.5 to -22
  • C1:LSC-YARM_FM_TRIG_THRESH_OFF changed from 0.1 to -23.6

All these were reverted back to their previous values manually at the end.

 

  16231   Wed Jun 30 15:31:35 2021   Anchal | Summary | Optical Levers | Centered optical levers on ITMY, BS, PRM and ETMY

When both arms were locked, we found that the ITMY optical lever was very off-center. This seems to have happened after the c1susaux rebooting we did on June 17th. I opened the ITMY table and realigned the OpLev beam to the center while the arm was locked. I repeated this process for BS, PRM and ETMY. I did PRM because I've known that we have been keeping its OpLev off; the reason was clear once I opened the table: the oplev reflection beam was hitting the PD box instead of the PD. After correcting this, I was able to switch on the PRM OpLev loops and saw normal functioning.

  16230   Wed Jun 30 14:09:26 2021   Ian MacMillan | Update | CDS | SUS simPlant model

I have looked at my code from the previous plot of the transfer function and realized that there is a slight error that must be fixed before we can analyze the difference between the theoretical transfer function and the measured transfer function.

The theoretical transfer function, which was generated with Foton, has approximately 1000 data points, while the measured one has about 120. The two datasets share no common frequency values, so they are not directly comparable. In order to compare them I must infer the data between the points. In the previous post [16195] I expanded the measured dataset; in other words, I filled in the space between points so that I could compare the two data sets, using this code:

from scipy.interpolate import splrep, splev  # B-spline fit / evaluation (needed import)

# make values for the comparison
tck_mag = splrep(tst_f, tst_mag)  # get bspline representation given (x, y) values
gen_mag = splev(sim_f, tck_mag)   # generate intermediate values at the simulated frequencies
dif_mag = []
for x in range(len(gen_mag)):
    dif_mag.append(gen_mag[x] - sim_mag[x])  # measured (interpolated) minus predicted

tck_ph = splrep(tst_f, tst_ph)  # get bspline representation given (x, y) values
gen_ph = splev(sim_f, tck_ph)   # generate intermediate values at the simulated frequencies
dif_ph = []
for x in range(len(gen_ph)):
    dif_ph.append(gen_ph[x] - sim_ph[x])

Near sharp features such as a resonance peak, where the measured data set was sparse compared to the feature, the comparison was between the interpolated "measured" values and the theoretical ones, which made the difference look much larger than it really was.

To fix this I changed the code to generate the intermediate values for the theoretical data set. Using the code here:

tck_mag = splrep(sim_f, sim_mag) # get bspline representation given (x,y) values
gen_mag = splev(tst_f, tck_mag) # generate intermediate values
dif_mag=[]
for x in range(len(tst_mag)):
    dif_mag.append(tst_mag[x]-gen_mag[x])#measured minus predicted

tck_ph = splrep(sim_f, sim_ph) # get bspline representation given (x,y) values
gen_ph = splev(tst_f, tck_ph) # generate intermediate values
dif_ph=[]
for x in range(len(tst_ph)):
    dif_ph.append(tst_ph[x]-gen_ph[x])

Because this dataset has far more values (about 10 times more) the previous problem is not such an issue. In addition, there is never an inferred measured value used. That makes it more representative of the true accuracy of the real transfer function.

This is an update to a previous plot, so I am still using the same data just changing the way it is coded. This plot/data does not have a Q of 1000. That plot will be in a later post along with the error estimation that we talked about in this week’s meeting.

The new plot is shown below in attachment 1. Data and code are contained in attachment 2

Attachment 1: SingleSusPlantTF.pdf
Attachment 2: Plant_TF_Test.zip
  16229   Tue Jun 29 20:45:52 2021   Yehonathan | Update | BHD | SOS assembly

I glued another batch of 6 magnet+dumbell assemblies. I will take a look at them under the microscope once they are cured.

I also hung a weight of ~150 g from a sample dumbell made in the previous batch (attachments) to test the magnet+dumbell bond strength.

Attachment 1: 20210629_135736.jpg
Attachment 2: 20210629_135746.jpg
  16228   Tue Jun 29 17:42:06 2021   Anchal, Paco, Gautam | Summary | LSC | MICH locking tutorial with Gautam

Today we went through the LSC locking mechanics with Gautam and, as a "Hello World" example, worked on locking the Michelson.


MICH settings changed:

  • Gautam at some point added 9 dB attenuation filters in MICH filter module in LSC to match the 9 dB pre-amplifier before digitization.
  • This required changing the trigger thresholds, C1:LSC-MICH_TRIG_THRESH_ON and C1:LSC-MICH_TRIG_THRESH_OFF.
  • We looked at C1:LSC-AS55_Q_ERR_DQ and C1:LSC-ASDC_OUT_DQ on ndscope.
  • The zero crossings in AS55_Q correspond to ASDC going to zero. We found the threshold values of ASDC by finding the linear region in zero crossing of AS55_Q.
  • We changed the threshold values to UP: -0.3 mW and DOWN: -0.05 mW. The thresholds were also changed in C1LSC_FM_TRIG.
  • We also set FM2,3,6 and 8 to be triggered on threshold.

We characterized the loop OLTF, found the UGF to be 90 Hz and measured the noise at error and control points.

gautam: one aim of this work was to demonstrate that the "Lock Michelson (dark)" script call from the IFOconfigure screen worked - it did, reliably, after the setting changes mentioned above.

  16227   Mon Jun 28 12:35:19 2021   Yehonathan | Update | BHD | SOS assembly

On Thursday, I glued another set of 6 dumbells+magnets using the same method as before. I made sure that dumbells are pressed onto the magnets.

I came in today to check the gluing situation. The situation looks much better than before. It seems like the glue is stable against small forces (magnetic etc.). I checked the assemblies under a microscope.

It seems like I used excessive amounts of glue (attachments 1, 2). The surfaces of the dumbells were also contaminated (attachment 3). I cleaned the dumbells' surfaces using acetone and IPA (attachment 4) and scraped some of the glue residue from the sides of the assemblies.

Next time, I will make a shallow bath of glue to obtain precise amounts using a needle.

I glued a sample assembly on a metal bracket using epoxy. Once it cures I will hang a weight on the dumbell to test the gluing strength.

Attachment 1: toomuchglue1.png
Attachment 2: toomuchglue2.png
Attachment 3: dirtydumbell.png
Attachment 4: cleandumbell.png
Attachment 5: assembly_on_metalbracket.png
  16226   Fri Jun 25 19:14:45 2021   Jon | Update | Equipment loan | Zurich Instruments analyzer

I returned the Zurich Instruments analyzer I borrowed some time ago to test out at home. It is sitting on first table across from Steve's old desk.

Attachment 1: ZI.JPG
  16225   Fri Jun 25 14:06:10 2021   Jon | Update | CDS | Front-End Assembly and Testing

Summary

Here is the final summary (from me) of where things stand with the new front-end systems. With Anchal and Ian's recent scripted loopback testing [16224], all the testing that can be performed in isolation with the hardware on hand has been completed. We currently have no indication of any problem with the new hardware. However, the high-frequency signal integrity and noise testing remains to be done.

I detail those tests and link some DTT templates for performing them below. We have not yet received the Myricom 10G network card being sent from LHO, which is required to complete the standalone DAQ network. Thus we do not have a working NDS server in the test stand, so cannot yet run any of the usual CDS tools such as Diaggui. Another option would be to just connect the new front-ends to the 40m Martian/DAQ networks and test them there.

Final Hardware Configuration

Due to the unavailability of the 18-bit DACs that were expected from the sites, we elected to convert all the new 18-bit AO channels to 16-bit. I was able to locate four unused 16-bit DACs around the 40m [16185], with three of the four found to be working. I was also able to obtain three spare 16-bit DAC adapter boards from Todd Etzel. With the addition of the three working DACs, we ended up with just enough hardware to complete both systems.

The final configuration of each I/O chassis is as follows. The full setup is pictured in Attachment 1.

Component            C1BHD (Qty Installed)   C1SUS2 (Qty Installed)
16-bit ADC           1                       2
16-bit ADC adapter   1                       2
16-bit DAC           1                       3
16-bit DAC adapter   1                       3
16-channel BIO       1                       1
32-channel BO        0                       6

This hardware provides the following breakdown of channels available to user models:

Channel Type   C1BHD (Channel Count)   C1SUS2 (Channel Count)
16-bit AI*     31                      63
16-bit AO      16                      48
BO             0                       192

*The last channel of the first ADC is reserved for timing diagnostics.

The chassis have been closed up and their permanent signal cabling installed. They do not need to be reopened, unless future testing finds a problem.

RCG Model Configuration

An IOP model has been created for each system reflecting its final hardware configuration. The IOP models are permanent and system-specific. When ready to install the new systems, the IOP models should be copied to the 40m network drive and installed following the RCG-compilation procedure in [15979]. Each system also has one temporary user model which was set up for testing purposes. These user models will be replaced with the actual SUS, OMC, and BHD models when the new systems are installed.

The current RCG models and the action to take with each one are listed below:

Model   Host     CPU   DCUID   Path (local to chiara clone machine)                   Action
c1x06   c1bhd    1     23      /opt/rtcds/userapps/release/cds/c1/models/c1x06.mdl    Copy to same location on 40m network drive; compile and install
c1x07   c1sus2   1     24      /opt/rtcds/userapps/release/cds/c1/models/c1x07.mdl    Copy to same location on 40m network drive; compile and install
c1bhd   c1bhd    2     25      /opt/rtcds/userapps/release/isc/c1/models/c1bhd.mdl    Do not copy; replace with permanent OMC/BHD model(s)
c1su2   c1sus2   2     26      /opt/rtcds/userapps/release/sus/c1/models/c1su2.mdl    Do not copy; replace with permanent SUS model(s)

Each front-end can support up to four user models.

Future Signal-Integrity Testing

Recently, the CDS group released a well-documented procedure for testing General Standards ADCs and DACs: T2000188. They've also automated the tests using a related set of shell scripts (T2000203). Unfortunately I don't believe these scripts will work at the 40m, as they require the latest v4.x RCG.

However, there is an accompanying set of DTT templates that could be very useful for accelerating the testing. They are available from the LIGO SVN (log in with username: "first.last@LIGO.ORG"). I believe these can be used almost directly, with only minor updates to channel names, etc. There are two classes of DTT-templated tests:

  1. DAC -> ADC loopback transfer functions
  2. Voltage noise floor PSD measurements of individual cards

The T2000188 document contains images of normal/passing DTT measurements, as well as known abnormalities and failure modes. More sophisticated tests could also be configured, using these templates as a guiding example.

Hardware Reordering

Due to the unexpected change from 18- to 16-bit AO, we are now short on several pieces of hardware:

  • 16-bit AI chassis. We originally ordered five of these chassis, and all are obligated as replacements within the existing system. Four of them are now (temporarily) in use in the front-end test stand. Thus four of the new 18-bit AI chassis will need to be retrofitted with 16-bit hardware.
  • 16-bit DACs. We currently have exactly enough DACs. I have requested a quote from General Standards for two additional units to have as spares.
  • 16-bit DAC adapters. I have asked Todd Etzel for two additional adapter boards to also have as spares. If no more are available, a few more should be fabricated.
Attachment 1: test_stand.JPG
  16224   Thu Jun 24 17:32:52 2021   Ian MacMillan | Update | CDS | Front-End Assembly and Testing

Anchal and I ran tests on the two systems (C1-SUS2 and C1-BHD). Attached are the results and the code and data to recreate them.

We connected one DAC channel to one ADC channel and thus all of the results represent a DAC/ADC pair. We then set the offset to different values from -3000 to 3000 and recorded the measured signal. I then plotted the response curve of every DAC/ADC pair so each was tested at least once.

There are two types of plots included in the attachments

1) a summary plot found on the last pages of the pdf files. This is a quick and dirty way to see if all of the channels are working. It is NOT a replacement for the other plots. It shows all the data quickly but sacrifices precision.

2) An in-depth look at an ADC/DAC pair. Here I show the measured value for a defined DC offset. The gain of the system should be 0.5 (put in an offset of 100 and measure 50). I included a line to show where this should be. I also plotted the difference between the 0.5 gain line and the measured data.

As seen in the provided plots, the channels get saturated beyond about the -2000 to 2000 mark, which is why the difference graph is concentrated on the -2000 to 2000 range.
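For illustration, a minimal sketch of the gain check described above, using placeholder data (the real test data and plotting code are in the attached zip):

import numpy as np

offsets = np.arange(-3000, 3001, 100)            # DC offsets written to the DAC channel
# placeholder "measured" ADC counts: ideal 0.5 gain with saturation beyond ~ +/-2000 input
measured = np.clip(0.5 * offsets, -1000, 1000)

mask = np.abs(offsets) <= 2000                   # restrict to the linear (unsaturated) range
residual = measured[mask] - 0.5 * offsets[mask]  # deviation from the 0.5 gain line

print(f"max |deviation| in linear range: {np.abs(residual).max():.2f} cts")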

Summary: all the channels look to be working; they all show very little deviation from the theoretical gain.

Note: ADC channel 31 is the timing signal so it is the only channel that is wildly off. It is not a measurement channel and we just measured it by mistake.

Attachment 1: C1-SU2_Channel_Responses.pdf
Attachment 2: C1-BHD_Channel_Responses.pdf
Attachment 3: CDS_Channel_Test.zip
  16223   Thu Jun 24 16:40:37 2021   Koji | Update | SUS | MC lock acquired back again

[Koji, Anchal]

The issue with the PD output was that the PD whitened outputs of the sat amp (D080276) are differential, while the successive circuit (D000210 PD whitening unit) has single-ended inputs. This means that the negative outputs (D080276 U2) have always been shorted to GND with no output resistor. This forced the AD8672 to work hard at its output current limit. Maybe there was a heat problem due to this current saturation, as Anchal reported that the unit came back sane after some power-cycling or opening the lid. But the heat and the forced differential voltage at the input stage of the chip eventually caused it to fail, I believe.

Anchal came up with the brilliant idea to bypass this issue. The sat amp box has the PD mon channels which are single-ended. We simply shifted the output cables to the mon connectors. The MC1 sus was nicely damped and the IMC was locked as usual. Anchal will keep checking if the circuit will keep working for a few days.

Attachment 1: P_20210624_163641_1.jpg
  16222   Wed Jun 23 09:05:02 2021   Anchal | Update | SUS | MC lock acquired back again

The MC was unable to acquire lock because the WFS offsets were cleared to zero at some point, and because of that the MC was too misaligned to catch lock again. In such cases, one wants the WFS to start accumulating offsets as soon as a minimal lock is attained, so that the mode cleaner can be automatically aligned. So I did the following, which worked:

  • Changed C1:IOO-WFS_TRIG_WAIT_TIME (the delay in the WFS trigger) from 3 s to 0 s.
  • Reduced C1:IOO-WFS_TRIGGER_THRESH_ON (the switch-on threshold) from 5000 to 1000.
  • Then as soon as a TEM00 was locked with poor efficiency, the WFS loops started aligning the optics to bring it back to lock.
  • After robust lock has been acquired, I restored the two settings I changed above.
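For the record, the temporary changes and their restoration could be scripted; here is a minimal sketch using pyepics (whether pyepics is the right tool on the workstations is an assumption, and ezcawrite or cdsutils would do the same job):

from epics import caget, caput

changes = {
    'C1:IOO-WFS_TRIG_WAIT_TIME': 0,        # WFS trigger delay: 3 s -> 0 s
    'C1:IOO-WFS_TRIGGER_THRESH_ON': 1000,  # switch-on threshold: 5000 -> 1000
}

saved = {ch: caget(ch) for ch in changes}     # remember the nominal values
for ch, val in changes.items():
    caput(ch, val)

# ... wait for the WFS loops to pull the IMC back to a robust lock ...

for ch, val in saved.items():                 # restore the nominal settings
    caput(ch, val)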
Quote:

 


At the end, since MC has trouble catching lock after opening PSL shutter, I tried running burt restore the ioo to 2021/Jun/17/06:19/c1iooepics.snap but the problem persists

 

  16221   Tue Jun 22 17:05:26 2021   Yehonathan | Update | BHD | SOS assembly

According to the schematics, the distance between the original EQ tap holes is 0.5". Given that the original tap holes' diameter is 0.13" there is enough room for a 1/4" drill.

Quote:

Then, can we replace the four small EQ stops at the bottom (barrel surface) with two 1/4-20 EQ stops? This will require drilling the bottom EQ stop holders (two per SOS).

 

 

  16220   Tue Jun 22 16:53:01 2021   Ian MacMillan | Update | CDS | Front-End Assembly and Testing

The channels on both C1BHD and C1SUS2 seem to be frozen: they aren't updating and are holding one value. To fix this, Anchal and I tried:

  • restarting the computers 
    • restarting basically everything including the models
  • Changing the matrix values
  • adding filters
  • messing with the offset 
  • restarting the network ports (Paco suggested this apparently it worked for him at some point)
  • Checking to make sure everything was still connected inside the case (DAC, ADC, etc..)

I wonder if Jon has any ideas. 

  16219   Tue Jun 22 16:52:28 2021   Paco | Update | SUS | ADC/Slow channels issues

After sliding the alignment bias around and browsing through elog while searching for "stuck" we concluded the ITMX osems needed to be freed. To do this, the procedure is to slide the alignment bias back and forth ("shaking") and then as the OSEMs start to vary, enable the damping. We did just this, and then restored the alignment bias sliders slowly into their original positions. Attachment 1 shows the ITMX OSEM sensor input monitors throughout this procedure.


At the end, since MC has trouble catching lock after opening PSL shutter, I tried running burt restore the ioo to 2021/Jun/17/06:19/c1iooepics.snap but the problem persists

Attachment 1: shake_and_damp.png
  16218   Tue Jun 22 11:56:16 2021   Anchal, Paco | Update | SUS | ADC/Slow channels issues

We checked back in time to see when the BS and PRM OSEM slow channels went to zero. It was clear that they became zero when we worked on this issue on June 17th, Thursday. So we simply went back and power cycled the c1susaux Acromag chassis. After that, we had to log in to the c1susaux computer and run

sudo /sbin/ifdown eth1
sudo /sbin/ifup eth1

This restarted the ethernet port the Acromag chassis is connected to. This solved the issue and we were able to see all the slow channels for BS and PRM.

But then, we noticed that the OPLEV of ITMX is unable to read the position of the beam on the QPD at all. No light was reaching the QPD. We went in, opened the ITMX table cover and confirmed that the return OPLEV beam is way off and is not even hitting one of the steering mirrors that brings it to the QPD. We switched off the OPLEV contribution to the damping.

We did burt restore to 16th June morning using
burtwb -f /opt/rtcds/caltech/c1/burt/autoburt/snapshots/2021/Jun/16/06:19/c1susaux.snap -l /tmp/controls_1210622_095432_0.write.log -o /tmp/controls_1210622_095432_0.nowrite.snap -v

This did not solve the issue.

Then we noticed that the OSEM signals from ITMX were saturated in opposite directions for the left and right OSEMs. The left OSEM fast channels are saturated at 1.918 um for UL and 1.399 um for LL, while both right OSEM channels are bottomed out at 0 um. On the other hand, the Acromag slow PD monitors show 0 on the right channels but 1097 cts on the UL PDMon and 802 cts on the LL PDMon. We actually went in and checked the DC voltages from the PD input monitor LEMO ports on the ITMX dewhitening board D000210-A1 and measured non-zero voltages on all the channels. The following is a summary:

ITMX OSEM readouts

        C1-SUS-ITMX_XXSEN_OUT        C1-SUS-ITMX_xxPDMon             Multimeter at input to dewhitening board
        (Fast ADC Channels) (um)     (Slow Acromag Monitors) (cts)   (V)
UL      1.918                        1097                            0.901
LL      1.399                        802                             0.998
UR      0                            0                               0.856
LR      0                            0                               0.792
SD      0.035                        20                              0.883

We even took out the 4-pin LEMO outputs from the dewhitening boards that go to the anti-aliasing chassis and checked the voltages. They are the same as the input voltages, as expected. So the dewhitening board is doing its job fine, and the OSEMs are doing their jobs fine.

It is weird that both the ADC and the Acromags are reading these values wrong. We believe this is causing a big yaw offset in the ITMX control signal, turning ITMX enough to make the OPLEV go out of range. We checked the CDS FE status (attachment 1). Other than c1rfm showing a yellow bar (bit 2 = GE FANUC RFM card 0) in RT Net Status, nothing else seems wrong with the c1sus computer. The c1sus FE model is running fine. c1x02 (the lower level model) does show a red bar in TIM, which suggests some timing issue. This is present in c1x04 too.


Bottomline:

Currently, the ITMX coil outputs are disabled, as we can't trust the OSEM channels. We're still investigating why any of this is happening. Any input is welcome.

 

 

 

Attachment 1: CDS_FE_Status.png
  16217   Mon Jun 21 17:15:49 2021   Ian MacMillan | Update | CDS | CDS Upgrade

Anchal and I wrote a script (Attachment 1) that will test the ADC and DAC connections with inputs on the INMON from -3000 to 3000. We could not run it because some of the channels seemed to be frozen. 

Attachment 1: DAC2ADC_Test.py
import os
import time
import numpy as np
import subprocess
from traceback import print_exc
import argparse


def grabInputArgs():
    parser = argparse.ArgumentParser(
... 75 more lines ...
  16216   Fri Jun 18 23:53:08 2021   Koji | Update | BHD | SOS assembly

Then, can we replace the four small EQ stops at the bottom (barrel surface) with two 1/4-20 EQ stops? This will require drilling the bottom EQ stop holders (two per SOS).

 
