40m Log, Page 3 of 344

ID   Date   Author   Type   Category   Subject
  17213   Tue Oct 25 22:01:53 2022   rana   Configuration   PEM   Auto Z on trillium interface board

thanks, this seems to have recentered well.

It looks like it started to act funny at 0400 UTC on 10/24, so that's 9 PM on Sunday at the 40m. What was happening then?


Attachment 1: Screen_Shot_2022-10-26_at_4.45.30_PM.png
  17212   Tue Oct 25 17:27:11 2022   Paco   Summary   BHD   LO phase control with RF + audio sidebands

[Yuta, Paco]

Today we locked LO phase with BH55 + Audio dithering


We worked with MICH locked using AS55_Q with an offset = 50. Our BH55_Q_ERR is the same as in the previous elog (in this thread). We enabled audio dithering of AS1 to produce 280.54 Hz sidebands (exc gain = 15000). We used ELP80 (elliptic, 4th order lowpass with the second resonant notch at 280.54 Hz) at the BH55_Q_AS1_DEMOD_I output. This allowed us to generate an error signal to feed back to AS1 POS. Attachment #1 shows a screen capture of this configuration.
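
The dither and demodulation chain described above can be sketched as a simple lock-in model. This is a minimal sketch, not the actual CDS code: the sample rate, noise level, and injected LO phase offset are made-up illustration values, and a Butterworth low pass stands in for the ELP80.

```python
import numpy as np
from scipy import signal

fs = 16384.0        # assumed model rate (Hz)
f_dither = 280.54   # AS1 dither frequency from the entry (Hz)
t = np.arange(0, 4, 1 / fs)

# Simulated BH55_Q: the amplitude of the dither line encodes the LO phase
# offset we want to sense; add broadband sensing noise on top.
lo_phase_offset = 0.3
rng = np.random.default_rng(0)
bh55_q = lo_phase_offset * np.sin(2 * np.pi * f_dither * t) \
         + 0.05 * rng.standard_normal(t.size)

# Lock-in demodulation at the dither frequency (I phase)...
demod_i = bh55_q * 2 * np.sin(2 * np.pi * f_dither * t)

# ...followed by a low pass (Butterworth here, standing in for the ELP80)
# that strips the 2*f_dither term and leaves the error signal.
b, a = signal.butter(4, 80 / (fs / 2))
err = signal.filtfilt(b, a, demod_i)

# The filtered output recovers the injected LO phase offset (~0.3).
print(round(float(np.mean(err[err.size // 2:])), 2))
```

The same structure (oscillator, product demodulator, low pass) is what the BH55_Q_AS1_DEMOD_I path described above implements in the front end.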


We closed a loop with the above configuration to lock the LO phase using only filters FM5 and FM8, optionally adding the FM2 boost. The compromise we had to make because of our phase margin was a UGF of ~ 20 Hz (in contrast with the ~ 70 Hz used in single bounce). Attachment #2 shows the measured OLTFs for LO_PHASE control using this scheme; red is the final measured loop, while blue is our initial reference before increasing the servo gain.


Attachment 1: Screenshot_2022-10-25_17-36-45_LOPHASELOCKED.png
Attachment 2: Screenshot_2022-10-25_17-33-33_LOPHASELOCKING_MICHBHD_ELP80_280HzDither.png
  17211   Tue Oct 25 14:29:56 2022   Paco   Summary   SEI   Earthquake tripped SUS

[Yuta, Paco, JC]

This eq potentially tripped the ETMY, PR2, PR3, AS1, AS4, SR2, LO1, and LO2 suspensions during today's WB meeting. We restored them to normal local damping.

We aligned the arm cavities just to verify things were ok and then moved on to BHD commissioning. No problems spotted so far.

  17210   Tue Oct 25 13:55:37 2022   Koji   Configuration   PEM   Auto Z on trillium interface board

This nicely brought the sensing signal back to ~zero. See attachment

Some basic info:

  • BS Seismometer is T240 (Trillium)
  • The interface unit is at 1X5 Slot 26. D1002694
  • aLIGO Trillium 240 Interface Quick Start Guide T1000742
Attachment 1: Screen_Shot_2022-10-25_at_13.56.46.png
  17209   Tue Oct 25 09:57:34 2022   Paco   Configuration   PEM   Auto Z on trillium interface board

I pressed the Auto-Z(ero) button for ~ 3 seconds at ~9:55 local (pacific) time on the trillium interface on 1X5.

  17208   Tue Oct 25 08:25:00 2022   JC   Update   BHD   BHD fringe aligned with reduced LO and AS beam clipping

I aligned today using this scheme. I couldn't seem to get C1:IOO-MC_TRANS_SUM above 13400 by using WFS or manually aligning. The original state was the following:
                 Pitch       Yaw
                -0.4672     -0.7714
C1:SUS-MC2:      4.0446     -1.3558
C1:SUS-MC3:     -2.0006      1.6001


  17207   Tue Oct 25 05:26:24 2022   rana   Summary   IOO   MC SUS tuning

in looking closer at the IMC WFS performance I notice 2 issues:

  1. The watchdog thresholds are set to 200 mV, but this only made sense back when the OSEM calibration was 2 V/mm. Because of the increased analog gain(?), the RMS value of the watchdog sensors is now ~35 mV for MC1, so it will trip its watchdog and unlock the IMC with a 10x smaller seismic impulse than before. I recommend changing the watchdog thresholds, or rescaling the OSEM sensor signals, so that the trip level is the same for all SUS.
  2. For the MC SUS, the F2P or F2A decoupling filters are not on, so the POS damping loop is injecting a lot of pitch noise into the mirrors. We could perhaps lower the ~1 Hz angular motion by commissioning those filters for the MC optics. Does anyone know why we have several filters in those filter banks? I think we could contain it all in one filter, although it's fine to make a few different ones with different Q's for testing the performance.
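
The threshold fix in item 1 amounts to rescaling by the observed gain change. A minimal sketch of the arithmetic (the 200 mV threshold and ~35 mV MC1 RMS are quoted above; the old ~3.5 mV RMS is an assumption implied by the stated 10x change):

```python
def rescaled_threshold(old_threshold_mv, old_rms_mv, new_rms_mv):
    """Keep the watchdog trip at the same physical motion by scaling the
    threshold with the observed change in quiescent sensor RMS."""
    return old_threshold_mv * new_rms_mv / old_rms_mv

# 200 mV threshold and ~35 mV MC1 RMS are from the entry; the old ~3.5 mV
# RMS is an assumed value implied by the stated 10x gain increase.
print(rescaled_threshold(200.0, 3.5, 35.0))  # → 2000.0
```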
  17206   Mon Oct 24 18:01:00 2022   Paco   Summary   BHD   BHD actuation measurements

[Yuta, Paco]

Today we calibrated the actuation on BHD suspended optics: LO1, LO2, AS1, AS4.
Actuation transfer functions for these optics look good.

ITMY actuation

For a reference we locked LO-ITMY single bounce using the LSC MICH loop. The error point was BH55_Q, the whitening filter gain was 45 dB, IQ demod rotation angle = 151.061 deg, the servo gain was -10, and the actuation point was ITMY. The measured UGF for this loop was ~ 150 Hz when FM2, 3, 4, 5 and 8 were all enabled. Note FM8 is an elliptic low pass (600 Hz cutoff).

LO1, LO2, AS1, AS4 actuation

We then locked the LO phase by feeding back BH55_Q_ERR to each actuation point under test, using the same servo filters (FM2, 3, 4, 5 and 8) but a servo gain of 0.6. The measured UGFs were all near ~ 70 Hz.

Here we had to be careful not to excite mechanical (?) resonances similar to the previously observed "violin" modes in LO1. In particular, we first noticed unsuppressed 816 Hz noise in AS1 being reinjected by the loop, sometimes tripping the local damping loops, so we added bandstop filters at the AS1_LSC output filter bank. With the bandstops in place we could then increase the gain and turn on FM2 and FM3 (boosts). This was also the case in AS4, where a 268 Hz mode and its second and third harmonics appeared to be excited by our feedback control. Finally, AS4 also displayed some mechanical excitation at 96.7 Hz, which seemed too low to be a "violin" mode, and its "Q" factor was not as high. We added a bandstop for this as well.
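
The bandstops described here can be sketched with a simple digital notch; the order and Q below are illustrative, not the filters actually loaded in the AS1_LSC bank:

```python
import numpy as np
from scipy import signal

fs = 16384.0     # assumed model rate (Hz)
f_mode = 816.0   # AS1 resonance quoted above (Hz)

# A narrow IIR notch: deep attenuation at the mode, negligible effect
# a decade below, so the loop shape elsewhere is preserved.
b, a = signal.iirnotch(f_mode, Q=30.0, fs=fs)

# Check the gain a decade below the mode and at the mode itself.
w, h = signal.freqz(b, a, worN=[f_mode / 10, f_mode], fs=fs)
print(abs(h[0]) > 0.99, abs(h[1]) < 0.01)  # → True True
```

A narrow notch like this removes the resonance from the feedback path at the cost of a small amount of phase around the notch frequency, which is far above the UGFs quoted here.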

Attachment #1 shows LO_PHASE OLTFs when actuating on the different optics. By taking the actuation ratios (Attachment #2) with respect to our ITMY actuation reference, which had previously been calibrated to be 4.74e-9 / f^2 m / cts, we estimate our BHD suspension actuation calibrations to be:

  • LO1 = 3.14e-8 / f^2 m / cts
  • LO2 = 2.52e-8 / f^2 m / cts
  • AS1 = 3.14e-8 / f^2 m / cts
  • AS4 = 2.38e-8 / f^2 m / cts

These magnitudes are consistent with the expected coil driver ranges (about a factor of 10 difference).
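
The arithmetic behind these numbers is just the ITMY reference scaled by each measured OLTF ratio. A sketch (the ratios here are back-computed from the quoted results, for illustration only, not the measured values):

```python
ITMY_CAL = 4.74e-9  # m/cts (falls as 1/f^2), the reference quoted above

# |OLTF_optic / OLTF_ITMY| actuation ratios; these particular numbers are
# back-computed from the quoted calibrations, for illustration only.
ratios = {"LO1": 6.62, "LO2": 5.32, "AS1": 6.62, "AS4": 5.02}

cal = {optic: ITMY_CAL * r for optic, r in ratios.items()}
for optic, c in cal.items():
    print(f"{optic} = {c:.2e} / f^2 m/cts")
```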

Attachment 1: OLTF_LOPhaseLocking_ITMYSingleBounce.png
Attachment 2: ActuationRatio_ITMY_AS1_AS4_LO1_LO2.png
  17205   Sat Oct 22 21:36:28 2022   rana   Summary   BHD   BH55 phase locking efforts

give us an animated GIF of this cool new tool! I'm curious what happens if you look at 2 DoFs of the same suspension. It would also be cool to apply a bandpass filter before plotting XY, so that you could look for correlations at higher frequencies, not just seismic noise.


Using the xyplot tool, we tried to see the relationship between C1:HPC-BHDC_DIFF_OUT and C1:HPC-BH55_Q_DEMOD_I_OUT. The two signals, according to our theory, should be 90 degrees out of phase and should form an ellipse on an XY plot. But what we saw was basically no correlation between the two.

  17204   Fri Oct 21 16:15:10 2022   yuta   Summary   BHD   LO phase locking with BH55 audio dither trials

[Paco, Yuta]

We are still struggling with locking LO phase in MICH or ITM single bounce with BH55 with audio dither.
Without audio dither, BH55 can be used to lock.

What works:
 - LO phase locking with ITMX single bounce, using BH55_Q
  - BH55_Q configuration: 45 dB whitening gain, with whitening filter on.
  - C1:LSC-BH55_PHASE_R=147.621 deg gives most signal in BH55_Q.
  - LO phase can be locked using BH55_Q, C1:HPC-LO_PHASE_GAIN=-0.5 (bright fringe for A, dark for B), feeding back to LO1 gives UGF of ~80Hz (funny structure in ~20 Hz region; see Attachment #1)

 - LO phase locking with ITMX single bounce, using BHDC_DIFF
  - BHDC B/A = 1.57 (gain balanced with C1:HPC-IN_MTRX)
  - LO phase can be locked using BHDC_DIFF, C1:HPC-LO_PHASE_GAIN=-0.4 (mid-fringe lock), feeding back to LO1 gives UGF of ~50 Hz (see Attachment #2).

 - LO phase locking with MICH locked with AS55_Q, using BH55_Q
  - AS55_Q configuration: 24 dB whitening gain, with whitening filter off
  - C1:LSC-AS55_PHASE_R=-150 deg gives most signal in AS55_Q
  - MICH can be locked using AS55_Q, C1:LSC-MICH_GAIN=-10, C1:LSC-MICH_OFFSET=30 (slightly off from AS dark fringe), feeding back to 0.5*BS gives UGF of ~100Hz (see Attachment #3)
  - LO phase can be locked using BH55_Q, C1:HPC-LO_PHASE_GAIN=-0.8 (bright fringe for A, dark for B), feeding back to LO1 gives UGF of ~45Hz (see Attachment #4)

 - LO phase locking with MICH locked with AS55_Q, using BHDC_DIFF
  - LO phase can be locked using BHDC_DIFF, C1:HPC-LO_PHASE_GAIN=1 (mid-fringe lock), feeding back to LO1. Not a very stable lock.

What does not work:
 - LO phase locking using BH55_Q demodulated at the LO1 (or AS1) dither frequency, neither in ITMX single bounce nor in MICH locked with/without offset using AS55_Q
  - C1:HPC-AS1_POS_OSC_FREQ=142.7 Hz, C1:HPC-AS1_POS_OSC_CLKGAIN=3000, C1:HPC-BH55_Q_AS1_DEMOD_PHASE=-15 deg, BLP30 is used.
  - Attachment #5 shows error signals when LO phase is locked with BH55_Q. BHDC_DIFF and BH55_Q_AS1_DEMOD_I having some coherence is a good indication, but we cannot lock LO phase with BH55_Q_AS1_DEMOD_I yet.
   - Also, an injection at 13.14 Hz with an amplitude of 300 on AS1 can be seen in both BH55_Q and BH55_Q_AS1_DEMOD_I (a 26 Hz peak in BHDC_DIFF, as it is quadratic, as expected), which means that BH55_Q_AS1_DEMOD_I is seeing something.
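
The 26 Hz peak from a 13.14 Hz injection is what a quadratic readout predicts, since sin^2(wt) = (1 - cos(2wt))/2: a drive at f shows up at 2f. A quick numerical check (the sample rate and duration are arbitrary):

```python
import numpy as np

fs = 1024.0       # arbitrary sample rate (Hz)
f_drive = 13.14   # AS1 injection frequency from the entry (Hz)
t = np.arange(0, 8, 1 / fs)

drive = np.sin(2 * np.pi * f_drive * t)
quadratic = drive**2              # fringe-like (quadratic) response
quadratic -= quadratic.mean()     # drop the DC term before the FFT

spec = np.abs(np.fft.rfft(quadratic * np.hanning(t.size)))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
print(round(float(freqs[np.argmax(spec)]), 2))  # peak near 2 x 13.14 Hz
```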

Next:
 - Check actuation TFs for LO1, LO2, AS1 to see if there are any funny structures at ~ 20 Hz.
 - LO phase locking might require at least ~50 Hz of UGF. Use a higher audio dither frequency so that we can increase the control bandwidth.
 - Check the analog filtering situation for the BHDC_A and BHDC_B signals (they go negative when fringes are moving fast).

Attachment 1: Screenshot_2022-10-21_15-33-18_LO_OLTF_LO1_BH55_Q.png
Attachment 2: Screenshot_2022-10-21_15-42-50_LO_OLTF_LO1_BHDCDIFF.png
Attachment 3: Screenshot_2022-10-21_15-44-53_MICH_OLTF.png
Attachment 4: Screenshot_2022-10-21_15-50-43_LO_OLTF_LO1_BH55_Q_MICH.png
Attachment 5: Screenshot_2022-10-21_16-11-16.png
  17203   Fri Oct 21 10:37:36 2022   Anchal   Summary   BHD   BH55 phase locking efforts

After the amplifier was modified with a capacitor, we continued trying to lock the LO phase in quadrature with the AS beam. The following is a short summary of the efforts:

  • To establish some ground, we tested locking MICH using BH55_Q instead of AS55_Q. After amplification, BH55_Q is at almost the same signal level as AS55_Q, and a robust lock was possible.
  • Then we locked the LO phase using BH55_Q (single RF sideband locking), which locks the homodyne phase angle to 90 degrees. We were able to successfully do this by turning on extra boost at FM2 and FM3 along with FM4 and FM5 that were used to catch lock.
  • We also tried locking in a single ITMY bounce configuration. This is a Mach-Zehnder interferometer with PR2 acting as the first beamsplitter and the BHD BS as the recombination beamsplitter. Note that we failed at this earlier due to the busted demodulation board. This lock also worked with single RF demodulation using BH55_Q.
  • The UGF achieved in the above configurations was ~15 Hz.
  • In between and after the above steps, we tried using audio dither + RF sideband and double demodulation to lock the LO phase, but it did not work:
    • We could see a good audio dither signal at 142.7 Hz in the BH55_Q signal, with SNR above 20.
    • However, on demodulating this signal and transferring all of the signal to C1:HPC-BH55_Q_DEMOD_I_OUT, we were unable to lock the LO phase.
    • Using the xyplot tool, we tried to see the relationship between C1:HPC-BHDC_DIFF_OUT and C1:HPC-BH55_Q_DEMOD_I_OUT. The two signals, according to our theory, should be 90 degrees out of phase and should form an ellipse on an XY plot. But what we saw was basically no correlation between the two.
    • Later, I tried one more thing. The comb60 filter on BH55 is not required when using audio dither with it, so I switched it off.
      • I turned off the comb60 filters on both the BH55_I and BH55_Q filter modules.
      • I set the audio dither to 120 Hz this time, to utilize the entire 120 Hz region between the 60 Hz and 180 Hz power line peaks.
      • I changed the demodulation low pass to a 60 Hz Butterworth filter, using 2nd order to lose less phase.
      • These steps did not yield different results, and I did not get time to investigate further as we moved on to CDS upgrade activities.
  17202   Thu Oct 20 19:47:24 2022   Tega   Update   CDS   Relocate front-ends to Rack 1X7

We mounted chiara and all front-end machines and switches in rack 1X7; reconnected the power, dolphin, onestop and timing cables; and restarted the front-ends. Attachments 1 & 2 show the front and rear of rack 1X7. We will continue the cleanup work tomorrow.


The ifo is not back up, as can be seen in attachment 3. We think the timing issue mentioned earlier is the culprit, but all FEs seem to agree to within a second, so I am not sure. I restarted the models with IOP errors other than the timing error, i.e. c1lsc, c1sus, c1ioo and c1iscex. This eliminated most of the errors except the timing error. However, the overflow field on c1lsc is non-zero and the number keeps increasing, indicating a problem with c1lsc? The new status is shown in attachment 4. My understanding is that a red `TIM` flag in the CDS stateword is not a functional problem, so I guess we are almost there. I did a burt restore on rossa using a snapshot we took earlier today before the shutdown, reset the SUS watchdogs and started the docker services on optimus. Now the IMC is locked.


We are still getting shared memory glitches on c1ioo; see attachment 5.

Note: We left nodus, megatron, optimus and fb1 in rack 1X6 for now.


Attachment 1: IMG_20221020_193623269.jpg
Attachment 2: IMG_20221020_193637034.jpg
Attachment 3: iop_status_20221020.png
Attachment 4: iop_status_20221020_restart_models.png
Attachment 5: c1ioo_shm_glitch.png
  17201   Thu Oct 20 14:13:42 2022   yuta   Summary   PSL   PMC and IMC sideband frequencies measured

I measured the sideband frequencies for PMC and IMC lock (to use it for Mariner PMC and IMC design).

PMC: 35.498912(2) MHz
IMC: 29.485038(2) MHz

 - Mini-Circuits UFC-6000 was used. The spec sheet says the frequency accuracy in 1-40 MHz is +/- 2 Hz.
 - "29.5 MHz OUT" port of 40m Frequency Generation Unit (LIGO-T1000461) was connected to UFC-6000 to measure IMC sideband frequency.
 - "LO TO SERVO" port of Crystal Frequency Ref (LIGO-D980353) was connected to UFC-6000 to measure PMC sideband frequency.
 - It seems like the IMC sideband frequency was changed from 29.485 MHz to 29.491 MHz back in 2011 (40m/4621). We are back to 29.485 MHz. I'm not sure what happened after that.

Attachment 1: IMC.JPG
Attachment 2: PMC.JPG
  17200   Wed Oct 19 11:09:20 2022   Radhika   Update   BHD   BH55 RF output amplified

[Anchal, Radhika]

We selected a 102K (1 nF) ceramic capacitor and a 100 uF electrolytic capacitor for the RF amplifier power pins. I soldered the connections and reinstalled the amplifier [Attachments 1, 2].


1) Please remember to follow the loading and power-up instructions to avoid destroying our low noise RF amplifiers. It's not as easy as powering up any usual device.

2) Also, please use the correct decoupling capacitors at the RF amp power pins. It's going to have problems if it's powered from a distant supply over a long cable.


Attachment 1: IMG_3840.jpeg
Attachment 2: IMG_3847.jpeg
  17199   Wed Oct 19 09:48:49 2022   Anchal   Update   Optimal Control   IMC open loop noise monitor

Turning WFS loops back on at:

PDT: 2022-10-19 09:48:16.956979 PDT
UTC: 2022-10-19 16:48:16.956979 UTC
GPS: 1350233314.956979

  17198   Tue Oct 18 20:43:38 2022   Anchal   Update   Optimal Control   IMC open loop noise monitor

WFS loops were running for past 2 hours when I made the overall gain slider zero at:

PDT: 2022-10-18 20:42:53.505256 PDT
UTC: 2022-10-19 03:42:53.505256 UTC
GPS: 1350186191.505256

The output values are fixed at a good alignment. IMC transmission is about 14100 counts right now. I'll turn on the loop tomorrow morning. Data from tonight can be used for monitoring open loop noise.

  17197   Tue Oct 18 15:24:48 2022   Tega   Update   CDS   Rack 1X7 work proposal

We have decided to split the remaining CDS work on rack 1x7 into two phases, both of which end with us bringing the 40m systems back up.


Phase 1 (Clear rack 1X7 of all mounted pieces of equipment)

  • Move nodus to 1X6 bottom slot
  • Move optimus to 1X6 (replaces old fat FB which can be moved to storage)
  • Move DAQ and network switches to the top of 1X7 rack
  • Move the UPS to under 1X6
  • Clear 1X7 power rail of any connections


Phase 2 (Replace the mounting rails and mount all pieces of equipment)

  • Mount DAQ, network and Dolphin switches at the top rear of 1x7 rack
  • Mount 6 new front-ends
  • Mount KVM switch
  • Move nodus back to 1X7
  • Move optimus back to 1X7


  17196   Mon Oct 17 22:27:25 2022   rana   Update   BHD   BH55 RF output amplified

1) Please remember to follow the loading and power-up instructions to avoid destroying our low noise RF amplifiers. It's not as easy as powering up any usual device.

2) Also, please use the correct decoupling capacitors at the RF amp power pins. It's going to have problems if it's powered from a distant supply over a long cable.

  17195   Mon Oct 17 20:04:16 2022   Anchal   Update   BHD   BH55 RF output amplified

[Radhika, Anchal]

We have added an RF amplifier to the output of BH55. See the MICH signal on BH55 outputs as compared to AS55 output on the attached screenshot.


Next steps:

- Amplify the BH55 RF signal before demodulation to increase the SNR. In order to power an RF amplifier, we need to use a breakout board to divert some power from the DB15 cable currently powering BH55.



  • Radhika first tried to use ZFL-500-HLN+ amplifier taken out from the amplifier storage along X-arm.
  • She used a DB15 breakout board to source the amplifier power from PD interface cable.
  • However, she reported no signal at the output.
  • We found that the BH55 RFPD was not properly fixed to the optical table. We bolted it down properly and aligned the beam onto the photodiode.
  • We still did not see any RF output.
  • I took over from Radhika on this issue. I tested the transfer function of the amplifier using moku:lab. I found that it was not amplifying at all.
  • I brought in a benchtop PS and tested the amplifier by powering it directly. It drew 100 mA of current but showed no amplification in the transfer function, which stayed constant at -40 dB whether or not the amplifier was powered.
  • I took out another RF amplifier from the same storage, this time a ZFL-1000-LN. I tested it with both the benchtop PS and the PD interface power source; it was working with 20 dB amplification.
  • I completed the installation and cable management. See photos attached.
  • I also took the opportunity to center the ITMY oplev.

Please throw away malfunctioning parts or label them malfunctioning before storing them with other parts. If we have to test each and every part before installation, it will waste too much of our time.


Attachment 1: BH55_RF_Amp_Working.png
Attachment 2: rn_image_picker_lib_temp_1b072d08-3780-4b1d-9325-5795ed099d3d.jpg
Attachment 3: rn_image_picker_lib_temp_9d7ed3c0-7ff0-4ff7-9349-0211be397dc5.jpg
Attachment 4: rn_image_picker_lib_temp_05da14e1-eae0-4a84-8761-1c42b122cb1b.jpg
  17194   Mon Oct 17 17:42:35 2022   JC   HowTo   OPLEV Tables   IMC Reflected beam sketch

I sketched up a quick drawing with estimated lengths for the IMC reflected beam. This includes the distances and focal lengths; distances are from optic to optic.

Attachment 1: Screenshot_2022-10-18_093033.png
  17193   Mon Oct 17 11:57:27 2022   Chris   Omnistructure   CDS   Timing system monitoring

I started a GPS timing monitor script, /opt/rtcds/caltech/c1/Git/40m/scripts/cds/tempusTimingMon/tempusTimingMon.py, which runs inside a docker container on optimus. It accesses the GPS receiver (EndRun Tempus LX) status via pysnmp, and creates the following epics channels with pcaspy:

  • C1:DAQ-GPS_TFOM “The Time Figure of Merit (TFOM) value ranges from 3 to 9 and indicates the current estimate of the worst case time error. It is a logarithmic scale, with each increment indicating a tenfold increase in the worst case time error boundaries.” (nominally 3, corresponding to 100 nsec)
  • C1:DAQ-GPS_STATE “Current GPS signal processor state. One of 0, 1 or 2, with 0 being the acquisition state and 2 the fully locked on state.”
  • C1:DAQ-GPS_SATS “Current number of GPS satellites being tracked.”
  • C1:DAQ-GPS_DAC “Current 16 bit, Voltage Controlled TCXO DAC value. Typical range is 20000 to 40000, where more positive numbers have the effect of raising the TCXO frequency.”
  • C1:DAQ-GPS_SNR “The current average carrier to noise ratio of all tracked satellites, in units of dB. Values less than 35 indicate weak signal conditions.”
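
For reference, the logarithmic TFOM scale quoted above maps to a worst-case time error as follows (a small helper pinned to TFOM 3 = 100 ns, per the description):

```python
def tfom_to_worst_case_error_s(tfom):
    """Worst-case time error (s) for an EndRun TFOM value: a logarithmic
    scale with TFOM 3 = 100 ns and a factor of ten per increment."""
    if not 3 <= tfom <= 9:
        raise ValueError("TFOM is reported in the range 3-9")
    return 100e-9 * 10 ** (tfom - 3)

for v in (3, 4, 9):
    print(v, f"{tfom_to_worst_case_error_s(v):.0e} s")
```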

Attached is a 1-day trend of the above channels, along with the IRIG-B timing offsets from the IOPs. No big timing excursions or slippages were seen yet, although the c1sus IOP (FEC-20) timing seems to be hopping around by one IOP sample time (15 µsec).

Attachment 1: timing.png
  17192   Sat Oct 15 17:22:56 2022   Chris   Update   Optimal Control   IMC alignment controller testing

[Anchal, Radhika, Jamie, Chris]

We conducted a test of three alternative controllers for the IMC pitch DOFs on Friday. These were loaded into a new RTS model c1sbr, which runs on the c1ioo front end as a user-space program at 256 Hz. It communicates with the c1ioo controller via shared memory IPCs to exchange error and control signals.

The IMC maintained lock during the handoffs, and we were able to take one minute of data for each (circa GPS 1349807926, 1349808426, 1349808751; spectra attached), which we can review to assess the performance vs the baseline. (On the first trial, lock was lost at the end when the script tried to switch back to the baseline controller, because we did not take care to clear the integrators. On subsequent trials we did that part by hand.)

The method of setting up this test was convoluted, but now that we see it working, we can start putting in the merge requests to get the changes better integrated into the system. First, modifications were required to the realtime code generator, to get controllers running at the new sample rate of 256 Hz. (This was done in a separate filesystem image on fb1, /diskless/root.buster256, which is only loaded by c1ioo, so as to isolate the changes from the other front end machines.) The generated code then needed hand-edits to insert additional header files and linker options, so that the alternative controllers could be loaded from .so shared libraries. Also, the kernel parameters had to be set as described here, to allow the user-space controller to have a CPU core all to itself. Finally, isolating the core was done following the recipe in this script (skipping the parts related to docker, since we didn’t use it).

Attachment 1: 180.png
Attachment 2: 066.png
Attachment 3: 202.png
  17191   Fri Oct 14 17:04:28 2022   Radhika   Update   BHD   BH55 Q abnormality + fix

[Yuta, Anchal, Radhika]

Yesterday we attempted to lock MICH and BHD using the BH55_Q_ERR signal. We adjusted the demodulation phase to send the bulk of the error signal to the Q quadrature. With the LO beam misaligned, we first locked MICH with AS55_Q_ERR. We then tried handing the feedback signal over to BH55_Q_ERR, which in theory should have been equivalent to AS55_Q_ERR. But this would not reduce the error and instead broke the MICH lock. Qualitatively, the BH55_Q signal looked noisier than AS55_Q.

We used the Moku:Lab to send a 55 MHz signal into the demod board, replacing the BH55 RF input [Attachment 1]. The frequency was chosen to be 10 Hz away from the demodulation frequency (5x the Marconi source frequency). However, a 10 Hz peak was not visible in the spectra; instead, we observed a 60 Hz peak. Tweaking the frequency offset a few times, we realized that there must be a ~50 Hz offset between the Moku:Lab and the Marconi.

We generated an X-Y plot of BH55_Q vs. AS55_DC with the MICH fringe: this did not follow a circle or ellipse, but seemed to jump around incoherently. Meanwhile the X-Y plot of BH55_I vs. AS55_DC looked like a coherent ellipse. This indicated that something might have been wrong with the demod board producing the I and Q quadrature signals.

We fed the BH55 RF signal into an unused demod board (previously AS165) [Attachment 2] and updated the channel routing accordingly. This step recovered elliptical I and Q signals with the Moku input signal, and their relative gain was adjusted to produce a circular X-Y plot [Attachment 3]. C1:LSC-BH55_Q_GAIN was adjusted to 155.05/102.90 = 1.5068, and the measured I/Q difference C1:LSC-BH55_PHASE_D was adjusted to 94.42 deg.
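
The gain and phase numbers above imply the following picture; a sketch of the balancing using the quoted amplitudes (155.05 and 102.90 cts) and the quoted measured I/Q difference (94.42 deg), with the swept input phase simulated:

```python
import numpy as np

# Raw I and Q amplitudes (cts) and measured I/Q difference from the entry.
amp_i, amp_q = 155.05, 102.90
meas_diff = np.deg2rad(94.42)

phi = np.linspace(0, 2 * np.pi, 1000, endpoint=False)  # swept input phase
i_raw = amp_i * np.cos(phi)
q_raw = amp_q * np.cos(phi - meas_diff)

# Gain-balance Q so the X-Y (Lissajous) plot becomes circular.
q_gain = amp_i / amp_q          # 155.05 / 102.90
q_bal = q_gain * q_raw

# The radius is constant up to the residual 4.42 deg quadrature error,
# i.e. the balanced X-Y plot is close to a circle.
r = np.hypot(i_raw, q_bal)
print(round(q_gain, 4), round(float(r.std() / r.mean()), 3))
```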

BH55_Q_ERR could now be used to lock the MICH DOF. However, BH55 still appears noisy in both I and Q quadratures, causing the loop to feed back a lot of noise.

Next steps:

- Amplify the BH55 RF signal before demodulation to increase the SNR. In order to power an RF amplifier, we need to use a breakout board to divert some power from the DB15 cable currently powering BH55.

Attachment 1: IMG_3805.jpeg
Attachment 2: IMG_3807.jpeg
Attachment 3: BH55_IQDemodMeasuredDiff_1349737773.png
  17190   Thu Oct 13 23:52:45 2022   Koji   Update   IMC   IMC ASC: summary pages and notes

The output matrices have been calculated on Aug 4, 2022 by me. [40m ELOG 17060]

Regarding the noise see [40m ELOG 17061]

With regard to the current IMC WFS design, a SURF student in 2014 (Andres Medina) and Nick Smith made the revision.
The telescope design was described in the elogs [40m ELOG 10410] [40m ELOG 10427] and also T1400670.

  17189   Thu Oct 13 23:25:22 2022   rana   Update   IMC   IMC ASC: summary pages and notes

Tega has kindly made a summary page for the IMC WFS. It's in a tab on the usual summary pages.

One thing I notice is that the feedback to MC2 YAW seems to have very little noise. What's up with that?

The output matrix (attached) shows that the WFS have very little feedback to MC2 in YAW, but normal feedback in PIT. Has anyone recalculated this output matrix in the past ~1-2 years?

I'm going to read Prof. Izumi's paper (https://arxiv.org/abs/2002.02703) to get some insight.

The output matrix doesn't seem to have any special thing to make this happen. Any ideas on what this could be?

Attachment 1: Screen_Shot_2022-10-14_at_3.19.43_PM.png
  17188   Thu Oct 13 19:06:42 2022   Koji   Update   Lab Organization   Lab Cleanup 10/12/2022

I have moved the following electronics / components to "Section Y10 beneath the tube"

  • IQ Demod Spares D0902745 (Components)
  • Sat Box / HAM-A 40m parts D08276/D110117
  • 16bit DAC AI Rear PCB D070101-v3
  • D1900163 HV COIL DRIVER
  • Ribbon Cables for Chassis (Cardboard box)
  • Chassis DC Breaker Switches (Cardboard box)
  • Triplett HDMI displays (x3) / Good for portable CCD monitors / PC monitors. Battery powered!
  • ISC Whitening BI Config Boards D1001631/D1900166
  • AA/AI Untested D070081
  • WFS Interface / Soft Start D1101865/D1101816
  • Internal Wiring Kit Cable Spools
  • ISC Whitening / Interface D1001530 / D1002054
  • aLIGO WFS Head D1101614
  • Internal Wiring Kit (Large Plastic Box)
  • Front/Rear Panels (Large Plastic Box)
  • HV Coil Driver Test Kit / Spare PA95s (Large Plastic Box)


  17187   Thu Oct 13 14:46:34 2022   JC   Update   Lab Organization   Lab Cleanup 10/12/2022

During Wednesday’s lab clean up, we made a ton of progress in organization. Our main focus was to tackle CDS debris from the ongoing upgrade. We proceeded with the following tasks.

  • Looped the OneStop cables and marked them ‘good’ or ‘bad’.
  • Cleaned materials from the PD testing table.
  • Removed the SuperMicro boxes from the lab.
  • Organized the vacuum area.
  • Removed old plastic containers from the laboratory.
  • Relocated Koji’s electronics underneath the Y-Arm.
  • Arranged cabling to the TestStand and created clearance to the Machine Shop/Laboratory exit.

Attachment #2 shows that all the CDS equipment has been relocated behind Section X3 of the X-Arm.

Attachment 1: IMG_6356.JPG
Attachment 2: IMG_6358.JPG
  17186   Wed Oct 12 18:18:16 2022   rana   Update   ASS   Model Changes

Although we've usually used this empirical way to run the alignment, I'd prefer if we had an analytic / numerical model for it.

Radhika, could you look into writing down the equations for how the various dithers show up in the various error signals in an Overleaf doc? I'm thinking about this currently for the IMC, so we can zoom about it next week once you've had a chance to think about it for a few days. It would be helpful to have this worked out for the 40m alignment for future debugging. Co-located dither and actuation is not likely the best way to do things from a noise and loop cross-coupling point of view.


  17185   Wed Oct 12 14:01:23 2022   Radhika   Update   ASS   Model Changes

[Anchal, Radhika]

We proceeded with the TODO items from [17166].

We tried to update the YARM ASS output matrix to appropriately feed the ETM and ITM T error signals (input beam pointing) back to actuate on PR2 and PR3. Using the existing matrix (used for actuating on TT1 and TT2) led to diverging error signals and big drops in transmission. We iteratively tried flipping signs on the matrix elements, but exhausting all combinations of parity was not efficient, since angular sign conventions could be arbitrary across optics.

We decided to go ahead with Yuta's suggestion of dithering PR2 and PR3 for input beam pointing, instead of ETMY and ITMY. This simplifies the output matrix greatly, since dithering and actuation are now applied to the same optics. Anchal made the necessary model changes. We tried a diagonal identity submatrix (for input pointing) to map each error signal to the corresponding DOF. With the length (L) control loops disengaged, this configuration decreased all T error signals and increased YARM transmission. We then re-engaged the L loops: the final result is that YARM transmission reached just below 1 [Attachment 1].

Attachment 1: YARM_ASS.png
  17184   Tue Oct 11 16:52:42 2022   Anchal   Update   IOO   Renaming WFS channels to match LIGO site conventions

[Tega, Anchal]

c1mcs and c1ioo models have been updated to add new acquisition of data.

IOO:WFS channels

We found from https://ldvw.ligo.caltech.edu/ldvw/view that the following channels with "WFS" in them are acquired at the sites:

  • :IOO-WFS1_IP
  • :IOO-WFS1_IY
  • :IOO-WFS2_IP
  • :IOO-WFS2_IY

These are most probably the error signals of WFS1 and WFS2. At the 40m, we have the following channel names instead:


And similar for Q outputs as well. We also have chosen quadrature signals (signals after sensing matrix) at:


We added the following testpoints and are acquiring them at 1024 Hz:

  • C1:IOO-WFS1_IP  (same as C1:IOO-WFS1_I_PIT_OUT)
  • C1:IOO-WFS1_IY  (same as C1:IOO-WFS1_I_YAW_OUT)
  • C1:IOO-WFS2_IP  (same as C1:IOO-WFS2_I_PIT_OUT)
  • C1:IOO-WFS2_IY  (same as C1:IOO-WFS2_I_YAW_OUT)


For the transmission QPD at MC2, we found the following acquired channels at the site:


We created testpoints in c1mcs.mdl to add these channel names and acquire them. The following channels are now available at 1024 Hz:


We started acquiring the following channels for the 6 error signals at 1024 Hz:


We started acquiring the following 6 control signals at 1024 Hz as well:


RXA: useful to know that you have to append "_DQ" to all of the channel names above if you want to find them with nds2-client.

Other changes:

In order to get C1:IOO-MC_TRANS_[DC/P/Y], we had to get rid of the same-named EPICS output channels in the model. These were being acquired at 16 Hz this way before. We then updated the medm screens and the autolocker config file. For slow outputs of these channels, we are using C1:IOO-MC_TRANS_[PIT/YAW/SUMFILT]_OUTPUT now. We had to restart the daqd service for the changes to take effect. This can be done by sshing into fb1 and running:

sudo systemctl restart rts-daqd

There is now a convenient button on the FE overview status MEDM screen to restart the daqd service with a single click.

  17183   Tue Oct 11 11:27:58 2022 KojiUpdateCDSCDS Upgrade remaining issues

Perhaps the 1PPS around the PSL was used to lock the Rb frequency standard to the GPS 1PPS.

If we need to drive multiple devices, we should use a fanout circuit to avoid distorting the 1PPS. -CW

  17182   Tue Oct 11 10:58:36 2022 ChrisUpdateCDSCDS Upgrade remaining issues

The original fb1 now boots from its new drive, which is installed in a fixed drive bay and connected via SATA. There are no spare SATA power cables inside the chassis, so we’re temporarily powering it from an external power supply borrowed from a USB to SATA kit (see attachment).

The easiest way to eliminate the external supply would be to use a 4-pin Molex to SATA adapter, since the chassis has plenty of 4-pin Molex connectors available. Unfortunately those adapters sometimes start fires. The ones with crimped connectors are thought to be safer than those with molded connectors. However, the safest course will probably be to just migrate the OS partition onto the 2 TB device from the hardware RAID array, once we’re happy with it.

Historic data from the past frames should now be available, as should NDS2 access.

Starting to look into the timing issue:

  • The GPS receiver (EndRun Tempus LX) seems to have low signal from its antenna: it’s sometimes losing lock with the satellites. We should find a way to log the signal strength and locked state of this receiver in EPICS and frames (perhaps using its SNMP interface).
  • There was a BNC tee attached to the 1PPS output of the receiver. One cable was feeding the timing system, and another one went somewhere into the PSL (if I traced it correctly?). I removed the tee and connected the timing system directly to 1PPS.
  • I updated the firmware on the Tempus LX (but don’t expect this to make any difference)
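On logging the receiver's state over SNMP: a hedged sketch of the parsing step is below. The OID and output line are placeholders (the real ones would come from the Tempus LX manual); only the handling of a typical net-snmp "OID = TYPE: value" output line is shown.

```python
# Illustrative only: extract the value field from a net-snmp style
# output line, e.g. as returned by `snmpget`. The OID is made up.
def parse_snmp_value(line):
    """Return the value portion of an 'OID = TYPE: value' line."""
    _, rhs = line.split("=", 1)          # drop the OID
    return rhs.split(":", 1)[1].strip()  # drop the type tag

sample = "SNMPv2-SMI::enterprises.12345.1.0 = INTEGER: 7"
strength = int(parse_snmp_value(sample))
```

The returned value could then be pushed into an EPICS record for trending in frames.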
Attachment 1: PXL_20221010_212415180.jpg
  17181   Mon Oct 10 10:14:05 2022 ChrisUpdateCDSCDS Upgrade remaining issues

List of remaining tasks to iron out various wrinkles with the upgrade:

The situation with the timing system is that we have not touched it at all in the upgrade, but have added a new diagnostic: Spectracom timing boards in each FE, to compare vs the ADC clock. So I expect that what we’re seeing now is not new, but may not have been noticed previously.

What we’re seeing is:

  • Timing status in the IOP statewords is red
  • The offset between FE timing and Spectracom is large: ~1000 µsec or greater. At the sites, this is typically much smaller, like 10 µsec
  • The offset is not stable (see first attachment). There are both excursions where it wanders off by tens of µsec and then comes back, as well as slips where it runs away by hundreds of µsec before settling down at a different level.
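A rough way to tell the two failure modes apart in a recorded offset trace might look like the following (synthetic data and an arbitrary threshold, purely illustrative):

```python
# "Excursion": the offset wanders off and comes back to the old level.
# "Slip": it runs away and settles at a different level.
# The 50 usec threshold is an arbitrary illustrative number.
def classify(offsets_us, jump=50.0):
    """Classify a timing-offset trace by comparing start/end levels."""
    start, end = offsets_us[0], offsets_us[-1]
    peak = max(abs(x - start) for x in offsets_us)
    if peak < jump:
        return "quiet"
    return "excursion" if abs(end - start) < jump else "slip"
```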

Possible issues stemming from unstable timing:

  • One of the timing excursions apparently glitched the ADC clock on c1ioo and c1sus2. Those IOPs are currently running with cycle time >15 µsec (see second attachment). This may mean the inputs on certain ADCs are delayed by an extra sample time, relative to the other ADCs
  • If the offset drifts too far, a discrepancy in the timestamps can glitch the IPCs and daqd
Attachment 1: timingslip.png
Attachment 2: adctimingglitch.png
  17180   Mon Oct 10 00:05:24 2022 ChrisUpdateIOOIMC WFS / MC2 SUS glitch

Thanks for pointing out that EPICS data collection (slow channels) was not working. I started the service that collects these channels (standalone_edc, running on c1sus), and pointed it to the channel list in /opt/rtcds/caltech/c1/chans/daq/C0EDCU.ini, so this should be working now.

controls@c1sus:~$ systemctl status rts-edc_c1sus
● rts-edc_c1sus.service - Advanced LIGO RTS stand-alone EPICS data concentrator
   Loaded: loaded (/etc/systemd/system/rts-edc_c1sus.service; enabled; vendor preset: enabled)
   Active: active (running) since Sun 2022-10-09 23:30:15 PDT; 10h ago
 Main PID: 32154 (standalone_edc)
   CGroup: /system.slice/rts-edc_c1sus.service
           ├─32154 /usr/bin/standalone_edc -i /etc/advligorts/edc.ini -l
           └─32159 caRepeater



The IMC and the IMC WFS kept running for ~2 days. 👍

I wanted to look at the trend of the IMC REFL DC, but dataviewer showed that the recorded values are zero. Also, this slow channel is missing from the channel list.

I checked the PSL PMC signals (slow) as an example, and many channels are missing in the channel list.

So something is not right with some part of the CDS.

Note that I also reported that the WFS plot in the previous elog referred to above shows values like 1e39



  17179   Sun Oct 9 13:49:49 2022 KojiUpdateIOOIMC WFS / MC2 SUS glitch

The IMC and the IMC WFS kept running for ~2 days. 👍

I wanted to look at the trend of the IMC REFL DC, but dataviewer showed that the recorded values are zero. Also, this slow channel is missing from the channel list.

I checked the PSL PMC signals (slow) as an example, and many channels are missing in the channel list.

So something is not right with some part of the CDS.

Note that I also reported that the WFS plot in the previous elog referred to above shows values like 1e39


Attachment 1: Screen_Shot_2022-10-09_at_13.49.12.png
  17178   Fri Oct 7 22:45:15 2022 AnchalUpdateCDSCDS Upgrade Status Update

[Chris, Anchal, JC, Paco, Yuta]



  1. Ensure a snapshot of all channels is present from Oct 6th on New Chiara.
  2. Shutdown all machines:
    1. All slow computers (Except c1vac).
      Computer List: ssh into the computers and run:
      sudo systemctl stop modbusIOC.service
      sudo shutdown -h now
      1. c1susaux
      2. c1susaux2
      3. c1auxex
      4. c1auxey1
      5. c1psl
      6. c1iscaux
    2. All fast computers. Run on rossa:
      Disconnect left ethernet cables from the back of these computers.
    3. Power off all I/O chassis
    4. Swap the oneStop cables on all I/O chassis to fiber cables. On c1sus, connect the copper oneStop cable to teststand c1sus FE.
    5. Turn on all I/O chassis.
  3. Exchange chiaras.
    1. Connect old chiara to teststand network.
    2. Connect New Chiara to martian network.
    3. Turn on both old and new chiara.
    4. Ensure all services are running on New Chiara by comparing with the list made earlier during preparation.

We finished all steps up to step 3 without any issue. We restarted all workstations to get the new NFS mount from New Chiara. Some other machines in the lab might require a restart too if they use NFS mounts. Note: c1sus was initially connected using a fiber oneStop cable that tested OK with the teststand I/O chassis, but it still did not work with the c1sus chassis, and was reverted to a copper cable.

[Chris, Anchal, JC]

  • fb1.
    1. Move fb1(clone)'s OS drive into existing fb1 (on 1X6)
    2. Turn on fb1 (on 1X6).
    3. Ensure fb1 is mounting all its file systems correctly.

While doing step 4, we realized that all 8 drive bays in the existing fb1 are occupied by disks that are managed by a hardware RAID controller (MegaRAID). All 8 hard disks seem to be combined into a single logical volume, which is then partitioned and appears to the OS as a 2 TB storage device (/dev/sda for OS) and 23.5 TB storage device (/dev/sdb for frames). There was no free drive bay to install our OS drive from fb1 (clone), nor was there any already installed drive that we could identify as an "OS drive" and swap out, without affecting access to the frame data. We tried to boot fb1 with the OS drive from fb1 (clone) using multiple SATA to USB cables, but it was not detected as a bootable drive. We then tried to put the OS drive back in fb1 (clone) and use the clone machine as the 40m framebuilder temporarily, in order to work on booting up fb1 in parallel with the rest of the upgrade. We found that fb1 (clone) would no longer boot from the drive either, as it had apparently lost (or never possessed?) its grub boot loader. The boot loader was reinstalled from the debian 10 install thumbdrive, and then fb1 (clone) booted up and functioned normally, allowing the remainder of the upgrade to go forward.

[Chris, Jamie]

Jamie investigated the situation with the existing fb1, and found that there seem to be additional drive bays inside the fb1 chassis (not accessible from the front panel), in which the new OS disk could be installed and connected to open SATA ports on the motherboard. We can try this possible route to booting up fb1 and restoring access to past frames next week.

[Chris, Anchal]



  • New FEs
    • Connect the network switch for new FEs to martian network. Make sure that old chiara is not connected to this same switch.
    • Turn on the new FEs. All models should start on boot in sequence.
    • Check if all models have green lights.
  • Burt restore using latest snapshot available.
  • Perform tests:
    • Check if all local damping loops are working as before.
    • Check if all IPC channels are transmitting and receiving correctly.
    • Check if IMC is able to lock.

We carried out the rest of the steps up to step 7.3. We started all slow machines; some of them required reloading the daemons using:

sudo systemctl daemon-reload
sudo systemctl restart modbusIOC

We found that we were unable to ssh to c1psl, c1susaux, and c1iscaux. It turned out that chiara (clone) had a very outdated martian host table for the nameserver. Since Chris had introduced some new IPs for IPMI ports, the dolphin switch, etc., we could not simply copy it back from the old chiara. So Chris used the diff command to go through all the changes and restored the DNS configuration.

We were able to burt restore to the Oct 7, 03:19 am snapshot (the latest available on New Chiara). All suspensions are being locally damped properly. We restarted megatron and optimus to get the NFS mounts. All docker services are running normally, the IMC autolocker is working, and the FF slow PID is working as well. The PMC autolocker is also working fine. megatron's autoburt cron job is running properly and resumed creating snapshots from 6:19 pm onwards.

Remaining things to do:

  • Test basic IFO locking
  • Resume BHD commissioning activities.
  • Chris and Jamie will work on transferring the framebuilder job to the real fb1. This would restore access to all past frames, which are not available right now.
  • Eventually, move the new FEs to 1X7 for permanent move into new CDS system.
  • After a few weeks of successful running, we can remove the old FEs and associated cables from the racks.
  17177   Fri Oct 7 20:00:46 2022 KojiUpdateIOOIMC WFS / MC2 SUS glitch

After the CDS upgrade team called for a day (their work TBD), I took over the locked IMC to check how it looked like.

The lock was robust but the IMC REFL spot and the WFS DC/MC2 QPD were moving too much.
I wondered if there was something wrong with the damping. I thought the MC3 damping seemed weak, but it was at a tolerable level.
During the ring-down check of MC2, I saw that the OSEM signals were glitchy. In the end I found it was the LR sensor which was glitchy and fluctuating.

I went into the lab and pushed the connectors on the euro card modules and the side connectors as well as the cables on the MC2 sat amp.
I came back to the control room and found that the MC2 LR OSEM signal had jumped, and it seems more stable now.

I leave the IMC locked & WFS running. This sus situation is not great at all and before we go too far, we'll work on the transition to the new electronics (but not today or next week).

By the way, the units of the signals in dataviewer didn't make sense; something seemed wrong with them.

Attachment 1: Screenshot_2022-10-07_19-59-45.png
  17176   Thu Oct 6 18:50:57 2022 AnchalSummaryBHDBH55 meas diff angle estimation and LO phase lock attempts

[Yuta, Paco, Anchal]

BH55 meas diff

We estimated the meas diff angle for BH55 today by following this elog post. We used Moku:Lab Moku01 to send a 55 MHz tone to the PD input port of the BH55 demodulation board. Then we looked at the I_ERR and Q_ERR signals. We balanced the gain on the I channel to 1.16 to get the two signals to the same peak-to-peak heights. Then we changed the meas diff angle to 91.97 to collapse the "bounding box" to zero. Our understanding is that we just want the ellipse to lie along the x-axis.
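As a sanity check of this procedure, here is a toy stdlib-only model (synthetic signals, not the real demod-board data) showing how the peak-to-peak gain balancing and the residual quadrature error fall out of the I/Q ellipse:

```python
from math import cos, sin, asin, pi, radians, degrees

# Synthetic demodulated tone: the I channel is relatively weak by the
# 1.16 factor found above, and the I/Q demod phase is off from 90 deg
# by an assumed 2 degrees (illustrative numbers only).
g = 1 / 1.16
eps = radians(2.0)
phis = [2 * pi * k / 1000 for k in range(1000)]
I = [g * cos(p) for p in phis]
Q = [sin(p + eps) for p in phis]

# Step 1: balance the gains on peak-to-peak heights, as in the entry.
gain = (max(Q) - min(Q)) / (max(I) - min(I))
Ib = [gain * x for x in I]

# Step 2: the residual quadrature error (what the meas diff angle tweak
# removes) shows up as cross-correlation between the two quadratures.
n = len(phis)
Sxy = sum(x * y for x, y in zip(Ib, Q)) / n
Sxx = sum(x * x for x in Ib) / n
Syy = sum(y * y for y in Q) / n
quad_err_deg = degrees(asin(Sxy / (Sxx * Syy) ** 0.5))
```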

We also aligned the beam input to BH55 a bit better, using the single-bounce beam from the aligned ITMY as the reference.

LO phase lock with single RF demodulation

We attempted to lock the LO phase using just the BH55 demodulated output.


  • ITMX, ETMs were significantly misaligned.
  • At the BH port, the overlapping beams are the single-bounce reflection from ITMY and the LO beam.

We expected that we would be able to lock to 90 degrees LO phase just like in DC locking. But now we understand that we can't beat the light against its own phase-modulated sidebands.

The confusion arose because this would work with the Michelson: at the Michelson dark port, amplitude modulation is generated at 55 MHz. We tried to do the same thing as was done for DC locking with single bounce and then Michelson, but we should have seen this beforehand. Lesson: always write down the expectation before attempting the lock.
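The lesson can be checked numerically: a purely phase-modulated field has constant power, so demodulating the photodiode signal at the modulation frequency gives zero, while genuine amplitude modulation does not. A minimal stdlib-only demonstration (arbitrary units, not a model of the actual optical layout):

```python
from cmath import exp
from math import cos, pi

m, Omega, N = 0.1, 2 * pi, 4096   # modulation depth, frequency, samples
ts = [k / N for k in range(N)]     # one modulation period

def demod_I(field):
    """I-phase demodulation of the photodiode power at Omega."""
    P = [abs(field(t)) ** 2 for t in ts]
    return 2 * sum(p * cos(Omega * t) for p, t in zip(P, ts)) / N

pm = lambda t: exp(1j * m * cos(Omega * t))   # pure PM: |E|^2 is constant
am = lambda t: 1 + m * cos(Omega * t)         # AM for comparison
```

`demod_I(pm)` is zero to numerical precision, while `demod_I(am)` recovers the modulation (2m).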


  17175   Thu Oct 6 12:02:21 2022 AnchalUpdateCDSCDS Upgrade Plan

[Chris, Anchal]

Chris and I discussed our plan for the CDS upgrade, which amounts to moving the new FEs, New Chiara, and the new fb1 OS system to the martian network.


  • Chiara (clone) (will be called "New Chiara" henceforth) will be resynced to existing chiara to get all model and medm changes.
  • All models on New Chiara will be rebuilt, and reinstalled.
  • All running services on the existing chiara will be printed and stored for comparison later.
  • New Chiara's OS drive will be upgraded to Debian 10 and all services will be restored:
    • DHCP
    • DNS
    • NFS
    • rsync
  • The existing fb1 DAQ network card (10 Gbps Ethernet card) will be verified.
  • Make a list of all fb1 file system mounts and their UUIDs.

Upgrade plan:

Date: Fri Oct 7, 2022
Time: 11:00 am (After coffee and donuts)
Minimum required people: Chris, Anchal, JC (the more the merrier)


  1. Ensure a snapshot of all channels is present from Oct 6th on New Chiara.
  2. Shutdown all machines:
    1. All slow computers (Except c1vac).
      Computer List: ssh into the computers and run:
      sudo systemctl stop modbusIOC.service
      sudo shutdown -h now
      1. c1susaux
      2. c1susaux2
      3. c1auxex
      4. c1auxey1
      5. c1psl
      6. c1iscaux
    2. All fast computers. Run on rossa:
      Disconnect left ethernet cables from the back of these computers.
    3. Power off all I/O chassis
    4. Swap the oneStop cables on all I/O chassis to fiber cables. On c1sus, connect the copper oneStop cable to teststand c1sus FE.
    5. Turn on all I/O chassis.
  3. Exchange chiaras.
    1. Connect old chiara to teststand network.
    2. Connect New Chiara to martian network.
    3. Turn on both old and new chiara.
    4. Ensure all services are running on New Chiara by comparing with the list made earlier during preparation.
  4. fb1.
    1. Move fb1(clone)'s OS drive into existing fb1 (on 1X6)
    2. Turn on fb1 (on 1X6).
    3. Ensure fb1 is mounting all its file systems correctly.
  5. New FEs
    1. Connect the network switch for new FEs to martian network. Make sure that old chiara is not connected to this same switch.
    2. Turn on the new FEs. All models should start on boot in sequence.
    3. Check if all models have green lights.
  6. Burt restore using latest snapshot available.
  7. Perform tests:
    1. Check if all local damping loops are working as before.
    2. Check if all IPC channels are transmitting and receiving correctly.
    3. Check if IMC is able to lock.
    4. Try single arm locking
    5. Try MICH locking.
  8. Make contingency plan on how to revert to old system if something fails.
  17174   Thu Oct 6 11:12:14 2022 AnchalUpdateBHDBH55 RFPD installation complete

[Yuta, Paco, Anchal]

BH55 RFPD installation was still not complete until yesterday because of a peculiar issue: as soon as we increased the whitening gain on this photodiode, we saw spikes coming in at around 10 Hz. The following events took place while debugging this issue:

  • We first thought that RFPD might be bad as we had just picked it up from what we call the graveyard table.
  • Paco fixed the bad connection issue at the RF out, and we confirmed the RFPD transimpedance by testing it. See 40m/17159.
  • We tried changing the whitening filter board but that did not help.
  • We used BH55 RFPD to lock MICH by routing the demodulation board outputs to AS55 channels on WF2 board. We were able to lock MICH and increase whitening gain without the presence of any spikes. This ruled out any issue with RFPD.
  • Yuta and I tried swapping the whitening filter board but the problem persisted, which made us realize that the issue could be in the acromag that is writing the whitening gain for BH55 RFPD.
  • We combed through the /cvs/cds/caltech/target/c1iscaux/C1_ISC-AUX_LSCPDs.db file to check if the whitening gain DAC channels are written twice, but that was not the case. However, changing the scan rate of the whitening gain output channel did change the rate at which the spikes were coming.
  • This proved that some other process is constantly writing zero on these outputs.
  • It turned out that all unused acromag channels for c1iscaux are still defined and made to write 0 through the /cvs/cds/caltech/target/c1iscaux/C1_ISC-AUX_SPARE.db file. I don't think we need this spare file. If someone wants to use spare channels, they can quickly add them to the db file and restart the modbusIOC service on c1iscaux; it takes less than 2 minutes. I vote to completely get rid of this file, or at least not use it in the cmd file.
  • After removing the violating channels, the problem with BH55 RFPD is resolved.

The installation of BH55 RFPD is complete now.


  17173   Thu Oct 6 07:29:30 2022 ChrisUpdateComputersSuccessful takeover attempt with the new front ends

[JC, Chris]

Last night’s CDS upgrade attempt succeeded in taking over the IFO. If the IFO users are willing, let’s try to run with it today.

The new system was left in control of the IFO hardware overnight, to check its stability. All looks OK so far.

The next step will be to connect the new FEs, fb1, and chiara to the martian network, so they’re directly accessible from the control room workstations (currently the system remains quarantined on the teststand network). We’ll also need to bring over the changes to models, scripts, etc that have been made since Tega’s last sync of the filesystem on chiara.

The previous elog noted a mysterious broken state of the OneStop link between FE and IO chassis, where all green LEDs light up on the OneStop board in the IO chassis, except the four next to the fiber link connector. This was seen on c1sus and c1iscex. It was recoverable last night on c1iscex, by fully powering down both FE computer and chassis, waiting a bit, and then powering up chassis and computer again. Currently c1sus is running with a copper OneStop cable because of the fiber link troubles we had, but this procedure should be tried to see if one of the fiber links can be made to work after all.

In order to string the short copper OneStop cable for c1sus, we had to move the teststand rack closer to the IO chassis, up against the back of 1X6/1X7. This is a temporary state while we prepare to move the FEs to their new rack. It hopefully also allows sufficient clearance to the exit door to pass the upcoming fire inspection.

At first, we connected the teststand rack’s power cables to the receptacle in 1X7, but this eventually tripped 1X7’s circuit breaker in the wall panel. Now, half of the teststand rack is on the receptacle in 1X6, and the other half is on 1X7 (these are separate circuits).

After the breaker trip, daqd couldn’t start. It turned out that no data was flowing to it, because the power cycle caused the DAQ network switch to forget a setting I had applied to enable jumbo frames on the network. The configuration has now been saved so that it should apply automatically on future restarts. For future reference, the web interface of this switch is available by running firefox on fb1 and navigating to

When the FE machines are restarted, a GPS timing offset in /sys/kernel/gpstime/offset sometimes fails to initialize. It shows up as an incorrect GPS time in /proc/gps and on the GDS_TP MEDM screens, and prevents the data from getting timestamped properly for the DAQ. This needs to be looked at and fixed soon. In the meantime, it can be worked around by setting the offset manually: look at the value on one of the FEs that got it right, and apply it using sudo sh -c "echo CORRECT_OFFSET >/sys/kernel/gpstime/offset".
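A sketch of how this workaround could be semi-automated: gather the reported offsets from each FE (by whatever means), take the majority value as correct, and emit the fix commands for the machines that disagree. The hostnames, offset value, and `fix_commands` helper below are all made up for illustration:

```python
from collections import Counter

def fix_commands(offsets):
    """offsets: dict of host -> reported /sys/kernel/gpstime/offset value.
    Returns the shell commands needed to fix the outliers."""
    correct, _ = Counter(offsets.values()).most_common(1)[0]
    return [
        f'ssh {host} sudo sh -c "echo {correct} >/sys/kernel/gpstime/offset"'
        for host, val in sorted(offsets.items()) if val != correct
    ]

# Made-up example: two FEs agree, one failed to initialize.
cmds = fix_commands({"c1lsc": 315964782, "c1sus": 315964782, "c1ioo": 0})
```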

In the first ~30 minutes after the system came up last night, there were transient IPC errors, caused by drifting timestamps while the GPS cards in the FEs got themselves resynced to the satellites. Since then, timing has remained stable, and no further errors occurred overnight. However, the timing status is still reported as red in the IOP state vectors. This doesn’t seem to be an operational problem and perhaps can be ignored, but we should check it out later to make sure.

Also, the DAC cards in c1ioo and c1iscey reported FIFO EMPTY errors, triggering their DACKILL watchdogs. This situation may have existed in the old system and gone undetected. To bypass the watchdog, I’ve added the optimizeIO=1 flag to the IOP models on those systems, which makes them skip the empty FIFO check. This too should be further investigated when we get a chance.

  17172   Tue Oct 4 21:00:49 2022 ChrisUpdateComputersFailed takeover attempt with the new front ends

[Jamie, JC, Chris]

Today we made a failed attempt to take over the 40m hardware with the new front ends on the test stand.

As an initial test, we connected the new c1iscey to its I/O chassis using the OneStop fiber link. This went smoothly, so we tried to proceed with the rest of the system, which uncovered several problems. Consequently, we’ve reverted control back to the old front ends tonight, and will regroup and make another attempt tomorrow.

Status summary:

  • c1iscey worked on the first try
  • c1lsc worked, after we sorted out which of the two OneStop cables run to its rack we needed to use
  • c1sus2 sort of worked (its models have been crashing sporadically)
  • c1ioo had a busted OneStop cable, and worked after that was replaced
  • c1sus refused to work with the fiber OneStop cables (we tried several, including the known working one from c1ioo), but we jury-rigged it to run over a copper cable, after nudging the teststand rack a bit closer to the chassis
  • c1iscex refused to work with the fiber OneStop cables, and substituting copper was not an option, so we were stuck

There are various pathologies that we've seen with the OneStop interface cards in the I/O chassis. We don't seem to have the documentation for these cards, but our interpretive guesses are as follows:

  • When working, it is supposed to illuminate all the green LEDs along the top of the card, and the four next to the connector. In this state, you can run lspci -vt on the host, and see the various PLX/Contec/etc devices that populate the chassis.
  • When the cable is unplugged or bad, only four green LEDs illuminate on the card, and none by the connector. No devices from the chassis can be seen from the host.
  • On c1iscex and c1sus, when a fiber link is plugged in, it turns on all the LEDs along the top of the card, but the four next to the connector remain dark. We’re not sure yet what this is trying to tell us, but lspci finds no devices from the chassis, same as if it is unplugged.
  • Also, sometimes on c1iscex, no LEDs would illuminate at all (possibly the card was not seated properly).

Tomorrow, we plan to swap out the c1iscex I/O chassis for the one in the test stand, and see if that lets us get the full system up and running.

  17171   Mon Oct 3 15:19:05 2022 PacoUpdateBHDLO phase noise and control after violin mode filters

[Anchal, Paco]

We started the day by taking a spectrum of C1:HPC-LO_PHASE_IN1, the BHD error point, and confirming the absence of 268 Hz peaks believed to be violin modes on LO1. We then locked the LO phase by actuating on LO2 and AS1. We couldn't get a stable loop with AS4 this morning. In all of these trials we looked to see if the noise increased at 268 Hz or its harmonics, but luckily it didn't. We then added the necessary output filters to avoid exciting these violin modes. The added filters are in the C1:SUS-LO1_LSC bank, slots FM1-3, and comprise bandstop filters at the first, second, and third harmonics observed previously (268, 536, and 1072 Hz); Bode plots of the foton transfer functions are shown in Attachment #1. We made sure we weren't adding too much phase lag near the UGF (~1 degree @ 30 Hz).
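The quoted phase-lag number can be sanity-checked with ideal analog notches at the three harmonics. Q = 10 is a guess, and the real foton filters differ in detail, so this is only a back-of-envelope estimate:

```python
from cmath import phase
from math import pi, degrees

def notch(f, f0, Q=10.0):
    """Ideal bandstop H(s) = (s^2 + w0^2) / (s^2 + (w0/Q)s + w0^2),
    evaluated at frequency f (Hz) for a notch centered at f0 (Hz)."""
    s, w0 = 2j * pi * f, 2 * pi * f0
    return (s * s + w0 * w0) / (s * s + (w0 / Q) * s + w0 * w0)

# Cascade the three notches and evaluate at the ~30 Hz UGF.
H = 1.0
for f0 in (268.0, 536.0, 1072.0):
    H *= notch(30.0, f0)
lag_deg = -degrees(phase(H))   # ~1 degree of phase lag at 30 Hz
```

With these assumptions the cascade costs about a degree at 30 Hz and essentially no gain, consistent with the measurement.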

We repeated the LO phase noise measurement by actuating on LO1, LO2 and AS1, and observe no noise peaks related to 268 Hz this time. The calibrated spectra are in Attachment #2. Now the spectra look very similar to one another, which is nice. The rms is still better when actuating with AS1.


After the above work ended, I tried enabling FM1-3 on the C1:HPC-LO_PHASE control filters. These filters boost the gain to suppress noise at low frequencies. I carefully enabled them when actuating on LO1, and managed to suppress the noise by another factor of 20 below the UGF of ~30 Hz. Attachment #3 shows a screenshot of the uncalibrated noise spectra for (1) unsuppressed (black, dashed), (2) suppressed with FM4-5 (blue, solid), and (3) boosted FM1-5 suppression (red).

Next steps:

  • Compare LO-ITMY and LO-ITMX single bounce noise spectra and MICH.
  • Compare DC locking scheme versus BH55 once it's working.
Attachment 1: filters_c1sus_lo1_lsc.png
Attachment 2: BHDC_asd_act.png
Attachment 3: boosted_lo_phase_control.png
  17170   Mon Oct 3 13:11:22 2022 YehonathanUpdateBHDSome comparison of LO phase lock schemes

I pushed a notebook and a Finesse model for comparing different LO phase locking schemes. The notebook is at https://git.ligo.org/40m/bhd/-/blob/master/controls/compare_LO_phase_locking_schemes.ipynb

Here's a description of the Finesse modeling:

I use the 40m kat model https://git.ligo.org/40m/bhd/-/blob/master/finesse/C1_w_initial_BHD_with_BHD55.kat derived from the usual 40m kat file. There I added EOMs (in the spaces between the BS and ITMs, and in front of LO2) to simulate audio dithering. A PD was added at a 5% pickoff from one of the BHD ports to simulate the RFPD recently installed on the ITMY table.

First I find the nominal LO phase by shaking MICH and maximizing the BHD response as a function of the LO phase (attachment 1).

Then, I run another simulation where I shake the LO phase at some arbitrary frequency and measure the response at different demodulation schemes at the RFPD and at the BHD readout.

The optimal responses are found by using the 'max' keyword instead of specifying the demodulation phase. This uses the demodulation phase that maximizes the signal. For example to extract the signal in the 2 RF sideband scheme I use:

pd3 BHD55_2RF_SB $f1 max $f2 max $fs max nPickoffPDs

I plot these responses as a function of LO phase relative to the nominal phase divided by 90 degrees (attachment 2). The schemes are:

1. 2 RF sidebands where 11MHz and 55MHz on the LO and AS ports are used.

2. Single RF sideband (11/55 MHz) together with the LO carrier. As expected, this scheme is useful only when trying to detect the amplitude quadrature.

3. Audio dithering MICH and using it together with one of the LO RF sidebands. The actuation strength is chosen by taking the BS actuation TF of 1e-11 m/cts*(50/f)**2 and using 10000 cts, giving an amplitude of 3 nm for the ITMs.

For LO actuation I can use 13 times more actuation strength, because its coil drivers' output current is 13 times more than the old ones.

4. Double audio dithering of LO2+MICH detecting it directly at the BHD readout (attachment 3).
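The 3 nm amplitude quoted in item 3 can be reproduced with a quick calculation, assuming the ~280.54 Hz audio dither frequency used elsewhere in this log (`dither_amp_m` is just an illustrative helper, not an existing script):

```python
# BS/ITM actuation transfer function from item 3:
# 1e-11 m/ct with a (50 Hz / f)^2 pendulum rolloff above 50 Hz.
def dither_amp_m(counts, f_hz):
    """Dither amplitude in meters for a given drive in counts at f_hz."""
    return 1e-11 * counts * (50.0 / f_hz) ** 2

amp = dither_amp_m(10000, 280.54)   # ~3.2e-9 m, i.e. the quoted ~3 nm
lo_amp = 13 * amp                   # LO coil drivers supply 13x the current
```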

Without noise considerations, it seems like double audio dithering is by far the best option and audio+RF is the next best thing.

The next thing to do is to make some noise models in order to make the comparison more concrete.

This noise model will include input noises, residual MICH motion, and laser noise. Displacement noise will not be included, since it is the thing we want to detect.

Attachment 1: MICH_sens_vs_LO_phase.pdf
Attachment 2: LO_phase_sens_vs_LO_phase_RF.pdf
Attachment 3: LO_phase_sens_vs_LO_phase_double_audio.pdf
  17169   Mon Oct 3 08:35:59 2022 TegaUpdateIMCAdding IMC channels to frames for NN test


For the upcoming NN test on the IMC, we need to add some more channels to the frames. Can someone please add the MC2 TRANS SUM, PIT, YAW at 256 Hz, and then make sure they're in frames?

and even though it's not working correctly, it would be good if someone can turn the MC WFS on for a little while. I'd just like to get some data to test some code. If it's easy to roughly close the loops, that would be helpful too.


Currently, none of these channels are being written to frames. From the Simulink model, it seems the channels:




are supposed to be DQed, but are not present in the /opt/rtcds/caltech/c1/chans/daq/C1MCS.ini file. I tried simply adding these channels to the file and rerunning the daqd_ services, but that caused a 0x2000 error on the c1mcs model. In my attempt, I did not know what chnnum to give for these channels, so I omitted it; maybe that is the issue.

The only way I know to fix this is to make and install the c1mcs model again, which would bring these channels into the C1MCS.ini file. But we'll have to run activateDQ.py if we do that, and I am not totally sure it is in running condition right now. @Christopher Wipf do you have any suggestions?


aren't they all filtered? If so, perhaps we can choose whatever is the equivalent naming at the LIGO sites rather than roll our own again.

@Tega Edo can we run activateDQ.py or will that break everything now?


@Rana Adhikari Looking into this now.

@Anchal Gupta The only problem I see with activateDQ.py is the use of the deprecated print function, i.e. print var instead of print(var). After fixing that, it runs OK and does not change the input INI files as they already have the required channel names. I have created a temporary folder, /opt/rtcds/caltech/c1/chans/daq/activateDQtests/, which is populated with copies of the original INI files, a modified version of activateDQ.py that does not overwrite the original input files, and a script file difftest.sh that compares the input and output files so we can test the functionality of activateDQ.py in isolation. Furthermore, looking through the code suggests that all is well. Can you look at what I have done to check that this is indeed the case? If so, your suggestion of rebuilding and installing the updated c1mcs model and running activateDQ.py afterward should do the trick.

I tested the code with:

cd /opt/rtcds/caltech/c1/chans/daq/activateDQtests/


which creates output files with an _ prefix, for example _C1MCS.ini is the output file for C1MCS.ini, then I ran


to compare all the input and corresponding output files.

Note that the channel names you are proposing would change after running activateDQ.py, i.e.




My question is this: why aren't we using the correct channel names in the first place so that we have less work to do later on when we finally decide to stop using this postbuild script?


Yeah, I found that these ERR channels are acquired and stored. I don't think we should do this either. Not sure what the original motivation for this change was. I tried commenting out this part of activateDQ.py and remaking and reinstalling c1mcs, but it seems that activateDQ.py is called automatically as a postbuild script on install, and it uses some other copy of the file, as my changes did not take effect and the DQ name change still happened.


Ah, we encountered the same puzzle as well. Chris found out that our models have `SCRIPT=activateDQ.py` embedded in the cds parameter block description, see attached image. We believe this is what triggers the postbuild call to `activateDQ.py`. As for the file location, modern rtcds would have it in /opt/rtcds/caltech/c1/post_build, but I am not sure where ours is located. A quick search did not turn it up in the usual place, so I looked around for a bit and found this:

controls@rossa> find /opt/rtcds/userapps/ -name "activateDQ.py"







My guess is the last one /opt/rtcds/userapps/trunk/cds/c1/scripts/activateDQ.py.

Maybe we can ask @Yuta Michimura since he wrote this script?

Anyway, we could also try removing SCRIPT=activateDQ.py from the cds parameter block description to see if that stops the postbuild call, but keep in mind that doing so would also stop the update of the OSEM and oplev channel names. We would then know exactly which script is being used, since we would have to run it by hand after every install (which is a bad idea).

controls@c1sus:~ 0$ env | grep script


It looks like the guess was correct. Note that in the newer version of rtcds, we can use `rtcds env` instead of `env` to see what is going on.

Attachment 1: Screen_Shot_2022-09-30_at_9.52.39_AM.png
  17168   Sat Oct 1 13:09:49 2022 AnchalUpdateIMCWFS turned on

I turned on WFS on IMC at:

PDT: 2022-10-01 13:09:18.378404 PDT
UTC: 2022-10-01 20:09:18.378404 UTC
GPS: 1348690176.378404
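
The UTC-to-GPS conversion above can be checked with a few lines of Python (a sketch only, assuming the GPS-UTC leap-second offset of 18 s, valid since 2017, and ignoring any leap second inside the interval):

```python
from datetime import datetime, timezone

GPS_EPOCH = datetime(1980, 1, 6, tzinfo=timezone.utc)
LEAP_SECONDS = 18  # GPS-UTC offset, valid from 2017 onward

def utc_to_gps(utc):
    """Convert a timezone-aware UTC datetime to GPS seconds."""
    return (utc - GPS_EPOCH).total_seconds() + LEAP_SECONDS

# The WFS turn-on time quoted above:
t_on = datetime(2022, 10, 1, 20, 9, 18, 378404, tzinfo=timezone.utc)
print(utc_to_gps(t_on))  # -> 1348690176.378404
```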

The following channels are being saved in frames at 1024 Hz rate:


We can keep it running over the weekend since we will not use the interferometer. I'll keep an eye on it with occasional log-ins. We'll post the time when we switch it off again.

The IMC lost lock at:

UTC    Oct 03, 2022    01:04:16    UTC
Central    Oct 02, 2022    20:04:16    CDT
Pacific    Oct 02, 2022    18:04:16    PDT

GPS Time = 1348794274

The WFS loops kept running and thus drove the IMC to a misaligned state. Between the above two times, the IMC was locked almost continuously, with only very brief lock loss events, and had all WFS loops running.

  17167   Fri Sep 30 20:18:55 2022 PacoUpdateBHDLO phase noise with different actuation points

[Paco, Koji]

We took LO phase noise spectra actuating on four different optics: LO1, LO2, AS1, and AS4. The servo was not changed during this time (gain of 0.2), and we also took a noise spectrum without any light on the DCPDs. The plot in Attachment #1 is calibrated in rad/rtHz and shown along with the rms values for the different suspension actuation points. From this measurement AS1 appears to be the best, and all the optics show the same 270 Hz (actually 268 Hz) resonant peak.
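
The rms curves shown in such plots come from integrating the calibrated ASD, accumulated from the high-frequency end down; a minimal sketch (the white-noise example values are made up for illustration):

```python
import numpy as np

def cumulative_rms(freq, asd):
    """Cumulative rms of an amplitude spectral density, integrated from the
    high-frequency end down, so rms[0] is the total band-limited rms."""
    psd = np.asarray(asd) ** 2
    df = np.gradient(freq)           # bin widths for the integral
    return np.sqrt(np.cumsum((psd * df)[::-1])[::-1])

# Example: white noise at 1e-6 rad/rtHz over a ~1 kHz band -> rms ~ 3.2e-5 rad
f = np.linspace(1, 1000, 1000)
rms = cumulative_rms(f, 1e-6 * np.ones_like(f))
print(rms[0])
```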

268 Hz noise investigation

Koji suspected the observed noise peak belongs to some servo oscillation, perhaps of mechanical origin, so we first monitored its amplitude in an exponentially averaging spectrum. The noise didn't really seem to change, so we decided to try adding a bandstop filter around 268 Hz. After the filter was added in FM6, we turned it on and monitored the peak height as it began to fall slowly. We measured the half-decay time to be 264 seconds, which implies an oscillation with Q = 4.53 * f0 * tau ~ 3.2e5. This may or may not be mechanical and further investigation is needed, but if it is mechanical it might explain why the peak persisted in Attachment #1 even when we changed the actuation point. In any case, we saw the peak drop ~ 20 dB after more than half an hour. After a while, we noticed that the 536 Hz peak, its second harmonic, was persisting, and even the third harmonic was visible.

So this may be LO1 violin mode & friends -

We should try to repeat this measurement after the oscillation has stopped, perhaps looking at the spectra before we close the LO_PHASE control loop, then closing it carefully with our violin output filter on, and moving on to other optics to see if they also show this noise.
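
The quoted Q follows from the amplitude half-decay time: a ring-down decays as exp(-pi * f0 * t / Q), so Q = pi * f0 * tau_half / ln(2) ~ 4.53 * f0 * tau_half. A quick check with the measured numbers:

```python
import numpy as np

f0 = 268.0        # resonant frequency [Hz]
tau_half = 264.0  # measured amplitude half-decay time [s]

# amplitude ~ exp(-pi * f0 * t / Q); half decay when pi*f0*tau/Q = ln(2)
Q = np.pi * f0 * tau_half / np.log(2)
print(f"Q = {Q:.2e}")  # -> Q = 3.21e+05
```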

Attachment 1: BHDC_asd_actuation_point.png
  17166   Fri Sep 30 18:30:12 2022 AnchalUpdateASSModel Changes

I updated the c1ass model today to use PR2 and PR3 instead of TT1 and TT2 for YARM ASS. This required changes in c1su2 too: I have split c1su2 into c1su2 (LO1, LO2, AS1, AS4) and c1su3 (SR2, PR2, PR3). The two models now use 31 and 21 of 60 CPU, where c1su2 was earlier at 55/60. All changes compiled correctly and have been uploaded. The models have been restarted and the medm screens have been updated.

Model changes


  • Everything related to SR2, PR2, and PR3 has been moved to c1su3.
  • Extra binary output channels are also distributed between c1su2 and c1su3. BO_4 and BO_5 have been moved to c1su3.


  • Added IPC receiving from ASS for PR2 and PR3


  • Inputs to TT1 and TT2 PIT and YAW filter modules have been terminated to ground.
  • The ASS outputs for YARM have been renamed to PR2 and PR3 from TT1 and TT2.
  • IPC sending blocks added to send PR2 and PR3 ASC signals to c1su3.


To do:

  • Update the YARM ASS output matrix to handle the change in coil driver actuation strength of PR2 and PR3 compared to TT1 and TT2.
  • Yuta suggested dithering PR2 and PR3 to control input beam pointing for YARM alignment.
  17165   Thu Sep 29 18:01:14 2022 AnchalUpdateBHDBH55 LSC Model Updates - part IV

More model changes


  • BH55_I and BH55_Q are now read at ADC_0_14 and ADC_0_15; ADC_0_20 and ADC_0_21 are bad due to a faulty whitening filter board.
  • The whitening switch controls were also shifted accordingly.
  • The slow EPICS channels for the BH55 anti-aliasing switch and whitening switch were added in /cvs/cds/caltech/target/c1iscaux/C1_ISC-AUX_LSCPDs.db


  • MC1, MC2, and MC3 are running on new suspension models now.


  • DCPD_A and DCPD_B have been renamed to BHDC_A and BHDC_B following naming convention at other ports.
  • After the input summing matrix, the signals are called BHDC_SUM and BHDC_DIFF now.
  • BHDC_SUM and BHDC_DIFF can be used directly in the sensing matrix, bypassing the dither demodulation (to be used for DC locking).
  • BH55_I and BH55_Q are also sent for dither demodulation now (to be used in double dither method, RF and audio).
  • SHMEM channel names to c1bac were changed.


  • Conformed with new SHMEM channel names from c1hpc
  17164   Thu Sep 29 15:12:02 2022 JCUpdateComputersSetup the 6 new front-ends to boot off the FB1 clone

[Jamie, Christopher, JC]

This morning we decided to label the fiber optic cables. While doing this, we noticed that the ends had different labels, 'Host' and 'Target'. As it turns out, the fiber optic cables are directional, and four of the six cables were reversed. Luckily, the 1Y3 IO chassis has a spare cable already laid (the cable we are currently using). Chris, Jamie, and I have begun reversing these cables to their correct orientation.


[Tega, JC]

We laid 4 of the 6 fiber cables today. The remaining 2 cables are for the I/O chassis at the vertex, so we will test and lay those cables tomorrow. We were also able to diagnose the 2 supposedly faulty cables, which turned out not to be faulty: one had a small bend in the connector that I was able to straighten out with a small plier, and the other was a loose connection at the switch. So there was no faulty cable, which is great! Chris wrote a matlab script that migrates all the model files. I am going through them, i.e. looking at the CDS parameter blocks to check that all is well. The next task is to build and install the updated models. We also need to update the /opt/rtcds and /opt/rtapps directories to the latest in the 40m chiara system.


