ID | Date | Author | Type | Category | Subject
  16256 | Sun Jul 25 20:41:47 2021 | rana | Update | Loss Measurement | Loss measurement

What are the quantitative root causes for why the statistical uncertainty is so large? It's larger than 1/sqrt(N).

  16255 | Sun Jul 25 18:21:10 2021 | Koji | Update | General | Canon camera / small silver tripod / macro zoom lens / LED ring light returned / Electronics borrowed

Camera and accessories returned

One HAM-A coildriver and one sat amp borrowed -> QIL

https://nodus.ligo.caltech.edu:8081/QIL/2616

 

  16254 | Thu Jul 22 16:06:10 2021 | Paco | Update | Loss Measurement | Loss measurement

[yehonathan, anchal, paco, gautam]

We finished estimating the XARM and YARM losses. The hardware configuration from yesterday remains, but we repeated the measurements because we realized our REFL55_I_ERR and REFL55_Q_ERR signals, representing the PD520 and MC_TRANS, were scaled, offset, and rotated in a way that wasn't trivially undone by our postprocessing scripts... Another caveat we encountered today was the need to add a "macroscopic" misalignment to the ITMs when doing the measurement, to avoid any accidental resonances.

The final measurements were done with 16 repetitions of 30 second duration each; the logfiles are under scripts/lossmap_scripts/armLoss/logs/20210722_1423.txt and scripts/lossmap_scripts/armLoss/logs/20210722_1513.txt

Finally, the estimated YARM loss is 39 ± 7 ppm, while the estimated XARM loss is 38 ± 8 ppm. This is consistent with the inferred PRC gain from Monday and a PRM loss of ~2%.


Future measurements may want to look into slow drift of the locked vs misaligned traces (systematic errors?) and a better way of estimating the statistical uncertainty, e.g. by splitting the raw time traces into short segments, as sketched below.
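As a minimal sketch of that segmenting idea (the array name, sampling rate, and segment length below are placeholders, not the actual script interface):

import numpy as np

def segmented_mean_and_error(ratio_ts, fs=16384, seg_len=5.0):
    # Split the (locked/misaligned) power-ratio time series into seg_len-second
    # segments, average each segment, and return the mean of the segment means
    # together with their standard error.
    n = int(seg_len * fs)                     # samples per segment
    nseg = len(ratio_ts) // n
    seg_means = ratio_ts[:nseg * n].reshape(nseg, n).mean(axis=1)
    return seg_means.mean(), seg_means.std(ddof=1) / np.sqrt(nseg)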

  16253 | Wed Jul 21 18:08:35 2021 | yehonathan | Update | Loss Measurement | Loss measurement

{Gautam, Yehonathan, Anchal, Paco}

We prepared for the loss measurement using DC reflection method. We did the following changes:

1. REFL55_Q was disconnected and replaced with MC_T cable coming from the PD on the MC2 table. The cable has a red tag on it. Consequently we lost the AS beam. We realigned the optics and regained arm locks. The spot on the AS QPD had to be corrected.

2. We tried using AS55 as the PD for the DC measurement but we got ratios of ~ 0.97 which implies losses of more than 100 ppm. We decided to go with the traditional PD520 used for these measurements in the past.

3. We placed the PD520 used for loss measurements in front of the AS55 PD and optimized its position.

4. AS110 cable was disconnected from the PD and connected to PD520 to be used as the loss measurement cable.

5. In 1Y2 rack, AS110 PD cable was disconnected, REFL55_I was disconnected and AS110 cable was connected to REFL55_I channel.

So for the test, the MC transmission was measured at REFL55_Q and the AS DC was measured at REFL55_I.

We used the scripts/lossmap_scripts/armLoss/measArmLoss.py script. Note that this script assumes that you begin with the arm locked.

We are leaving the IFO in the configuration described above overnight and we plan to measure the XARM loss early AM. After which we shall restore the affected electrical and optical paths.


We ran the /scripts/lossmap_scripts/armLoss/measureArmLoss.py script on pianosa with 25 repetitions and a 30 s "duty cycle" (wait time) for the Y arm. Preliminary results give an estimated individual arm loss of ~ 30 ppm (on both X/Y arms), but we will provide a better estimate with this measurement.
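For reference, a rough sketch of how the DC-reflection ratio can be formed from the recorded channels; normalizing the AS DC level by the simultaneous MC transmission (to remove input power fluctuations) is an assumption here, not necessarily what the script does:

import numpy as np

def dc_reflection_ratio(as_dc_locked, mc_trans_locked,
                        as_dc_misaligned, mc_trans_misaligned):
    # Reflected DC power in the locked state over the ETM-misaligned state,
    # each normalized by the MC transmission recorded at the same time.
    locked = np.mean(as_dc_locked) / np.mean(mc_trans_locked)
    misaligned = np.mean(as_dc_misaligned) / np.mean(mc_trans_misaligned)
    return locked / misaligned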

  16252 | Wed Jul 21 14:50:23 2021 | Koji | Update | SUS | New electronics

Received:

Jun 29, 2021 BIO I/F 6 units
Jul 19, 2021 PZT Drivers x2 / QPD Transimpedance amp x2

 

Attachment 1: P_20210629_183950.jpeg
Attachment 2: P_20210719_135938.jpeg
  16251 | Mon Jul 19 22:16:08 2021 | paco | Update | LSC | PRFPMI locking

[gautam, paco]

Gautam managed to lock PRFPMI a little before ~ 22:00 local time. The ALS to RF handoff logic was found to be repeatable, which enabled us to lock a total of 4 times this evening. Under this nominal state, we can work on PRFPMI to narrow down less known issues and carry out systematic optimization. The second time we achieved lock, we ran sensing lines before entering the ASC stage (which we knew would destroy the lock), and offline analysis of the sensing matrix is pending (gpstime = 1310792709 + 5 min).

Things to note:

(a) there is an unexpected offset suggesting that the ALS and RF disagreed on what the lock setpoint should be, and it is still unclear where the offset is coming from.

(b) the first time the lock was reached, the ASC up stage destroyed it, suggesting these loops need some care (we were able to engage the ASC loops at low gains, 0.2 instead of 1, but as soon as we enabled some integrators, this consistently destroyed the lock).

(c) gautam had (burt) restored to the settings from back in March when the PRFPMI was last locked, suggesting there was a small but somehow significant difference in the IFO that helped today relative to last week


Take home message --> The mere fact that we were able to lock the PRFPMI rules out the more serious potential problems with the signal chain electronics or processing. This should also be a good starting point for further debugging and optimization.


gautam: the circulating power, when the ASC was tweaked, hit 400 (normalized to single arm locked with a misaligned PRM) suggesting a recycling gain of 22.5, and an average arm loss of ~30ppm round trip (assuming 2% loss in the PRC). 

  16250 | Sat Jul 17 00:52:33 2021 | Koji | Update | General | Canon camera / small silver tripod / macro zoom lens / LED ring light borrowed -> QIL

Canon camera / small silver tripod / macro zoom lens / LED ring light borrowed -> QIL

Attachment 1: P_20210716_213850.jpg
  16249 | Fri Jul 16 16:26:50 2021 | gautam | Update | Computers | Docker installed on nodus

I wanted to try hosting some docker images on a "private" server, so I installed Docker on nodus following the instructions here. The install seems to have succeeded, and as far as I can tell, none of the functionality of nodus has been disturbed (I can ssh in, access shared drive, elog seems to work fine etc). But if you find a problem, maybe this action is responsible. Note that nodus is running Scientific Linux 7.3 (Nitrogen).

  16248 | Thu Jul 15 14:25:48 2021 | Paco | Update | LSC | CM board

[gautam, paco]

We tested the CM board by implementing the high bandwidth IR lock (single arm). In preparation for this test we temporarily connected the POY11_Q_MON output to the CM board IN1 input and checked the YARM POY transfer function by running the AA_YARM_TEMPLATE under users/Templates/LSC/LSC_loops/YARM_POY/. We made sure the YARM dither optimized TRY so as to maximize the optical gain stage. Then we proceeded as follows:

  • From the LSC --> CM Servo screen, we controlled the REFL 1 Gain (dB) slider (nominal +25) and MC Servo IN2 Gain (dB) slider (nominal -32 dB) to transfer the low bandwidth (digital) control to the high bandwidth (analog) control of the YARM.
  • During this game, we monitored the C1:LSC-POY11_I_ERR_DQ & C1:LSC-CM_SLOW_OUT_DQ error signal channels for saturation, oscillations, or stability.
  • Once a set of gains was successful in maintaining a stable lock, we measured the OLTF using SR 785 to track the UGF as we mix the two paths.
  • Once the gains had been increased, the boost and super-boost stages could be enabled as well.

Ultimately, our ability to progressively increase the control bandwidth of the YARM is a proxy showing that the CM board is working properly. Attachment 1 shows the OLTF progression as we increased the loop's UGF. Note how, as we approached the maximum measured UGF of ~ 22 kHz, our phase margin decreased, signifying reduced stability.


At the end of this measurement, at about 15:45, I restored the CM board IN1 input and disconnected the POY11_Q_MON.

gautam: the conclusion here is that the CM board seems to work as advertised, and it's not solely responsible for not being able to achieve the IR handoff. 

Attachment 1: high_BW_TFs.pdf
  16247 | Wed Jul 14 20:42:04 2021 | gautam | Update | LSC | Locking

[paco, gautam]

we decided to give the PRFPMI lock a go early-ish. Summary of findings today eve:

  1. Arms under ALS control display normal noise and loop UGFs.
  2. PRMI took longer than usual to lock (when the arms are held off resonance) - could be elevated seismic activity, but it warrants measuring the PRMI loop TFs to rule out any funkiness. The MICH loop also displayed some saturation on acquisition, but after the boosts and other filters were turned on, the lock seemed robust and the in-loop noise was at the usual levels.
  3. We are gonna do the high bandwidth single arm locking experiments during daytime to rule out any issues with the CM board.

The ALS--> IR CARM handoff is the problematic step. In the past, getting over this hump has just required some systematic loop TF measurements / gain slider readjustments. We will do this in the next few days. I don't think the ALS noise is any higher than it used to be, and I could do the direct handoff as recently as March, so probably something minor has changed.

  16246 | Wed Jul 14 19:21:44 2021 | Koji | Update | General | Brrr

Jordan reported on Jun 18, 2021:
"HVAC tech came today, and replaced the thermostat and a coolant tube in the AC unit. It is working now and he left the thermostat set to 68F, which was what the old one was set to."

  16245 | Wed Jul 14 16:19:44 2021 | gautam | Update | General | Brrr

Since the repair work, the temperature is significantly cooler. Surprisingly, even at the vertex (to be more specific, inside the PSL enclosure, which for the time being is the only place where we have a logged temperature sensor, but this is not attributable to any change in the HEPA speed), the temperature is a good 3 deg C cooler than it was before the HVAC work (even though Koji's wind vane suggests the vents at the vertex were working). Was the setpoint for the entire lab modified? What should the setpoint even be?

Quote:
 

- I went to the south arm. There are two big vent ducts for the outlets and intakes. Both are not flowing the air.
  The current temp at 7pm was ~30degC. Max and min were 31degC and 18degC.

- Then I went to the vertex and the east arm. The outlets and intakes are flowing.

Attachment 1: rmTemp.pdf
  16244 | Mon Jul 12 18:06:25 2021 | Yehonathan | Update | CDS | Opto-isolator for c1auxey

I edited /cvs/cds/caltech/target/c1auxey1/ETMYaux.db (after creating a backup) and added the spare coil driver channels.

I tested those channels using caget while fixing wiring issues. The tests were all successful. The digital output channels were tested using the Windows machine since they are locked by some EPICS mechanism I don't yet understand.

One worrying point is that I found the differential analog inputs to be unstable unless I connected the ref to some stable voltage source, unlike what previous tests showed. It was unstable (but less so) even when I connected the ref to the ground connectors on the power supplies on the workbench. This is really puzzling.

When I say unstable I mean that most of the time the voltage reading shows the right value, but occasionally there is a transient sharp voltage drop of the order of 0.5V. I will do a more quantitative analysis tomorrow.
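One way that quantitative look could start, as a minimal sketch (the channel trace, sampling rate, and threshold here are placeholders):

import numpy as np

def find_voltage_drops(v, fs, threshold=0.25):
    # Return the times (s) and deviations (V) of samples that differ from the
    # median reading by more than `threshold` volts, to count and size the glitches.
    baseline = np.median(v)
    idx = np.flatnonzero(np.abs(v - baseline) > threshold)
    return idx / fs, v[idx] - baseline

# e.g.: times, deviations = find_voltage_drops(diff_input_trace, fs=16)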

 

  16243 | Fri Jul 9 18:35:32 2021 | Yehonathan | Update | CDS | Opto-isolator for c1auxey

Following Koji's channel list review, we made changes to the wiring spreadsheet.

Today, I made the changes real in the Acromag chassis. I went through the channel list one by one and made sure it is wired correctly. Additionally, since we now need all the channels the existing isolators have, I replaced the isolator with the defective channel with a new one.

The things to do next:

1. Create entries for the spare coil driver and satellite box channels in the EPICs DB.

2. Test the spare channels.

  16242 | Fri Jul 9 15:39:08 2021 | Anchal | Summary | ALS | Single Arm Actuation Calibration with IR ALS Beat [Correction]

I did this analysis again by doing the demodulation on 5 s time segments of the 60 s excitation signal. The major difference is that earlier I was not summing up the sine-cosine multiplied signals, so the associated error was a lot larger. If I simply multiply the whole beatnote signal with a digital LO created at the excitation frequency, divide it up into 12 segments of 5 s each, sum them up individually, and then take the mean and standard deviation, I get the answer as:
\frac{6.88 \pm 0.05}{f^2} nm/cts, as opposed to the \frac{7.32 \pm 0.03}{f^2} nm/cts that was calculated using the MICH signal earlier by gautam in 13984.

Attachment 1 shows the scatter plot for the complex calibration factors found for the 12 segments.

My aim in the previous post, however, was to get a time series of the complex calibration factor from which I can take a noise spectral density measurement of the calibration. I'll still look into how I can do that. I'll have to add a low pass filter to integrate the signal; then the noise spectrum up to the low pass pole frequency would be available. But what would this noise spectrum really mean? I still have to think a bit about it. I'll put up another post soon.
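A minimal sketch of the segmented demodulation described above (array names, sampling rate, and the Hz-to-nm conversion are placeholders; only the structure of the calculation is shown):

import numpy as np

def segmented_demod(beat_hz, fs, f_exc, exc_amp_cts, n_seg=12):
    # I/Q demodulate the beatnote-frequency signal at the excitation frequency,
    # split the product into n_seg segments, average each segment to get one
    # complex amplitude per segment, and return the mean and std of the
    # resulting per-count responses (Hz-to-nm conversion omitted here).
    t = np.arange(len(beat_hz)) / fs
    lo = np.exp(-2j * np.pi * f_exc * t)          # digital local oscillator
    mixed = beat_hz * lo
    amps = np.array([2 * seg.mean() for seg in np.array_split(mixed, n_seg)])
    cal = amps / exc_amp_cts                      # per-count response at f_exc
    return cal.mean(), cal.std(ddof=1)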

Quote:

We attempted to simulate "oscillator based realtime calibration noise monitoring" in offline analysis with python. This helped us in finding a factor of sqrt(2) that we were missing earlier in 16171. We measured C1:ALS-BEATX_FINE_PHASE_OUT_HZ_DQ when the X-ARM was locked to the main laser and the Xend green laser was locked to the XARM. An excitation signal of amplitude 600 was sent at 619 Hz at C1:ITMX_LSC_EXC.

Signal analysis flow:

  • The C1:ALS-BEATX_FINE_PHASE_OUT_HZ_DQ is calibrated to give the value of the beatnote frequency in Hz. But we are interested in the fluctuations of this value at the excitation frequency, so the beatnote signal is first high passed with a 50 Hz cut-off. This value can be reduced a lot more in a realtime system. We only took 60 s of data and had to remove the first 2 seconds to get rid of transients, so we didn't reduce this cut-off further.
  • The I and Q demodulated beatnote signal is combined to get a complex beatnote signal amplitude at the excitation frequency.
  • This signal is divided by the excitation amplitude in counts and multiplied by the square of the excitation frequency to get the calibration factor for ITMX in units of nm/cts/Hz^2.
  • The noise spectrum of the absolute value of the calibration factor is plotted in attachment 1, along with its RMS. The calibration factor was detrended linearly so that the DC value was removed before taking the spectrum.
  • So Attachment 1 is the spectrum of noise in calibration factor when measured with this method. The shaded region is 15.865% - 84.135% percentile region around the solid median curves.

We got a value of \frac{7.3 \pm 3.9}{f^2} nm/cts. The calibration factor currently in use is \frac{7.32}{f^2} nm/cts from 13984.

Next steps could be to budget this noise while we set up some way of having this calibration factor generated in realtime using oscillators on a FE model. Calibrating the actuation of a single optic in a single arm is easy, so this is a good test setup for getting a noise budget of this calibration method.

 

Attachment 1: ITMX_calibration_With_ALS_Beat.pdf
  16241 | Thu Jul 8 11:20:38 2021 | Anchal, Paco, Gautam | Summary | LSC | PRFPMI locking attempts

Last night Gautam walked us through the algorithm used to lock the PRFPMI. We tried it several times with the PSL HEPA filter off between 10:00 pm July 7th and 1:00 am July 8th. None of our attempts were successful. In between, we tried to do the locking with the old IMC settings as well, but it did not change the result for us. In most attempts, the arms would start to resonate with the PRMI, with about 200 times the power compared to no power recycling, while the arms were still controlled by the ALS beatnote. The handover of lock controls from "CARM+DARM locked to ALS beatnote" to "Main laser + IMC locked to the CARM+DARM" would always fail. More specifically, we were seeing that as soon as we hand over the DC control of CARM from the ALS beatnote to IR by feeding back to MC2, the lock would inevitably fail before the rest of the high-frequency control can be transferred over.

Nonetheless, Paco and I got a good demo of how to do PRFPMI locking if the need appears. With more practice and attempts, we should be able to achieve the lock at some point in the future. The issues in handover could be due to any of the following:

  • Although it seems like the ALS-beatnote-fed control of the arms keeps them within the CARM IR linewidth, as we see the IR resonating, there could still be some excess noise that needs to be dealt with.
  • Gautam conjectures that the presence of high power in the arms connects the ITMs and the ETMs with an optical spring, changing the transfer function of the pendula. This in turn changes the phase margin and possibly makes the CARM loop in the IR PRFPMI unstable.
  • We should also investigate the loop transfer functions near the handover point for the ALS beatnote loop and the IR CARM loop and calculate the crossover frequency and gain/phase margins there.

More insights or suggestions are welcome.


Note: An earthquake came around lunch time and tripped all watchdogs. Most suspensions were recovered without issues, but ITMX appeared to be stuck. We tried the shaking procedure, but after this we couldn't restore the XARM lock. From alignment, we tried optimizing TRX but we only got up to ~0.5 and ASS wouldn't work as usual. In the end the issue was that we had forgotten to enable the LL coil output, so after we did this, we managed to recover the XARM.

  16240 | Tue Jul 6 17:40:32 2021 | Koji | Summary | General | Lab cleaning

We held the lab cleaning for the first time since the campus reopening (Attachment 1).
Now we can use some of the desks for the people to live! Thanks for the cooperation.

We relocated a lot of items into the lab.

  • The entrance area was cleaned up. We believe that there is no 40m lab stuff left.
    • BHD BS optics was moved to the south optics cabinet. (Attachment 2)
    • DSUB feedthrough flanges were moved to the vacuum area (Attachment 3)
  • Some instruments were moved into the lab.
    • The Zurich instrument box
    • KEPCO HV supplies
    • Matsusada HV supplies
  • We moved the large pile of SUPERMICROs into the lab. They are around MC2, while the PPE boxes that were there were moved behind the tube around the MC2 area. (Attachment 4)
  • We have moved PPE boxes behind the beam tube on XARM behind the SUPERMICRO computer boxes. (Attachment 7)
  • ISC/WFS left over components were moved to the pile of the BHD electronics.
    • Front panels (Attachment 5)
    • Components in the boxes (Attachment 6)

We still want to do some more cleaning:

- Electronics workbenches
- Stray setup (cart/wagon in the lab)
- Some leftover on the desks
- Instruments scattered all over the lab
- Ewaste removal

Attachment 1: P_20210706_163456.jpg
Attachment 2: P_20210706_161725.jpg
Attachment 3: P_20210706_145210.jpg
Attachment 4: P_20210706_161255.jpg
Attachment 5: P_20210706_145815.jpg
Attachment 6: P_20210706_145805.jpg
Attachment 7: PXL_20210707_005717772.jpg
  16239 | Tue Jul 6 16:35:04 2021 | Anchal, Paco, Gautam | Update | IOO | Restored MC

We found that megatron is unable to properly run scripts/MC/WFS/mcwfsoff and scripts/MC/WFS/mcwfson scripts. It fails cdsutils commands due to a library conflict. This meant that WFS loops were not turned off when IMC would get unlocked and they would keep integrating noise into offsets. The mcwfsoff script is also supposed to clear up WFS loop offsets, but that wasn't happening either. The mcwfson script was also not bringing back WFS loops on.

Gautam fixed these scripts temporarily for running on megatron by using ezcawrite and ezcaswitch commands instead of cdsutils commands. Now these scripts are running normally. This could be the reason for the wildly fluctuating WFS offsets that we have seen in the past few months.

gautam: the problem here is that megatron is running Ubuntu18 - I'm not sure if there is any dedicated CDS group packaging for Ubuntu, and so we're using some shared install of the cdsutils (hosted on the shared chiara NFS drive), which is complaining about missing linked lib files. Depending on people's mood, it may be worth biting the bullet and making megatron run Debian 10, for which the CDS group maintains packages.

Quote:

MC was unlocked and struggling to recover this morning due to misguided WFS offsets. In order to recover from this kind of issue, we

  1. Cleared the bogus WFS offsets
  2. Used the MC alignment sliders to change MC1 YAW from -0.9860 to -0.8750 until we saw the lowest order mode transmission on the video monitor.
  3. With MC Trans sum at around ~ 500 counts, we lowered the C1:IOO-WFS_TRIGGER_THRESH_ON from 5000 to 500, and the C1:IOO-WFS_TRIGGER_MON from 3.0 to 0.0 seconds and let the WFS integrators work out some nonzero angular control offsets.
  4. Then, the MC Trans sum increased to about 2000 counts but started oscillating slowly, so we restored the delayed loop trigger from 0.0 to 3.0 seconds and saw the MC Trans sum reach its nominal value of ~ 14000 counts over a few minutes.

The MC is now restored and the plan is to let it run for a few hours so the offsets converge; then run the WFS relief script.

  16238 | Tue Jul 6 10:47:07 2021 | Paco, Anchal | Update | IOO | Restored MC

MC was unlocked and struggling to recover this morning due to misguided WFS offsets. In order to recover from this kind of issue, we

  1. Cleared the bogus WFS offsets
  2. Used the MC alignment sliders to change MC1 YAW from -0.9860 to -0.8750 until we saw the lowest order mode transmission on the video monitor.
  3. With MC Trans sum at around ~ 500 counts, we lowered the C1:IOO-WFS_TRIGGER_THRESH_ON from 5000 to 500, and the C1:IOO-WFS_TRIGGER_MON from 3.0 to 0.0 seconds and let the WFS integrators work out some nonzero angular control offsets.
  4. Then, the MC Trans sum increased to about 2000 counts but started oscillating slowly, so we restored the delayed loop trigger from 0.0 to 3.0 seconds and saw the MC Trans sum reach its nominal value of ~ 14000 counts over a few minutes.

The MC is now restored and the plan is to let it run for a few hours so the offsets converge; then run the WFS relief script.

  16237 | Fri Jul 2 12:42:56 2021 | Anchal, Paco, Gautam | Summary | LSC | snap file changed for MICH

We corrected the MICH locking snap file C1configure_MI.req and saved an updated C1configure_MI.snap. Now the 'Restore MICH' script in IFO_CONFIGURE>!MICH>Restore MICH works. The corrections included setting the relevant rows of the PD_DOF matrices to the right values (use AS55 as the error signal). The MICH_A_GAIN and MICH_B_GAIN needed to be saved as well.

We also were able to get to the PRMI SB resonance. PRM was misaligned earlier from its optimal position, and after some manual aligning we were able to get it to lock just by hitting IFO_CONFIGURE>!PRMI>Restore PRMI SB (3f).

  16236 | Thu Jul 1 16:55:21 2021 | Anchal | Summary | Optical Levers | Fixed Centering optical levers PRM

This was a mistake. When the arms are locked, PRM is misaligned by setting a -800 offset in the PIT dof of PRM. The oplev is set to function in the normal state, not this misaligned configuration. I undid my changes today by switching off the offset, realigning the oplev to center, and then restoring the single arm locked state. The PRM OpLev loops are off now.

Quote:

 PRM because I've known that we have been keeping its OpLev off. The reason was clear once I opened the table. The oplev reflection beam was hitting the PD box instead of the PD. After correcting, I was able to switch on the PRM OpLev loops and saw normal functioning.

 

  16235 | Thu Jul 1 16:45:25 2021 | Yehonathan | Update | BHD | SOS assembly

The bonding test passed - the weight still hangs from the dumbell. Unfortunately, I broke the bond trying to release the assembly from the bracket. I made another batch of 6 dumbell+magnet assemblies.

I used some of the leftover epoxy to bond an assembly from the previous batch to a bracket so I can test it.

  16234 | Thu Jul 1 11:37:50 2021 | Paco | Update | General | restarted c0rga

Physically rebooted the c0rga workstation after failing to ssh into it (even though I was able to ping it). The RGA seems to be off though. The last log with data in it appears to date back to 2020 Nov 10, but reasonable spectra don't appear until before the 11-05 logs. Gautam verified that the RGA was intentionally turned off then.

  16233 | Thu Jul 1 10:34:51 2021 | Paco, Anchal | Summary | LSC | ETMY QPD fixed

Paco worked on aligning the beam splitter to get light on the ETMY QPD and was successful in centering it without any other changes in the settings.

  16232 | Wed Jun 30 18:44:11 2021 | Anchal | Summary | LSC | Tried fixing ETMY QPD

I worked in the Yend station, trying to get the ETMY QPD to work properly. When I started, only one (quadrant #3) of the 4 quadrants was seeing any light. By just adjusting the beam splitter that reflects some light off to the QPD, I was able to get some amount of light into quadrant #2. However, no amount of steering would show any light in any other quadrants.

The only reason I could think of is that the incoming beam gets partially clipped, as it seems to be hitting the beam splitter near the top edge. So for this to work properly, a mirror upstream needs to be adjusted, which would change the alignment of the TRY photodiode. Without light on the TRY photodiode, there is no lock and there is no light. So one can't steer this beam without losing lock.

I tried one trick, in which I changed the YARM lock trigger to the POY DC signal. I got it to work and kept the lock going even when TRY was covered by a beam finder card. However, this lock was still a bit finicky and would lose lock very frequently. It didn't seem worth it to potentially break the YARM locking system for the ETMY QPD before running this by anyone, and this late in the evening. So I reset everything to how it was (except the beam splitter that reflects light to the ETMY QPD; that now has equal light falling on quadrants #2 and #3).

The settings I temporarily changed were:

  • C1:LSC-TRIG_MTRX_7_10 changed from 0 to -1 (uses POY DC as trigger)
  • C1:LSC-TRIG_MTRX_7_13 changed from 1 to 0 (stops using TRY DC as trigger)
  • C1:LSC-YARM_TRIG_THRESH_ON changed from 0.3 to -22
  • C1:LSC-YARM_TRIG_THRESH_OFF changed from 0.1 to -23.6
  • C1:LSC-YARM_FM_TRIG_THRESH_ON changed from 0.5 to -22
  • C1:LSC-YARM_FM_TRIG_THRESH_OFF changed from 0.1 to -23.6

All these were reverted back to their previous values manually at the end (a scripted version of this swap-and-restore is sketched below).
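As a rough sketch of scripting such a temporary swap (assuming pyepics is available on the control room machines; the point is simply to save the old values before writing the new ones):

from epics import caget, caput

# temporary values used for the POY-DC-triggered test
temp_settings = {
    'C1:LSC-TRIG_MTRX_7_10': -1,       # use POY DC as trigger
    'C1:LSC-TRIG_MTRX_7_13': 0,        # stop using TRY DC as trigger
    'C1:LSC-YARM_TRIG_THRESH_ON': -22,
    'C1:LSC-YARM_TRIG_THRESH_OFF': -23.6,
    'C1:LSC-YARM_FM_TRIG_THRESH_ON': -22,
    'C1:LSC-YARM_FM_TRIG_THRESH_OFF': -23.6,
}

old_settings = {ch: caget(ch) for ch in temp_settings}   # remember current values
for ch, val in temp_settings.items():
    caput(ch, val)

# ... do the QPD steering work here ...

for ch, val in old_settings.items():                     # restore everything
    caput(ch, val)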

 

  16231 | Wed Jun 30 15:31:35 2021 | Anchal | Summary | Optical Levers | Centered optical levers on ITMY, BS, PRM and ETMY

When both arms were locked, we found that the ITMY optical lever was very off-center. This seems to have happened after the c1susaux rebooting we did on June 17th. I opened the ITMY table and realigned the OpLev beam to the center while the arm was locked. I repeated this process for BS, PRM and ETMY. I did PRM because I've known that we have been keeping its OpLev off. The reason was clear once I opened the table: the oplev reflection beam was hitting the PD box instead of the PD. After correcting this, I was able to switch on the PRM OpLev loops and saw normal functioning.

  16230 | Wed Jun 30 14:09:26 2021 | Ian MacMillan | Update | CDS | SUS simPlant model

I have looked at my code from the previous plot of the transfer function and realized that there is a slight error that must be fixed before we can analyze the difference between the theoretical transfer function and the measured transfer function.

The theoretical transfer function, which was generated from Photon, has approximately 1000 data points while the measured one has about 120. There are no points between the two datasets that have the same frequency values, so they are not directly comparable. In order to compare them I must interpolate the data between the points. In the previous post [16195] I expanded the measured dataset. In other words: I filled in the space between points linearly so that I could compare the two data sets, using this code:

from scipy.interpolate import splrep, splev  # B-spline fit and evaluation

# make values for the comparison
tck_mag = splrep(tst_f, tst_mag) # get bspline representation given (x,y) values
gen_mag = splev(sim_f, tck_mag)  # generate intermediate values on the simulated frequency grid
dif_mag = []
for x in range(len(gen_mag)):
    dif_mag.append(gen_mag[x] - sim_mag[x]) # measured minus predicted

tck_ph = splrep(tst_f, tst_ph)   # get bspline representation given (x,y) values
gen_ph = splev(sim_f, tck_ph)    # generate intermediate values on the simulated frequency grid
dif_ph = []
for x in range(len(gen_ph)):
    dif_ph.append(gen_ph[x] - sim_ph[x])    # measured minus predicted

At features like a sharp peak, where the measured data set was sparse compared to the peak, the comparison was between the interpolated "measured" values and the theoretical ones, which made the difference look much larger than it really was.

To fix this I changed the code to generate the intermediate values from the theoretical data set instead (evaluated at the measured frequencies), using the code here:

tck_mag = splrep(sim_f, sim_mag) # get bspline representation given (x,y) values
gen_mag = splev(tst_f, tck_mag) # generate intermediate values
dif_mag=[]
for x in range(len(tst_mag)):
    dif_mag.append(tst_mag[x]-gen_mag[x])#measured minus predicted

tck_ph = splrep(sim_f, sim_ph) # get bspline representation given (x,y) values
gen_ph = splev(tst_f, tck_ph) # generate intermediate values
dif_ph=[]
for x in range(len(tst_ph)):
    dif_ph.append(tst_ph[x]-gen_ph[x])

Because this dataset has far more values (about 10 times more), the previous problem is much less of an issue. In addition, no interpolated "measured" value is ever used. That makes it more representative of the true accuracy of the real transfer function.

This is an update to a previous plot, so I am still using the same data just changing the way it is coded. This plot/data does not have a Q of 1000. That plot will be in a later post along with the error estimation that we talked about in this week’s meeting.

The new plot is shown below in attachment 1. Data and code are contained in attachment 2

Attachment 1: SingleSusPlantTF.pdf
Attachment 2: Plant_TF_Test.zip
  16229 | Tue Jun 29 20:45:52 2021 | Yehonathan | Update | BHD | SOS assembly

I glued another batch of 6 magnet+dumbell assemblies. I will take a look at them under the microscope once they are cured.

I also hung a weight of ~150 g from a sample dumbell made in the previous batch (attachments) to test the magnet+dumbell bonding strength.

Attachment 1: 20210629_135736.jpg
Attachment 2: 20210629_135746.jpg
  16228 | Tue Jun 29 17:42:06 2021 | Anchal, Paco, Gautam | Summary | LSC | MICH locking tutorial with Gautam

Today we went through the LSC locking mechanics with Gautam and, as a "Hello World" example, worked on locking the Michelson.


MICH settings changed:

  • Gautam at some point added 9 dB attenuation filters in MICH filter module in LSC to match the 9 dB pre-amplifier before digitization.
  • This required changing the trigger thresholds, C1:LSC-MICH_TRIG_THRESH_ON and C1:LSC-MICH_TRIG_THRESH_OFF.
  • We looked at C1:LSC-AS55_Q_ERR_DQ and C1:LSC-ASDC_OUT_DQ on ndscope.
  • The zero crossings in AS55_Q correspond to ASDC going to zero. We found the threshold values of ASDC by finding the linear region around the zero crossings of AS55_Q (see the sketch below).
  • We changed the threshold values to UP: -0.3mW and DOWN: -0.05mW. The thresholds were also changed in C1LSC_FM_TRIG.
  • We also set FM2,3,6 and 8 to be triggered on threshold.

We characterized the loop OLTF, found the UGF to be 90 Hz and measured the noise at error and control points.

gautam: one aim of this work was to demonstrate that the "Lock Michelson (dark)" script call from the IFOconfigure screen worked - it did, reliably, after the setting changes mentioned above.
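A minimal sketch of that zero-crossing bookkeeping (hypothetical arrays standing in for the ndscope traces; in practice the values were read off the screen):

import numpy as np

def asdc_at_zero_crossings(as55_q, asdc):
    # Find the samples where AS55_Q changes sign (dark-fringe crossings, where
    # the error signal is linear) and return the ASDC values at those samples.
    crossings = np.flatnonzero(np.diff(np.sign(as55_q)) != 0)
    return asdc[crossings]

# Trigger thresholds can then be chosen to bracket the typical ASDC value there:
# vals = asdc_at_zero_crossings(as55_q_trace, asdc_trace)
# print(np.percentile(vals, [10, 90]))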

  16227 | Mon Jun 28 12:35:19 2021 | Yehonathan | Update | BHD | SOS assembly

On Thursday, I glued another set of 6 dumbells+magnets using the same method as before. I made sure that the dumbells were pressed onto the magnets.

I came in today to check the gluing situation. The situation looks much better than before. It seems like the glue is stable against small forces (magnetic etc.). I checked the assemblies under a microscope.

It seems like I used excessive amounts of glue (attachments 1, 2). The surfaces of the dumbells were also contaminated (attachment 3). I cleaned the dumbells' surfaces using acetone and IPA (attachment 4) and scratched some of the glue residues off the sides of the assemblies.

Next time, I will make a shallow bath of glue to obtain precise amounts using a needle.

I glued a sample assembly on a metal bracket using epoxy. Once it cures I will hang a weight on the dumbell to test the gluing strength.

Attachment 1: toomuchglue1.png
Attachment 2: toomuchglue2.png
Attachment 3: dirtydumbell.png
Attachment 4: cleandumbell.png
Attachment 5: assembly_on_metalbracket.png
  16226 | Fri Jun 25 19:14:45 2021 | Jon | Update | Equipment loan | Zurich Instruments analyzer

I returned the Zurich Instruments analyzer I borrowed some time ago to test out at home. It is sitting on the first table across from Steve's old desk.

Attachment 1: ZI.JPG
  16225 | Fri Jun 25 14:06:10 2021 | Jon | Update | CDS | Front-End Assembly and Testing

Summary

Here is the final summary (from me) of where things stand with the new front-end systems. With Anchal and Ian's recent scripted loopback testing [16224], all the testing that can be performed in isolation with the hardware on hand has been completed. We currently have no indication of any problem with the new hardware. However, the high-frequency signal integrity and noise testing remains to be done.

I detail those tests and link some DTT templates for performing them below. We have not yet received the Myricom 10G network card being sent from LHO, which is required to complete the standalone DAQ network. Thus we do not have a working NDS server in the test stand, so cannot yet run any of the usual CDS tools such as Diaggui. Another option would be to just connect the new front-ends to the 40m Martian/DAQ networks and test them there.

Final Hardware Configuration

Due to the unavailability of the 18-bit DACs that were expected from the sites, we elected to convert all the new 18-bit AO channels to 16-bit. I was able to locate four unused 16-bit DACs around the 40m [16185], with three of the four found to be working. I was also able to obtain three spare 16-bit DAC adapter boards from Todd Etzel. With the addition of the three working DACs, we ended up with just enough hardware to complete both systems.

The final configuration of each I/O chassis is as follows. The full setup is pictured in Attachment 1.

Component          | C1BHD (Qty Installed) | C1SUS2 (Qty Installed)
16-bit ADC         | 1                     | 2
16-bit ADC adapter | 1                     | 2
16-bit DAC         | 1                     | 3
16-bit DAC adapter | 1                     | 3
16-channel BIO     | 1                     | 1
32-channel BO      | 0                     | 6

This hardware provides the following breakdown of channels available to user models:

Channel Type | C1BHD (Channel Count) | C1SUS2 (Channel Count)
16-bit AI*   | 31                    | 63
16-bit AO    | 16                    | 48
BO           | 0                     | 192

*The last channel of the first ADC is reserved for timing diagnostics.

The chassis have been closed up and their permanent signal cabling installed. They do not need to be reopened, unless future testing finds a problem.

RCG Model Configuration

An IOP model has been created for each system reflecting its final hardware configuration. The IOP models are permanent and system-specific. When ready to install the new systems, the IOP models should be copied to the 40m network drive and installed following the RCG-compilation procedure in [15979]. Each system also has one temporary user model which was set up for testing purposes. These user models will be replaced with the actual SUS, OMC, and BHD models when the new systems are installed.

The current RCG models and the action to take with each one are listed below:

Model Name | Host   | CPU | DCUID | Path (all paths local to chiara clone machine)      | Action
c1x06      | c1bhd  | 1   | 23    | /opt/rtcds/userapps/release/cds/c1/models/c1x06.mdl | Copy to same location on 40m network drive; compile and install
c1x07      | c1sus2 | 1   | 24    | /opt/rtcds/userapps/release/cds/c1/models/c1x07.mdl | Copy to same location on 40m network drive; compile and install
c1bhd      | c1bhd  | 2   | 25    | /opt/rtcds/userapps/release/isc/c1/models/c1bhd.mdl | Do not copy; replace with permanent OMC/BHD model(s)
c1su2      | c1sus2 | 2   | 26    | /opt/rtcds/userapps/release/sus/c1/models/c1su2.mdl | Do not copy; replace with permanent SUS model(s)

Each front-end can support up to four user models.

Future Signal-Integrity Testing

Recently, the CDS group has released a well-documented procedure for testing General Standards ADCs and DACs: T2000188. They've also automated the tests using a related set of shell scripts (T2000203). Unfortunately I don't believe these scripts will work at the 40m, as they require the latest v4.x RCG.

However, there is an accompanying set of DTT templates that could be very useful for accelerating the testing. They are available from the LIGO SVN (log in with username: "first.last@LIGO.ORG"). I believe these can be used almost directly, with only minor updates to channel names, etc. There are two classes of DTT-templated tests:

  1. DAC -> ADC loopback transfer functions
  2. Voltage noise floor PSD measurements of individual cards

The T2000188 document contains images of normal/passing DTT measurements, as well as known abnormalities and failure modes. More sophisticated tests could also be configured, using these templates as a guiding example.
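For reference, an offline sketch of the two test classes above using scipy.signal (the DTT templates remain the intended tool; the sampling rate, counts-to-volts calibration, and array names here are assumptions):

import numpy as np
from scipy import signal

def loopback_tf(dac_drive, adc_readback, fs=65536, nperseg=4096):
    # DAC -> ADC loopback transfer function estimate (CSD/PSD) with coherence.
    f, pxy = signal.csd(dac_drive, adc_readback, fs=fs, nperseg=nperseg)
    _, pxx = signal.welch(dac_drive, fs=fs, nperseg=nperseg)
    _, coh = signal.coherence(dac_drive, adc_readback, fs=fs, nperseg=nperseg)
    return f, pxy / pxx, coh

def adc_noise_floor(adc_counts, fs=65536, nperseg=4096, volts_per_count=20.0 / 2**16):
    # Voltage-noise floor of a terminated channel, assuming a hypothetical
    # +/-10 V, 16-bit input range for the counts-to-volts conversion.
    f, pxx = signal.welch(np.asarray(adc_counts) * volts_per_count, fs=fs, nperseg=nperseg)
    return f, np.sqrt(pxx)        # V/rtHz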

Hardware Reordering

Due to the unexpected change from 18- to 16-bit AO, we are now short on several pieces of hardware:

  • 16-bit AI chassis. We originally ordered five of these chassis, and all are obligated as replacements within the existing system. Four of them are now (temporarily) in use in the front-end test stand. Thus four of the new 18-bit AI chassis will need to be retrofitted with 16-bit hardware.
  • 16-bit DACs. We currently have exactly enough DACs. I have requested a quote from General Standards for two additional units to have as spares.
  • 16-bit DAC adapters. I have asked Todd Etzel for two additional adapter boards to also have as spares. If no more are available, a few more should be fabricated.
Attachment 1: test_stand.JPG
  16224 | Thu Jun 24 17:32:52 2021 | Ian MacMillan | Update | CDS | Front-End Assembly and Testing

Anchal and I ran tests on the two systems (C1-SUS2 and C1-BHD). Attached are the results and the code and data to recreate them.

We connected one DAC channel to one ADC channel and thus all of the results represent a DAC/ADC pair. We then set the offset to different values from -3000 to 3000 and recorded the measured signal. I then plotted the response curve of every DAC/ADC pair so each was tested at least once.

There are two types of plots included in the attachments

1) a summary plot found on the last pages of the pdf files. This is a quick and dirty way to see if all of the channels are working. It is NOT a replacement for the other plots. It shows all the data quickly but sacrifices precision.

2) An in-depth look at an ADC/DAC pair. Here I show the measured value for a defined DC offset. The gain of the system should be 0.5 (put in an offset of 100 and measure 50). I included a line to show where this should be. I also plotted the difference between the 0.5 gain line and the measured data.

As seen in the provided plots, the channels get saturated beyond about the -2000 to 2000 mark, which is why the difference graph only concentrates on the -2000 to 2000 range.

Summary: all the channels look to be working; they all report very little deviation from the theoretical gain (a per-pair fitting sketch is given below).

Note: ADC channel 31 is the timing signal so it is the only channel that is wildly off. It is not a measurement channel and we just measured it by mistake.
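A minimal sketch of the per-pair check (hypothetical arrays of commanded offsets and measured responses; the full analysis code is in Attachment 3):

import numpy as np

def check_pair_gain(offsets, measured, expected_gain=0.5, lim=2000):
    # Fit measured vs commanded offset over the unsaturated range and return
    # the fitted gain, the worst-case residual, and the deviation from 0.5.
    offsets = np.asarray(offsets, dtype=float)
    measured = np.asarray(measured, dtype=float)
    mask = np.abs(offsets) <= lim                 # stay inside the linear range
    gain, intercept = np.polyfit(offsets[mask], measured[mask], 1)
    residuals = measured[mask] - (gain * offsets[mask] + intercept)
    return gain, np.max(np.abs(residuals)), gain - expected_gain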

Attachment 1: C1-SU2_Channel_Responses.pdf
Attachment 2: C1-BHD_Channel_Responses.pdf
Attachment 3: CDS_Channel_Test.zip
  16223 | Thu Jun 24 16:40:37 2021 | Koji | Update | SUS | MC lock acquired back again

[Koji, Anchal]

The issue with the PD output was that the PD whitened outputs of the sat amp (D080276) are differential, while the successive circuit (D000210 PD whitening unit) has single-ended inputs. This means that the neg outputs (D080276 U2) have always been shorted to GND with no output R. This forced the AD8672 to work hard at its output current limit. Maybe there was a heat problem due to this current saturation, as Anchal reported that the unit came back sane after some power-cycling or opening the lid. But the heat issue and the forced differential voltage at the input stage of the chip eventually caused it to fail, I believe.

Anchal came up with the brilliant idea to bypass this issue. The sat amp box has the PD mon channels which are single-ended. We simply shifted the output cables to the mon connectors. The MC1 sus was nicely damped and the IMC was locked as usual. Anchal will keep checking if the circuit will keep working for a few days.

Attachment 1: P_20210624_163641_1.jpg
  16222 | Wed Jun 23 09:05:02 2021 | Anchal | Update | SUS | MC lock acquired back again

The MC was unable to acquire lock because the WFS offsets were cleared to zero at some point, and because of that the MC was too misaligned to catch lock again. In such cases, one wants the WFS to start accumulating offsets as soon as a minimal lock is attained so that the mode cleaner can be automatically aligned. So I did the following, which worked:

  • Changed the C1:IOO-WFS_TRIG_WAIT_TIME (delay in WFS trigger) from 3 s to 0 s.
  • Reduced C1:IOO-WFS_TRIGGER_THRESH_ON (switch-on threshold) from 5000 to 1000.
  • Then as soon as a TEM00 was locked with poor efficiency, the WFS loops started aligning the optics to bring it back to lock.
  • After robust lock had been acquired, I restored the two settings I changed above.
Quote:

 


At the end, since MC has trouble catching lock after opening PSL shutter, I tried running burt restore the ioo to 2021/Jun/17/06:19/c1iooepics.snap but the problem persists

 

  16221 | Tue Jun 22 17:05:26 2021 | Yehonathan | Update | BHD | SOS assembly

According to the schematics, the distance between the original EQ tap holes is 0.5". Given that the original tap holes' diameter is 0.13" there is enough room for a 1/4" drill.

Quote:

Then, can we replace the four small EQ stops at the bottom (barrel surface) with two 1/4-20 EQ stops? This will require drilling the bottom EQ stop holders (two per SOS).

 

 

  16220 | Tue Jun 22 16:53:01 2021 | Ian MacMillan | Update | CDS | Front-End Assembly and Testing

The channels on both C1BHD and C1SUS2 seem to be frozen: they aren't updating and are holding one value. To fix this Anchal and I tried:

  • restarting the computers 
    • restarting basically everything including the models
  • Changing the matrix values
  • adding filters
  • messing with the offset 
  • restarting the network ports (Paco suggested this; apparently it worked for him at some point)
  • Checking to make sure everything was still connected inside the case (DAC, ADC, etc..)

I wonder if Jon has any ideas. 

  16219 | Tue Jun 22 16:52:28 2021 | Paco | Update | SUS | ADC/Slow channels issues

After sliding the alignment bias around and browsing through elog while searching for "stuck" we concluded the ITMX osems needed to be freed. To do this, the procedure is to slide the alignment bias back and forth ("shaking") and then as the OSEMs start to vary, enable the damping. We did just this, and then restored the alignment bias sliders slowly into their original positions. Attachment 1 shows the ITMX OSEM sensor input monitors throughout this procedure.


At the end, since MC has trouble catching lock after opening PSL shutter, I tried running burt restore the ioo to 2021/Jun/17/06:19/c1iooepics.snap but the problem persists

Attachment 1: shake_and_damp.png
  16218 | Tue Jun 22 11:56:16 2021 | Anchal, Paco | Update | SUS | ADC/Slow channels issues

We checked back in time to see since when the BS and PRM OSEM slow channels have been zero. It was clear that they became zero when we worked on this issue on June 17th, Thursday. So we simply went back and power cycled the c1susaux acromag chassis. After that, we had to log in to the c1susaux computer and run

sudo /sbin/ifdown eth1
sudo /sbin/ifup eth1

This restarted the ethernet port the acromag chassis is connected to. This solved the issue and we were able to see all the slow channels in BS and PRM.

But then, we noticed that the OPLEV of ITMX is unable to read the position of the beam on the QPD at all. No light was reaching the QPD. We went in, opened the ITMX table cover and confirmed that the return OPLEV beam is way off and is not even hitting one of the steering mirrors that brings it to the QPD. We switched off the OPLEV contribution to the damping.

We did burt restore to 16th June morning using
burtwb -f /opt/rtcds/caltech/c1/burt/autoburt/snapshots/2021/Jun/16/06:19/c1susaux.snap -l /tmp/controls_1210622_095432_0.write.log -o /tmp/controls_1210622_095432_0.nowrite.snap -v

This did not solve the issue.

Then we noticed that the OSEM signals from ITMX were saturated in opposite directions for Left and Right OSEMs. The Left OSEM fast channels are saturated to 1.918 um for UL and 1.399 um for LL, while both right OSEM channels are bottomed to 0 um. On the other hand, the acromag slow PD monitors are showing 0 on the right channels but 1097 cts on UL PDMon and 802 cts in LL PD Mon. We actually went in and checked the DC voltages from the PD input monitor LEMO ports on the ITMX dewhitening board D000210-A1 and measured non-zero voltages across all the channels. Following is a summary:

ITMX OSEM readouts:

OSEM | Fast ADC (C1-SUS-ITMX_XXSEN_OUT) [um] | Slow Acromag (C1-SUS-ITMX_xxPDMon) [cts] | Multimeter at dewhitening board input [V]
UL   | 1.918 | 1097 | 0.901
LL   | 1.399 |  802 | 0.998
UR   | 0     |    0 | 0.856
LR   | 0     |    0 | 0.792
SD   | 0.035 |   20 | 0.883

We even took out the 4-pin LEMO outputs from the dewhitening boards that go to the anti-aliasing chassis and checked the voltages. They are the same as the input voltages, as expected. So the dewhitening board is doing its job fine and the OSEMs are doing their jobs fine.

It is weird that both the ADC and the acromags are reading these values wrong. We believe this is causing a big yaw offset in the ITMX control signal, causing the ITMX to turn enough to make the OPLEV go out of range. We checked the CDS FE status (attachment 1). Other than c1rfm showing a yellow bar (bit 2 = GE FANUC RFM card 0) in RT Net Status, nothing else seems wrong on the c1sus computer. The c1sus FE model is running fine. c1x02 (the lower level model) does show a red bar in TIM, which suggests some timing issue. This is present in c1x04 too.


Bottomline:

Currently, the ITMX coil outputs are disabled as we can't trust the OSEM channels. We're investigating more why any of this is happening. Any input is welcome.

 

 

 

Attachment 1: CDS_FE_Status.png
  16217 | Mon Jun 21 17:15:49 2021 | Ian MacMillan | Update | CDS | CDS Upgrade

Anchal and I wrote a script (Attachment 1) that will test the ADC and DAC connections with inputs on the INMON from -3000 to 3000. We could not run it because some of the channels seemed to be frozen. 

Attachment 1: DAC2ADC_Test.py
import os
import time
import numpy as np
import subprocess
from traceback import print_exc
import argparse


def grabInputArgs():
    parser = argparse.ArgumentParser(
... 75 more lines ...
  16216 | Fri Jun 18 23:53:08 2021 | Koji | Update | BHD | SOS assembly

Then, can we replace the four small EQ stops at the bottom (barrel surface) with two 1/4-20 EQ stops? This will require drilling the bottom EQ stop holders (two per SOS).

 

  16215 | Fri Jun 18 19:02:00 2021 | Yehonathan | Update | BHD | SOS assembly

Today I glued some magnets to dumbells.

First, I took 6 magnets (the maximum I can glue in one go) and divided them into 3 north and 3 south. Each triplet on a different razor (attachment 1).

I put the gluing fixture I found on top of these magnets so that each of the magnets sits in a hole in the fixture. I closed the fixture, but not all the way, so that the dumbells could get in easily (attachment 2).

I prepared EP-30 glue according to this dcc. I tested the mixture by putting some of it in the small toaster oven in the cleanroom for 15min at 200 degrees F.

The first two batches came out sticky and soft. I discarded the glue cartridge and opened a new one. The oven test results with the new cartridge were much better: smooth and hard surface. I picked up some glue with a needle and applied it to the surface of 6 dumbells I prepared in advance. I dropped the dumbells with the glue facing down into the magnet holes in the fixtures (attachment 3). I tightened the fixture and put some weight on it. I let it cure over the weekend.

I also pushed cut Viton tips that Jordan cleaned into the vented screws. While screwing small EQ stops into the lower clamps I found some problems. 4 of the lower clamps need rethreading. This is quite urgent because without those 4 clamps we don't have enough SOS towers. Moreover, I found that the screws that we bought from UC components to hold the lower clamps on the SOS towers were silver plated. This is a mistake in the SOS schematics (part 23) - they should be SS.

Attachment 1: 20210618_115017.jpg
Attachment 2: Untitled_2.png
Attachment 3: 20210618_160041_HDR.jpg
  16214 | Fri Jun 18 14:53:37 2021 | Anchal | Summary | PEM | Temperature sensor network proposal

I propose we set up a temperature sensor network as described in attachment 1.

Here there are two types of units:

  • BASE-GATEWAY
    • Holds the processor to talk to the network through Modbus TCP/IP protocol.
    • This unit itself has a temperature sensor in it. It is powered by a power adaptor or PoE from the switch.
    • Each base unit can have at max 2 extended temperature probes ENV-TEMP.
  • ENV-TEMP
    • This is just a temperature probe with no other capabilities.
    • It is powered via PoE from the BASE-GATEWAY unit.

Proposal is

  • to put 2 ENV-TEMP sensors (#1 and #2) along the Y-arm at the end and midway. These are powered and read through a BASE-GATEWAY (#A) at the vertex. The BASE-GATEWAY (#A) will serve as temperature sensor for the vertex.
  • We put one ENV-TEMP(#3) at the X-end powered and read through by another BASE-GATEWAY (#B) at the midpoint of X-arm.
  • Both BASE-GATEWAY are connected by ethernet cables to the network switch. That's it.

These sensors can be configured over the network by going to their assigned IP addresses. I'm not sure at the moment how to configure the EPICS db files to get them to write to slow EPICS channels.

We will have an unused port on the BASE-GATEWAY (#B) which can be used to run another temperature sensor, maybe at an important rack, PSL table or somewhere else.

In future, if more sensors are required, there are expansion (network switch like) options for BASE-GATEWAY or daisy-chain options for the probes.



Edit Fri Jun 18 16:28:13 2021 :

See this wiki page for the updated plan and final quote: https://wiki-40m.ligo.caltech.edu/Physical_Environment_Monitoring/Thermometers

Attachment 1: 40mTempSensors.pdf
  16213 | Fri Jun 18 10:07:23 2021 | Anchal, Paco | Summary | AUX | Xend Green Laser PDH OLTF with coherence

We did the measurement of the OLTF for the Xend green laser PDH loop with an excitation added at the control point using an SR560, as shown in attachment 1 of 16202. We also measured the coherence in our measurement, see attachment 1.


Measurement details:

  • We took the \beta/\gamma measurement as per 16202.
  • We did the measurement in two pieces. First, the high frequency region, from 1 kHz to 100 kHz.
    • In this setup, the excitation amplitude was kept constant to 5 mV.
    • In this region, the OLTF is small enough that signal to noise ratio is maintained in \gamma (SR560 sum output, measured on CH1). The coherence can be seen to be constant 1 throughout for CH1 in this region.
    • But for \beta (PZT Mon, measured on CH2), the low OLTF actually starts damping both signal and noise and to elevate it above SR785 noise floor, we had a high pass (z:0Hz, p:100kHz, k:1000) SR560 amplifying \beta before measurement (see attachment 2). This amplification has been corrected in Attachment 1. This allowed us to improve the coherence on CH2 to above 0.5 mostly.
  • Second region is from 3 Hz to 1 kHz.
    • In this setup, the excitation was shaped with a low pass (p: 1Hz, k:5) SR560 filter with SR785 source amplitude as 1V.
    • We took 40 averaging cycles in this measurement to improve the coherence further.
    • In this frequency region, \beta is mostly coherent as we shaped the excitation as 1/f and, due to constant cycle number averaging, the integrated noise goes as 1/\sqrt{f} (see 16202 for the math).
    • We still lost coherence in \gamma (CH1) for frequencies below 100 Hz. The reason is that the excitation is suppressed by the OLTF while the noise is not for this channel. So the 1/f shaping of the excitation only helps fight against the suppression of the OLTF somewhat and not against the noise.
      \gamma = \left( \frac{\eta}{A(s)} - \frac{\nu_e}{G_{OL}(s)} + \frac{\chi}{A(s) C(s)} \right)\frac{G_{OL}(s)}{1-G_{OL}(s)}
    • We need 1/f^2 shaping for this purpose, but we were losing lock with that shaping, so we shifted back to 1/f shaping and captured whatever we could.
    • It is clear that the noise takes over below 100 Hz and coherence in CH1 is lost there.

Inferences:

  • Yes, the OLTF does not look how it should look but:
  • The green region in attachment 1 shows the data points where the coherence on both CH1 and CH2 was higher than 0.75 (the selection is sketched after this list). So the saturation measured below 1 kHz, particularly from 100 Hz to 500 Hz (where the coherence on both channels is almost 1), is real.
  • This raises the question of what is saturating. As has been suggested before, our excitation signal is probably saturating some internal stage in the uPDH box. We need to investigate this next.
    • It is, however, very non-intuitive why this saturation is so non-uniform (zig-zaggy) in both magnitude and phase.
    • In past experiences, whenever I saw something saturating, it would cause a flat-top response in the transfer function.
  • Another interesting thing to note is the reduced UGF in this measurement.
  • The UGF is about 40-45 kHz. This, we believe, is due to reduced mode matching of the green light to the XARM when the temperature of the end station increases too much. We took the measurement at 6 pm, and Koji posted the Xend's temperature to be 30 C at 7 pm in 16206. It certainly becomes harder to lock at hot temperatures, probably due to reduced phase margin and loop gain.
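A minimal sketch of the coherence cut used for the green region (the arrays stand in for the exported SR785 data):

import numpy as np

def coherence_mask(oltf, coh_ch1, coh_ch2, threshold=0.75):
    # Keep only the OLTF points where the coherence on both channels exceeds
    # the threshold; everything else is set to NaN so it drops out of the plot.
    good = (coh_ch1 > threshold) & (coh_ch2 > threshold)
    return np.where(good, oltf, np.nan), good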
Attachment 1: XEND_PDH_OLTF_with_Coherence.pdf
Attachment 2: Beta_Amp.pdf
  16212 | Thu Jun 17 22:25:38 2021 | Koji | Update | SUS | New electronics: Sat Amp / Coil Drivers

This is a belated report: we received 5 more sat amps on June 4th. (I said 7 more, but it was 6 more.) So we still have one more sat amp coming from Todd.

- 1 already delivered long ago
- 8 received from Todd -> DeLeone -> Chub. They are in the lab.
- 11 units on May 21st
- 5 units on Jun 4th
Total 1+8+11+5 = 25
1 more unit is coming

 

Quote:

11 new Satellite Amps were picked up from Downs. 7 more are coming from there. I have one spare unit I made. 1 sat amp has already been used at MC1.

We had 8 HAM-A coil drivers delivered from the assembling company. We also have two coil drivers delivered from Downs (Anchal tested)

 

Attachment 1: P_20210604_231028.jpeg
  16211 | Thu Jun 17 22:19:12 2021 | Koji | Update | Electronics | 25 HAM-A coil driver units delivered

25 HAM-A coil driver units were fabricated by Todd and I've transported them to the 40m.
We had already received 2 units earlier.
The last (1) unit has been completed, but Luis wants to use it for some A+ testing. So 1 more unit is coming.

Attachment 1: P_20210617_195811.jpg
  16210   Thu Jun 17 16:37:23 2021 Anchal, PacoUpdateSUSc1susaux computer rebooted

Jon suggested rebooting the acromag chassis and then the computer, which we did without success. Then Koji suggested we try running ifup eth0, so we ran `sudo /sbin/ifup eth0` and that put c1susaux back on the martian network, but the modbus service was still down. We switched off the chassis and rebooted the computer, and we had to run `sudo /sbin/ifup eth0` again (why do we need to do this manually every time?). We switched the chassis back on but still saw no channels. `sudo systemctl status modbusIOC.service` gave us an inactive (dead) status, so we ran `sudo systemctl restart modbusIOC.service`.

The status became:


● modbusIOC.service - ModbusIOC Service via procServ
   Loaded: loaded (/etc/systemd/system/modbusIOC.service; enabled)
   Active: inactive (dead)
           start condition failed at Thu 2021-06-17 16:10:42 PDT; 12min ago
           ConditionPathExists=/opt/rtcds/caltech/c1/burt/autoburt/latest/c1susaux.snap was not met`

After another iteration we finally got an OK status from modbusIOC.service, and we then repeated Jon's reboot procedure. This time the acromags were on but reading 0.0, so we just needed to run `sudo /sbin/ifup eth1`, and finally some sweet slow channels were read. As a final step we burt-restored the c1susaux.snap file from 05:19 AM today and managed to relock the IMC >> will keep an eye on it. Finally, in the process of damping all the suspended optics, we noticed that some OSEM channels on BS and PRM are reading 0.0 (they show up red as we browse them)... We succeeded in locking both arms, but this remains an unknown for us.
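
For future reference, a quick way to confirm that the slow channels are actually updating (rather than frozen) is to poll a couple of them and compare readings a few seconds apart. This is only a sketch: it assumes pyepics is available on the workstation, and the channel names below are examples, not verified c1susaux channel names:

import time
import epics   # pyepics

chans = ["C1:SUS-PRM_ULPDMon", "C1:SUS-BS_ULPDMon"]   # example names, not verified
first = {c: epics.caget(c) for c in chans}
time.sleep(10)                                        # wait for a few slow updates
for c in chans:
    now = epics.caget(c)
    state = "updating" if (now is not None and now != first[c]) else "frozen or unreadable"
    print(f"{c}: {first[c]} -> {now}  ({state})")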

  16209   Thu Jun 17 11:45:42 2021 Anchal, PacoUpdateSUSMC1 Gave trouble again

TL;DR

MC1 LL sensor showed signs of large, fluctuating offsets. We tried to find the issue in the box but couldn't find anything. On power cycling, the sensor went back to normal. But while putting the box back, we bumped something and the c1susaux slow channels froze. We tried to reboot the computer, but it didn't work and the channels no longer exist.


This morning we found that the IMC had struggled to lock all night (see Attachment 1). We had some indication yesterday evening that the MC1 LL sensor PD had a higher variance than usual, and Paco had to reset the WFS offsets because they had integrated the noise from this sensor. Something similar happened last night: a false offset and its fluctuation overwhelmed the WFS, MC1 got misaligned, and the IMC could not lock.

In the morning, Paco again reset the WFS offsets, but now we were sure that the PD variance from the MC1 LL OSEM was very high. See Attachment 2: only 1 OSEM shows higher noise compared to the other 4 OSEMs (a quick way to quantify this is sketched below). This behavior is similar to what we saw earlier in 16138, but for the UL sensor. Koji and I fixed that in 16139 and we tested all the other channels too.
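
A quick way to quantify "one OSEM is noisier than the other four" is to pull a minute of data and compare the RMS of each sensor channel. This is a rough sketch; it assumes cdsutils with NDS access is available on the workstation, and the channel names are examples that may need adjusting:

import numpy as np
import cdsutils

osems = ["UL", "UR", "LL", "LR", "SD"]
chans = [f"C1:SUS-MC1_SENSOR_{o}" for o in osems]   # example channel names
data = cdsutils.getdata(chans, 60)                  # last 60 s of each channel
for o, buf in zip(osems, data):
    x = buf.data - np.mean(buf.data)
    print(f"MC1 {o}: rms = {np.std(x):.2f} counts")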

So, Paco and I went ahead and took out the MC1 satellite amplifier box S2100029 D1002812, opened the top, and checked all the PD channel testpoints with no input current. We didn't find anything odd. Next we checked the LED driver circuit testpoints with LED OUT and GND shorted. We got 4.997 V on all LED MON testpoints, which indicates normal functioning.

We hooked everything back up on the MC1 satellite box and checked the sensor channels again on the MEDM screens. To our surprise, it started functioning normally. So maybe just a power cycle was required, but we still don't know what caused this issue.

BUT when I (Anchal) was plugging the power cables and D25 connectors back in on the back side of 1X4 after moving the box back into the rack, we found that the slow channels had stopped updating. They just froze!

We got worried for some time because the negative power supply indicator LEDs on the acromag chassis (which is just below the MC1 satellite box) were not ON. We checked the power cables and had to open the side panel of the 1X4 rack to see how they are connected. We found that there is no third wire in the power cables and the acromag chassis only takes a single-rail supply. We confirmed this by looking at another acromag chassis at the Xend. We pasted a note on the acromag chassis for future reference that it uses only the positive rail and that the negative LED monitors are not normally ON.

Back to the frozen acromag issue: we conjectured that maybe the ethernet connection was broken. The DB25 cables for the satellite box are a bit short and pull other cables around with them when connected. We checked all the ethernet cabling; it looked fine. On the c1susaux computer, we saw that the monitor LED for ethernet port 2, which is connected to the acromag chassis, was solid ON while the other one (probably the connection to the switch) was blinking.

We tried to telnet to the computer; it didn't work. The host refused the connection from the pianosa workstation. We tried pinging the c1susaux computer, and that worked. So we concluded that most probably the EPICS modbus server hosting the slow channels on c1susaux is unable to communicate with the acromag chassis, hence the solid LED on that ethernet port instead of a blinking one. We checked the computer restart procedure page for SLOW computers on the wiki, which said that if telnet is not working, we can hard-reboot the computer.

We hard-rebooted the computer by long-pressing the power button and then pressing it again to power back on. We did this 3 times with the same result: the ethernet port 2 LED (acromag chassis) would blink, but the ethernet port 1 LED (connected to the switch) would not turn ON. We now cannot even ping the machine, let alone telnet into it. All SUS slow monitor channels are of course absent now. We also tried pressing the reset button once (which the manual said would reboot the machine), but we got the same outcome.

Now, we decided to stop poking around until someone with more experience can help us on this.


Bottom line: We don't know what caused the LL sensor issue, so it has not been fixed and could happen again. We lost all C1SUSAUX slow channels, which are the OSEM and coil slow monitor channels for PRM, BS, ITMX, ITMY, MC1, MC2 and MC3.

Attachment 1: SummaryScreenShot.png
SummaryScreenShot.png
Attachment 2: MC1_LL_SENSOR_DEAD.png
MC1_LL_SENSOR_DEAD.png
  16208   Thu Jun 17 11:19:37 2021 Ian MacMillanUpdateCDSCDS Upgrade

Jon and I tested the ADC and DAC cards in both of the systems on the test stand. We had to swap out an 18-bit DAC for a 16-bit one that worked, but now both machines have at least one working ADC and DAC.

[Still working on this post. I need to look at what is in the machines to say everything ]

  16207   Wed Jun 16 20:32:39 2021 YehonathanUpdateCDSOpto-isolator for c1auxey

I installed 2 additional isolators in the Acromag chassis. I set all the input channels to PNP. I ran the digital inputs (EnableMon channels) through these isolators according to the previous post.

I tested the digital inputs in the following way:

I connected an 18 V voltage source to the signal wire under test through a 1 kOhm resistor, and connected the GND of the voltage source to the RTN wire of the feedthrough. When the voltage source was connected, the LED on the isolator turned on and the EPICS channel under test showed Enabled. When I disconnected the voltage source or shorted the signal wire to GND, the LED on the isolator turned off and the EPICS channel showed a Disabled state.
