40m Log, Page 21 of 337
ID | Date | Author | Type | Category | Subject
15958   Wed Mar 24 15:24:13 2021   gautam | Update | LSC | Notes on tests

For my note-taking:

1. Lock PRMI with ITMs as the MICH actuator. Confirm that the MICH-->PRCL contribution cannot be nulled. ✅  [15960]
2. Lock PRMI on REFL165 I/Q. Check if transition can be made smoothly to (and from?) REFL55 I/Q.
3. Lock PRMI. Turn sensing lines on. Change alignment of PRM / BS and see if we can change the orthogonality of the sensing.
4. Lock PRMI. Put a razor blade in front of an out-of-loop photodiode, e.g. REFL11 or REFL33. Try a few different masks (vertical half / horizontal half and L/R permutations) and see if the orthogonality (or lack thereof) is mask-dependent.
5. Double check the resistance/inductance of the PRM OSEMs by measuring at 1X4 instead of flange. ✅  [15966]
6. Check MC spot centering.

If I missed any of the tests we discussed, please add them here.

15957   Wed Mar 24 09:23:49 2021   Paco | Update | SUS | MC3 new Input Matrix

[Paco]

• Found IMC locked upon arrival
• Loaded newest MC3 Input Matrix coefficients using /scripts/SUS/InMatCalc/writeMatrix.py after unlocking the MC, and disabling the watchdog.
• Again, the sensor signals started increasing after the WD was re-enabled with the new input matrix, so I manually tripped it and restored the old matrix; recovered MC lock.
• Something is still off with this input matrix that makes the MC3 loop unstable.
15956   Wed Mar 24 00:51:19 2021   gautam | Update | LSC | Schnupp asymmetry

I used the Valera technique to measure the Schnupp asymmetry to be $\approx 3.5 \, \mathrm{cm}$, see Attachment #1. The zero crossing is estimated from a linear fit to the measured points. I repeated the measurement 3 times for each arm to check consistency - seems like I do get consistent results. Subtle effects like possible differential detuning of each arm cavity (since the measurement is done one arm at a time) are not included in the error analysis, but I think it's not controversial to say that our Schnupp asymmetry has not changed by a huge amount from past measurements. Jamie set a pretty high bar with his plot which I've tried to live up to.
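As a sketch, the zero-crossing estimate from the linear fit can be done like this (made-up stand-in data, not the actual measurement; the real analysis fits the measured points for each arm):

```python
import numpy as np

def zero_crossing(x, y):
    """Estimate where y(x) crosses zero from a linear fit y ~ m*x + b."""
    m, b = np.polyfit(x, y, 1)
    return -b / m

# Made-up stand-in for one arm's data: three repeats of a sweep with a
# true crossing at x = 3.1 (arb. units) plus measurement noise.
rng = np.random.default_rng(0)
x = np.linspace(-10, 10, 11)
repeats = [2.0 * (x - 3.1) + 0.1 * rng.standard_normal(x.size) for _ in range(3)]
crossings = [zero_crossing(x, y) for y in repeats]
print(np.mean(crossings), np.std(crossings))   # repeatability check across repeats
```

Repeating the fit and comparing the scatter of the crossings, as done above per arm, is what justifies the "seems like I do" consistency claim.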

15955   Tue Mar 23 09:16:42 2021   Paco, Anchal | Update | Computers | Power cycled C1PSL; restored C1PSL

So actually, it was the C1PSL channels that had died. We did the following to get them back:

• We went to this page and tried the telnet procedure, but it was unable to find the host.
• So we followed the next advice. We went to the 1X1 rack and manually hard shut off C1PSL computer by holding down the power button until the LEDs went off.
• We waited 5-7 seconds and switched it back on.
• By the time we were back in control room, the C1PSL channels were back online.
• The mode cleaner however was struggling to keep the lock. It was going in and out of lock.
• So we followed the next advice and did a burt restore, which ran the following command:
burtwb -f /opt/rtcds/caltech/c1/burt/autoburt/snapshots/2021/Mar/22/17:19/c1psl.snap -l /tmp/controls_1210323_085130_0.write.log -o /tmp/controls_1210323_085130_0.nowrite.snap -v
• Now the mode cleaner was locked, but we found that the input switches of the C1:IOO-WFS1_PIT and C1:IOO-WFS2_PIT filter banks were off, which meant that only the YAW sensors were in loop in the lock.
• We went back in dataviewer and checked when these channels were shut down. See attachments for time series.
• It seems this happened yesterday, March 22nd near 1:00 pm (20:00:00 UTC). We can't find any mention of anyone else doing it on elog and we left by 12:15pm.
• So we shut down the PSL shutter (C1:PSL-PSL_ShutterRqst) and switched off MC autolocker (C1:IOO-MC_LOCK_ENABLE).
• Switched on C1:IOO-WFS1_PIT_SW1 and C1:IOO-WFS2_PIT_SW1.
• Turned back on PSL shutter (C1:PSL-PSL_ShutterRqst) and MC autolocker (C1:IOO-MC_LOCK_ENABLE).
• Mode cleaner locked back easily and now is keeping lock consistently. Everything looks normal.
15954   Mon Mar 22 19:07:50 2021   Paco, Anchal | Update | SUS | Trying coil actuation balance

We found that following protocol works for changing the input matrices to new matrices:

• Shut the PSL shutter C1:PSL-PSL_ShutterRqst. Switch off IMC autolocker C1:IOO-MC_LOCK_ENABLE.
• Switch off the watchdog, C1:SUS-MC1_LATCH_OFF.
• Update the new matrix. (in case of MC1, we need to change sign of C1:SUS-MC1_SUSSIDE_GAIN for new matrix)
• Switch on the watchdog back again which enables all the coil outputs. Confirm that the optic is damped with just OSEM sensors.
• Switch on IMC autolocker C1:IOO-MC_LOCK_ENABLE and open PSL shutter C1:PSL-PSL_ShutterRqst.
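The protocol above could be scripted roughly as follows. This is a hypothetical helper, not an existing script: the channel names are the ones listed above, but the exact 0/1 values written to the shutter, autolocker, and watchdog channels are assumptions; a real version would pass in pyepics' caput.

```python
# Minimal sketch (assumed 0/1 channel values) of the matrix-swap protocol above.
def swap_input_matrix(caput, optic, matrix_writer, flip_side_gain=False):
    """Safely load a new SUS input matrix following the elog protocol."""
    caput('C1:PSL-PSL_ShutterRqst', 0)        # shut the PSL shutter
    caput('C1:IOO-MC_LOCK_ENABLE', 0)         # switch off the IMC autolocker
    caput(f'C1:SUS-{optic}_LATCH_OFF', 0)     # switch off the watchdog
    matrix_writer()                           # update the new matrix elements
    if flip_side_gain:                        # MC1 needs the SIDE gain sign flipped
        caput(f'C1:SUS-{optic}_SUSSIDE_GAIN', 8000)
    caput(f'C1:SUS-{optic}_LATCH_OFF', 1)     # watchdog back on -> coil outputs enabled
    caput('C1:IOO-MC_LOCK_ENABLE', 1)         # autolocker back on
    caput('C1:PSL-PSL_ShutterRqst', 1)        # open the PSL shutter

if __name__ == '__main__':
    # Dry run with a print stub instead of the real EPICS caput:
    swap_input_matrix(lambda ch, v: print(ch, v), 'MC2', lambda: None)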

We repeated this for MC2 as well and were able to lock. However, we could not do the same for MC3: it became unstable as soon as the cavity was locked, i.e. the WFS were making the lock unstable. The instability was different in different attempts, but we didn't try more times as we had to go.

### Coil actuation balancing:

• We set the LOCKIN1 and LOCKIN2 oscillators at 10.5 Hz and 13.5 Hz with amplitudes of 10 counts.
• We wrote PIT, YAW and Butterfly actuation vectors (see attached text files used for this) on LOCKIN1 and LOCKIN2 for MC1.
• We measured C1:SUS-MC1_ASCYAW_IN1 and C1:SUS-MC1_ASCPIT_IN1 and compared it against the case when no excitation was fed.
• We repeated the above steps for MC2, except that we did not use LOCKIN2. LOCKIN2 was found to be already on at an oscillator frequency of 0.03 Hz with an amplitude of 500 counts, and was fed to all coils with a gain of 1 (so it was effectively moving the position DOF at 0.03 Hz). When we changed it, it turned back on again after we enabled the autolocker, so we guess this must be due to some background script and must be important, so we did not make any changes here. But what is it for?
• We have gotten some good data for MC1 and MC2 to ponder upon next.
• MC1 showed no cross coupling at all, while MC2 showed significant cross coupling between PIT and YAW.
• Both MC1 and MC2 did not show any cross coupling between butterfly actuation and PIT/YAW dof.
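For reference, the geometry behind the butterfly test: with the usual quad-OSEM sign conventions (an assumption here; only the butterfly pattern (1, -1, 1, -1, 0) is stated explicitly elsewhere in this log), the butterfly coil vector is orthogonal to both PIT and YAW, so any PIT/YAW response to a butterfly drive indicates coil-gain imbalance:

```python
import numpy as np

# Coil basis vectors over (UL, UR, LR, LL). The PIT/YAW sign conventions are
# assumed; the butterfly vector is the one used in this log.
pit       = np.array([ 1,  1, -1, -1])
yaw       = np.array([ 1, -1, -1,  1])
butterfly = np.array([ 1, -1,  1, -1])

# For perfectly balanced coils the butterfly drive is orthogonal to both
# angular DOFs, so ASCPIT/ASCYAW should see nothing.
print(pit @ butterfly, yaw @ butterfly)
```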

### On another news, IOO channels died!

• In front of us, the MEDM channels starting with C1:IOO just died. See attachment 8.
• We are not sure why that happened, but we have reported everything we did up here.
• This happened around the time we were ready to switch back on the IMC autolocker and open the shutter. But now these channels are dead.
• All optics were restored with old matrices and settings and are damped in good condition as of now.
• IMC should lock back as soon as someone can restart the EPICS channels and switch on C1:IOO-MC_LOCK_ENABLE and C1:PSL-PSL_ShutterRqst.
15953   Mon Mar 22 16:29:17 2021   gautam | Update | ASC | Some prep for AS WFS commissioning
1. Added rough cts2mW calibration filters to the quadrants, see Attachment #1. The number I used is:
0.85 A/W (InGaAs responsivity) × 500 V/A (RF transimpedance) × 10 V/V (IQ demod conversion gain) × 1638.4 cts/V (ADC calibration)
2. Recovered FPMI locking. Once the arms are locked on POX / POY, I lock MICH using AS55_Q as a sensor and BS as an actuator with ~80 Hz UGF.
3. Phased the digital demod phases such that while driving a sine wave in ETMX PIT, I saw it show up only in the "I" quadrant signals, see Attachment #2.
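The cts-to-mW chain in item 1 multiplies out as follows (a convenience sketch, same numbers as quoted):

```python
# Overall counts-per-mW implied by the calibration chain above.
responsivity   = 0.85      # A/W, InGaAs
transimpedance = 500.0     # V/A, RF
demod_gain     = 10.0      # V/V, IQ demod conversion gain
adc_cal        = 1638.4    # cts/V, ADC calibration

cts_per_mW = responsivity * transimpedance * demod_gain * adc_cal / 1000.0
print(cts_per_mW)          # cts of demodulated signal per mW on the quadrant
```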

The idea is to use the FPMI config, which is more easily accessed than the PRFPMI, to set up some tests, measure some TFs etc, before trying to commission the more complicated optomechanical system.

15952   Mon Mar 22 15:10:00 2021   rana | Update | SUS | Trying coil actuation balance

There's an integrator in the MC WFS servos, so you should never be disabling the ASC inputs in the suspensions. Disabling 1 leg in a 6 DOF MIMO system is like a kitchen table with 1 leg removed.

Just diagnose your suspension stuff with the cavity unlocked. You should be able to see the effect by characterizing the damping loops / cross-coupling.

15951   Mon Mar 22 11:57:21 2021   Paco, Anchal | Update | SUS | Trying coil actuation balance

[Paco, Anchal]

• For MC coil balancing we will use the ASC (i.e. WFS) error signals since there are no OPLEV inputs (are there OPLEVs at all?).

### Test MC1

• Using the SUS screen LockIns the plan is to feed excitation(s) through the coil outputs, and look at the ASC(Y/P) error signals.
• A diaggui xml template was saved in /users/Templates/SUS/MC1-actDiag.xml which was based on /users/Templates/SUS/ETMY-actDiag.xml
• Before running the measurement, we of course want to plug our input matrix, so we ran /scripts/SUS/InMatCalc/writeMatrix.py only to find that it tripped the MC1 Watchdog.
• The SIDE input seems to have the largest rail, but we just followed the procedure of temporarily increasing the WD max threshold to allow the damping action and then restoring it.
• This happened because, in the latest iteration of our code, we followed the advice in the matlab code to ensure the SIDE OSEM -> SIDE DOF matrix element remains positive, but we found out that the MC1 SIDE gain (C1:SUS-MC1_SUSSIDE_GAIN) was set to -8000 (instead of a positive value like all other suspensions).
• So we decided to try our new input matrix with a positive gain value of 8000 at C1:SUS-MC1_SUSSIDE_GAIN, and we were able to stabilize the optic and acquire lock, but...
• We saw that the WFS YAW DOF started accumulating an offset and began disturbing the lock (much like last Friday). We disabled the ASC input button (C1:SUS-MC1_ASCYAW_SW2).
• This made the lock stable and the IMC autolocker was able to lock. But the offset kept increasing (see attachment 1).
• After some time, the offset began to go exponentially to a steady state value of around -3000.
• We wrote back the old matrix values and changed C1:SUS-MC1_SUSSIDE_GAIN back to -8000, but the ASCYAW offset remained where it was. We're leaving it disabled again as we don't know how to fix this. Hopefully it will organically come back to a small value later in the day like last time (Gautam just re-enabled the ASCYAW input and it worked).

### Test MC3

• Defeated by MC1, we moved to MC3.
• Here, the gain value for C1:SUS-MC3_SUSSIDE_GAIN was already positive (+500) so it could directly take our new matrix.
• We switched off the watchdog, loaded the new matrix, and switched the watchdog back on.
• The IMC lock was slightly disrupted but remained locked. There was no unusual activity in the WFS sensor values. However, we saw that the SIDE coil output was slowly accumulating an offset.
• So we switched off the watchdog before it could trip itself, wrote back the old matrix, and reinstated the status quo.
• This suggests we need to carefully review our latest normalization changes and produce new input matrices that keep the system stable in practice, rather than just working on paper with offline data.
15950   Sun Mar 21 19:31:29 2021   rana | Summary | Electronics | RTL-SDR for monitoring RF noise / interference

When we're debugging our RF system, whether due to weird demod phases, low SNR, or non-stationary noise in the PDH signals, it's good to have some baseline measurements of the RF levels in the lab.

I got this cheap USB dongle (RTL-SDR.COM) that seems to be capable of this and also has a bunch of open source code on GitHub to support it. It also comes with an SMA coax and a rabbit-ear antenna with a flexi-tripod.

I used CubicSDR, which has free .dmg downloads for macOS. It would be cool to have a student write some python code (perhaps starting with RTL_Power) for this to let us hop between the different RF frequencies we care about and monitor the power in a small band around them.

15949   Fri Mar 19 22:24:54 2021   gautam | Update | LSC | PRMI investigations: what IS the matrix??

I did all these checks today.

 Quote: I will check (i) REFL55 transimpedance, (ii) cable loss between AP table and 1Y2 and (iii) is the beam well centered on the REFL55 photodiode.
1. The transimpedance was measured to be ~420 ohms at 55 MHz (-4.3 dB relative to the assumed 700V/A of the NF1611), so close to what I measured in June (the data download didn't work apparently and so I don't have a plot but it can readily be repeated). The DC levels also checked out - with 20mA drive current for the Jenne laser, I measured ~2.3 V on the NF1611 (10kohm DC transimpedance) vs ~13mV on the DC output of the REFL55 PD (50 ohm DC transimpedance).
2. Time domain confirmation of the above statement is seen in Attachment #1. The Agilent was used to drive the Jenne laser with 0dBm RF signal @ 55 MHz. Ch1 (yellow) is the REFL55 PD output, Ch2 (blue) is the NF1611 RFPD, measured at the AP table (sorry for the confusing V/div setting).
3. Re-connected the cabling at the AP table, and measured the signal at 1Y2 using the scope Rana conveniently left there, see Attachment #2. Though the two scopes are different, the cable+connector loss estimated from the Vpp of the signal at the AP table vs that at 1Y2 is 1.5 dB, which isn't outrageous I think.
4. For the integrated test, I left the AM laser incident on the REFL55 photodiode, reconnected all the cabling to the CDS system, and viewed the traces on ndscope, see Attachment #3. Again, I think all the numbers are consistent.
• REFL55 demod board has an overall conversion gain (including the x10 gain of the daughter board preamp) of ~5V I/F per 1V RF.
• There is a flat 18 dB whitening gain.
• The digitized signal was ~13000 ctspp - assuming 3276.8 cts/V, that's ~4Vpp. Undoing the flat whitening gain and the conversion efficiency, I get 13000 / 3276.8 / (10^(18/20)) / 5 ~ 100 mVpp, which is in good agreement with Attachment #3 (pardon the thin traces, I didn't realize it looked so bad until I closed everything).

So it would seem that there is nothing wrong with the sensing electronics. I also think we can rule out any funkiness with the modulation depths since they have been confirmed with multiple different measurements.

One thing I checked was the splitting ratios on the AP table. Jenne's diagram is still accurate (assuming the components are labelled correctly). Let's assume 0.8 W makes it through the IMC to the PRM - then, according to the linked diagram, I would expect 0.8 W * 0.8 * (1-5.637e-2) * 0.8 * 0.1 * 0.5 * 0.9 ~ 22 mW to make it onto the REFL55 PD. With the PRM aligned and the beam centered on the PD (using the DC monitor, but I also looked through an IR viewer; it looked pretty well centered), I measured a 500 mV DC level. Assuming a 50 ohm DC transimpedance, that's 500 / 50 / 0.8 ~ 12.5 mW of power on this photodiode, which, while consistent with what's annotated on Jenne's diagram, is ~50% off from expectation. Is the uncertainty in the Faraday transmission and IMC transmission enough to account for this large deviation?
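A quick sketch of this power budget (the 0.8 A/W responsivity is the value implied by the 500 / 50 / 0.8 conversion above):

```python
# Expected vs. measured power on REFL55, term by term from the elog.
P_imc = 0.8                  # W assumed through the IMC to the PRM
expected = P_imc * 0.8 * (1 - 5.637e-2) * 0.8 * 0.1 * 0.5 * 0.9   # W on the PD

V_dc, R_dc, responsivity = 0.500, 50.0, 0.8   # V, ohm, A/W
measured = V_dc / R_dc / responsivity          # W on the PD from the DC level

print(expected * 1e3, measured * 1e3)          # mW: ~22 expected vs ~12.5 measured
```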

If we want more optical gain, we'd have to put more light on this PD. I suppose we could have ~10x the power, since that's what is on IMC REFL when the MC is unlocked? If we want a x100 increase in optical gain, we'd also have to increase the transimpedance by 10. I'll double check the simulation, but I'm inclined to believe that the sensing electronics are not to blame.

Unconnected to this work but I feel like I'm flying blind without the wall StripTool traces so I restored them on zita (ran /opt/rtcds/caltech/c1/scripts/general/startStrip.sh).

15948   Fri Mar 19 19:15:13 2021   Jon | Update | CDS | c1auxey assembly

Today I helped Yehonathan get started with assembly of the c1auxey (slow controls) Acromag chassis. This will replace the final remaining VME crate. We cleared the far left end of the electronics bench in the office area, as discussed on Wed. The high-voltage supplies and test equipment were moved together to the desk across the aisle.

Yehonathan has begun assembling the chassis frame (it required some light machining to mount the DIN rails that hold the Acromag units). Next, he will wire up the switches, LED indicator lights, and Acromag power connectors following the documented procedure.

15947   Fri Mar 19 18:14:56 2021   Jon | Update | CDS | Front-end testing

### Summary

Today I finished setting up the subnet for new FE testing. There are clones of both fb1 and chiara running on this subnet (pictured in Attachment 2), which are able to boot FEs completely independently of the Martian network. I then assembled a second FE system (Supermicro host and IO chassis) to serve as c1sus2, using a new OSS host adapter card received yesterday from LLO. I ran the same set of PCIe hardware/driver tests as was done on the c1bhd system in 15890. All the PCIe tests pass.

### Subnet setup

For future reference, below is the procedure used to configure the bootserver subnet.

• Select "Network" as highest boot priority in FE BIOS settings
• Connect all machines to subnet switch. Verify fb1 and chiara eth0 interfaces are enabled and assigned correct IP address.
• Add c1bhd and c1sus2 entries to chiara:/etc/dhcp/dhcpd.conf:
host c1bhd {
  hardware ethernet 00:25:90:05:AB:46;
  fixed-address 192.168.113.91;
}
host c1sus2 {
  hardware ethernet 00:25:90:06:69:C2;
  fixed-address 192.168.113.92;
}
• Restart DHCP server to pick up changes:
sudo service isc-dhcp-server restart

• Add c1bhd and c1sus2 entries to fb1:/etc/hosts:

192.168.113.91 c1bhd
192.168.113.92 c1sus2

• Power on the FEs. If all was configured correctly, the machines will boot.

### C1SUS2 I/O chassis assembly

• Installed in host:
  • DolphinDX host adapter
  • One Stop Systems PCIe x4 host adapter (new card sent from LLO)
• Installed in chassis:
  • Channel Well 250 W power supply (replaces aLIGO-style 24 V feedthrough)
  • Timing slave
  • Contec DIO-1616L-PE module for timing control

Next time, on to RTCDS model compilation and testing. This will require first obtaining a clone of the /opt/rtcds disk hosted on chiara.

15946   Fri Mar 19 15:31:56 2021   Aidan | Update | Computers | Activated MATLAB license on donatella

Activated MATLAB license on donatella.

15945   Fri Mar 19 15:26:19 2021   Aidan | Update | Computers | Activated MATLAB license on megatron

Activated MATLAB license on megatron.

15944   Fri Mar 19 11:18:25 2021   gautam | Update | LSC | PRMI investigations: what IS the matrix??

From the Finesse simulation (and also analytic calcs), the expected PRCL optical gain is ~1 MW/m (there is a large uncertainty, let's say a factor of 5, because of unknown losses, e.g. PRC, Faraday, steering mirrors, splitting fractions on the AP table between the REFL photodiodes). From the same simulation, the MICH optical gain in the Q-phase signal is expected to be a factor of ~10 smaller. I measured the REFL55 RF transimpedance to be ~400 ohms in June last year, which was already a little lower than the previous number I found on the wiki (Koji's?) of 615 ohms. So we expect, across the ~3 nm PRCL linewidth, a PDH horn-to-horn voltage of ~1 V (equivalently, the optical gain in units of V/m for PRCL is ~0.3 GV/m). In the measurement, the MICH gain is indeed ~10x smaller than the PRCL gain.
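The expected PDH signal size quoted above can be checked term by term (a sketch; the 0.8 A/W photodiode responsivity is my assumption, the other numbers are from the entry):

```python
# Rough check of the expected REFL55 PDH signal for PRCL.
optical_gain   = 1e6      # W/m, PRCL optical gain from Finesse (factor ~5 uncertain)
responsivity   = 0.8      # A/W, photodiode responsivity (assumed)
transimpedance = 400.0    # ohm, measured REFL55 RF transimpedance
linewidth      = 3e-9     # m, PRCL linewidth

V_per_m = optical_gain * responsivity * transimpedance
print(V_per_m, V_per_m * linewidth)   # ~0.3 GV/m and ~1 V horn-to-horn
```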
However, the measured optical gain (~0.1 GV/m, but this is after the x10 gain of the daughter board) is ~10 times smaller than expected (after accounting for the various splitting fractions on the AP table between the REFL photodiodes). We've established that the modulation depth isn't to blame, I think. I will check (i) the REFL55 transimpedance, (ii) the cable loss between the AP table and 1Y2, and (iii) whether the beam is well centered on the REFL55 photodiode. Basically, with the 400 ohm transimpedance gain, we should be running with a whitening gain of 0 dB before digitization, as we expect a signal of O(1 V). We are currently running at +18 dB.

 Quote: Then I put the RF signal directly into the scope and saw that the 55 MHz signal is ~30 mVpp into 50 Ohms. I waited a few minutes with triggering to make sure I was getting the largest flashes. Why is the optical/RF signal so puny? This is ~100x smaller than I think we want... it's OK to saturate the RF stuff a little during lock acquisition as long as the loop can suppress it so that the RMS is < 3 dBm in the steady state.

15943   Fri Mar 19 10:49:44 2021   Paco, Anchal | Update | SUS | Trying coil actuation balance

[Paco, Anchal]

• We decided to try out the coil actuation balancing after seeing some posts from Gautam about the same on PRM and ETMY.
• We used diaggui to send a swept sine excitation signal to C1:SUS-MC3_ULCOIL_EXC and read it back at C1:SUS-MC3_ASCPIT_IN1. The idea was to create transfer function measurements similar to 15880.
• We first tried taking the transfer function with excitation amplitudes of 1, 10, 50, and 200 with damping loops on (swept logarithmically from 10 to 100 Hz in 20 points).
• We found no meaningful measurement; it looked like we were just measuring noise.
• We concluded that this is probably because our damping loops damp all the excitation down.
• So we decided to switch off damping and retry.
• We switched off: C1:SUS-MC3_SUSPOS_SW2, C1:SUS-MC3_SUSPIT_SW2, C1:SUS-MC3_ASCPIT_SW2, C1:SUS-MC3_ASCYAW_SW2, C1:SUS-MC3_SUSYAW_SW2, and C1:SUS-MC3_SUSSIDE_SW2.
• We repeated the above measurements, going up in excitation amplitude as 1, 10, 20. We saw the oscillation going out of UL_COIL, but the swept sine couldn't measure any meaningful transfer function to C1:SUS-MC3_ASCPIT_IN1. So we decided to just stop. We are probably doing something wrong.

### Trying to go back to the same state:

• We switched on: C1:SUS-MC3_SUSPOS_SW2, C1:SUS-MC3_SUSPIT_SW2, C1:SUS-MC3_ASCPIT_SW2, C1:SUS-MC3_ASCYAW_SW2, C1:SUS-MC3_SUSYAW_SW2, and C1:SUS-MC3_SUSSIDE_SW2.
• But C1:SUS-MC3_ASCYAW_INMON had accumulated an offset of about 600 and was disrupting the alignment. We switched off C1:SUS-MC3_ASCYAW_SW2, hoping the offset would go away once the optic was damped with just the OSEM sensors, but it didn't.
• Even after minutes, the offset in C1:SUS-MC3_ASCYAW_INMON kept increasing and crossed beyond the 2000 counts limit set in the C1:IOO-MC3_YAW filter bank.
• We tried to unlock the IMC and lock it back again, but the offset still persisted.
• We tried to add a bias in the YAW DOF by increasing C1:SUS-MC3_YAW_OFFSET; while it was able to somewhat reduce the WFS C1:SUS-MC3_ASCYAW_INMON offset, it was misaligning the optic and the lock was lost. So we retracted the bias to 0.
• We tried to track down where the offset is coming from. In C1IOO_WFS_MASTER.adl, we opened the WFS2_YAW filter bank to see if the sensor is indeed reading the increasing offset.
• It is quite weird that C1:IOO-WFS2_YAW_INMON is just oscillating, but the output of this WFS2_YAW filter bank has a slowly increasing offset.
• We tried setting the gain to zero and back to 0.1 to see if some holding function was causing it, but that was not the case. The output went back to a high negative offset and kept increasing.
• We don't know what else to do. Only this one WFS YAW output is increasing; everything else is at a normal level with no increasing offset or peculiar behavior.
• We are leaving C1:SUS-MC3_ASCYAW_SW2 off as it is disrupting the IMC lock.

[Jon walked in, asked him for help]

• Jon suggested doing a burt restore on the IOO channels.
• We used (selected through burtgooey):
burtwb -f /opt/rtcds/caltech/c1/burt/autoburt/snapshots/2021/Mar/19/08:19/c1iooepics.snap -l /tmp/controls_1210319_113410_0.write.log -o /tmp/controls_1210319_113410_0.nowrite.snap -v
• No luck, the problem persists.

15942   Thu Mar 18 21:37:59 2021   rana | Update | LSC | PRMI investigations: what IS the matrix??

• Locked the PRMI several times after Gautam's setup. Easy with the IFO CONFIG screen.
• Tuned up the alignment.
• Still, POP22_I doesn't go above ~111, so it was not triggering the loops. Lowered the triggers to 100 (POP22 units) and it locks fine now.
• Ran an update on zita, and now it has lost its mounts (and maybe its mind). Zita needs some love to recover the StripTool plots.
• Put the e-bay TDS3052 near the LSC rack and tried to look at the RF power, but found lots of confusing information. Is there really an RF monitor in this demod board, or was it disconnected by a crazy Koji? I couldn't see any signal above a few mV.
• Put a 20 dB coupler in line with the RF input and saw zip. Then I put the RF signal directly into the scope and saw that the 55 MHz signal is ~30 mVpp into 50 Ohms. I waited a few minutes with triggering to make sure I was getting the largest flashes. Why is the optical/RF signal so puny? This is ~100x smaller than I think we want...its OK to saturate the RF stuff a little during lock acquisition as long as the loop can suppress it so that the RMS is < 3 dBm in the steady state.
15941   Thu Mar 18 18:06:36 2021   gautam | Update | Electronics | Modified Sat Amp and Coil Driver

I uploaded the annotated schematics (to be more convenient than the noise analysis notes linked from the DCC page) for the HAM-A coil driver and Satellite Amplifier.

15940   Thu Mar 18 13:12:39 2021   gautam | Update | Computer Scripts / Programs | Omnigraffle vs draw.io

What is the advantage of Omnigraffle compared to draw.io? The latter also has a desktop app, and for creating drawings, seems to have all the functionality that Omnigraffle has, see for example here. draw.io doesn't require a license, and I feel this is a much better tool for collaborative artwork. I really hate that I can't even open my old Omnigraffle diagrams now that I no longer have a license.

Just curious if there's some major drawback(s), not like I'm making any money off draw.io.

 Quote: After Anchal left for his test, I took the time to set up the iMAC station so that Stephen (and others) can remote desktop into it to use Omnigraffle.
15939   Thu Mar 18 12:46:53 2021   rana | Update | SUS | Testing of new input matrices with new data

Good Enough! Let's move on with output matrix tuning. I will talk to you guys about it privately so that the whole world doesn't learn our secret, and highly sought after, actuation balancing.

I suspect that changing the DC alignment of the SUS changes the required input/output matrix (since changes in the magnet position w.r.t. the OSEM head change the sensing cross-coupling and the actuation gain), so we want to make sure we do all this with the mirror at the correct alignment.

15938   Thu Mar 18 12:35:29 2021   rana | Update | safety | Door to outside from control room was unlocked

I think this is probably due to the safety tour yesterday. I believe Jordan showed them around the office area and C&B. Not sure why they left through the control room.

 Quote: I came into the lab a few mins ago and found the back door open. I closed it. Nothing obvious seems amiss. Caltech security periodically checks if this door is locked but it's better if we do it too if we use this door for entry/exit.

15937   Thu Mar 18 09:18:49 2021   Paco, Anchal | Update | SUS | Testing of new input matrices with new data

[Paco, Anchal]

Since the new generated matrices were created for the measurement made last time, they are of course going to work well for it. We need to test with new independent data to see if it works in general.

• We have run scripts/SUS/InMatCal/freeSwingMC.py for 1 repetition and a free-swinging duration of 1050 s in tmux session FreeSwingMC on Rossa. Started at GPS: 1300118787.
• Thu Mar 18 09:24:57 2021 : The script ended successfully. IMC is locked back again. Killing the tmux session.
• Attached are the results of the 1-kick test: time series data and the ASDs of the DOFs calculated using the existing input matrix and our calculated input matrix.
• The existing one was already pretty good, except maybe for the side DOF, which was improved by our diagonalization.

[Paco]

After Anchal left for his test, I took the time to set up the iMAC station so that Stephen (and others) can remote desktop into it to use Omnigraffle. For this, I enabled the remote login and remote management settings under "Sharing" in "System Settings". These two should allow authenticated ssh-ing and remote-desktopping respectively. The password is the same that's currently stored in the secrets.

Quickly tested using my laptop (OS:linux, RDP client = remmina + VNC protocol) and it worked. Hopefully Stephen can get it to work too.

15936   Thu Mar 18 07:02:27 2021   Koji | Update | LSC | REFL11 demod retrofitting

Attachment 1: Transfer Functions

The original circuit had a gain of ~20 and the phase delay of ~1deg at 10kHz, while the new CH-I and CH-Q have a phase delay of 3 deg and 2 deg, respectively.

Attachment 2: Output Noise Levels

The AD797 circuit had higher noise at low frequency and better noise levels at high frequency. Each TLE2027 circuit was tuned to eliminate the instability and shows a better noise level compared to the low-frequency spectrum of the AD797 version.

RXA: AD797, all hail the op-amps ending with 27!

15935   Thu Mar 18 01:12:31 2021   gautam | Update | LSC | PRFPMi
1. Integrated >1 hour at RF only control, high circulating powers tonight.
• All of the locklosses were due to me typing a wrong number / turning on the wrong filter.
• So the lock seems pretty stable, at least on the 20 minute timescale.
• No idea why given the various known broken parts.
2. Did a bunch of characterization.
• DARM OLTF - Attachment #1. The reference is when DARM is under ALS control.
• CARM OLTF - Attachment #2. Seems okay.
• Sensing matrix - Attachment #3. The CARM and DARM phases seem okay. Maybe the CARM phase can be tuned a bit with the delay line, but I think we are within 10 degrees.
3. TRX/TRY between 300-400, with large fluctuations mostly angular. So PRG ~17-22, to answer Koji's question in the meeting today.
• This is similar to what I had before the vent of Sep 2020.
• Not surprising to me, since I claim that we are in the regime where the recycling gain is limited by the flipped folding mirrors.
4. Tried to tweak the ASC (QPD only) by looking at the step responses, but I could never get the loop gains such that I could close an integrator on all the loops.

I need to think a little bit about the ASC commissioning strategy. On the positive side:

1. REFL11 board seems to perform at least as well as before.
2. ALS performance made me (as Pep would say), so so happy.
3. The whole lock acquisition sequence takes ~5 mins if the PRMI catches lock quickly (5/7 times tonight).
4. Process seems repeatable.

On the flip side, open questions:

1. How to get the AS WFS in the picture?
2. What does the (still) crazy sensing matrix mean? I think it's not possible to transfer vertex control to 1f signals with this kind of sensing.
3. What does it mean that the PRM actuation seems to work, even though the coils are imbalanced by a factor of 3-5, and the coil resistances read out <2 ohms???
4. What's going on at the ALS-->CARM transition? The ALS noise is clearly low enough that I can sit inside the CARM linewidth. Yet, there seems to be some offset between what ALS thinks is the resonant point, and what the REFL11 signal thinks is the resonant point. I am kind of able to "power through" this conflict, but the IMC error point (=AO path) is not very happy during the transition. It worked 8/8 times tonight, but would be good to figure out how to make this even more robust.
15934   Wed Mar 17 16:30:46 2021   Anchal | Update | SUS | Normalized Input Matrices plotted better than SURF students

Here, I present the same input matrices, now normalized row by row to have the same norm as the current matrix rows. These are now plotted better than last time. Other comments are the same as in 15902. Please let us know what you think.

Thu Mar 18 09:11:10 2021 :

Note: The comparison of the butterfly DOF in the two cases is a bit bogus. The reason is that we know what the butterfly vector is in the sensing matrix (N_osem x (N_dof + 1)): it is the last column, (1, -1, 1, -1, 0), corresponding to (UL, UR, LR, LL, Side). However, the matrix we multiply with the OSEM data is the inverse of this matrix (which becomes the input matrix); it has dimensions ((N_dof + 1) x N_osem) and its last row corresponds to the butterfly DOF. This row was not stored for the old calculation of the input matrix (which is currently in use) and cannot be recovered (it is mathematically impossible) from the existing 5x4 part of that input matrix. We just added the (1, -1, 1, -1, 0) row at the bottom of this matrix (as was done in the matlab codes), but that is wrong, and hence the butterfly vector looks so bad for the existing input matrix.

Proposal: We should store the last row of the generated input matrix somewhere for such calculations. Ideally, another row in the EPICS channels for the input matrix would be the best place to store it, but I guess that would be too disruptive to implement. Other options are to store these 5 numbers in the wiki or just in elogs. For this post, the butterfly row of the generated input matrix is present in the attached files (for future reference).
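This irrecoverability can be checked numerically. A minimal numpy sketch with a made-up sensing matrix (random numbers standing in for the real OSEM sensing coefficients, not lab data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 5x5 sensing matrix: rows are OSEMs (UL, UR, LR, LL, SD),
# columns are POS, PIT, YAW, SIDE, butterfly. Butterfly column = (1,-1,1,-1,0).
A = rng.normal(size=(5, 4))
butterfly = np.array([1., -1., 1., -1., 0.])
S = np.column_stack([A, butterfly])

M = np.linalg.inv(S)     # full 5x5 input matrix; its last row is the butterfly row
M_stored = M[:4, :]      # only these 4 DOF rows were stored

# Appending the naive (1,-1,1,-1,0) row does NOT reproduce the inverse:
M_naive = np.vstack([M_stored, butterfly])
print(np.allclose(M @ S, np.eye(5)))        # True
print(np.allclose(M_naive @ S, np.eye(5)))  # False in general
```

The last row of the true inverse depends on all of S, so it cannot be reconstructed from the stored 4x5 block plus the known butterfly column.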

15933   Wed Mar 17 15:04:20 2021 gautamUpdateElectronicsRibbon cable for chassis

I had asked Chub to order 100ft ea of 9, 15 and 25 conductor ribbon cable. These arrived today and are stored in the VEA alongside the rest of the electronics/chassis awaiting assembly.

15932   Wed Mar 17 15:02:06 2021 gautamUpdatesafetyDoor to outside from control room was unlocked

I came into the lab a few mins ago and found the back door open. I closed it. Nothing obvious seems amiss.

Caltech security periodically checks if this door is locked but it's better if we do it too if we use this door for entry/exit.

15931   Wed Mar 17 14:40:39 2021 YehonathanUpdateBHDSOS SmCo magnets Inspection

Continuing with envelope number 2

Magnet number: Magnetic field (kG)
 1: 2.89    2: 2.85    3: 2.92    4: 2.75    5: 2.95
 6: 2.91    7: 2.93    8: 2.90    9: 2.93   10: 2.90
11: 2.85   12: 2.89   13: 2.85   14: 2.88   15: 2.92
16: 2.75   17: 2.97   18: 2.88   19: 2.85   20: 2.87
21: 2.93   22: 2.90   23: 2.90   24: 2.89   25: 2.88
26: 2.88   27: 2.95   28: 2.88   29: 2.88   30: 2.90
31: 2.96   32: 2.91   33: 2.93   34: 2.90   35: 2.90
36: 3.03   37: 2.84   38: 2.95   39: 2.89   40: 2.88
41: 2.88   42: 2.93   43: 2.97   44: 2.74   45: 2.84
46: 2.85   47: 2.85   48: 2.87   49: 2.88   50: 2.80

I think I have to redo envelope 1 tomorrow.
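As a quick consistency check on the table above (my own calculation, not part of the original entry), the mean, spread, and 2-sigma outliers can be computed:

```python
import numpy as np

# Envelope #2 fields (kG), transcribed from the table above (magnets 1-50)
fields = np.array([
    2.89, 2.85, 2.92, 2.75, 2.95, 2.91, 2.93, 2.90, 2.93, 2.90,
    2.85, 2.89, 2.85, 2.88, 2.92, 2.75, 2.97, 2.88, 2.85, 2.87,
    2.93, 2.90, 2.90, 2.89, 2.88, 2.88, 2.95, 2.88, 2.88, 2.90,
    2.96, 2.91, 2.93, 2.90, 2.90, 3.03, 2.84, 2.95, 2.89, 2.88,
    2.88, 2.93, 2.97, 2.74, 2.84, 2.85, 2.85, 2.87, 2.88, 2.80])

mean, std = fields.mean(), fields.std()
print(f"mean = {mean:.3f} kG, std = {std:.3f} kG")   # mean ~2.889, std ~0.054

# flag magnets more than 2 sigma from the mean (e.g. for pairing/rejection);
# these come out as magnets 4, 16, 36, 44
outliers = np.flatnonzero(np.abs(fields - mean) > 2 * std) + 1
print("outlier magnet #s:", outliers)
```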

15930   Wed Mar 17 11:57:54 2021 Paco, AnchalUpdateSUSTested New Input Matrix for MC1

[Paco, Anchal]

Paco accidentally clicked on C1:SUS-MC1_UL_TO_COIL_SW_1_1 (MC1 POS to UL coil switch) and clicked it back on. We didn't see any loss of lock or anything significant on the large monitor on the left.

### Testing the new calculated input matrix

• Switched off the PSL shutter (C1:PSL-PSL_ShutterRqst)
• Switched off IMC autolocker (C1:IOO-MC_LOCK_ENABLE)
• Uploaded the same input matrix as the current one to check the writing function in scripts/SUS/InMatCalc/testingInMat.py. We have created a backup text file of the current settings in backupMC1InMat.txt.
• Uploaded the new input matrix in normalized form. To normalize, we first made each row a unit vector and then multiplied it by the norm of the corresponding row of the current input matrix (see scripts/SUS/InMatCalc/normalizeNewInputMat.py).
• Switched ON the PSL shutter (C1:PSL-PSL_ShutterRqst)
• Switched ON IMC autolocker (C1:IOO-MC_LOCK_ENABLE)
• Lock was caught immediately. The wavefront sensor of MC1 shows the usual movement, nothing crazy.
• So the new input matrix is digestible by the system, but what's its efficacy?
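The normalization step can be sketched as follows (a guess at what normalizeNewInputMat.py does, using hypothetical matrices; not the actual script):

```python
import numpy as np

def renormalize_rows(new_mat, cur_mat):
    """Scale each row of new_mat to a unit vector, then give it the
    same norm as the corresponding row of the currently loaded matrix."""
    new_norms = np.linalg.norm(new_mat, axis=1, keepdims=True)
    cur_norms = np.linalg.norm(cur_mat, axis=1, keepdims=True)
    return new_mat / new_norms * cur_norms

# toy example: after renormalization each row norm matches the current matrix
cur = np.array([[1.0,  1.0, 1.0,  1.0, 0.0],
                [1.0, -1.0, 1.0, -1.0, 0.0]])
new = np.array([[0.9,  1.1, 1.0,  1.0, 0.1],
                [1.2, -0.8, 1.0, -1.0, 0.0]])
out = renormalize_rows(new, cur)
print(np.linalg.norm(out, axis=1))   # [2. 2.], same as cur's row norms
```

This preserves the overall loop gain per DOF while changing only the relative sensor weighting.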

< Two inspection people taking pictures of ceiling and portable AC unit passed. They rang the doorbell but someone else let them in. They walked out the back door.>

### Testing how good the input matrix for MC1 is:

• We loaded the input matrix butterfly row into C1:SUS-MC1_LOCKIN_INMATRX_1_4 to 8. This matrix multiplies C1:SUS-MC1_UL_SEN_IN and the other sensor channels before the calibration to um and the application of other filters.
• We tried to figure out how to load the same filter banks into the signal chain of LOCKIN1 of MC1 but couldn't, so we just manually added a gain value of 0.09 in this chain to simulate at least the calibration factor.
• We started the oscillator on LOCKIN1 of MC1 with amplitude 1 and frequency 6 Hz.
• We added the butterfly-mode actuation output column (UL: 1, UR: -1, LL: -1, LR: 1); nothing happened to the lock, probably because of the low amplitude we put in.
• Now, we plot the ASD of channels like C1:SUS-MC1_SUSPOS_IN1 (for POS, PIT, YAW, SIDE) to see if there is a corresponding peak. There isn't. See attachment 1.
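A generic way to quantify whether the injected 6 Hz line shows up in a given DOF channel is to compare the PSD bin at the drive frequency against the local floor. A sketch (not the lab's diagnostics code; channel handling and numbers are illustrative):

```python
import numpy as np
from scipy import signal

def line_snr(ts, fs, f_line, half_band=0.5):
    """Peak PSD bin near f_line vs. the median floor in a band around it.
    A generic injected-line check, not the actual lab script."""
    nper = min(len(ts), int(32 * fs))
    f, psd = signal.welch(ts, fs, nperseg=nper)
    band = np.abs(f - f_line) < half_band
    return psd[band].max() / np.median(psd[band])

# e.g. if line_snr(suspos_ts, fs, 6.0) stays near 1 (suspos_ts being the
# recorded C1:SUS-MC1_SUSPOS_IN1 time series), the 6 Hz butterfly drive is
# not coupling into that DOF, consistent with the flat ASD we saw.
```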

### Restoring the system:

• Added 0 to the LOCKIN1 column in MC1 output matrix.
• Made LOCK1 oscillator 0 Amplitude, 0 Hz.
• Changed back gain on signal chain of LOCKIN1 on MC1.
• Added 0 to C1:SUS-MC1_LOCKIN_INMATRX_1_4 to 8.
• Switched off the PSL shutter (C1:PSL-PSL_ShutterRqst)
• Switched off IMC autolocker (C1:IOO-MC_LOCK_ENABLE)
• Wrote back the old matrix by scripts/SUS/InMatCalc/testingInMat3.py which used the backup we created.
• Switched ON the PSL shutter (C1:PSL-PSL_ShutterRqst)
• Switched ON IMC autolocker (C1:IOO-MC_LOCK_ENABLE)

I have added a 0.1", 45 deg chamfer to the bottom of the adapter ring. This was added for a new placement of the eq stops, since the barrel screws are hard to access/adjust.

This also required a modification to the eq stop bracket, D960008-v2, with 1/4-20 screws angled at 45 deg to line up with the chamfer.

The issue I am running into is there needs to be a screw on the backside of the ring as well, otherwise the ring would fall backwards into the OSEMs in the event of an earthquake. The only two points of contact are these front two angled screws, a third is needed on the opposite side of the CoM for stability. This would require another bracket mounted at the back of the SOS tower, but there is very little open real estate because of the OSEMs.

Instead of this whole chamfer route, would it be possible/easier to just swap the screws for the barrel eq stops? Instead of a socket head cap screw, an SS thumb screw such as this would provide more torque when turning and remove the need for a hex wrench.

15928   Wed Mar 17 09:05:01 2021 Paco, AnchalConfigurationComputers40m Control Room Changes
• Switched positions of allegra and donatella.
• While doing so, the HDMI cable previously used by donatella snapped. We replaced this cable with another unused cable we found connected on only one end to rossa. We should get more HDMI cables in case that cable was in use for some other purpose.
• Paco bought a bluetooth speaker/mic that is placed in front of allegra; its USB adapter is connected to the iMac's keyboard at the bottom. With the new camera installed, the 40m video call environment is now complete.
• Again, we have placed allegra's monitor as a placeholder, but it is not working; we will need a new monitor for it in the future whenever it is going to be used.
15927   Wed Mar 17 00:05:26 2021 gautamUpdateLSCDelay line BIO remote control

While Koji is working on the REFL 11 demod board, I took the opportunity to investigate the non-remote-controllability of the delay line in 1Y2, since the TTs have already been disturbed. Here is what I found today.

1. First, I brought over the spare delay line from the rack Chiara sits in over to 1Y2.
• Connected a Marconi to the input, monitored a -3dB pickoff and the delay line output simultaneously on a 300MHz scope.
• With the front panel selector set to "Internal", verified that local (i.e. toggling front panel switches) switchability seems to work 👍
• Set the front panel switch to "External", and connected the D25 cable from the BIO card in 1Y3 to the back panel of the delay line unit - found that I could not remotely change the delay 😒
• I thought it'd be too much of a coincidence if both delay lines have the same failure mode for the remote switching part only, so I decided to investigate further up the signal chain.
2. BIO switching - the CDS BIO bit status MEDM screen seems to respond, indicating that the bits are getting set correctly in software at least. I don't know of any other software indicator for this functionality further down the signal processing chain. So it would seem the BIO card is not actually switching.
3. The Contec DO cards don't actually source the voltage - they just provide a path for current to flow (or isolate said path). I checked that pin 12 of the rear panel D25 connector is at +5 V DC relative to ground as indicated in the schematic (see P1 connector - this connector isn't a Dsub, it is IDE24, so the mapping to the Dsub pins isn't one-to-one, but pin 23 on the former corresponds to pin 12 on the latter), suggesting that the pull up resistors have the necessary voltage applied to them.
4. Made a little LED tester breakout board, and saw no switching when I toggled the status of some random bits.
5. Noted that the bench power supply powering this setup (hacky arrangement from 2015 that never got unhacked) shows a current draw of 0A.
• I am not sure what the quiescent draw of these boards is - the datasheet says "Power consumption: 3.3VDC, 450mA", but the recommended supply voltage is "12-24V DC (+/-10%)" not 3.3VDC, so not sure what to make of that.
• To try and get some insight, I took one of the new Contec-32L-PE cards we got from near Jon's CDS test stand (I've labelled the one I took lest there be some fault with it in the future), and connected it to a bench supply (pin 18 = +15V DC, pin1 = GND). But in this condition, the bench supply reports 0A current draw.
6. Ruled out the wrong cable being plugged in - I traced the cable over the cable tray, and seems like it is in fact connecting the BIO card in the c1lsc expansion chassis to the delay line.

So it would seem something is not quite right with this BIO card. The c1lsc expansion chassis, in which this card sits, is notoriously finicky, and this delay line isn't very high priority, so I am deferring more invasive investigation to the next time the system crashes.

* I forgot we have the nice PCB Contec tester board with LEDs - the only downside is that this board has D37 connectors on both ends whereas the delay line wants a D25, necessitating some custom ribbon cable action. But maybe there is a way to use this.

As part of this work, I was in various sensitive areas (1Y3, chiara rack, FE test stand etc) but as far as I can tell, all systems are nominal.

15926   Tue Mar 16 19:13:09 2021 Paco, AnchalUpdateSUSFirst success in Input Matrix Diagonalization

After jumping through few hoops, we have one successful result in diagonalizing the input matrix for MC1, MC2 and MC3.

### Code:

• Attachment 2 has the code file contained. For now, we can only guarantee it to work on Donatella in conda base environment. Our code is present in scripts/SUS/InMatCalc
• We mostly follow the steps mentioned in 4886 and the matlab codes in scripts/SUS/peakFit.
• Data is first multiplied by the currently used input matrix to get time-series data in the DOF (POS, PIT, YAW, SIDE) basis.
• Then, the peak frequencies of each resonance are identified.
• For getting these results, we did not attempt to fit the peaks with lorentzians and took the maximum point of the PSD to get the peak positions. This is only possible if the current input matrix is good enough. We have to adjust some parameters so that our fitting code always works.
• The TF estimate of the sensor data w.r.t. the UL sensor is taken, and the values around the peak frequencies of oscillation are averaged to get the sensing matrix.
• This matrix is normalized along the DOF axis (columns in our case) and then inverted.
• After inversion, another normalization is done along the DOF axis (now rows).
• Finally, we plot a comparison of the ASD in the DOF basis when using the current input matrix and when using our calculated (diagonalizing) input matrix.
• You can notice in Attachment 1 that after the diagonalization, each DOF shows a resonance only at its own resonance frequency, while earlier there was some mixing.
• The absolute value of the calculated DOFs might have changed, and we need to calibrate them or apply appropriate gain factors in the DOF-basis filter chains.
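The steps above can be sketched with scipy (a toy reconstruction of the procedure, not the actual scripts/SUS/InMatCalc code; the array shapes and the power-weighted averaging around each peak are my assumptions):

```python
import numpy as np
from scipy import signal

def estimate_input_matrix(osem_ts, fs, peak_freqs, df=0.05):
    """osem_ts: (5, N) OSEM sensor time series (UL, UR, LR, LL, SD).
    peak_freqs: DOF resonance frequencies, assumed already identified
    from the DOF-basis PSDs. Returns a (N_dof x 5) input matrix."""
    nper = min(osem_ts.shape[1], int(32 * fs))
    f, Puu = signal.welch(osem_ts[0], fs, nperseg=nper)
    sens = np.zeros((osem_ts.shape[0], len(peak_freqs)))
    for i in range(osem_ts.shape[0]):
        # TF estimate of sensor i w.r.t. UL: csd(x, UL) / psd(UL)
        _, Pxu = signal.csd(osem_ts[i], osem_ts[0], fs, nperseg=nper)
        tf = np.real(Pxu / Puu)
        for j, f0 in enumerate(peak_freqs):
            band = np.abs(f - f0) < df
            # power-weighted average of the TF around the peak
            sens[i, j] = np.average(tf[band], weights=Puu[band])
    sens /= np.abs(sens).max(axis=0)                    # normalize DOF columns
    inmat = np.linalg.pinv(sens)                        # invert sensing matrix
    inmat /= np.abs(inmat).max(axis=1, keepdims=True)   # normalize DOF rows
    return inmat
```

On synthetic data (two sinusoidal "DOFs" mixed into five sensors), this recovers an input matrix that demixes the channels.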

### Next steps:

• We'll complete our scripts and make them more general to be used for any optic.
• We'll combine all of them into one single script which can be called by medm.
• In parallel, we'll start onwards from step 2 in 15881.
• Anything else that folks can suggest on our first result. Did we actually do it or are we fooling ourselves?
15925   Tue Mar 16 19:04:20 2021 gautamUpdateCDSFront-end testing

Now that I think about it, I may only have backed up the root file system of chiara, and not /home/cds/ (symlinked to /opt/ over NFS). I think we never revived the rsync backup to LDAS after the FB fiasco of 2017, else that'd have been the most convenient way to get files. So you may have to resort to some other technique (e.g. configure the second network interface of the chiara clone to be on the martian network, copy over files to the local disk, and then disconnect the chiara clone from the martian network, if we really want to keep this test stand completely isolated from the existing CDS network). The /home/cds/ directory is rather large IIRC, but with 2TB on the FB clone, you may be able to get everything needed to get the rtcds system working. It may then be necessary to hook up a separate disk to write frames to if you want to test that part of the system out.

Good to hear the backup disk was able to boot though!

 Quote: And to be able to compile and run models later in the testing, we will need the contents of the /opt/rtcds directory also hosted on chiara. For these reasons, I think it will be easiest to create another clone of chiara to run on the subnet. There is a backup disk of chiara and I attempted to boot it on one of the LLO front-ends, but without success.
15924   Tue Mar 16 16:27:22 2021 JonUpdateCDSFront-end testing

Some progress today towards setting up an isolated subnet for testing the new front-ends. I was able to recover the fb1 backup disk using the Rescatux disk-rescue utility and successfully booted an fb1 clone on the subnet. This machine will function as the boot server and DAQ server for the front-ends under test. (None of these machines are connected to the Martian network or, currently, even the outside Internet.)

Despite the success with the framebuilder, front-ends cannot yet be booted locally because we are still missing the DHCP and FTP servers required for network booting. On the Martian net, these processes are running not on fb1 but on chiara. And to be able to compile and run models later in the testing, we will need the contents of the /opt/rtcds directory also hosted on chiara.

For these reasons, I think it will be easiest to create another clone of chiara to run on the subnet. There is a backup disk of chiara and I attempted to boot it on one of the LLO front-ends, but without success. The repair tool I used to recover the fb1 disk does not find a problem with the chiara disk. However, the chiara disk is an external USB drive, so I suspect there could be a compatibility problem with these old (~2010) machines. Some of them don't even recognize USB keyboards pre-boot-up. I may try booting the USB drive from a newer computer.

Edit: I removed one of the new, unused Supermicros from the 1Y2 rack and set it up in the test stand. This newer machine is able to boot the chiara USB disk without issue. Next time I'll continue with the networking setup.

15923   Tue Mar 16 16:02:33 2021 KojiUpdateLSCREFL11 demod retrofitting

I'm going to remove REFL11 demod for the noise check/circuit improvement.

----

• The module was removed (~4pm). Upon removal, I had to loosen AS110 LO/I out/Q out. Check the connection and tighten their SMAs upon restoration of REFL11.
• REFL11 configuration / LO: see below, PD: a short thick SMA cable, I OUT: Whitening CH3, Q OUT: Whitening CH4, I MON daughterboard: CM board IN1 (BNC cable)
• The LO cable for REFL11 was made of soft coax cable (Belden 9239 Low Noise Coax). The vendor specifies that this cable is for audio signals and NOT recommended for RF purposes [Link to Technical Datasheet (PDF)].
I'm going to measure the delay of the cable and make a replacement.
• There is a bunch of PD RF mon cables connected to many of the demod modules. I suppose that they are connected to the PD calibration system, which hasn't been used for 8 years, and the controllers are going to be removed from the rack soon.
I'm going to remove these cables.

----

First, the noise levels and the transfer functions of the daughterboard preamp were checked. The CH1 of the SR785 seemed funky (I can't comprehensively tell how yet), so the measurement may be unreliable.

For the replacement of AD797, I tested OP27 and TLE2027. TLE2027 is similar to OP27, but slightly faster, less noisy, and better in various aspects.

The replacement of the AD797 and the whatever-film resistors with a TLE2027 and thin-film Rs was straightforward for the I-phase channel, while the stabilization of the Q-phase channel was a struggle (no matter whether I used an OP27 or a TLE2027). It seems that the 1st stage has some kind of instability, and I suffered from a 3 Hz comb up to ~kHz. But the scope didn't show obvious 3 Hz noise.

After quite a bit of struggle, I could tame this strange noise by adjusting the feedback capacitor of the 1st stage. The final transfer functions and noise levels were measured. (To be analyzed later.)

----

Now the REFL11 LO cable has been replaced, from the soft low-noise audio coax (Belden 9239) to a jacketed solder-soaked coax cable (Belden 1671J - RG405 compatible). The original cable showed a delay of -34.3 deg (@11 MHz, 8.64 ns) and a loss of 0.189 dB.

I took an 80-inch 1671J cable and measured the delay to be ~40 deg. The length was adjusted using this number, and the resulting cable showed a delay of -34.0 deg (@11 MHz, 8.57 ns) and a loss of 0.117 dB.
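The phase-to-delay arithmetic can be written out explicitly. The ~0.70 velocity factor below is an assumed value for PTFE-dielectric coax like RG405, not something quoted in this entry:

```python
F_LO = 11e6           # LO frequency, Hz
C = 299792458.0       # speed of light, m/s
VF = 0.70             # ASSUMED velocity factor for RG405-type PTFE coax

def phase_to_delay_ns(phase_deg, f=F_LO):
    """Convert a phase measured at frequency f to a time delay in ns."""
    return phase_deg / 360.0 / f * 1e9

def trim_length_m(phase_excess_deg, f=F_LO, vf=VF):
    """Physical cable length corresponding to an excess phase delay."""
    return phase_excess_deg / 360.0 / f * C * vf

print(phase_to_delay_ns(34.3))   # ~8.66 ns (entry quotes 8.64 ns)
print(phase_to_delay_ns(34.0))   # ~8.59 ns (entry quotes 8.57 ns)
# the 80-inch cable read ~40 deg, so the excess over the 34.3 deg target:
print(trim_length_m(40.0 - 34.3) / 0.0254)   # ~12 inches to trim, at VF = 0.70
```

As a sanity check, 40 deg at 11 MHz is ~10.1 ns, which at VF = 0.70 corresponds to ~2.1 m, consistent with the 80-inch (2.03 m) starting length.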

The REFL11 demod module was restored and the cabling around REFL11 and AS110 were restored, tightened, and checked.

----

I've removed the PD mon cables from the NI RF switch. The open ports were plugged with 50 Ohm terminators.

----

## I ask commissioners to make the final check of the REFL11 performance using CDS.

15922   Tue Mar 16 14:37:36 2021 YehonathanUpdateBHDSOS SmCo magnets Inspection

In the cleanroom, I opened the nickel-plated SmCo magnet box to take a closer look. I handled the magnets with tweezers. I wrapped the tips of the tweezers with some Kapton tape to prevent scratching and magnetization.

I put some magnets on a razor blade and took some close-up pictures of the face of the magnets on both sides. Most of them look like attachment 1.

Some have worn off plating on the edges. The most serious case is shown in attachment 2. Maybe it doesn't matter if we are going to sand them?

I measured the magnetic flux of the magnets by attaching the gaussmeter's flat head to the face of the magnet and moving it around until the maximum value was reached.

For envelope #1 of 3, the values are as follows (the magnet ordering is in attachment 3):

Magnet #              1     2     3     4     5     6     7     8     9    10
Max field (kG)     2.57  2.54  2.57  2.57  2.55  2.61  2.55  2.52  2.64  2.58

Going to continue tomorrow with the rest of the magnets. I left the magnet box and the gaussmeter under the flow bench in the cleanroom.

15921   Mon Mar 15 20:40:01 2021 ranaConfigurationComputersinstalled QTgrace on donatella for dataviewer

I installed QTgrace using yum on donatella. Both Grace and XMgrace are broken due to some boring fight between the Fedora package maintainers and the (non existent) Grace support team. So I have symlinked it:

controls@donatella|bin> sudo mv xmgrace xmgrace_bak
controls@donatella|bin> sudo ln -s qtgrace xmgrace
controls@donatella|bin> pwd
/usr/bin


I checked that dataviewer works now for realtime and playback. Although the middle click paste on the mouse doesn't work yet.

15920   Mon Mar 15 20:22:01 2021 gautamUpdateASCc1rfm model restarted

On Friday, I felt that the ASC performance when the PRFPMI was locked was not as good as it used to be, so I looked into the situation a bit more. As part of my ASC model revamp in December, I made a bunch of changes to the signal routing, and my suspicion was that the control signals weren't even reaching the ETMs. My log says that I recompiled and reinstalled the c1rfm model (used to pipe the ASC control signals to the ETMs), and indeed, the file was modified on Dec 21. But for whatever reason, the C1RFM.ini file never picked up the new channels (this model is a Dolphin receiver, since the ASC control signals are sent to it over the Dolphin network from the c1ioo machine which hosts the C1:ASC- namespace, and an RFM sender to the ETMs, though that path already existed). Today, I recompiled, re-installed, and restarted the models, and confirmed that the control signals actually make it to the ETMs. So now we can have the QPD-based ASC loops engaged once again for the PRFPMI lock. The CDS system did not crash 🎉 . See Attachments #1-3.

I checked the loop performance in the POX/POY locked config by first deliberately misaligning the ETMs, and then engaging the loops - seems to work (Attachment #4). The loop shapes have to be tweaked a bit and I didn't engage the integrators, hence the DC pointing wasn't recovered. Also, added a line to the script that turns the ASC loops on to set limits for all the loops - in the testing process, one of the loops ran away and I tripped the ETMY watchdog. It has since been recovered. I SDFed a limit of 100 cts just to be on the conservative side for model reboot situations - the value in the script can be raised/lowered as deemed necessary (sorry, I don't know the cts-->urad number off the top of my head).

But the hope is this improves the power buildup, and provides stability so that I can begin to commission the AS WFS system a bit.

15919   Mon Mar 15 08:55:45 2021 Paco, AnchalSummarytraining

[Paco, Anchal]

• Found IMC locked upon arrival.
• Since "allegra" was set up as an additional workstation, we tried using it but discovered the monitor is kaput. For the sake of debugging, we tested VGA and DVI inputs and even the monitor lying around (also labeled "allegra") with no luck. So <ssh> it is for now.

### IMC Input sensing matrix

• Rana joined us and asked us to use Rossa for now so that we can sit socially distantly.
• Attaching some intermediate results on our analysis as pdf and zip file containing all the codes we used.
• We used channels like C1:SUS-MC1_USSEN_OUTPUT (16 Hz channels) and so on, which might not be the correct way to do it; as Rana pointed out today, we should have used channels like C1:SUS-MC1_SENSOR_UL etc.
• During the input matrix calculation, we used the TF-estimate method (as mentioned in 4886) to calculate the sensing matrix, inverted it, and normalized all rows by their maximum-absolute-value element (we tried a few other ways of normalization, with no better results).
• We found the peak frequencies by fitting lorentzians to the sensor data rotated by the current input matrix in the system. We also tried doing this directly on the sensor data (UL for POS, UR for PIT, LR for YAW and SD for SIDE, as this seemed to be the case in the old matlab codes), but with no different results.
• The fitted peak frequencies, Q and amplitude values are in fittedPeakFreqs.yml in the attached zip.
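For the lorentzian fitting mentioned above, a minimal sketch (the resonance parameterization and the initial-guess logic are my assumptions, not the code in the attached zip):

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(f, f0, q, a, c):
    """Lorentzian-like resonance profile plus a flat floor (one plausible
    parameterization in terms of f0, Q, amplitude, floor)."""
    return a / ((f**2 - f0**2)**2 + (f0 * f / q)**2) + c

def fit_peak(f, psd, f0_guess, q_guess=100.0):
    """Fit one PSD peak; returns (f0, Q, amplitude, floor)."""
    # amplitude guess chosen so the model peak height matches psd.max()
    p0 = [f0_guess, q_guess, psd.max() * (f0_guess**2 / q_guess)**2, psd.min()]
    popt, _ = curve_fit(lorentzian, f, psd, p0=p0, maxfev=10000)
    return popt
```

On a synthetic peak with known f0 and Q, the fit recovers both to within a few percent, which is the kind of check worth running before trusting the fitted values in fittedPeakFreqs.yml.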
15918   Fri Mar 12 21:15:19 2021 gautamUpdateLSCcoronaversary PRFPMi

Attachment #1 - proof that the lock is RF only (A paths are ALS, B paths are RF).

Attachment #2 - CARM OLTF.

Some tuning can be done; the circulating power can be made ~twice as high with some ASC. The vertex is still on 3f control. I didn't get any major characterization done tonight, but it's nice to be back here, a year on I guess.

15917   Fri Mar 12 19:44:31 2021 gautamUpdateLSCDelay line

I may want to use the delay line phase shifter in 1Y2 to allow remote actuation of the REFL11 demod phase (for the AO path, not the low bandwidth one). I had this working last Feb, but today, I am unable to remotely change the delay. @Koji, it would be great if you could fix this the next time you are in the lab - I bet it's a busted latch IC or some such thing. I did the non-invasive tests - cable is connected, control bits are changing (at least according to the CDS BIO indicators) and the switch controlling remote/local switching is set correctly. The local switching works just fine.

In the meantime, I will keep trying - I am unconvinced we really need the delay line.

15916   Fri Mar 12 18:10:01 2021 AnchalSummaryComputer Scripts / ProgramsInstalled cds-workstation on allegra

allegra had fresh Debian 10 installed on it already. I installed cds-workstation packages (with the help of Erik von Reis). I checked that command line caget, caput etc were working. I'll see if medm and other things are working next time we visit the lab.

15915   Fri Mar 12 13:48:53 2021 gautamSummarySUSCoil Rs & Ls for PRM/BS/SRM

I didn't repeat Koji's measurement, but he reports the expected ~3.2mH per coil on all the BS and PRM coils.

 Quote: ugh. sounds bad - maybe a short. I suggest measuring the inductance; that's usually a clearer measurement of coil health
15914   Fri Mar 12 13:01:43 2021 ranaSummarySUSCoil Rs & Ls for PRM/BS/SRM

ugh. sounds bad - maybe a short. I suggest measuring the inductance; that's usually a clearer measurement of coil health

15913   Fri Mar 12 12:32:54 2021 gautamSummarySUSCoil Rs & Ls for PRM/BS/SRM

For consistency, today, I measured both the BS and PRM actuator balancing using the same technique and don't find as serious an imbalance for the BS as in the PRM case. The Oplev laser source is common for both BS and PRM, but the QPDs are of course distinct.

BTW, I thought the expected resistance of the coil windings of the OSEM is ~13 ohms, while the BS/PRM OSEMs report ~1-2 ohms. Is this okay?

 Quote: All the PRM coils look well-matched in terms of the inductance. Also, I didn't find a significant difference from BS coils.
15912   Fri Mar 12 11:44:53 2021 Paco, AnchalUpdatetrainingIMC SUS diagonalization in progress

[Paco, Anchal]

- Today we spent the morning shift debugging SUS input matrix diagonalization. MC stayed locked for most of the 4 hours we were here, and we didn't really touch any controls.

15911   Fri Mar 12 11:02:38 2021 gautamUpdateCDScds reboot

I looked into this a bit today morning. I forgot exactly what time we restarted the machines, but looking at the timesyncd logs, it appears that the NTP synchronization is in fact working (log below is for c1sus, similar on other FEs):

-- Logs begin at Fri 2021-03-12 02:01:34 UTC, end at Fri 2021-03-12 19:01:55 UTC. --
Mar 12 02:01:36 c1sus systemd[1]: Starting Network Time Synchronization...
Mar 12 02:01:37 c1sus systemd-timesyncd[243]: Using NTP server 192.168.113.201:123 (ntpserver).
Mar 12 02:01:37 c1sus systemd[1]: Started Network Time Synchronization.
Mar 12 02:02:09 c1sus systemd-timesyncd[243]: Using NTP server 192.168.113.201:123 (ntpserver).

So, the service is doing what it is supposed to (using the FB, 192.168.113.201, as the ntpserver). You can see that the timesync was done a couple of seconds after the machine booted up (validated against "uptime"). Then, the service is periodically correcting drifts. I don't know what it means that the time wasn't in sync when we checked it using timedatectl or similar. Anyway, like I said, I have successfully rebooted all the FEs without having to do this weird time adjustment >10 times.

I guess what I am saying is, I don't know what action is necessary for "implementing NTP synchronization properly", since the diagnostic logfile seems to indicate that it is doing what it is supposed to.

More worryingly, the time has already drifted in < 24 hours.

 Quote: I want to emphasize the following:
• FB's RTC (real-time clock/motherboard clock) is synchronized to NTP time.
• The other RT machines are not synchronized to NTP time.
• My speculation is that the RT machine timestamp is produced from the local RT machine time because there is no IRIG-B signal provided. -> If the RTC is more than 1 sec off from the FB time, the timestamp inconsistency occurs. Then it induces "DC" error.
• For now, we have to use the "date" command to match the local RTC time to the FB's time.
• So: If NTP synchronization is properly implemented for the RT machines, we will be free from this silly manual time adjustment.
15910   Fri Mar 12 03:28:51 2021 KojiUpdateCDScds reboot

I want to emphasize the following:

• FB's RTC (real-time clock/motherboard clock) is synchronized to NTP time.
• The other RT machines are not synchronized to NTP time.
• My speculation is that the RT machine timestamp is produced from the local RT machine time because there is no IRIG-B signal provided. -> If the RTC is more than 1 sec off from the FB time, the timestamp inconsistency occurs. Then it induces "DC" error.
• For now, we have to use the "date" command to match the local RTC time to the FB's time.

• So: If NTP synchronization is properly implemented for the RT machines, we will be free from this silly manual time adjustment.

15909   Fri Mar 12 03:23:37 2021 KojiUpdateBHDDO card (DO-32L-PE) brought from WB

I've brought 4 DO-32L-PE cards from WB for BHD upgrade for Jon.

ELOG V3.1.3-