40m Log
ID   Date   Author | Type | Category | Subject
  15912   Fri Mar 12 11:44:53 2021   Paco, Anchal | Update | training | IMC SUS diagonalization in progress

[Paco, Anchal]

- Today we spent the morning shift debugging SUS input matrix diagonalization. MC stayed locked for most of the 4 hours we were here, and we didn't really touch any controls.

  15919   Mon Mar 15 08:55:45 2021   Paco, Anchal | Summary | training

[Paco, Anchal]

  • Found IMC locked upon arrival.
  • Since "allegra" was set up as an additional workstation, we tried using it but discovered the monitor ist kaput. For the sake of debugging, we tested VGA and DVI inputs and even the monitor lying around (also labeled "allegra") with no luck. So <ssh> it is for now.

IMC Input sensing matrix

  • Rana joined us and asked us to use Rossa for now so that we can sit socially distantly.
  • Attached are some intermediate results of our analysis, as a pdf and a zip file containing all the code we used.
  • We used channels like C1:SUS-MC1_USSEN_OUTPUT (16 Hz channels) and so on, which, as Rana pointed out today, might not be the correct way to do it; we should have used channels like C1:SUS-MC1_SENSOR_UL etc.
  • For the input matrix calculation, we used the TF-estimate method (as mentioned in 4886) to calculate the sensing matrix, inverted it, and normalized each row by its maximum-absolute-value element (we tried a few other normalizations, with no better results).
  • We found the peak frequencies by fitting Lorentzians to the sensor data rotated by the input matrix currently in the system (a hedged sketch of such a fit follows this list). We also tried doing this directly on the sensor data (UL for POS, UR for PIT, LR for YAW, and SD for SIDE, as seemed to be the convention in the old matlab codes), with no different results.
  • The fitted peak frequencies, Q and amplitude values are in fittedPeakFreqs.yml in the attached zip.
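
The following is a minimal sketch of the kind of Lorentzian peak fit described above, not the actual code in the attached zip; the arrays `freq` and `psd` and the initial guesses are assumed inputs from the DOF-basis sensor spectra.

import numpy as np
from scipy.optimize import curve_fit

def lorentzian(f, f0, Q, A, floor):
    # Damped-oscillator-like peak centered at f0 with quality factor Q,
    # amplitude parameter A, plus a flat noise floor.
    return A / ((f**2 - f0**2)**2 + (f0 * f / Q)**2) + floor

def fit_peak(freq, psd, f_guess, df=0.2):
    # Restrict the fit to a small band around the expected resonance.
    band = (freq > f_guess - df) & (freq < f_guess + df)
    # Rough initial guesses: Q ~ 100, amplitude scaled to match the peak height.
    p0 = [f_guess, 100.0, psd[band].max() * f_guess**4 / 100.0**2, psd[band].min()]
    popt, pcov = curve_fit(lorentzian, freq[band], psd[band], p0=p0)
    return popt  # fitted [f0, Q, A, floor]
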
Attachment 1: IMC_InputMatrixDiagonalization.pdf
Attachment 2: inputMatrixCalculationMC.tar
Attachment 3: freeSwingMC.py.tar
Attachment 4: SUSfreeswing_1299514263.txt.tar
  15926   Tue Mar 16 19:13:09 2021   Paco, Anchal | Update | SUS | First success in Input Matrix Diagonalization

After jumping through a few hoops, we have a successful result in diagonalizing the input matrices for MC1, MC2 and MC3.


Code:

  • Attachment 2 contains the code files. For now, we can only guarantee that it works on Donatella in the conda base environment. Our code lives in scripts/SUS/InMatCalc
  • We mostly follow the steps mentioned in 4886 and the matlab codes in scripts/SUS/peakFit.
  • The data are first multiplied by the currently used input matrix to get time-series data in the DOF basis (POS, PIT, YAW, SIDE).
  • Then, the peak frequencies of each resonance are identified.
  • For these results we did not attempt to fit the peaks with Lorentzians; we simply took the maxima of the PSD to get the peak positions. This is only possible if the current input matrix is good enough. We still have to adjust some parameters so that our fitting code works reliably.
  • A TF estimate of each sensor's data w.r.t. the UL sensor is taken, and the values around the resonance peak frequencies are averaged to get the sensing matrix.
  • This matrix is normalized along the DOF axis (columns in our case) and then inverted.
  • After inversion, another normalization is done along the DOF axis (now rows). A hedged sketch of this pipeline is given after this list.
  • Finally, we plot a comparison of the ASDs in the DOF basis when using the current input matrix and when using our calculated (diagonalizing) input matrix.
  • You can see in Attachment 1 that after diagonalization each DOF shows a resonance only at its own resonance frequency, whereas earlier some mixing was visible.
  • The absolute scale of the calculated DOFs might have changed, so we need to calibrate them or apply appropriate gain factors in the DOF-basis filter chains.
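
A hedged sketch of the pipeline listed above (the real implementation is in scripts/SUS/InMatCalc). Here `sensors` is assumed to be an (N, 5) array of free-swinging OSEM time series (UL, UR, LR, LL, SD) and `peak_idx` maps each DOF to the index of its resonance in the frequency vector; both are hypothetical inputs.

import numpy as np
import scipy.signal as sig

def sensing_matrix(sensors, fs, peak_idx, navg=5):
    nsen = sensors.shape[1]
    S = np.zeros((nsen, len(peak_idx)))
    for i in range(nsen):
        # TF estimate of each sensor w.r.t. the UL sensor (column 0),
        # averaged over a few bins around each resonance peak.
        f, Pul = sig.csd(sensors[:, 0], sensors[:, 0], fs=fs, nperseg=2**14)
        _, Pxu = sig.csd(sensors[:, i], sensors[:, 0], fs=fs, nperseg=2**14)
        tf = Pxu / Pul
        for j, k in enumerate(peak_idx):
            S[i, j] = np.real(tf[k - navg:k + navg + 1]).mean()
    return S

def input_matrix(S):
    # Normalize along the DOF axis (columns), invert, then normalize the
    # resulting rows, following the steps described in the list above.
    S = S / np.abs(S).max(axis=0)
    M = np.linalg.pinv(S)
    return M / np.abs(M).max(axis=1, keepdims=True)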

Next steps:

  • We'll complete our scripts and make them general enough to be used on any optic.
  • We'll combine them into a single script that can be called from medm.
  • In parallel, we'll start onwards from step 2 in 15881.
  • Any other suggestions on our first result are welcome. Did we actually do it, or are we fooling ourselves?
Attachment 1: IMC_InputMatrixDiagonalization.pdf
Attachment 2: InMatCalcScripts.zip
  15928   Wed Mar 17 09:05:01 2021   Paco, Anchal | Configuration | Computers | 40m Control Room Changes
  • Switched positions of allegra and donatella.
  • While doing so, the HDMI cable previously used by donatella snapped. We replaced it with an unused cable we found connected at only one end to rossa. We should get more HDMI cables if that cable was needed for some other purpose.
  • Paco bought a bluetooth speaker/mic that is placed in front of allegra; its USB adapter is connected to the bottom of the iMac's keyboard. With the new camera installed, the 40m video-call setup is now complete.
  • Allegra's monitor is back in place as a placeholder, but it is not working; we will need a new monitor for it whenever it is going to be used.
  15937   Thu Mar 18 09:18:49 2021   Paco, Anchal | Update | SUS | Testing of new input matrices with new data

[Paco, Anchal]

Since the newly generated matrices were computed from last time's measurement, they are of course going to work well on that data. We need to test with new, independent data to see whether they work in general.


  • We ran scripts/SUS/InMatCal/freeSwingMC.py for 1 repetition with a free-swinging duration of 1050 s in the tmux session FreeSwingMC on Rossa. Started at GPS: 1300118787.
  • Thu Mar 18 09:24:57 2021 : The script ended successfully. IMC is locked back again. Killing the tmux session.
  • Attached are the results of the 1-kick test: time-series data and the ASDs of the DOFs calculated using the existing input matrix and using our calculated input matrix.
  • The existing one was already pretty good, except maybe for the SIDE DOF, which was improved by our diagonalization.

[Paco]

After Anchal left for his test, I took the time to set up the iMac station so that Stephen (and others) can remote-desktop into it to use Omnigraffle. For this, I enabled the remote login and remote management settings under "Sharing" in "System Settings". These should allow authenticated ssh-ing and remote-desktopping, respectively. The password is the same one that's currently stored in the secrets.

Quickly tested using my laptop (OS:linux, RDP client = remmina + VNC protocol) and it worked. Hopefully Stephen can get it to work too.

Attachment 1: MC_Optics_Kicked_Time_Series_1.pdf
Attachment 2: TEST_Input_Matrix_Diagonalization.pdf
  15943   Fri Mar 19 10:49:44 2021   Paco, Anchal | Update | SUS | Trying coil actuation balance

[Paco, Anchal]

  • We decided to try the coil actuation balancing after seeing some posts from Gautam about the same on PRM and ETMY.
  • We used diaggui to send a swept-sine excitation signal to C1:SUS-MC3_ULCOIL_EXC and read it back at C1:SUS-MC3_ASCPIT_IN1. The idea was to make transfer function measurements similar to 15880.
  • We first tried taking the transfer function with excitation amplitudes of 1, 10, 50, 200 with the damping loops on (swept from 10 to 100 Hz logarithmically in 20 points).
  • We found no meaningful measurement; it looked like we were just measuring noise.
  • We concluded that this was probably because our damping loops were suppressing the excitation.
  • So we decided to switch off damping and retry.
  • We switched off: C1:SUS-MC3_SUSPOS_SW2 , C1:SUS-MC3_SUSPIT_SW2, C1:SUS-MC3_ASCPIT_SW2, C1:SUS-MC3_ASCYAW_SW2, C1:SUS-MC3_SUSYAW_SW2, and C1:SUS-MC3_SUSSIDE_SW2.
  • We repeated the above measurements, going up in excitation amplitude as 1, 10, 20. We saw the oscillation going out of UL_COIL, but the swept sine couldn't measure any meaningful transfer function to C1:SUS-MC3_ASCPIT_IN1, so we decided to just stop. We are probably doing something wrong.

Trying to go back to same state:

  • We switch on: C1:SUS-MC3_SUSPOS_SW2 , C1:SUS-MC3_SUSPIT_SW2, C1:SUS-MC3_ASCPIT_SW2, C1:SUS-MC3_ASCYAW_SW2, C1:SUS-MC3_SUSYAW_SW2, and C1:SUS-MC3_SUSSIDE_SW2.
  • But C1:SUS-MC3_ASCYAW_INMON had accumulated an offset of about 600 counts and was disrupting the alignment. We switched off C1:SUS-MC3_ASCYAW_SW2, hoping the offset would go away once the optic was damped with just the OSEM sensors, but it didn't.
  • Even after several minutes, the offset in C1:SUS-MC3_ASCYAW_INMON kept increasing and crossed the 2000-count limit set in the C1:IOO-MC3_YAW filter bank.
  • We tried to unlock the IMC and lock it back again but the offset still persisted.
  • We tried adding a bias in the YAW DOF by increasing C1:SUS-MC3_YAW_OFFSET; while it somewhat reduced the WFS C1:SUS-MC3_ASCYAW_INMON offset, it was misaligning the optic and the lock was lost, so we set the bias back to zero.
  • We tried to track down where the offset was coming from. In C1IOO_WFS_MASTER.adl, we opened the WFS2_YAW filter bank to see if the sensor is indeed reading the increasing offset.
  • It is quite weird that C1:IOO-WFS2_YAW_INMON is just oscillating, while the output of this WFS2_YAW filter bank is a slowly increasing offset.
  • We tried zeroing the gain and setting it back to 0.1 to see if some hold function was causing it, but that was not the case; the output went back to a large negative offset and kept increasing.
  • We don't know what else to do. Only this one WFS YAW output is increasing; everything else is at a normal level with no increasing offset or peculiar behavior.
  • We are leaving C1:SUS-MC3_ASCYAW_SW2 off as it is disrupting the IMC lock.

[Jon walked in, asked him for help]

  • Jon suggested doing a burt restore on the IOO channels.
  • We used (selected through burtgooey):
    burtwb -f /opt/rtcds/caltech/c1/burt/autoburt/snapshots/2021/Mar/19/08:19/c1iooepics.snap -l /tmp/controls_1210319_113410_0.write.log -o /tmp/controls_1210319_113410_0.nowrite.snap -v
  • No luck, the problem persists.
  15951   Mon Mar 22 11:57:21 2021   Paco, Anchal | Update | SUS | Trying coil actuation balance

[Paco, Anchal]

  • For MC coil balancing we will use the ASC (i.e. WFS) error signals since there are no OPLEV inputs (are there OPLEVs at all?).

Test MC1

  • Using the SUS screen LockIns the plan is to feed excitation(s) through the coil outputs, and look at the ASC(Y/P) error signals.
  • A diaggui xml template was saved in /users/Templates/SUS/MC1-actDiag.xml which was based on /users/Templates/SUS/ETMY-actDiag.xml
  • Before running the measurement, we of course want to plug in our new input matrix, so we ran /scripts/SUS/InMatCalc/writeMatrix.py, only to find that it tripped the MC1 watchdog.
    • The SIDE input seemed to rail the most, but we just followed the procedure of temporarily increasing the WD max! threshold to allow the damping action and then restoring it.
    • This happened because, in the latest iteration of our code, we followed advice from the matlab code to ensure the SIDE OSEM -> SIDE DOF matrix element remains positive, but we found that the MC1 SIDE gain (C1:SUS-MC1_SUSSIDE_GAIN) was set to -8000 (instead of a positive value like all the other suspensions).
    • So we decided to try our new input matrix with a positive gain value of 8000 at C1:SUS-MC1_SUSSIDE_GAIN, and we were able to stabilize the optic and acquire lock, but...
    • We saw that the WFS YAW DOF started accumulating an offset and disturbing the lock (much like last Friday). We disabled the ASC input button (C1:SUS-MC1_ASCYAW_SW2).
    • This made the lock stable and the IMC autolocker was able to lock, but the offset kept increasing (see Attachment 1).
    • After some time, the offset began to approach exponentially a steady-state value of around -3000.
  • We wrote back the old matrix values and changed C1:SUS-MC1_SUSSIDE_GAIN back to -8000, but the ASCYAW offset stayed where it was. We're leaving the input disabled again as we don't know how to fix this. Hopefully it will organically come back to a small value later in the day like last time (Gautam just re-enabled the ASCYAW input and it worked).

Test MC3

  • Defeated by MC1, we moved to MC3.
  • Here, the gain value for C1:SUS-MC3_SUSSIDE_GAIN was already positive (+500) so it could directly take our new matrix.
  • We switched off the watchdog, loaded the new matrix, and switched the watchdog back on.
  • The IMC lock was slightly disrupted but remained locked. There was no unusual activity in the WFS sensor values; however, we saw that the SIDE coil output was slowly accumulating an offset.
  • So we switched off the watchdog before it could trip itself, wrote back the old matrix, and reinstated the status quo.
  • This suggests we need to look back carefully at our latest normalization changes and produce new input matrices that keep the system stable, rather than only working on paper with offline data.
Attachment 1: 210322_MC1_ASCY.pdf
Attachment 2: NewandOldMatrices.tar.gz
  15954   Mon Mar 22 19:07:50 2021   Paco, Anchal | Update | SUS | Trying coil actuation balance

We found that the following protocol works for switching the input matrices to the new matrices (a hedged sketch using EPICS channel access is given after the list):

  • Shut the PSL shutter C1:PSL-PSL_ShutterRqst. Switch off IMC autolocker C1:IOO-MC_LOCK_ENABLE.
  • Switch off the watchdog, C1:SUS-MC1_LATCH_OFF.
  • Write in the new matrix. (In the case of MC1, we need to change the sign of C1:SUS-MC1_SUSSIDE_GAIN for the new matrix.)
  • Switch on the watchdog back again which enables all the coil outputs. Confirm that the optic is damped with just OSEM sensors.
  • Switch on IMC autolocker C1:IOO-MC_LOCK_ENABLE and open PSL shutter C1:PSL-PSL_ShutterRqst.
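
A hedged sketch of this protocol using pyepics. The shutter, autolocker, and watchdog channel names are taken from this entry; the input-matrix element channel names (C1:SUS-<optic>_INMATRIX_<i>_<j>) and the 0/1 values written to the switches are assumptions about the naming and conventions and should be checked on the MEDM screens before use.

import time
from epics import caput

def load_input_matrix(optic, matrix):
    caput('C1:PSL-PSL_ShutterRqst', 0)          # close the PSL shutter
    caput('C1:IOO-MC_LOCK_ENABLE', 0)           # disable the IMC autolocker
    caput(f'C1:SUS-{optic}_LATCH_OFF', 0)       # shut off the watchdog (assumed value)
    for i, row in enumerate(matrix, start=1):
        for j, val in enumerate(row, start=1):
            caput(f'C1:SUS-{optic}_INMATRIX_{i}_{j}', val)  # assumed channel naming
    caput(f'C1:SUS-{optic}_LATCH_OFF', 1)       # watchdog back on -> coil outputs enabled
    time.sleep(10)                              # wait and confirm OSEM-only damping settles
    caput('C1:IOO-MC_LOCK_ENABLE', 1)           # re-enable the autolocker
    caput('C1:PSL-PSL_ShutterRqst', 1)          # reopen the PSL shutter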

We repeated this for MC2 as well and were able to lock. However, we could not do the same for MC3: it became unstable as soon as the cavity was locked, i.e. the WFS were making the lock unstable. The instability was different in different attempts, but we didn't try more times as we had to go.


Coil actuation balancing:

  • We set the LOCKIN1 and LOCKIN2 oscillators at 10.5 Hz and 13.5 Hz with amplitudes of 10 counts.
  • We wrote PIT, YAW and Butterfly actuation vectors (see the attached text files used for this) on LOCKIN1 and LOCKIN2 for MC1.
  • We measured C1:SUS-MC1_ASCYAW_IN1 and C1:SUS-MC1_ASCPIT_IN1 and compared them against the case when no excitation was fed.
  • We repeated the above steps for MC2, except that we did not use LOCKIN2. LOCKIN2 was found to be already on, with an oscillator frequency of 0.03 Hz and an amplitude of 500 counts, fed to all coils with a gain of 1 (so it was effectively moving the position DOF at 0.03 Hz). When we changed it, it came back ON after we turned on the autolocker, so we guess this must be due to some background script and must be important, so we did not make any changes here. But what is it for?
  • We have gotten some good data for MC1 and MC2 to ponder next.
  • MC1 showed no cross coupling at all, while MC2 showed significant cross coupling between PIT and YAW.
  • Neither MC1 nor MC2 showed any cross coupling between butterfly actuation and the PIT/YAW DOFs.

In other news, the IOO channels died!

  • Right in front of us, the medm channels starting with C1:IOO just died. See Attachment 8.
  • We are not sure why that happened, but we have reported everything we did up here.
  • This happened around the time we were ready to switch back on the IMC autolocker and open the shutter. But now these channels are dead.
  • All optics were restored with old matrices and settings and are damped in good condition as of now.
  • IMC should lock back as soon as someone can restart the EPICS channels and switch on C1:IOO-MC_LOCK_ENABLE and C1:PSL-PSL_ShutterRqst.
Attachment 1: 20210322_MC1_CoilBalancePIT.pdf
Attachment 2: 20210322_MC1_CoilBalanceYAW.pdf
Attachment 3: 20210322_MC1_CoilBalanceBUTT.pdf
Attachment 4: 20210322_MC2_CoilBalancePIT.pdf
Attachment 5: 20210322_MC2_CoilBalanceYAW.pdf
Attachment 6: 20210322_MC2_CoilBalanceBUTT.pdf
Attachment 7: 20210322_IMC_CoilBalance.tar.gz
Attachment 8: image-e6019a14-9cf3-45f7-8f2c-cc3d7ad1c452.jpg
  15955   Tue Mar 23 09:16:42 2021   Paco, Anchal | Update | Computers | Power cycled C1PSL; restored C1PSL

So actually, it was the C1PSL channels that had died. We did the following to get them back:

  • We went to this page and tried the telnet procedure. But it was unable to find the host.
  • So we followed the next advice: we went to the 1X1 rack and manually hard-shut-off the C1PSL computer by holding down the power button until the LEDs went off.
  • We waited 5-7 seconds and switched it back on.
  • By the time we were back in control room, the C1PSL channels were back online.
  • The mode cleaner however was struggling to keep the lock. It was going in and out of lock.
  • So we followed the next advice and did a burt restore, which ran the following command:
    burtwb -f /opt/rtcds/caltech/c1/burt/autoburt/snapshots/2021/Mar/22/17:19/c1psl.snap -l /tmp/controls_1210323_085130_0.write.log -o /tmp/controls_1210323_085130_0.nowrite.snap -v 
  • Now the mode cleaner was locked, but we found that the input switches of the C1:IOO-WFS1_PIT and C1:IOO-WFS2_PIT filter banks were off, which meant that only the YAW sensors were in the loop.
  • We went back in dataviewer and checked when these channels were shut down. See attachments for time series.
  • It seems this happened yesterday, March 22nd near 1:00 pm (20:00:00 UTC). We can't find any mention of anyone else doing it on elog and we left by 12:15pm.
  • So we shut down the PSL shutter (C1:PSL-PSL_ShutterRqst) and switched off MC autolocker (C1:IOO-MC_LOCK_ENABLE).
  • Switched on C1:IOO-WFS1_PIT_SW1 and C1:IOO-WFS2_PIT_SW1.
  • Turned back on PSL shutter (C1:PSL-PSL_ShutterRqst) and MC autolocker (C1:IOO-MC_LOCK_ENABLE).
  • Mode cleaner locked back easily and now is keeping lock consistently. Everything looks normal.
Attachment 1: MCWFS1and2PITYAW.pdf
Attachment 2: MCWFS1and2PITYAW_Zoomed.pdf
  15961   Thu Mar 25 11:46:31 2021   Paco, Anchal | Update | SUS | MC2 Coil Balancing updates

Proof-of-principle

  • We excited PIT and YAW dofs using LOCKIN1 in MC2 on Monday.
  • We analyzed these data with a simple analysis explained in the Attachment 1 python notebook (also present at /users/anchal/20210323_AnalyszingCoilActuationBalance/).
  • Basically, we estimated the 2x2 cross-coupling matrix from actuated DOF to sensed DOF, inverted it, and applied it to the output matrix to undo the cross coupling (a hedged numpy sketch follows this list).
  • Attachments 2 and 3 show how well we did in undoing the cross coupling.
  • The ratio of 13.5 Hz peaks shows how much coupling is still present.
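
A minimal numpy sketch of the 2x2 correction described above (the real analysis is in the Attachment 1 notebook). The coupling coefficients and the old output matrix below are placeholders, not measured values.

import numpy as np

# C[i][j] = response of sensed DOF i (PIT, YAW) to an excitation along
# actuated DOF j at the LOCKIN line frequency (placeholder numbers).
C = np.array([[1.00, 0.12],
              [0.08, 1.00]])

correction = np.linalg.inv(C)

# Placeholder 2x2 slice of the PIT/YAW output matrix; right-multiplying by the
# inverse coupling redefines the actuation directions so each excitation
# appears in only its own sensed DOF.
out_matrix_old = np.array([[1.0, 0.0],
                           [0.0, 1.0]])
out_matrix_new = out_matrix_old @ correction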

Going towards 3x3 Coil balancing:

  • In a conversation with Rana yesterday, we understood that we can use MC_F data as POS sensing data out of the loop.
  • So today we repeated the excitation measurements, exciting the POS, PIT and YAW DOFs from LOCKIN1 on MC2 and measuring C1:IOO-MC_F, C1:SUS-MC2_ASCPIT_IN1 and C1:SUS-MC2_ASCPIT_IN2.
  • Data from MC_F are converted into units of um using a factor of 9.57e-8 um/Hz.
  • We changed the excitation amplitude in order to see cross-coupling peaks when they were not visible with low excitation.
  • The data were measured with our newly calculated input matrix loaded, which according to our calculations diagonalizes the OSEM sensing matrix.

Some major changes:

  • We found that C1:SUS-MC2_ASCPIT_IN1 showed a broadband noise increase today (relative to Monday) by a factor of about 100 in the 0-20 Hz range.
  • We were not sure why this changed from our 22nd March measurement.
  • We checked whether the gain values in the loops had changed in the last 3 days, but they hadn't.
  • Then we realized that the WFS1_PIT and WFS2_PIT switches that we turned ON on Tuesday were the only changes made in the loop.
  • We turned C1:IOO-WFS1_PIT_SW1 and C1:IOO-WFS2_PIT_SW1 back OFF. This brought the noise level of C1:SUS-MC2_ASCPIT_IN1 back down to what it was on Monday.

 

Attachment 1: CoilActuationBalancing.ipynb.tar.gz
Attachment 2: MC2_CoilBalancePITnorm_excSamePIT.pdf
Attachment 3: MC2_CoilBalanceYAWnorm_excSameYAW.pdf
Attachment 4: 20210325_IMC_CoilBalance.tar.gz
  15970   Fri Mar 26 11:54:37 2021   Paco, Anchal | Update | SUS | MC2 Coil Balancing updates

[Paco, Anchal]

  • Today we spent the morning testing the scripts under ~/c1/scripts/SUS/OutMatCalc/ that automate the procedure (which we have been doing by hand) and catch any "bad" behavior instances that we have identified. In such instances, the script sets up to restore the IMC state smoothly.
  • After some testing and debugging, we managed to get some data for MC2 using ~/c1/scripts/SUS/OutMatCalc/getCrossCouplingData.py
  15972   Mon Mar 29 10:44:51 2021   Paco, Anchal | Update | SUS | MC2 Coil Balancing updates

We ran the coil balancing procedure 4 times while iterating through the output matrix optimization.

Attachment 1, pages 1 to 4, shows the progression of cross coupling from the current output matrix (which is the theoretical ideal) to the latest iteration. We plot the sensed-DOF ASDs, which we used to determine the cross coupling when different excitations are fed in using LOCKIN1: a 13 Hz oscillation of 200 counts amplitude along the vector defined in the output matrix. That means that when we change the output matrix, the excitation direction in subsequent tests changes along with it.

Unfortunately, we don't see very good optimization over the iterations. While we see some peaks going down in sensed PIT and sensed POS (through MC_F), we actually see an increase in cross coupling in sensed YAW.


Scripts:

  • For running the tests, we used the script scripts/SUS/OutMatCalc/crossCoupleTest.py and wrote commanding scripts in /users/anchal/20210329_MC2_TestingNewOutMat .
  • The optimization code is at in scripts/SUS/OutMatCalc/outMatOptimize.py.
  • The code reads the sensed-DOF data using nds2 and calculates the cross spectral density among the sensed DOFs at the excitation frequencies.
  • This is normalized by the power spectral density of the reference data (no excitation) and the power spectral density of the position data to create a TF estimate.
  • The real parts of the sensing matrix thus created are used to compute the inverse matrix.
  • The inverse matrix is first normalized along each row by its diagonal element (to get 1 there) and then multiplied by the previous output matrix to create the new output matrix.
  • Reading the code is probably a better way to understand this algorithm; a hedged sketch of the core steps follows this list.
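
The following is a hedged sketch of the core steps, not the actual scripts/SUS/OutMatCalc/outMatOptimize.py; the exact normalization there (reference and POS PSDs) differs slightly from the generic at-line TF estimate shown here. `data`, `exc`, `f_exc` and the sampling rate `fs` are assumed inputs: sensed-DOF time series for each excited DOF, the recorded excitation, and the line frequencies.

import numpy as np
import scipy.signal as sig

def sensing_at_lines(data, exc, fs, f_exc, nperseg=2**12):
    dofs = ['POS', 'PIT', 'YAW']
    S = np.zeros((3, 3))
    for j, dj in enumerate(dofs):              # excited DOF
        for i, di in enumerate(dofs):          # sensed DOF
            f, Pxy = sig.csd(data[dj][di], exc[dj], fs=fs, nperseg=nperseg)
            _, Pyy = sig.welch(exc[dj], fs=fs, nperseg=nperseg)
            k = np.argmin(np.abs(f - f_exc[j]))
            S[i, j] = np.real(Pxy[k] / Pyy[k]) # real part of the TF estimate at the line
    return S

def new_output_matrix(S, out_old):
    Sinv = np.linalg.inv(S)
    Sinv = Sinv / np.diag(Sinv)[:, None]       # row-normalize by the diagonal -> 1
    return out_old @ Sinv                      # fold correction into the output matrix
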
Attachment 1: MC2OutMatCrossCouple_Old-to-It3.pdf
Attachment 2: 20210329_MC2_CrossCoupleTest.tar.gz
  16233   Thu Jul 1 10:34:51 2021   Paco, Anchal | Summary | LSC | ETMY QPD fixed

Paco worked on aligning the beam splitter to get light onto the ETMY QPD and was successful in centering it without any other changes to the settings.

  16238   Tue Jul 6 10:47:07 2021   Paco, Anchal | Update | IOO | Restored MC

MC was unlocked and struggling to recover this morning due to misguided WFS offsets. In order to recover from this kind of issue, we

  1. Cleared the bogus WFS offsets
  2. Used the MC alignment sliders to change MC1 YAW from -0.9860 to -0.8750 until we saw the lowest order mode transmission on the video monitor.
  3. With MC Trans sum at around ~ 500 counts, we lowered the C1:IOO-WFS_TRIGGER_THRESH_ON from 5000 to 500, and the C1:IOO-WFS_TRIGGER_MON from 3.0 to 0.0 seconds and let the WFS integrators work out some nonzero angular control offsets.
  4. Then, the MC Trans sum increased to about 2000 counts but started oscillating slowly, so we restored the delayed loop trigger from 0.0 to 3.0 seconds and saw the MC Trans sum reach its nominal value of ~ 14000 counts over a few minutes.

The MC is now restored and the plan is to let it run for a few hours so the offsets converge; then run the WFS relief script.

  15693   Wed Dec 2 12:35:31 2020   Paco | Summary | Computer Scripts / Programs | TC200 python driver

Given the similarities between the MDT694B (single-channel piezo controller) and TC200 (temperature controller) serial interfaces, I added the pyserial driver here.

*Warning* this first version of the driver remains untested
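
For reference, here is a generic pyserial sketch along these lines; it is not the actual driver linked above, and the serial settings and the "tact?" query string are assumptions about the TC200 interface rather than verified commands.

import serial

class TC200:
    def __init__(self, port='/dev/ttyUSB0', baudrate=115200, timeout=1.0):
        # Assumed serial parameters; check the instrument manual before use.
        self.dev = serial.Serial(port=port, baudrate=baudrate, timeout=timeout)

    def query(self, cmd):
        # Commands are assumed to be carriage-return terminated; the reply is
        # read back as a single line and decoded.
        self.dev.write((cmd + '\r').encode())
        return self.dev.readline().decode(errors='ignore').strip()

    def get_temperature(self):
        return self.query('tact?')   # assumed query for the actual temperature

    def close(self):
        self.dev.close()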

  15957   Wed Mar 24 09:23:49 2021   Paco | Update | SUS | MC3 new Input Matrix

[Paco]

  • Found IMC locked upon arrival
  • Loaded newest MC3 Input Matrix coefficients using /scripts/SUS/InMatCalc/writeMatrix.py after unlocking the MC, and disabling the watchdog. 
  • Again, the sensor signals started increasing after the WD was re-enabled with the new input matrix, so I manually tripped it and restored the old matrix; recovered the MC lock.
  • Something is still off with this input matrix that makes the MC3 loop unstable.
  16152   Fri May 21 12:12:11 2021   Paco | Update | NoiseBudget | AUX PDH loop identification

[Anchal, Paco]

We went into 40m to identify where XARM PDH loop control elements are. We didn't touch anything, but this is to note we went in there twice at 10 AM and 11:10 AM.

  16156   Mon May 24 10:19:54 2021   Paco | Update | General | Zita IOO strip

Updated IOO.strip on Zita to show WFS2 pitch and yaw trends (C1:IOO-WFS2_PIY_OUT16 and C1:IOO-WFS2_YAW_OUT16) and changed the colors slightly to have all pitch trends in the yellow/brown band and all yaw trends in the pink/purple band.

No one says, "Here I am attaching a cool screenshot, becuz else where's the proof? Am I right or am I right?"

Mon May 24 18:10:07 2021 [Update]

After waiting for some traces to fill the screen, here is a cool screenshot (Attachment 1). At around 2:30 PM the MC unlocked, and the BS_Z (vertical) seismometer readout jumped. It has stayed like this for the whole afternoon... The MC eventually caught its lock and we even locked XARM without any issue, but something happened in the 10-30 Hz band. We will keep an eye on it during the evening...

Tue May 25 08:45:33 2021 [Update]

At approximately 02:30 UTC (so 07:30 PM yesterday) the 10-30 Hz seismic step dropped back... It lasted 5 hours, mostly causing BS motion along Z (vertical) as seen by the minute trend data in Attachment 2. Could the MM library have been shaking? Was the IFO snoring during its afternoon nap?

Attachment 1: Screenshot_from_2021-05-24_18-09-37.png
Attachment 2: 24and25_05_2021_PEM_BS_10_30.png
  16176   Wed Jun 2 17:50:50 2021   Paco | Update | Equipment loan | Borrow red cart

I borrowed the little red cart 🛒 to help clear the path for new optical tables in B252 West Bridge. Will return once I am done with it.  

Attachment 1: IMG_20210602_172858.jpg
  16180   Thu Jun 3 17:49:46 2021   Paco | Update | Equipment loan | Borrow red cart

Returned today.

Quote:

I borrowed the little red cart 🛒 to help clear the path for new optical tables in B252 West Bridge. Will return once I am done with it.  

 

  16219   Tue Jun 22 16:52:28 2021   Paco | Update | SUS | ADC/Slow channels issues

After sliding the alignment bias around and browsing the elog for "stuck", we concluded that the ITMX OSEMs needed to be freed. The procedure is to slide the alignment bias back and forth ("shaking") and then, as the OSEM readings start to vary, enable the damping. We did just this, and then slowly restored the alignment bias sliders to their original positions. Attachment 1 shows the ITMX OSEM sensor input monitors throughout this procedure.


In the end, since the MC had trouble catching lock after opening the PSL shutter, I tried burt-restoring the ioo to 2021/Jun/17/06:19/c1iooepics.snap, but the problem persists.

Attachment 1: shake_and_damp.png
  16234   Thu Jul 1 11:37:50 2021   Paco | Update | General | restarted c0rga

Physically rebooted the c0rga workstation after failing to ssh into it (even though I was able to ping it...). The RGA seems to be off, though. The last log with data in it appears to date back to 2020 Nov 10, but reasonable spectra only appear up to the 11-05 logs. Gautam verified that the RGA was intentionally turned off then.

  16248   Thu Jul 15 14:25:48 2021   Paco | Update | LSC | CM board

[gautam, paco]

We tested the CM board by implementing the high bandwidth IR lock (single arm). In preparation for this test we temporarily connected the POY11_Q_MON output to the CM board IN1 input and checked the YARM POY transfer function by running the AA_YARM_TEMPLATE under users/Templates/LSC/LSC_loops/YARM_POY/. We made sure the YARM dither optimized TRY so as to maximize the optical gain stage. Then we proceeded as follows:

  • From the LSC --> CM Servo screen, we controlled the REFL 1 Gain (dB) slider (nominal +25) and MC Servo IN2 Gain (dB) slider (nominal -32 dB) to transfer the low bandwidth (digital) control to the high bandwidth (analog) control of the YARM.
  • During this game, we monitored the C1:LSC-POY11_I_ERR_DQ & C1:LSC-CM_SLOW_OUT_DQ error signal channels for saturation, oscillations, or stability.
  • Once a set of gains was successful in maintaining a stable lock, we measured the OLTF using SR 785 to track the UGF as we mix the two paths.
  • Once the gains had increased, the boost and super-boost stages could be enabled as well.

Ultimately, our ability to progressively increase the control bandwidth of the YARM is a proxy that the CM board is working properly. Attachment 1 shows the OLTF progression as we increased the loop's UGF. Note how as we approached the maximum measured UGF of ~ 22 kHz, our phase margin decreased signifying poor stability.


At the end of this measurement, at about ~ 15:45 I restored the CM board IN1 input and disconnected the POY11_Q_MON

gautam: the conclusion here is that the CM board seems to work as advertised, and it's not solely responsible for not being able to achieve the IR handoff. 

Attachment 1: high_BW_TFs.pdf
  16254   Thu Jul 22 16:06:10 2021   Paco | Update | Loss Measurement | Loss measurement

[yehonathan, anchal, paco, gautam]

We finished estimating the XARM and YARM losses. The hardware configuration from yesterday remains, but we repeated the measurements because we realized our REFL55_I_ERR and REFL55_Q_ERR signals, representing PD520 and MC_TRANS, were scaled, offset, and rotated in a way that wasn't trivially undone by our postprocessing scripts... Another caveat that we encountered today was the need to add a "macroscopic" misalignment to the ITMs when doing the measurement to avoid any accidental resonances.

The final measurements were done with 16 repetitions, 30 second duration, and the logfiles are under scripts/lossmap_scripts/armLoss/logs/20210722_1423.txt and scripts/lossmap_scripts/armLoss/logs/20210722_1513.txt

Finally, the estimated YARM loss is 39 ± 7 ppm, while the estimated XARM loss is 38 ± 8 ppm. This is consistent with the inferred PRC gain from Monday and a PRM loss of ~ 2%.


Future measurements may want to look into slow drift of the locked vs misaligned traces (systematic errors?) and a better way of estimating the statistical uncertainty (e.g. by splitting the raw time traces into short segments)

  16257   Mon Jul 26 17:34:23 2021   Paco | Update | Loss Measurement | Loss measurement

[gautam, yehonathan, paco]

We went back to the loss data from last week and more carefully estimated the ARM loss uncertainties.

Before, we simply stitched all N=16 repetitions into a single time series and computed the loss; e.g. see Attachment 1 for such YARM loss data. The mean and std dev of this long time series give the loss quoted last time. We knew that the uncertainty was almost certainly overestimated, as different realizations need not sample similar alignment conditions and are sensitive to different imperfections (e.g. beam angular motion, unnormalizable power fluctuations, etc.).


Today we analyzed the individual locked/misaligned cycles one by one. From each cycle it is possible to obtain a mean value of the loss as well as a std dev *across the duration of the trace*, but because we have a measurement ensemble, it is also possible to obtain an ensemble-averaged mean and a statistical uncertainty estimate *across the independent cycle realizations* (a minimal sketch of this is given below). While the mean values don't change much, in the latter estimate we find a much smaller statistical uncertainty. We obtain an XARM loss of 37.6 ± 2.6 ppm and a YARM loss of 38.9 ± 0.6 ppm. To make the distinction more clear, Attachment 2 and Attachment 3 show the YARM and XARM loss measurement ensembles, respectively, with single-realization (time-series) standard deviations as vertical error bars and the 1 sigma statistical uncertainty estimate as a filled color band. Note that the XARM loss drifts across different realizations (which happen to be ordered in time), which we think arises from inconsistent ASS dither alignment convergence. This is yet to be tested.
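
A minimal sketch of the two uncertainty estimates discussed above; `cycles` is a hypothetical list of per-cycle loss time series (one array per locked/misaligned realization), in ppm.

import numpy as np

def loss_statistics(cycles):
    # Per-cycle means and std devs: the spread *within* each realization.
    means = np.array([c.mean() for c in cycles])
    stds = np.array([c.std(ddof=1) for c in cycles])
    # Ensemble mean and statistical uncertainty *across* realizations:
    # standard error of the mean over the N independent cycles.
    ens_mean = means.mean()
    ens_err = means.std(ddof=1) / np.sqrt(len(means))
    return ens_mean, ens_err, means, stds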


For budgeting the excessive uncertainties from a single locked/misaligned cycle, we could look at beam pointing, angular drift, power, and systematic differences in the paths from both reflection signals. We should be able to estimate the power fluctuations by looking at the recorded arm transmissions, the recorded MC transmission, PD technical noise, etc... and we might be able to correlate recorded oplev signals with the reflection data to identify angular drift. We have not done this yet.

Attachment 1: LossMeasurement_RawData.pdf
Attachment 2: YARM_loss_stats.pdf
Attachment 3: XARM_loss_stats.pdf
  16266   Thu Jul 29 14:51:39 2021   Paco | Update | Optical Levers | Recenter OpLevs

[yehonathan, anchal, paco]

Yesterday around 9:30 pm, we centered the BS, ITMY, ETMY, ITMX and ETMX oplevs (in that order) on their respective QPDs by turning the last mirror before each QPD. We did this after running the ASS dither for the XARM/YARM configurations to use as the alignment reference, in preparation for PRFPMI lock acquisition, which we had to stop due to an earthquake around midnight.

  16267   Mon Aug 2 16:18:23 2021   Paco | Update | ASC | AS WFS MICH commissioning

[anchal, paco]

We picked up AS WFS commissioning for daytime work, as suggested by gautam. In the end we want to commission this for the PRFPMI, but also for the PRMI and MICH for completeness. MICH is the simplest, so we are starting here.

We started by restoring the MICH configuration and aligning the AS DC QPD (on the AS table) by zeroing C1:ASC-AS_DC_YAW_OUT and C1:ASC-AS_DC_PIT_OUT. Since the AS WFS gets the AS beam in transmission through a beamsplitter, we had to correct that beamsplitter's alignment to recenter the AS beam onto the AS110 PD (for this we looked at the signal on a scope).

We then tuned the rotation (R) angles C1:ASC-AS_RF55_SEGX_PHASE_R and delay (D) angles C1:ASC-AS_RF55_SEGX_PHASE_D (where X = 1, 2, 3, 4 labels the segment) to rotate all the signal into the I quadrature. We found that this optimized the PIT content on C1:ASC-AS_RF55_I_PIT_OUT and the YAW content on C1:ASC-AS_RF55_I_YAW_OUTMON, which is what we want anyway.

Finally, we set up some simple integrators for these WFS in the C1ASC-DHARD_PIT and C1ASC-DHARD_YAW filter banks, with a pole at 0 Hz, a zero at 0.8 Hz, and a gain of -60 dB (similar to the MC WFS); a hedged sketch of this filter shape is given below. Nevertheless, when we closed the loop by actuating on the BS ASC PIT and ASC YAW inputs, it seemed that the ASC model outputs are not connected to the BS SUS model ASC inputs, so we might need to edit the model accordingly and restart it.
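
A hedged offline sketch of the intended integrator shape (pole at 0 Hz, zero at 0.8 Hz, overall gain of -60 dB); the real filters live in the foton banks and may use a different gain reference, so this is only an illustration of the frequency response.

import numpy as np
import scipy.signal as sig

gain = 10**(-60 / 20)                 # -60 dB
zeros = [-2 * np.pi * 0.8]            # zero at 0.8 Hz
poles = [0.0]                         # pole at 0 Hz (pure integrator)
# With this k, the high-frequency asymptote of |H| is `gain`, while the
# response rises as 1/f below the 0.8 Hz zero.
sys = sig.ZerosPolesGain(zeros, poles, gain)

w = 2 * np.pi * np.logspace(-2, 2, 500)
w, mag, phase = sig.bode(sys, w=w)    # magnitude in dB, phase in degrees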

  16272   Fri Aug 6 17:10:19 2021   Paco | Update | IMC | MC rollercoaster

[anchal, yehonatan, paco]

For whatever reason (i.e. we don't really know) the MC unlocked into a weird state at ~ 10:40 AM today. We first tried to find a likely cause as we saw it couldn't recover itself after ~ 40 min... so we decided to try a few things. First we verified that no suspensions were acting weird by looking at the OSEMs on MC1, MC2, and MC3. After validating that the sensors were acting normally, we moved on to the WFS. The WFS loops were disabled the moment the IMC unlocked, as they should. We then proceeded to the last resort of tweaking the MC alignment a bit, first with MC2 and then MC1 and MC3 in that order to see if we could help the MC catch its lock. This didn't help much initially and we paused at about noon.

At about 5 pm, we resumed since the IMC had remained locked to some higher order mode (TEM-01 by the looks of it). While looking at C1:IOO-MC_TRANS_SUMFILT_OUT on ndscope, we kept on shifting the MC2 Yaw alignment slider (steps = +-0.01 counts) slowly to help the right mode "hop". Once the right mode caught on, the WFS loops triggered and the IMC was restored. The transmission during this last stage is shown in Attachment #1.

Attachment 1: MC2_trans_sum_2021-08-06_17-18-54.png
  16275   Wed Aug 11 11:35:36 2021   Paco | Update | LSC | PRMI MICH orthogonality plan

[yehonathan, paco]

Yesterday we discussed a bit about working on the PRMI sensing matrix.

In particular we will start with the "issue" of non-orthogonality in the MICH actuated by BS + PRM. Yesterday afternoon we played a little with the oscillators and ran sensing lines in MICH and PRCL (gains of 50 and 5 respectively) in the times spanning [1312671582 -> 1312672300], [1312673242 -> 1312677350] for PRMI carrier and [1312673832 -> 1312674104] for PRMI sideband. Today we realize that we could have enabled the notchSensMat filter, which is a notch filter exactly at the oscillator's frequency, in FM10 and run a lower gain to get a similar SNR. We anyways want to investigate this in more depth, so here is our tentative plan of action which implies redoing these measurements:

Task: investigate orthogonality (or lack thereof) in the MICH when actuated by BS & PRM
    1) Run sensing MICH and PRCL oscillators with PRMI Carrier locked (remember to turn NotchSensMat filter on).
    2) Analyze data and establish the reference sensing matrix.
    3) Write a script that performs steps 1 and 2 in a robust and safe way.
    4) Scan the C1:LSC-LOCKIN_OUTMTRX, MICH to BS and PRM elements around their nominal values.
    5) Scan the MICH and PRCL RFPD rotation angles around their nominal values.

We also talked about the possibility that the sensing matrix is strongly frequency dependent, such that measuring it at 311 Hz doesn't give us an accurate estimate of it. Is it worthwhile to try to measure it at lower frequencies using an appropriate notch filter?


Wed Aug 11 15:28:32 2021 Updated plan after group meeting

- The problem may be in the actuators since the orthogonality seems fine when actuating on the ITMX/ITMY, so we should instead focus on measuring the actuator transfer functions using OpLevs for example (same high freq. excitation so no OSEM will work > 10 Hz).

  16277   Thu Aug 12 11:04:27 2021   Paco | Update | General | PSL shutter was closed this morning

Thu Aug 12 11:04:42 2021 Arrived to find the PSL shutter closed. Why? Who? When? How? No elog, no fun. I opened it, IMC is now locked, and the arms were restored and aligned.

  16280   Mon Aug 16 23:30:34 2021   Paco | Update | CDS | AS WFS commissioning; restarting models

[koji, ian, tega, paco]

With the remote/local assistance of Tega/Ian last Friday, I made changes to the c1sus model by connecting the C1:ASC model outputs (found within a block in c1ioo) to the BS and PRM suspension inputs (pitch and yaw). Then Koji reviewed these changes today and pointed out that no changes are actually needed, since the blocks were already in place and connected to the right ports; the model probably just wasn't rebuilt...

So, today we ran "rtcds make", "rtcds install" on the c1ioo and c1sus models (in that order) but the whole system crashed. We spent a great deal of time restarting the machines and their processes but we struggled quite a lot with setting up the right dates to match the GPS times. What seemed to work in the end was to follow the format of the date in the fb1 machine and try to match the timing to the sub-second level. This is especially tricky when performed by a human action so the whole task is tedious. We anyways completed the reboot for almost all the models except the c1oaf (which tends to make things crashy) since we won't need it right away for the tasks ahead. One potential annoying issue we found was in manually rebooting c1iscey because one of its network ports is loose (the ethernet cable won't click in place) and it appears to use this link to boot (!!) so for a while this machine just wasn't coming back up.

Finally, as we restored the suspension controls and reopened the shutters, we noticed a great deal of misalignment to the point no reflected beam was coming back to the RFPD table. So we spent some time verifying the PRM alignment and TT1 and TT2 (tip tilts) and it turned out to be mostly the latter pair that were responsible for it. We used the green beams to help optimize the XARM and YARM transmissions and were able to relock the arms. We ran ASS on them, and then aligned the PRM OpLevs which also seemed off. This was done by giving a pitch offset to the input PRM oplev beam path and then correcting for it downstream (before the qpd). We also adjusted the BS OpLev in the end.


Summary; the ASC BS and PRM outputs are now built into the SUS models. Let the AS WFS loops be closed soon!


Addenda by KA
- Upon the RTS restarting,

  • Date/Time adjustment
    sudo date --set='xxxxxx'
  • If the time on the CDS status medm screen for each IOP matches the FB local time, we ran
    rtcds start c1x01
    (or c1x02, etc.)
  • Every time we restarted the IOPs, fb was restarted by
    telnet fb1 8083
    > shutdown

    and restarted mx_stream from the CDS screen because these actions change the "DC" status.

- Today we once succeeded in restarting the vertex machines. However, the RFM signal transmission failed, so the two end machines were power cycled, as well as c1rfm, but this made all the machines go RED again. Hell...

- We checked the PRM oplev. The spot was around the center but was clipped, which confused us quite a bit. Our conclusion was that the oplev was already like that before the RTS reboot.

  16287   Mon Aug 23 10:17:21 2021   Paco | Summary | Computers | system reboot glitch

[paco]

At 09:34 PST I noted a glitch in the controls room as the machines went down, except for c1ioo. Briefly, the video feeds disappeared from the screens, though the screens themselves didn't lose power. At first I thought this was some kind of power glitch, but upon checking with Jordan, it most likely was related to some system crash. Coming back to the controls room, I could see the MC reflection beam swinging, but unfortunately all the FE models came down. I noticed that the DAQ status channels were blank.

I ssh'd into c1ioo with no problem and ran "rtcds stop c1ioo c1als c1omc", then "rtcds restart c1x03" to do a soft restart. This worked, but the DAQ status was still blank. I then tried to ssh into c1sus and c1lsc without success; similarly, c1iscex and c1iscey were unreachable. I went and did a hard restart on c1iscex by switching it off, then its extension chassis, then unplugging the power cords, then inverting these steps, and could ssh into it from rossa. I ran "rtcds start c1x01" and saw the same blank DAQ status. I noticed the elog was also down... so nodus was also affected?

[paco, anchal]

Anchal got on zoom to offer some assistance. We discovered that the fb1 and nodus were subject to some kind of system reboot at precisely 09:34. The "systemctl --failed" command on fb1 displayed both the daqd_dc.service and rc-local.service as loaded but failed (inactive). Is it a good idea to try and reboot the fb1 machine? ... Anchal was able to bring elog back up from nodus (ergo, this post).

[paco]

Although it probably needs the DAQ service on the fb1 machine to be up and running, I tried running the scripts/cds/rebootC1LSC.sh script. This didn't work. I tried running sudo systemctl restart daqd_dc from the fb1 machine without success. Running systemctl reset-failed "worked" for the daqd_dc and rc-local services on fb1, in the sense that they were no longer output from systemctl --failed, but they remained inactive (dead) when running systemctl status on them. Following 15303, I succeeded in restarting the daqd services; it turned out I needed to manually start the open-mx and mx services on fb1. I re-ran the restartC1LSC script without success. The script fails because some machines need to be rebooted by hand.
 

  16293   Tue Aug 24 18:11:27 2021   Paco | Update | General | Time synchronization not really working

tl;dr: NTP servers and clients were never synchronized, are not synchronizing even with ntp... nodus is synchronized but uses chronyd; should we use chronyd everywhere?


Spent some time investigating the ntp synchronization. In the morning, after Anchal set up all the ntp servers / FE clients I tried restarting the rts IOPs with no success. Later, with Tega we tried the usual manual matching of the date between c1iscex and fb1 machines but we iterated over different n-second offsets from -10 to +10, also without success.

This afternoon, I tried debugging the FE and fb1 timing differences. For this I inspected the ntp configuration file under /etc/ntp.conf in both fb1 and /diskless/root.jessie/etc/ntp.conf (for the FE machines) and tried different combinations with and without nodus, with and without restrict lines, all while looking at the output of sudo journalctl -f on c1iscey. Every time I changed the ntp config file, I restarted the service using sudo systemctl restart ntp.service . Looking through some online forums, people suggested basic pinging to see if the ntp servers were up (and broadcasting their times over the local network), but this failed to run (read-only filesystem), so I went into fb1 and ran sudo chroot /diskless/root.jessie/ /bin/bash to allow me to change file permissions. The test was first done with /bin/ping, which couldn't even open a socket (root access needed); after running chmod 4755 /bin/ping, I ssh-ed into c1iscey and pinged the fb1 machine successfully. After this, I ran chmod 4755 /usr/sbin/ntpd so that the ntp daemon would have no problem in reaching the server, in case this was blocking the synchronization. I exited the chroot shell and restarted the ntp daemon on c1iscey, but ntpstat still showed an unsynchronised status. I also learned that when running an ntp query with ntpq -p, if a client has succeeded in synchronizing its time to the server time, an asterisk should be appended at the end. This was not the case on any FE machine... and looking at fb1, this was also not true. Although the fb1 peers are correctly listed as nodus, the caltech ntp server, and a broadcast (.BCST.) server from local time (meant to serve the FE machines), none appears to have synchronized... Going one level further, in nodus I checked the time synchronization servers by running chronyc sources; the output shows

controls@nodus|~> chronyc sources
210 Number of sources = 4
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^* testntp1.superonline.net      1  10   377   280  +1511us[+1403us] +/-   92ms
^+ 38.229.59.9                   2  10   377   206  +8219us[+8219us] +/-  117ms
^+ tms04.deltatelesystems.ru     2  10   377   23m    -17ms[  -17ms] +/-  183ms
^+ ntp.gnc.am                    3  10   377   914  -8294us[-8401us] +/-  168ms

I then ran chronyc clients to find if fb1 was listed (as I would have expected) but the output shows this --

Hostname                   Client    Peer CmdAuth CmdNorm  CmdBad  LstN  LstC
=========================  ======  ======  ======  ======  ======  ====  ====
501 Not authorised

So clearly chronyd succeeded in synchronizing nodus' time to whatever server it was pointed at, but downstream from there, neither fb1 nor any FE machine seems to be synchronizing properly. It may be as simple as figuring out the correct ntp configuration file, or switching to chronyd for all machines (for the sake of homogeneity?).

  16298   Wed Aug 25 17:31:30 2021   Paco | Update | CDS | FB is writing the frames with a year old date

[paco, tega, koji]

After invaluable assistance from Jamie in fixing this yearly offset in the gps time reported by cat /proc/gps, we managed to restart the real time system correctly (while still manually synchronizing the front end machine times). After this, we recovered the mode cleaner and were able to lock the arms with not much fuss.

Nevertheless, tega and I noticed some weird noise in the C1:LSC-TRX_OUT which was not present in the YARM transmission, and that is present even in the absence of light (we unlocked the arms and just saw it on the ndscope as shown in Attachment #1). It seems to affect the XARM and in general the lock acquisition...

We took a quick spectrum with diaggui (Attachment #2), but it doesn't look normal; there seems to be broadband excess noise with a prominent 1 kHz component. We will probably look into it in more detail.

Attachment 1: TRX_noise_2021-08-25_17-40-55.png
Attachment 2: TRX_TRY_power_spectra.pdf
  16300   Thu Aug 26 10:10:44 2021   Paco | Update | CDS | FB is writing the frames with a year old date

[paco, ]

We went over to the X end to check what was going on with the TRX signal. We spotted that the ground terminal coming from the QPD was loosely touching the handle of one of the computers on the rack. When we detached it completely from the rack, the noise was gone (Attachment 1).

We taped this terminal so it doesn't touch anything accidentally. We don't know if this is the best solution, since it probably needs a stable voltage reference. At the Y end those ground terminals are connected to the same point on the rack; the other ground terminals at the X end are just cut.

We also took the PSD of these channels (Attachment 2). The noise seems to be gone, but TRX is still a bit noisier than TRY. Maybe we should set up a proper ground for the X-arm QPD?


We saw that the X end station ALS laser was off. We turned it on, along with the crystal oven, and re-enabled the temperature controller. Green light immediately appeared. We are now working to restore the ALS lock. After running XARM ASS we were unable to lock the green laser, so we went to the XEND and moved the piezo X ALS alignment mirrors until we maximized the transmission in the right mode. We then locked the ALS beams on both arms successfully. It very well could be that the PZT offsets were reset by the power glitch. The XARM ALS still needs some tweaking; its level is ~ 25% of what it was before the power glitch.

Attachment 1: Screenshot_from_2021-08-26_10-09-50.png
Attachment 2: TRXTRY_Spectra.pdf
  16303   Mon Aug 30 17:49:43 2021   Paco | Summary | LSC | XARM POX OLTF

Used diaggui to get the OLTF in preparation for optimal system identification / calibration. The excitation was injected at the control point of the XARM loop, C1:LSC-XARM_EXC. Attachment 1 shows the TF (red scatter) taken from 35 Hz to 2.3 kHz with 201 points. The swept-sine excitation had an envelope amplitude of 50 counts at 35 Hz, 0.2 counts at 100 Hz, and 0.2 counts at 200 Hz. The continuous purple line overlays the model for the OLTF built from all the digital control filters plus a simple one-degree-of-freedom plant (a single pole at 0.99 Hz); a hedged sketch of this model is given after the next paragraph. Note the disagreement of the OLTF "model" at higher frequencies, which we may be able to improve upon using vector fitting.

Attachment 2 shows the coherence (part of this initial measurement was to identify an appropriately large frequency range where the coherence is good before we script it).
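
The following is a hedged sketch of the "model" OLTF overlay: a single real pole at 0.99 Hz for the plant, multiplied by the digital-filter response. The `digital_tf(f)` function and the overall gain are assumed inputs (e.g. from foton/diaggui exports) and are not reproduced here.

import numpy as np

def plant_tf(f, f_pole=0.99):
    # Simple one-degree-of-freedom plant approximation: a single pole at f_pole.
    return 1.0 / (1.0 + 1j * f / f_pole)

def model_oltf(f, digital_tf, gain=1.0):
    # Overall open-loop TF = plant * digital control filters * scalar gain.
    return gain * plant_tf(f) * digital_tf(f)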

Attachment 1: XARM_POX_OLTF.pdf
Attachment 2: XARM_POX_Coh.pdf
  16307   Thu Sep 2 17:53:15 2021   Paco | Summary | Computers | chiara down, vac interlock tripped

[paco, koji, tega, ian]

Today in the morning the name server / network file system running on chiara failed. This caused the donatella/pianosa/rossa shell prompts to hang forever. It also made sitemap crash, and even dropping into a bash shell and just listing files from some directory in the file system froze the computer. Remote ssh sessions on nodus also had the same symptoms.

A little after 1 pm, we started debugging this issue with help from Koji. He suggested we hook a monitor, keyboard, and mouse onto chiara as it should still work locally even if something with the NFS (network file system) failed. We did this and then we tried for a while to unmount the /dev/sdc1/ from /home/cds/ (main file system) and mount /dev/sdb1/ from /media/40mBackup (backup copy) such that they swap places. We had no trouble unmounting the backup drive, but only succeeded in unmounting the main drive with the "lazy" unmount, or running "umount -l". Running "df" we could see that the disk space was 100% used, with only ~ 1 GB of free space which may have been the cause for the issue. After swapping these disks by editing the /etc/fstab file to implement the aforementioned swapping, we rebooted chiara and we recovered the shell prompts in all workstations, sitemap, etc... due to the backup drive mounting. We then started investigating what caused the main drive to fill up that quickly, and noted that weirdly now the capacity was at 85% or about 500GB less than before (after reboot and remount), so some large file was probably dumped into chiara that froze the NFS causing the issue.

At this point we tried opening the PSL shutter to recover the IMC. The shutter would not open and we suspected the vacuum interlock was still tripped... and indeed there was an uncleared error in the VAC screen. So with Koji's guidance we walked to the c1vac near the HV station and did the following at ~ 5:13 PM -->

  1. Open V4; apart from a brief pressure spike in PTP2, everything looked ok so we proceeded to
  2. Open V1; P2 spiked briefly and then started to drop. Then, Koji suggested that we could
  3. Close V4; but we saw P2 increasing by a factor of~ 10 in a few seconds, so we
  4. Reopened V4;

We made sure that P1a (main vacuum pressure) was dropping and before continuing we decided to look back to see what the nominal vacuum state was that we should try to restore.

We are currently searching the two systems for differences to see if we can narrow down the culprit of the failure.

 

  16313   Thu Sep 2 21:49:03 2021   Paco | Summary | Computers | chiara down, vac interlock tripped

[tega, paco]

We found the files that took up the excess space in the chiara filesystem (see Attachment 1). They were error files from the summary pages, ~ 50 GB or so in size, located under /home/cds/caltech/users/public_html/detcharsummary/logs/. We manually removed them and then copied the rest of the summary page contents into the main file system drive (this is to preserve the information backup before it gets deleted by the cron job at the end of today) and checked carefully to identify why these files were so large in the first place.

We then copied the /detcharsummary directory from /media/40mBackup into /home/cds to match the two disks.

Attachment 1: 2021-09-02_21-51-15.png
  16320   Mon Sep 13 09:15:15 2021   Paco | Update | LSC | MC unlocked?

Came in at ~ 9 PT this morning to find the IFO "down". The IMC had lost its lock ~ 6 hours before, so at about 03:00 AM. Nothing seemed like the obvious cause; there was no record of increased seismic activity, all suspensions were damped and no watchdog had tripped, and the pressure trends similar to those in recent pressure incidents show nominal behavior (Attachment #1). What happened?

Anyway, I simply tried reopening the PSL shutter, and the IMC caught its lock almost immediately. I then locked the arms and everything seems fine for now.

Attachment 1: VAC_2021-09-13_09-32-45.png
  16329   Tue Sep 14 17:19:38 2021   Paco | Summary | PEM | Excess seismic noise in 0.1 - 0.3 Hz band

For the past couple of days the 0.1 to 0.3 Hz RMS seismic noise along BS-X has increased. Attachment 1 shows the hour trend over the last ~ 10 days. We'll keep monitoring it, but one thing to note is how uncorrelated it seems to be with the other frequency bands. The vertical axis of the plot is in um/s.

Attachment 1: SEIS_2021-09-14_17-33-12.png
SEIS_2021-09-14_17-33-12.png
  16343   Mon Sep 20 12:20:31 2021 PacoSummarySUSPRM and BS Angular Actuation transfer function magnitude measurements

[yehonathan, paco, anchal]

We attempted to find any symptoms for actuation problems in the PRMI configuration when actuated through BS and PRM.

Our logic was to check the angular (PIT and YAW) actuation transfer functions in the 30 to 200 Hz range by injecting appropriately (f^2) enveloped excitations at the SUS-ASC EXC points and reading back using the SUS_OL (oplev) channels.

From the controls, we first restored the PRMI Carrier to bring the PRM and BS to their nominal alignment, then disabled the LSC output (we don't need the PRMI to be locked), and then turned off the damping from the oplev control loops to avoid suppressing the excitations.

We used diaggui to measure the 4 transfer function magnitudes PRM_PIT, PRM_YAW, BS_PIT, and BS_YAW, as shown below in Attachments #1 through #4. We used the oplev calibrations to plot the TF magnitudes in units of urad/count, and verified the nominal 1/f^2 scaling for all of them. The coherence was made as close to 1 as possible by adjusting the excitation amplitude to 1000 counts, and is also shown below. A dip at 120 Hz is probably due to line noise. We are also assuming that the oplev QPDs have a relatively flat response over the frequency range below.
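As an illustration of this kind of measurement (not the exact diaggui settings we used), a minimal TF-magnitude estimate from the excitation to the calibrated oplev signal could look like the sketch below. The sampling rate, calibration factor, and stand-in data are placeholders.

```python
import numpy as np
from scipy import signal

fs = 2048.0                   # assumed DQ sampling rate [Hz]
cal_urad_per_count = 1.0      # placeholder oplev calibration [urad/count]

# exc = ASC excitation time series [counts]; opl = oplev PIT readback [counts]
exc = np.random.randn(int(fs * 60))       # stand-in data
opl = np.random.randn(int(fs * 60))       # stand-in data

# TF estimate H = CSD(exc, opl) / PSD(exc), plus coherence
f, Pxy = signal.csd(exc, opl, fs=fs, nperseg=int(8 * fs))
_, Pxx = signal.welch(exc, fs=fs, nperseg=int(8 * fs))
_, coh = signal.coherence(exc, opl, fs=fs, nperseg=int(8 * fs))

H_mag = np.abs(Pxy / Pxx) * cal_urad_per_count     # [urad/count]

# Check the expected 1/f^2 scaling: H_mag * f^2 should be ~ flat in 30-200 Hz
band = (f > 30) & (f < 200)
flat = H_mag[band] * f[band] ** 2
```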

Attachment 1: PRM_PIT_ACT_TF.pdf
PRM_PIT_ACT_TF.pdf
Attachment 2: PRM_YAW_ACT_TF.pdf
PRM_YAW_ACT_TF.pdf
Attachment 3: BS_PIT_ACT_TF.pdf
BS_PIT_ACT_TF.pdf
Attachment 4: BS_YAW_ACT_TF.pdf
BS_YAW_ACT_TF.pdf
  16352   Tue Sep 21 11:13:01 2021 PacoSummaryCalibrationXARM calibration noise

Here are some plots from analyzing the C1:LSC-XARM calibration. The experiment is done with the XARM (POX) locked; a single line is injected at C1:LSC-XARM_EXC at f0 with an amplitude determined empirically using the diaggui and awggui tools. For the analysis detailed in this post, f0 = 19 Hz, amp = 1 count, and gain = 300 (anything larger in amplitude would break the lock, and anything lower in frequency would not show up because of loop suppression). Clearly, from Attachment #3 below, the calibration line can be detected with SNR > 1.

We read the test point right after the excitation, C1:LSC-XARM_IN2, which in a simplified loop carries the excitation suppressed by 1 - OLTF, where OLTF is the open loop transfer function. The line is on for 5 minutes, and then we read for another 5 minutes with the excitation off to have a reference. Both the calibration and reference signal time series are shown in Attachment #1 (decimated by 8). The corresponding ASDs are shown in Attachment #2. Then, we demodulate at 19 Hz, apply a 30 Hz 4th-order Butterworth LPF, and get I and Q timeseries (shown in Attachment #3). Even though they look similar, the Q is centered about 0.2 counts, while the I is centered about 0.0. From these time series we can of course show the noise ASDs in Attachment #3.


The ASD uncertainty bands in the last plot are statistical estimates and depend on the number of segments used in estimating the PSD. A thing to note is that the noise features surrounding the signal ASD around f0 are translated into the ASDs of the demodulated signals, but now around dc. From Attachment #3, there seems to be no difference in the noise spectra around the calibration line with and without the excitation, which is what I would expect from a linear system. If there were a systematic contribution, I would expect it to show up at very low frequencies.
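For completeness, here is a minimal sketch of the demodulation step described above (I/Q at f0 = 19 Hz followed by the 30 Hz 4th-order Butterworth LPF). The sampling rate and the stand-in data are placeholders for the actual decimated channel data.

```python
import numpy as np
from scipy import signal

fs = 2048.0            # assumed (decimated) sampling rate [Hz]
f0 = 19.0              # calibration line frequency [Hz]

# x = C1:LSC-XARM_IN2 time series in counts (stand-in data used here)
x = np.random.randn(int(fs * 300))
t = np.arange(len(x)) / fs

# Mix down to dc
I_raw = 2 * x * np.cos(2 * np.pi * f0 * t)
Q_raw = 2 * x * np.sin(2 * np.pi * f0 * t)

# 30 Hz, 4th-order Butterworth low-pass
sos = signal.butter(4, 30.0, btype="low", fs=fs, output="sos")
I = signal.sosfiltfilt(sos, I_raw)
Q = signal.sosfiltfilt(sos, Q_raw)

# Noise ASDs of the demodulated quadratures (now centered around dc)
f, PI = signal.welch(I, fs=fs, nperseg=int(64 * fs))
_, PQ = signal.welch(Q, fs=fs, nperseg=int(64 * fs))
asd_I, asd_Q = np.sqrt(PI), np.sqrt(PQ)
```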

Attachment 1: XARM_signal_asd.pdf
XARM_signal_asd.pdf
Attachment 2: XARM_demod_timeseries.pdf
XARM_demod_timeseries.pdf
Attachment 3: XARM_demod_asds.pdf
XARM_demod_asds.pdf
Attachment 4: XARM_cal_0921_timeseries.pdf
XARM_cal_0921_timeseries.pdf
  16358   Thu Sep 23 15:29:11 2021 PacoSummarySUSPRM and BS Angular Actuation transfer function magnitude measurements

[Anchal, Paco]

We had a second go at this with an increased number of averages (from 10 to 100) and higher excitation amplitudes (from 1000 to 10000 counts). We did this to try to reduce the relative uncertainty à la Bendat and Piersol:

\delta G / G = \frac{1}{\gamma \sqrt{n_{\rm avg}}}

where \gamma, n_{\rm avg} are the coherence and number of averages respectively. Before, this estimate had given us a ~30% relative uncertainty and now it has been improved to ~ 10%. The re-measured TFs are in Attachment #1. We did 4 sweeps for each optic (BS, PRM) and removed the 1/f^2 slope for clarity. We note a factor of ~ 4 difference in the magnitude of the coil to angle TFs from BS to PRM (the actuation strength in BS is smaller).
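As a quick numerical check, with the coherence close to unity (\gamma \approx 1) the formula above reproduces the quoted improvement:

\delta G / G \approx 1 / \sqrt{10} \approx 0.32 \;(\sim 30\%) \quad\longrightarrow\quad \delta G / G \approx 1 / \sqrt{100} = 0.10 \;(\sim 10\%)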


For future reference:

With complex G, we get a complex error in G using the formula above. To get the uncertainty in magnitude and phase from the real and imaginary uncertainties, we do the following (assuming the noise in the real and imaginary parts of the measured transfer function is incoherent):
G = \alpha + i\beta

\delta G = \delta\alpha + i\delta \beta

\delta |G| = \frac{1}{|G|}\sqrt{\alpha^2 \delta\alpha^2 + \beta^2 \delta \beta^2}

\delta(\angle G) = \frac{1}{|G|^2}\sqrt{\beta^2 \delta\alpha^2 + \alpha^2 \delta\beta^2}, which reduces to \frac{\delta |G|}{|G|} when \delta\alpha = \delta\beta
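A minimal numerical sketch of this propagation (assuming uncorrelated real/imaginary errors; the example values are arbitrary):

```python
import numpy as np

def mag_phase_err(G, dalpha, dbeta):
    """Propagate real/imaginary uncertainties of a complex TF value G
    into magnitude and phase uncertainties (uncorrelated errors assumed)."""
    a, b = G.real, G.imag
    absG = abs(G)
    dmag = np.sqrt(a**2 * dalpha**2 + b**2 * dbeta**2) / absG
    dphase = np.sqrt(b**2 * dalpha**2 + a**2 * dbeta**2) / absG**2   # [rad]
    return dmag, dphase

# Example (arbitrary numbers): G measured with errors on each quadrature
G = 3.0 + 4.0j
dmag, dphase = mag_phase_err(G, dalpha=0.3, dbeta=0.4)
```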

Attachment 1: BS_PRM_ANG_ACT_TF.pdf
BS_PRM_ANG_ACT_TF.pdf BS_PRM_ANG_ACT_TF.pdf BS_PRM_ANG_ACT_TF.pdf BS_PRM_ANG_ACT_TF.pdf
  16363   Tue Sep 28 16:31:52 2021 PacoSummaryCalibrationXARM OLTF (calibration) at 55.511 Hz

[anchal, paco]

Here is a demonstration of the methods leading to the single (X)arm calibration with its uncertainty budget. The steps towards this measurement are the following:

  1. We put a single line excitation through C1:SUS-ETMX_LSC_EXC at 55.511 Hz, amp = 1 count, gain = 300 (ramptime = 10 s).
  2. With the arm locked, we grab a long timeseries of the C1:LSC-XARM_IN1_DQ (error point) and C1:SUS-ETMX_LSC_OUT_DQ (control point) channels.
  3. We assume the single arm loop to have the four blocks shown in Attachment #1, A (actuator + sus), plant (mainly the cavity pole), D (detection + electronics), and K (digital control).
    1. At this point, Anchal made a model of the single arm loop including the appropriate filter coefficients and other parameters. See Attachments #2-3 for the split and total model TFs.
    2. Our line actually probes a TF from point b (error point) to point d (control point). We multiplied our measurement by the open loop TF from b to d taken from the model to get the complete OLTF.
    3. Our initial estimate, based on documents and previous elogs, got the overall loop shape right but was off by an overall gain factor. This could be due to a wrong assumption about the RFPD transimpedance or the analog gains of the AA or whitening filters. We have absorbed this factor into the RFPD transimpedance, but this needs to be checked (if we really care).
  4. We demodulate the decimated timeseries (final sampling rate ~ 2.048 kHz) to get I & Q for both the b and d signals. From these and our model for K, we estimate the OLTF (a minimal sketch of this step is given after the note below). Attachment #4 shows the timeseries of its magnitude and phase.
  5. Finally, we compute the ASD for the OLTF magnitude. We plot it in Attachment #5 together with the ASD of the XARM transmission (C1:LSC-TRX_OUT_DQ) times the OLTF to estimate the optical gain noise ASD (this last step was a quick attempt at budgeting the calibration noise).
    1. For each ASD we used N = 24 averages, from which we estimate the rms (statistical) uncertainties, depicted as error bands (\pm \sigma) around the lines.

** Note: We ran the same procedure using dtt (diaggui) to validate our estimates at every point, as well as check our SNR in b and d before taking the ~3.5 hours of data.
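A minimal sketch of steps 4-5 (demodulating b and d, forming the OLTF, and taking the ASD of its magnitude) is below. The channel fetching, sampling rate, demodulation bandwidth, and the model correction from the b-to-d measurement to the full OLTF are all placeholders, not the actual scripts we ran.

```python
import numpy as np
from scipy import signal

fs, f0 = 2048.0, 55.511     # assumed decimated rate [Hz] and line frequency [Hz]

# b = C1:LSC-XARM_IN1_DQ (error point), d = C1:SUS-ETMX_LSC_OUT_DQ (control point)
b = np.random.randn(int(fs * 600))     # stand-in data
d = np.random.randn(int(fs * 600))     # stand-in data
t = np.arange(len(b)) / fs

lo = np.exp(-2j * np.pi * f0 * t)      # complex local oscillator
sos = signal.butter(4, 10.0, btype="low", fs=fs, output="sos")

def demod(x):
    # Mix to dc and low-pass the real and imaginary parts separately
    z = 2 * x * lo
    return signal.sosfiltfilt(sos, z.real) + 1j * signal.sosfiltfilt(sos, z.imag)

B, D = demod(b), demod(d)

# Measured b->d response at f0, promoted to the full OLTF with a model
# correction (placeholder complex number taken from the loop model)
H_model_bd_to_oltf = 1.0 + 0.0j
G = (D / B) * H_model_bd_to_oltf

# ASD of the OLTF magnitude, using 24 non-overlapping segments (N = 24 averages)
f, Pg = signal.welch(np.abs(G), fs=fs, nperseg=len(G) // 24, noverlap=0)
asd_Gmag = np.sqrt(Pg)
```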

Attachment 1: OLTF_Calibration_Scheme.jpg
OLTF_Calibration_Scheme.jpg
Attachment 2: XARM_POX_Lock_Model_TF.pdf
XARM_POX_Lock_Model_TF.pdf
Attachment 3: XARM_OLTF_Total_Model.pdf
XARM_OLTF_Total_Model.pdf
Attachment 4: XARM_OLTF_55p511_Hz_timeseries.pdf
XARM_OLTF_55p511_Hz_timeseries.pdf
Attachment 5: Gmag_55p511_Hz_ASD.pdf
Gmag_55p511_Hz_ASD.pdf
  16369   Thu Sep 30 18:04:31 2021 PacoSummaryCalibrationXARM OLTF (calibration) with three lines

[anchal, paco]

We repeated the same procedure as before, but with 3 different lines at 55.511, 154.11, and 1071.11 Hz. We overlay the OLTF magnitudes and phases with our latest model (which we have updated with Koji's help) and include the rms uncertainties as errorbars in Attachment #1.

We also plot the noise ASDs of the calibrated OLTF magnitudes at the line frequencies in Attachment #2. These curves are the power spectral densities of the timeseries of OLTF values at each line frequency, generated from the demodulated XARM_IN and ETMX_LSC_OUT signals. We have overlaid the TRX noise spectrum here as an attempt to see whether the noise measured in the values of G can be budgeted to fluctuations in optical gain due to changing power in the arms. For that, we multiplied the transmission ASD by the value of the OLTF at each line frequency, taking it as the transfer function from normalized optical gain to the total OLTF value (a minimal sketch of this scaling is given below).

It is weird that the fluctuations in transmission power at 1 mHz always cross the total noise in the OLTF value for all calibration lines. This could be an artifact of our data analysis, though.

Even if the contribution of the fluctuating power is correct, there is remaining excess noise in the OLTF to be budgeted.
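Here is a minimal sketch of the TRX scaling used for the overlay, assuming the demodulated |G| timeseries from the previous entry and a fetched TRX timeseries. The sampling rate, the |G| value, and the stand-in data are placeholders.

```python
import numpy as np
from scipy import signal

fs = 2048.0       # assumed sampling rate [Hz]
G0 = 5.0          # |OLTF| at the line frequency (placeholder from model/measurement)

# trx = C1:LSC-TRX_OUT_DQ time series (stand-in data used here)
trx = 1.0 + 0.01 * np.random.randn(int(fs * 600))

# Normalized transmission fluctuations ~ fractional optical gain fluctuations
trx_norm = trx / np.mean(trx)

f, Ptrx = signal.welch(trx_norm, fs=fs, nperseg=len(trx_norm) // 24, noverlap=0)
asd_optical_gain_contrib = np.sqrt(Ptrx) * G0   # overlay against the ASD of |G|
```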

Attachment 1: XARM_OLTF_Model_and_Meas.pdf
XARM_OLTF_Model_and_Meas.pdf XARM_OLTF_Model_and_Meas.pdf
Attachment 2: Gmag_ASD_nb_withTRX.pdf
Gmag_ASD_nb_withTRX.pdf Gmag_ASD_nb_withTRX.pdf Gmag_ASD_nb_withTRX.pdf
  16377   Mon Oct 4 18:35:12 2021 PacoUpdateElectronicsSatellite amp box adapters

[Paco]

I have finished assembling the 1U adapters from 8 to 5 DB9 conn. for the satellite amp boxes. One thing I had to "hack" was the corners of the front panel end of the PCB. Because the PCB was a bit too wide, it wasn't really flush against the front panel (see Attachment #1), so I just filed the corners by ~ 3 mm and covered with kapton tape to prevent contact between ground planes and the chassis. After this, I made DB9 cables, connected everything in place and attached to the rear panel (Attachment #2). Four units are resting near the CAD machine (next to the bench area), see Attachment #3.

Attachment 1: pcb_no_flush.jpg
pcb_no_flush.jpg
Attachment 2: 1U_assembly.jpg
1U_assembly.jpg
Attachment 3: fourunits.jpg
fourunits.jpg
  16383   Tue Oct 5 20:04:22 2021 PacoSummarySUSPRM and BS Angular Actuation transfer function magnitude measurements

[Paco, Rana]

We had a look at the BS actuation. Along the way we created a couple of issues that we fixed. A summary is below.

  1. First, we locked MICH. While doing this, we used the /users/Templates/ndscope/LSC/MICH.yml ndscope template to monitor some channels. I edited the yaml file to look at C1:LSC-ASDC_OUT_DQ instead of the REFL_DC. Rana pointed out that the C1:LSC-MICH_OUT_DQ (MICH control point) had a big range (~ 5000 counts rms) and this should not be like that.
  2. We tried to investigate the aforementioned issue by looking at the whitening / unwhitening filters, but all the slow epics channels were "white" on the medm screen. Looking under CDS/slow channel monitors, we realized that both c1iscaux and c1auxey were acting weird, so we tried telnet to c1iscaux without success. Therefore, we followed the recommended wiki procedure of hard rebooting this machine. While inside the lab looking for this machine, we touched things around the 'rfpd' rack, and once we were back in the control room, we couldn't see any light on the AS port camera. The whitening filter medm screens were back up, though.
  3. While Rana ssh'd into c1auxey to investigate its status and burtrestored the c1iscaux channels, we looked at trends to figure out if anything had changed (for example TT1 or TT2), but this wasn't the case. We decided to go back inside to check the actual REFL beams and noticed the beam was grossly misaligned (clipping)... so we blamed it on the TTs and, again, went around and moved some stuff around the 'rfpd' rack. We didn't really connect or disconnect anything, but once we were back in the control room, light was coming from the AS port again. This is a weird mystery and we should systematically try to repeat this and fix the actual issue.
  4. We restored the MICH and returned to the BS actuation problem. Here, we essentially devised a scheme to inject noise at 310.97 Hz and 313.74 Hz. The choice is twofold: first, these lines lie outside the MICH loop UGF (~ 150 Hz), and second, they match the sensing matrix OSC frequencies, making the comparison more appropriate.
  5. We injected two lines using the BS SUS LOCKIN1 and LOCKIN2 oscillators so we can probe two coils at once, with the LSC loop closed, and read back using the C1:LSC-MICH_IN1_DQ channel (a minimal readback sketch is given below). We excited with amplitudes of 1234.0 counts and 1254 counts respectively (to match the ~ 2% difference in frequency) and noted that the magnitude response in UR was 10% larger than UL, LL, and LR, which were close to each other at the 2% level.
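Here is a minimal sketch of reading back the two injected lines from the MICH error point. The sampling rate, demodulation bandwidth, and stand-in data are placeholders, not the LOCKIN settings themselves.

```python
import numpy as np
from scipy import signal

fs = 16384.0                   # assumed C1:LSC-MICH_IN1_DQ sampling rate [Hz]
f_lines = [310.97, 313.74]     # LOCKIN1 / LOCKIN2 oscillator frequencies [Hz]

# mich = C1:LSC-MICH_IN1_DQ time series (stand-in data used here)
mich = np.random.randn(int(fs * 120))
t = np.arange(len(mich)) / fs

# ~1 Hz demodulation bandwidth (placeholder)
sos = signal.butter(4, 1.0, btype="low", fs=fs, output="sos")

for f0 in f_lines:
    z = 2 * mich * np.exp(-2j * np.pi * f0 * t)
    I = signal.sosfiltfilt(sos, z.real)
    Q = signal.sosfiltfilt(sos, z.imag)
    amp = np.mean(np.sqrt(I**2 + Q**2))   # line amplitude at the error point [counts]
    print(f"{f0:.2f} Hz line amplitude: {amp:.3e} counts")
```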

[Paco]

After Rana left, I did a second pass at the BS actuation. I took TF measurements at the oscillator frequencies noted above using diaggui, and summarize the results below:

TF             | UL (310.97 Hz) | UR (313.74 Hz) | LL (310.97 Hz) | LR (313.74 Hz)
Magnitude (dB) | 93.20          | 92.20          | 94.27          | 93.85
Phase (deg)    | -128.3         | -127.9         | -128.4         | -127.5

This procedure should be done with PRM as well and using the PRCL instead of MICH.

  16429   Tue Oct 26 16:56:22 2021 PacoSummaryBHDPart I of BHR upgrade - Locked PMC and IMC

[Paco, Ian]

We opened the laser head shutter. Then, we scanned around the PMC resonance and locked it. We then opened the PSL shutter, touched the MC1, MC2, and MC3 alignment (mostly yaw), and managed to lock the IMC. The transmission peaked at ~ 1070 counts (typical is 14000 counts, so at 10% of PSL power we would expect a peak of ~ 1400 counts; there might still be some room for improvement). The lock was engaged at ~ 16:53; we'll see how long it lasts.

There should be IR light entering the BSC!!! Be alert and wear laser safety goggles when working there.

We should be ready to move forward into the TT2 + PR3 alignment.

  16437   Thu Oct 28 16:32:32 2021 PacoSummaryBHDPart IV of BHR upgrade - Removal of BSC eastern optics

[Ian, Paco, Anchal]

We turned off the BSC oplev laser by turning the key counterclockwise. Ian then removed the following optics from the east end in the BSC:

  • OM4-PJ (wires were disconnected before removal)
  • GRX_SM1
  • OM3
  • BSOL1

We placed them in the center-front area of the XEND flow bench.

Photos: https://photos.app.goo.gl/rjZJD2zitDgxBfAdA

  16444   Tue Nov 2 16:42:00 2021 PacoSummaryBHD1Y1 rack work

[paco, ian]

After the new 1Y0 rack was placed near the 1Y1 rack by Chub and Anchal, today we worked on the 1Y1 rack. We removed some rails from spaces ~ 25 - 30. We then drilled a pair of ~ 10-32 thru-holes in some L-shaped bars to help support the weight of the c1sus2 machine. The hole spacing was set to 60 cm; this number is not constant across all racks. Then, we mounted c1sus2. While doing this, Paco's knee clicked some of the video MUX box buttons (29 and 8 at least). We then opened the rack's side door to investigate the DC power strips on it before removing stuff, and we did power off the DC33 supplies there. No connections were made to allow us to keep building this rack.

When coming back to the control room, we noticed that 3 of the 4 analog video feeds for the test masses had gone down... why?


Next steps:

  • Remove the Sorensen power supplies (x5) from the top of 1Y1... what are they actually powering???
  • Make more bars to support heavy IO exp and acromag chassis.
  • Make all connections (neat).

Update Tue Nov 2 18:52:39 2021

  • After turning the Sorensens back on, the ETM/ITM video feed was restored. I will need to hunt down the power lines carefully before removing these.