15991 | Fri Apr 2 14:51:20 2021 | Anchal | Update | SUS | Bug found, need to redo the balancing

The last run gave similar results to the quick run we did earlier: the code was unable to remove the couplings with POS. We found the bug causing this. The sampling rate of the MC_F channel is different from that of the test-point channels used for PIT and YAW. Even though we were aware of this, we made an error in handling it while calculating the CSD. Because of this, the CSD calculation with the POS data was performed with zero padding, which made the code think that no PIT/YAW <-> POS coupling exists. Hence our code was only able to fix PIT <-> YAW couplings.

We'll need to do another run with this bug fixed. I'll update this post with details of the new measurement.
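
A minimal sketch of the intended fix, assuming scipy; the sampling rates and signals below are placeholders standing in for the real channels:

    import numpy as np
    import scipy.signal as sig

    fs_pos, fs_ang = 2048, 16384   # placeholder rates for MC_F and the PIT/YAW test points

    # Stand-in data; in the real code these come from the DAQ.
    t_ang = np.arange(0, 64, 1 / fs_ang)
    pit = np.sin(2 * np.pi * 13.5 * t_ang) + np.random.randn(t_ang.size)
    mc_f = np.random.randn(int(64 * fs_pos))

    # Decimate the faster angular channel down to the MC_F rate so both CSD
    # inputs share one sampling rate and length -- no implicit zero padding.
    pit_ds = sig.resample_poly(pit, up=1, down=fs_ang // fs_pos)
    f, csd_pos_pit = sig.csd(mc_f, pit_ds, fs=fs_pos, nperseg=4 * fs_pos)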

 

16005 | Wed Apr 7 17:38:51 2021 | Anchal | Update | SUS | Trying to uncouple only PIT and YAW first

To test whether our method works at all, we went for the simpler case of uncoupling just PIT and YAW. This is also because these two degrees of freedom are sensed by similar sensors (the MC Trans WFS).

We saw a successful decrease in cross-coupling between PIT and YAW over the first 50 iterations that we tried. Here are some results:


Final output matrix:

Output matrix for uncoupling PIT and YAW from each other:
COIL        PIT        YAW
UL      1.01858    1.16820
UR      0.98107   -0.79706
LL     -0.98107    0.79706
LR     -1.01858   -1.16820

Plots:

  • Attachment 1 shows the distance of the sensing matrix from identity as the iterations progress (a sketch of this metric follows this list).
  • Attachment 2 shows the off-diagonal elements of sensing matrix as the iterations increase.
    • It is worth noting that the PIT -> YAW coupling was the main element that was reduced successfully, while YAW -> PIT was reducing much more slowly.
    • Most of the remaining cross coupling in the end was from YAW -> PIT.
  • Attachment 3 shows the first 10 oscillations in the time series data during excitation for some of the iterations.
  • Attachment 4 shows the cross spectral densities of the sensed data channels with each other during excitation. These have been normalized by the reference PSD data (taken with no excitation) of the sensed DOFs involved in each CSD calculation.
  • Attachment 5 shows the TF estimate made by normalizing the CSD data column-wise by the diagonal elements. The value at the excitation frequency in these plots becomes the sensing matrix element in the calculation.
    • One can notice how the PIT -> YAW element is going down in these plots.
    • Even though we are using only the real value of the sensing matrix, the imaginary values are also going down.
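
For reference, a minimal sketch of the kind of metric plotted in Attachment 1, assuming a Frobenius-norm distance on the diagonal-normalized sensing matrix (the actual metric in the code may differ):

    import numpy as np

    # Sensing matrix estimated at one iteration (PIT, YAW basis); example values.
    S = np.array([[1.00, 0.12],
                  [0.05, 0.98]])

    # Normalize each column by its diagonal element (as in Attachment 5),
    # then measure how far the result is from the identity matrix.
    S_norm = S / np.diag(S)
    distance = np.linalg.norm(S_norm - np.eye(2))   # Frobenius norm
    print(distance)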

Next, tried uncoupling POS and PIT:

  • Next, we tried to uncouple POS and PIT. We expect them to be more coupled than with YAW.
  • At the time of writing this post, 15 iterations of this attempt have been completed and it is not looking good.
  • The distance of the sensing matrix from identity is growing at an accelerated rate.
  • The POS output matrix column seems to be trying to go towards the negative of PIT output matrix column! Why? We don't know.
  • We have seen in the past that once POS transforms into PIT or YAW, it just makes the output matrix worse, as no feedback actually goes into the POS column. Eventually, the IMC loses lock.
  • So, I'm cancelling this attempt for now. Will consider more alternatives later.
Attachment 1: SDistanceFromIdentity.pdf
Attachment 2: SmatIterations.pdf
Attachment 3: TimeSeriesPlots.pdf
Attachment 4: CSDPlots.pdf
Attachment 5: SmatrixPlots.pdf
16017 | Mon Apr 12 10:07:35 2021 | Anchal | Update | SUS | What's F2A??

I'm not sure I understand what F2A is. I couldn't find a description of this filter anywhere and don't remember if you have already explained it. Can you describe again what needs to be done, please? Meanwhile, we will keep the SUS state-space model and the seismic transfer function calculations ready.

Quote:

Next we wanna get the F2A filters made since most of the IMC control happens at f < 3 Hz. Once you have the SUS state space model, you should be able to see how this can be done using only the free-swinging eigenfrequencies. Then you should get the closed loop model including the F2A filters and the damping filters to see what the closed loop behavior is like.

 

16026 | Wed Apr 14 13:12:13 2021 | Anchal | Update | General | Sorry, it was me

Sorry about that. It must have been me. I'll make sure it doesn't happen again. I was careless not to check back; no further explanation.

16027 | Wed Apr 14 13:16:20 2021 | Anchal | Configuration | Computers | 40m Control Room Changes
  • I have confirmed that the two old monitors' backlights are not working. One can see a faint impression of the display, but no brightness. Both old monitors are on the shelf behind.
  • Today we got a monitor and mouse from Mike. I had to change GRUB_GFXMODE in /etc/default/grub to 1920x1200@30 on allegra for it to work with the (or any) monitor.
  • Allegra is Debian 10 with the latest cds-workstation installed on it. It is a good test station for migrating our existing scripts to the updated cds-workstation configuration.
Quote:
  • Again, we have placed allegra's monitor as a placeholder, but it is not working and we will need new monitors for it whenever it is going to be used.

 

16030 | Wed Apr 14 16:46:24 2021 | Anchal | Update | General | IFO State

That makes sense. I had assumed that IFO-STATE was already configured the way you propose. This could be implemented later.

Quote:
 

a better way would be to configure the EPICS record to automatically set / unset itself based on some diagnostic channels. For example, the "PMC locked" bit should be set if (i) the PMC REFL is < 0.1 AND (ii) PMC TRANS is > 0.65 (the exact thresholds are up for debate). Then we are truly recording the state of the IFO and not relying on some script to write to the bit (I haven't thought through if there are some edge cases where we need an unreasonable number of diagnostic channels to determine if we are in a certain state or not).

 

16031 | Wed Apr 14 17:53:38 2021 | Anchal | Update | SUS | Plan for calculating filter banks for output matrix aka F2A aka F2P

Plan of action

  • Get the transfer functions of the suspension plant from actuated DOF to sensed DOF. We'll verify Bhavini's state-space model and get these transfer functions. Use the model TFs, not measured.
  • For each of POS->POS, PIT->PIT, and YAW->YAW, we'll get the resonant frequency and Q of the resonance from these models. No, forget about the Q.
  • We can correct the resonant frequencies using the measured ones from our free-swinging data.
  • Now we'll repeat the following for each column of the output matrix filters (inspired by scripts/SUS/F2Pcalc.py, though not fully understood how/why):
    • Select a column (e.g. POS).
    • Set f0 to the resonant frequency.
    • Calculate $f_{UL} = f_0 \sqrt{G_{UL}}$, where $G_{UL}$ is the corrected DC gain we got after the output matrix optimization earlier. (Not sure how, why?). No, use the SS model.
    • Calculate $f_{UR}$, $f_{LL}$, and $f_{LR}$ like above.
    • Set $Q_{UL} = \sqrt{G_{UL}}$. (This just seems like a way of keeping some approximately low Q; ideally we should keep it the same as what we got above, but that might cause saturation issues like Rana mentioned in the meeting.)
    • Then, set the following filter in the output matrix element for UL:
$$G_{UL}\,\frac{1 + i\frac{f}{f_{UL}Q_{UL}} - \frac{f^2}{f_{UL}^2}}{1 + i\frac{f}{f_0} - \frac{f^2}{f_0^2}}$$
      which in zpk form is equivalent to:
$$z:\ \frac{f_{UL}}{2Q_{UL}} \pm i\, f_{UL}\sqrt{1 - \frac{1}{4Q_{UL}^2}}\,,\qquad p:\ \frac{f_0}{2} \pm i\, f_0\frac{\sqrt{3}}{2}\,,\qquad k:\ G_{UL}$$
    • Repeat the above for UR, LL, LR.
    • Note that this filter function takes the value $G_{UL}$ at DC and at high frequencies, while it dips at the POS resonant frequency with a depth and narrowness directly proportional to $Q_{UL}$. No, the DC gain is different from the AC gain.
  • However, the F2P filter plots we found in several places on elog look a bit different. Like here: 40m/4719. One important difference is that the filter magnitude always become 1 after the resonance at higher frequencies. Yes, this is  what we want, since you already did the balancing at high frequencies.
  • A preliminary plot of the above calculation for the (1,1) output matrix filter bank (POS -> UL) is attached in Attachment 1. (A code sketch of this filter construction follows this list.)
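
A minimal sketch of this filter construction, assuming scipy; f0 and G_UL below are placeholder values, not the measured ones:

    import numpy as np
    import scipy.signal as sig

    f0 = 0.97        # POS resonance from the SS model (placeholder, Hz)
    G_UL = 1.02      # corrected DC gain for the UL coil (placeholder)

    f_ul = f0 * np.sqrt(G_UL)
    Q_ul = np.sqrt(G_UL)
    w0, w_ul = 2 * np.pi * f0, 2 * np.pi * f_ul

    # Zeros of the numerator resonance at f_UL with quality Q_UL, poles of
    # the denominator resonance at f0 with Q = 1 (written in the s-plane).
    zeros = np.roots([1 / w_ul**2, 1 / (w_ul * Q_ul), 1])
    poles = np.roots([1 / w0**2, 1 / w0, 1])
    k = G_UL * (w0 / w_ul)**2   # = 1: the filter returns to unity gain above resonance

    w, h = sig.freqs_zpk(zeros, poles, k, worN=2 * np.pi * np.logspace(-2, 1, 500))
    # abs(h) -> G_UL at DC, a feature near f0, and settles to 1 at high frequency.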

Discussion:

  • We can make 12 such filters for the 12 numbers we got for the optimized output matrix. Is that the aim, or should we do it only for the POS column as has been done in the past?
  • We are not sure how the choice of Q is made in setting the above filter function. We'll think more about it to understand this.
  • We are also not sure how the choice of $f_{UL}$ is made above. It looks like, depending on the correction gain, we want to slide the zero positions with respect to the pole positions, which are fixed at the resonant frequency as expected. This seems to have some complex explanation.
  • Please let us know if we are planning this right before we dive into these calculations/script writing. Thanks.

Edit Thu Apr 15 08:32:58 2021 :

Comments are from Rana.

Corrected the plot in the attachment. It shows the correct behavior at high frequencies now.

Attachment 1: MC2propF2A_UL.pdf
16035 | Thu Apr 15 11:41:43 2021 | Anchal | Update | SUS | Proposed filters for output matrix aka F2A aka F2P

Here's a quick update before we leave for lunch. We have managed to calculate filters that would go in the POS column of the MC2 output matrix filter banks, aka F2A aka F2P filters. If we can come and work on the IMC in the afternoon, we'll try to load them into the output matrix. We have never done that, so it might take some time for us to understand how. Attached is the bode plot for these proposed filters. Let us know if you have any comments.

Attachment 1: MC2propPOSfb.pdf
16055 | Tue Apr 20 18:19:30 2021 | Anchal | Update | SUS | MC2 coil balanced at DC

Following up from the morning's work, I balanced the coils at DC as well. Attachment 1 is a screenshot of a striptool in which the blue and red traces show the ASCYAW and ASCPIT outputs when C1:SUS-MC2_LSC_OFFSET was switched by 500 counts. We see a very slight disturbance but no real DC offset on PIT and YAW due to the position step. This data was taken with the nominal F2A filter (calculated to balance the coils at DC) loaded.


I have uploaded the filters in filter banks 7-10, where FM7 is the nominal filter with Q close to 1 and FM8-10 are filters with Q of 3, 7 and 10 respectively. The transfer functions of these filters can be seen in Attachment 2. Note that the high frequency gain drops a lot when the higher-Q filters are used.

These filters are designed such that the total DC gain, after applying the coil output gains from the high frequency balancing (done in the morning, 16054), balances the coils at DC.


Since I had access to the complete output matrix that balances the coils to less than 1% cross coupling at high frequencies from 16009, I also did a quick test of DC coil balancing with that kind of high frequency balancing. In this case, I uploaded another set of filters, made with Q close to 1 and gains such that the effective DC gain matrix becomes what I got by balancing in the above case. This set of filters worked as well as the above filters. This shows that we can also use the complete matrix for high frequency coil balancing, which can be calculated by a script in 20 min, and that it works with DC coil balancing as well. In my opinion, this method is clearer and much faster than toggling values in the coil output gains, where we have only 4 values to optimize 6 cross-coupling parameters. But don't worry, I'm not wasting time on this and will abandon this effort for now, to be taken up in the future.


Next up:

  • Tomorrow, we'll finish DC balancing for MC1 and MC3 with the method I practiced today. This should not take much time and should be completed before the meeting.
  • I'll also calculate and upload the F2A filters for MC1 and MC3.
  • Next, we'll optimize the gains in the suspension damping loops by doing step response tests (with TRAMP = 0 s). We'll look for a decaying response (at MC_F and the WFS sensors) with a few oscillations for each step in POS, PIT, and YAW.

Edit Tue Apr 20 21:25:46 2021 :

Corrected the calculation of the filters for the case of Q different from $\sqrt{G_{DC}}$. There was a bug in the code which I had overlooked. I'll correct the filter bank modules tomorrow.


Edit Wed Apr 21 11:06:42 2021 :

I have uploaded the corrected foton filters. Please see Attachment 3 for the transfer functions calculated by foton. They match the filters we intended to upload. Only after uploading and closing the foton filter did I realize that the X=7 filter plot (bottom left in Attachment 3) does not have dB units on the y-axis. It is plotted on a linear y-scale (this plot in foton is for phase by default, so I guess I forgot to change the scaling when repurposing it for my plot).

Attachment 1: MC2_DC_Coil_Balanced_St.png
Attachment 2: IMC_F2A_Params_MC2.pdf
Attachment 3: UploadedPOS_F2A_Filters.pdf
16068 | Wed Apr 21 19:28:03 2021 | Anchal | Update | PSL | PSL/IFO recovery

[Anchal, Koji]

Removed the top sheet

  • Opened first from the door side so that any dust would spill outside.
  • Then rolled the sheet inward to meet in the middle.
  • Repeated this twice for the 2 HEPA filters.

Removed the sheets on the table

  • Lifted the sheet up, making sure the top side always faced outside.
  • Rolled it sideways halfway through.
  • Cut the sheet vertically.
  • Slid the doors to the other side and rolled up the remaining half.
  • On the door side, the sheets above the ALS optics were simply lifted off.

Restarting PSL

  • Turned on the HEPAs at the max speed
  • Switched on the laser to just above threshold
    • Before the first EOM, the power was 20 mW.
    • After the EOM/AOM, 18 mW. So about 90% transmission through all the polarizing optics.
    • We saw the resonances of the PMC but could not lock it even with the highest gain available (30 dB).
  • Increased the input power to the PMC to 100 mW
    • Locked the PMC at 30 dB gain.
    • The transmitted power was ~50-60 mW. (Had to use a power meter suspended by hand only.)
    • Right before the IMC (after the 2nd EOM): 48 mW. So none of the alignment was lost.
  • Opened the PSL shutter.
  • We were able to see IMC reflection signal.
  • We were also able to see IMC catching lock as the servo was left ON earlier.
  • Switched off the servo.
  • Decided to increase the power while watching PMC Trans/Refl and IMC REFL
  • The injection diode current to the Innolight was increased slowly to 2.10 A. Saw a mode-hopping region around 1.8 A.
  • We recovered the PMC Trans >0.7 V.
  • PZT was near the edge, so moved by one FSR.
  • The PMC reflection signal is still shown in red at around 48 mV.

Back to control room

  • The IMC was locked almost immediately by manually finding the lock, while keeping the IMC WFS off to preserve the offsets from yesterday.
  • Then switched on the IMC WFS. Working well.
  • Then unlocked the servo and switched on the IMC Autolocker. Lock was caught immediately.

Decided to start locking the arms

  • The arm transmissions were flashing but at 0.2~0.3 level.
  • Decided to adjust TT1 and TT2 Pitch and Yaw to align the light going into the arms.
  • This made TRY ~0.6 / TRX ~0.8 at the peak of the flashing
  • Locked the arms. (By switching on C1:LSC-MODE_SELECT which engages all servos).
  • Used ASS to align Yarm then align Xarm. Procedure:
    • Sitemap > ASC > c1ass
    • Open striptool to look at progress. ! Scripts YARM > striptool.
    • Switch on ASS. ! More Scripts > ON
    • Wait for TRY to reach around 0.97.
    • Freeze the outputs. ! Scripts > Freeze Outputs.
    • Offload the offsets to preserve the output. ! More Scripts > OFFLOAD OFFSETS.
    • Switch off ASS. ! More Scripts > OFF
    • Repeated this for XARM.
  • At the end, both XARM and YARM were locked with TRX ~ 0.97 and TRY ~ 0.96.
16071 | Thu Apr 22 08:50:21 2021 | Anchal | Update | SUS | MC2 Suspension Optimization summary

Yes, during the AC balancing the POS column was set to all 1. This table shows the final values after all the steps. The first 3 columns are the DC balancing results when the output matrix was changed, while the last column is for AC balancing. During AC balancing, the output matrix was kept at the ideal values, as you suggested.

Quote:

the POS column should be all 1 for the AC balancing. Where did those non-1 numbers come from?

 

16077 | Thu Apr 22 15:34:54 2021 | Anchal | Update | PSL | PMC transmission

Koji mentioned that the mode of the laser is different at lower diode currents. That might be why we got less transmission at the low input power but more afterward.

16078 | Thu Apr 22 15:36:54 2021 | Anchal | Update | SUS | Settings restored

The mix-up was my fault, I think; I restored the channels manually instead of using burt restore. Your message suggests that we can set burt to start watching for channel changes at some point and create a .req file that can be used to restore later. We'll try to learn how to do that. Right now, we only know how to burt restore using the existing snapshots from the autoburt directory, but those touch more things than we work on, I think. Or can we just always burt restore to the morning time? If yes, which snapshot files should we use?

16091 | Wed Apr 28 17:09:11 2021 | Anchal | Update | SUS | Tuned Suspension Parameters uploaded for long term behavior monitoring

I have uploaded all the new settings mentioned in 16066 and 16072. The settings were uploaded through a single script at anchal/20210428_IMC_Tuned_Suspension/uploadNewConfigIMC.py. They can be reverted to the old settings through anchal/20210428_IMC_Tuned_Suspension/restoreOldConfigIMC.py. Both scripts can be run only with python3 on donatella or allegra.


GPSTIME of new settings: 1303690144


New settings include:

  • New input matrices for MC1 and MC2.
  • New Output coil gains for AC balancing on all three optics.
  • Switching ON the FM8 filter module (Q=3 F2A filter) in the POS column of the output matrix of all optics.

We'll wait and watch the performance through the summary pages and check back on Monday.

16094 | Thu Apr 29 10:52:56 2021 | Anchal | Update | SUS | IMC Trans QPD and WFS loops step response test

In 16087 we mentioned that we were unable to do a step response test for the WFS loops to get an estimate of their UGFs. The primary issue was that we were not applying the step at the right place. It should go into the actuator directly, in this case on C1:SUS-MC2_PIT_COMM and C1:SUS-MC2_YAW_COMM. These channels directly set an offset in the control loop, and we can see how the error signals first jump up and then decay back to zero. The 'half-time' of this decay is roughly the inverse of the UGF of the loop. For this test, the overall WFS loop gain, C1:IOO-WFS_GAIN, was set to its full value of 1. This test was performed with the changed settings uploaded in 16091.

I did this test twice, once giving a step in PIT and once in YAW.

Attachment 1 is the striptool screenshot for when PIT was given a step up and then step down by 0.01.

  • Here we can see that the half-time is roughly 10s for TRANS_PIT and WFS1_PIT corresponding to roughly 0.1 Hz UGF.
  • Note that WFS2 channels were not disturbed significantly.
  • You can also notice that the third most significant disturbance was to TRANS_YAW, followed by WFS1 YAW.

Attachment 2 is the striptool screenshot when YAW was given a step up and down by 0.01. Note the difference in x-scale in this plot.

  • Here, TRANS YAW took the greatest hit, and it took around 2 minutes to decay to half value. This gives a UGF estimate of about 10 mHz!
  • Then, weirdly, TRANS PIT first went slowly up for about a minute and then slowly came down with a half-time of 2 minutes again. Why was the PIT signal so disturbed by the YAW offset in the first place?
  • Next, WFS1 YAW can be seen decaying relatively fast, with a half-life of about 20 s or so.
  • Nothing else was disturbed much.

  • So maybe we never needed to reduce the WFS gain in our measurement in 16089, as the UGFs everywhere were already very low (see the sketch below this list).
  • What other interesting things can we infer from this?
  • Should I sometime repeat this test with steps given to MC1 or MC3 optics?
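
For reference, a tiny sketch of the half-time rule of thumb used above (t_half is a placeholder value, not a fit to the data):

    import numpy as np

    t_half = 10.0             # seconds for the error signal to decay to half (placeholder)
    ugf_est = 1.0 / t_half    # rule-of-thumb UGF estimate used above, in Hz

    # Equivalently, fit err(t) ~ A * exp(-t / tau) to the decay and use
    # tau = t_half / ln(2); the two agree to within an O(1) factor.
    tau = t_half / np.log(2)
    print(f"UGF ~ {ugf_est:.3f} Hz, exponential time constant ~ {tau:.1f} s")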
Attachment 1: PIT_OFFSET_ON_MC2.png
Attachment 2: YAW_STEP_ON_MC2_complete.png
16095 | Thu Apr 29 11:51:27 2021 | Anchal | Summary | LSC | Start of measuring IMC WFS noise contribution in arm cavity length noise

Tried locking the arms

  • Ran IFO > Configure > ! (YARM) > Restore YARM. Nothing happened.
  • Trying to align through tip-tilt:
    • Previous values: TT1: PIT: -1.7845, YAW: -0.2775; TT2: PIT: -1.3376, YAW: -0.0436
    • Couldn't get flashing of light in the arms at all.
    • Restored values to previous values.
  • Noticed that the ITMY OPLEV YAW error had gone very high overnight, while the other oplevs remained the same.
  • Trying to change the C1:SUS-ITMY_YAW_OFFSET to bring this oplev yaw error back to near zero.
  • Changed C1:SUS-ITMY_YAW_OFFSET from -34 to 50. OPLEV_YERROR reduced to around -23 from -70.
  • Same thing with BS PIT. OPLEV_PERROR is highlighted in red at -52.
  • Changing C1:SUS-BS_PIT_OFFSET from 55 to 30. This brought OPLEV_PERROR to -15 from -52.
  • Trying to align PRM by changing C1:SUS-PRM_PIT_OFFSET and C1:SUS-PRM_YAW_OFFSET.
  • Initial values: C1:SUS-PRM_PIT_OFFSET: -20, C1:SUS-PRM_YAW_OFFSET: 39.

Did the WFS step response test on IMC in between while waiting for help. See 16094.


Back to trying arm locking

  • Tried IFO > Configure > ! (XARM) > Restore XARM. Nothing happened.
  • Tried IFO > Configure > ! (YARM) > Restore YARM. Nothing happened again.
  • Tried Movie Capture of the AS screen from VIDEO > Movie Capture > AS, but the script failed due to a 'module not present' error.

PMC got unlocked

  • In front of me, the PMC got unlocked. I had not gone to the PMC locking screen at all since morning.
  • I opened the C1PSL_PMC screen. The PSL Autolocker blinker is not blinking but the switch is set to Enable. 
  • I do not see any automatic effort happening for regaining lock at PMC.
  • I tried it manually and was able to get the PMC locked again. C1:PSL-PMC_PMCTRANSPD is showing 0.761 V and C1:PSL-PMC_RFPDDC is showing 0.053 V.
  • Now the IMC autolocker seems to be trying to acquire lock.
  • It acquired lock a few times but struggled to keep it. I reduced C1:IOO-WFS_GAIN to 0 and then the lock could stay on. It seemed like the accumulated offsets were not good.
  • So I cleared the history on the WFS1, TRANS and WFS2 filter banks and then ramped the overall WFS gain (C1:IOO-WFS_GAIN) back to 1, and now the IMC seems to stay locked in a stable configuration.
  • However, I still don't know what caused the PMC to unlock in the first place. Did my repeated arm locking attempts do something to the main laser frequency?

Back to trying arm locking

  • Tried IFO > Configure > ! (YARM) > Restore YARM again. Nothing happened again.
16100 | Thu Apr 29 17:43:48 2021 | Anchal | Update | CDS | F2A Filters double check

I double checked today, and the F2A filters in the POS column of the output matrices of MC1, MC2 and MC3 are ON. I don't understand what SDF means, though. Did we need to add these filters elsewhere?

Quote:

The IMC suspension team should double check their filters are on again. I am not familiar with the settings and I don't think they've been added to the SDF.

Attachment 1: F2AFiltersON.png
16101 | Thu Apr 29 17:51:19 2021 | Anchal | Summary | LSC | Start of measuring IMC WFS noise contribution in arm cavity length noise

Both arms were locked simply by using IFO > Configure > ! (YARM) > Restore YARM. I had to use ASS to improve TRX/TRY to ~0.95.

I measured C1:LSC-XARM_IN1_DQ and C1:LSC-YARM_IN1_DQ while injecting band-limited noise at C1:IOO-WFS1_PIT_EXC, using uniform noise with amplitude 1000 along with the filter defined by the string cheby1("BandPass",4,1,80,100). I calibrated the arm control signals with the 2.44 nm/cts calibration factor directly picked up from 13984.

For the duration of this test, all LIMIT switches in the WFS loops were switched OFF.

I do not see any effect on the arm control signal power spectra with or without the noise injection. Attachment 1 shows the PSDs along with the PSD of the injected IN2 signal. I must be doing something wrong, so I would like feedback before I go further. (A rough equivalent of the excitation filter is sketched below.)
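
For reference, a rough scipy equivalent of that excitation filter string (a sketch; fs is a placeholder for the excitation channel rate, and the noise here is a stand-in for awggui's uniform noise):

    import numpy as np
    import scipy.signal as sig

    fs = 16384   # excitation channel sampling rate (placeholder)

    # 4th-order Chebyshev type-I bandpass, 1 dB ripple, 80-100 Hz passband,
    # mirroring cheby1("BandPass",4,1,80,100); applied to uniform noise.
    b, a = sig.cheby1(4, 1, [80, 100], btype='bandpass', fs=fs)
    noise = 1000 * np.random.uniform(-1, 1, 60 * fs)
    excitation = sig.lfilter(b, a, noise)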

Attachment 1: WFS1_PIT_exc_80-100Hz_Arms_ASD.pdf
16102 | Thu Apr 29 18:53:33 2021 | Anchal | Update | SUS | IMC Suspension Damping Gains Test

With the input matrix, coil output gains and F2A filters loaded as in 16091, I tested the suspension loops' step responses to offsets in the LSC, ASCPIT and ASCYAW channels, before and after applying the "new damping gains" mentioned in 16066 and 16072. If these look better, we should upload the new (higher) damping gains as well. This was not done in 16091.


Note that in the plots, I have added offsets in the different channels to plot them together, hence the units are "au".

Attachment 1: MC1_SUSDampGainTest.pdf
Attachment 2: MC2_SUSDampGainTest.pdf
Attachment 3: MC3_SUSDampGainTest.pdf
16110 | Mon May 3 16:24:14 2021 | Anchal | Update | SUS | IMC Suspension Damping Gains Test Repeated with IMC unlocked

We repeated the same test with the IMC unlocked. We had found these gains when the IMC was unlocked, and their characterization needs to be done with no light in the cavity. Attached are the results. Everything else is the same as before.

Quote:

With the input matrix, coil output gains and F2A filters loaded as in 16091, I tested the suspension loops' step response to offsets in LSC, ASCPIT and ASCYAW channels, before and after applying the "new damping gains" mentioned in 16066 and 16072. If these look better, we should upload the new (higher) damping gains as well. This was not done in 16091.


Note that in the plots, I have added offsets in the different channels to plot them together, hence the units are "au".


Edit Tue May 4 14:43:48 2021 :

  • Adding zoomed-in plots showing the first 25 s after the step.
  • MC1:
    • Our improvements from the new gains are only modest.
    • This optic needs more careful coil balancing first.
    • Still, the ringing time is reduced to about 5 s for all step responses, as opposed to 10 s with the old gains.
  • MC2:
    • The first page of MC2 might be a bit misleading. We did not change the damping gain for the SUSPOS channel, so the longer ringing is probably just an artifact of something else. We didn't retake the data.
    • In PIT and YAW where we increased the gain by a factor of 3, we see a reduction in ringing lifetime by about half.
  • MC3:
    • We saw the most promising improvement on this optic.
    • The gains were unusually low on this optic, not sure why.
    • By increasing the SUSPOS gain from 200 to 500, we saw a reduction of the ringing half-time from 7-8 s to about 2 s. Improvements are seen in the other DOFs as well.
    • You can notice right away that YAW of MC3 keeps oscillating near resonance (about 1 Hz). Maybe more careful feedback shaping is required here.
    • In SUSPIT, we increased the gain from 12 to 35 and saw a good reduction in both the ringing time and the initial amplitude of the ringing.
    • In SUSYAW, we only increased the gain from 8 to 12, which still helped a lot, reducing the long ringing in the step response from about 12 s to below 5 s.

Overall, I would recommend setting the new gains in the suspension loops as well, to observe the long term effects too.

Attachment 1: MC1_SusDampGainTest.pdf
Attachment 2: MC2_SusDampGainTest.pdf
Attachment 3: MC3_SusDampGainTest.pdf
16113 | Mon May 3 18:59:58 2021 | Anchal | Summary | General | Weird gas leakage kind of noise in 40m control room

For the past few days, a weird sound, like a decaying gas leak, has been coming into the 40m control room from the southwest corner of the ceiling. It comes about every 10 min or so. Attached is an audio capture.

Attachment 1: 40mNoiseFinal.mp3
16120 | Wed May 5 09:04:47 2021 | Anchal | Update | SUS | New IMC Suspension Damping Gains uploaded for long term testing

We have uploaded the new damping gains on all the IMC suspensions. This completes changing all the configuration to what is mentioned in 16066 and 16072. The old settings can be restored by running python3 /users/anchal/20210505_IMC_Tuned_SUS_with_Gains/restoreOldConfigIMC.py from allegra or donatella.

GPSTIME: 1304265872

UTC May 05, 2021 16:04:14 UTC
Central May 05, 2021 11:04:14 CDT
Pacific May 05, 2021 09:04:14 PDT

 

16125 | Thu May 6 16:13:39 2021 | Anchal | Summary | IMC | Angular actuation calibration for IMC mirrors

Here's my first attempt at the angular actuation calibration for the IMC mirrors, using the method described in /users/OLD/kakeru/oplev_calibration/oplev.pdf by Kakeru Takahashi. The key is to see how much the cavity mode is misaligned from the input beam mode as the mirrors are moved in PIT or YAW.

There are two possible kinds of mismatch:

  • Parallel displacement of the cavity mode axis:
    • In this kind of mismatch, the cavity mode is simply displaced from the input mode by some distance $\beta$.
    • This reduces the transmitted power by the gaussian factor $e^{-\beta^2/w_0^2}$, where $w_0$ is the beam waist of the input mode (or the nominal waist of the cavity).
    • For small mismatch, we can approximate this as
$$1 - \frac{\beta^2}{w_0^2}$$
  • Angular mismatch of the cavity mode axis:
    • The cavity mode axis could be tilted with respect to the input mode by some angle $\alpha$.
    • This reduces the transmitted power by the gaussian factor $e^{-\alpha^2/\alpha_0^2}$, where $\alpha_0$ is the divergence angle of the input mode (or nominal cavity mode), given by $\frac{\lambda}{\pi w_0}$.
    • For small mismatch, we can approximate this as
$$1 - \frac{\alpha^2}{\alpha_0^2}$$

Kakeru's document goes through the cases for linear cavities. For the IMC, the mode mismatches are a bit different. Here's my take on them:

MC2:

  • MC2 is the easiest case in the IMC, as it is similar to the end mirror of a linear cavity with a plane input mirror (a case already studied in Sec. 0.3.2 of Kakeru's document).
  • PIT:
    • When MC2 PIT is changed, the cavity mode simply shifts upwards (or downwards) to the point where the normal from MC2 is horizontal.
    • Since MC1 and MC3 are plane mirrors, they support this mode, just with a different beam spot position, shifted up by $(R-L)\theta$.
    • So the mismatch is simply of the first kind. In my calculations, however, I counted the two beams on MC1 and MC3 separately, so the factor is twice as much.
    • Calling the coefficient of the square of the angular change $\eta$, we get:
$$\eta_{2P} = \frac{2(R-L)^2}{w_0^2}$$
    • Here, R is the radius of curvature of MC2, taken as 21.21 m, and L is the cavity half-length of the IMC, taken as 13.545417 m.
  • YAW:
    • For YAW, the case is a bit more complicated. As in PIT, there will be a horizontal shift of the cavity mode by $(R-L)\theta$.
    • But since the MC1 and MC3 mirrors are fixed, the angles of the two beams from MC1 and MC3 to MC2 have to shift by $\theta/2$.
    • So the overall coefficient is:
$$\eta_{2Y} = \frac{2(R-L)^2}{w_0^2} + \frac{2}{4\alpha_0^2}$$
    • The factor of 4 in the denominator of the second term comes about because only half of the angular actuation is felt per arm; the factor of 2 in the numerator is for the 2 arms.

MC1/3:

  • First, let's establish that the cases of MC1 and MC3 are the same, as the cavity mode must change identically when either of the two mirrors is moved similarly.
  • YAW:
    • By tilting MC1 by $\theta$, we increase the YAW angle between MC1 and MC3 by $\theta$.
    • The beam spot on both MC1 and MC3 moves by $(R-L)\theta$.
    • The beam angles in both arms get shifted by $\theta/2$.
    • So the overall coefficient is:
$$\eta_{13Y} = \frac{2(R-L)^2}{w_0^2} + \frac{2}{4\alpha_0^2}$$
    • Note that this coefficient is the same as for MC2, so it is equivalent to moving MC2 by the same angle in YAW.
  • PIT:
    • I'm not very sure of my calculation here (hence presented last).
    • Changing PIT on MC1 should change the beam spot on MC2 but not on MC3. Only the angle of the MC3-MC2 arm should deflect by $\theta/2$.
    • Meanwhile on MC1, the beam spot must change by $(R-L)\theta/2$ and the MC1-MC2 arm should deflect by $\theta/2$.
    • So the overall coefficient is:
$$\eta_{13P} = \frac{(R-L)^2}{4 w_0^2} + \frac{2}{4\alpha_0^2}$$

Test procedure:

  • We first clicked on MC WFS Relief (on C1:IOO-WFS_MASTER) to reduce the large offsets accumulated on the WFS outputs. This script took 10 minutes and reduced the offsets to single digits; the IMC remained locked throughout the process.
  • Then we switched off the WFS to freeze the outputs.
  • We moved MC#_PIT/YAW_OFFSET up and down and measured the C1:IOO-MC_TRANS_SUMFILT_OUT channel as an indicator of IMC mode matching.
  • Attachment 1 shows the 6 measurements and their fits to a parabola. Fitting code and plots are thanks to Paco.
  • We got the curvature of the parabolas, $\gamma$, from these fits in units of 1/cts^2.
  • The $\eta$ coefficients calculated above are in units of 1/rad^2.
  • We get the angular actuation calibration from these offsets to physical angular displacement, in units of rad/cts, as $\sqrt{\gamma/\eta}$. (A sketch of this fit and conversion is given at the end of this entry.)
  • AC calibration:
    • I parked the offset at some value to get onto the side of the parabola. I was trying to reduce the transmission from about 14000 cts to 10000-12000 cts in each case.
    • Sent an excitation using MC#_ASCPIT/YAW_EXC with awg at 77 Hz and 10000 cts.
    • Measured the cts on the transmission channel at 77 Hz, divided by 2 and by the DC offset provided, and divided by the amplitude in cts set in the excitation. This gives $\eta_{ac}$, analogous to the DC case above.
    • Then the angular actuation calibration at 77 Hz, from these offsets to physical angular displacement in units of rad/cts, is $\sqrt{\gamma/\eta_{ac}}$.
  • Following are the results:

    Optic | DOF | Calibration factor at DC [µrad/cts] | Calibration factor at 77 Hz [prad/cts]
    MC1   | PIT | 7.931 ± 0.029  | 906.99
    MC1   | YAW | 5.22 ± 0.04    | 382.42
    MC2   | PIT | 13.53 ± 0.08   | 869.01
    MC2   | YAW | 14.41 ± 0.21   | 206.67
    MC3   | PIT | 10.088 ± 0.026 | 331.83
    MC3   | YAW | 9.75 ± 0.05    | 838.44


  • Note these values were measured with the new settings from 16120 in effect. If those are changed, this measurement will no longer be valid.
  • I believe the small values for the MC1 actuation have to do with the fact that the coil output gains for MC1 are very odd and small, which limits the actuation strength.
  • Above the resonance frequencies, the calibration factors fall off as $1/f^2$ from the DC value. I've confirmed that the above numbers are at least of the correct order of magnitude.
  • Please let us know if you can point out any mistakes in the calculations above.
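
A minimal sketch of the DC part of this analysis; the data, waist, and numbers below are placeholders, with eta taken from the coefficients derived above:

    import numpy as np

    # Placeholder transmission-vs-offset data for one optic/DOF
    offsets = np.linspace(-200, 200, 11)              # cts
    trans = 14000 * (1 - 1e-6 * offsets**2)           # cts

    # Fit T(x) = a x^2 + b x + c; curvature gamma = -a/c in 1/cts^2
    a, b, c = np.polyfit(offsets, trans, 2)
    gamma = -a / c

    # eta_2P from above, with a placeholder waist (the real IMC w0 differs)
    R, L, w0 = 21.21, 13.545417, 1.6e-3
    eta = 2 * (R - L)**2 / w0**2                      # 1/rad^2

    cal = np.sqrt(gamma / eta)                        # rad/cts
    print(f"{cal * 1e6:.3f} urad/cts")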
Attachment 1: IMC_Ang_Act_Cal_Kakeru_Tests.pdf
16139 | Thu May 13 19:38:54 2021 | Anchal | Update | SUS | MC1 Satellite Amplifier Debugged

[Anchal Koji]

Koji and I did a few tests with an OSEM emulator on the satellite amplifier box used for MC1, which is housed at 1X4. This sat box unit (S2100029, D1002812) was recently characterized by me in 15803. We found that the differential output driver chip (AD8672ARZ, U2A section) for the UL PD was not working properly and had a fluctuating offset with no input current from the PD. This was the cause of the morning's ordeal. The chip was replaced with a new one from our stock. A preliminary test with the OSEM emulator showed that the channel now has the correct DC value.

In further testing of the board, we found that the channel 8 LED driver was not working properly. Although this channel is never used in our current cable convention, it might be used in the future. While debugging the issue, we replaced the AD8672ARZ at U1 on channel 8. This did not solve the issue. So we opened the front panel, and as we flipped the board, we found that a solder blob had shorted the legs of the transistor Q1 (2N3904). This was replaced, and a test with the LED out and GND shorted indicated that the channel now properly provides a constant current of 35 mA (5 V at the monitor out).


After the debugging, the UL channel became the least noisy of the OSEM channels! The mode cleaner was able to lock and stay locked.

We should redo the MC1 input matrix optimization and the coil balancing afterward, as we did everything based on the noisy UL OSEM values.

Attachment 1: MC1_UL_Channel_Fixed.png
16147 | Thu May 20 10:35:57 2021 | Anchal | Update | SUS | IMC settings reverted

For future reference, the new settings can be uploaded with a script in the same directory. Run python /users/anchal/20210505_IMC_Tuned_SUS_with_Gains/uploadNewConfigIMC.py from allegra.

Quote:

There isn't any instruction here on how to upload the new settings

16168 | Fri May 28 17:32:48 2021 | Anchal | Summary | ALS | Single Arm Actuation Calibration with IR ALS Beat

I attempted a single arm actuation calibration using the IR beatnote (in the direction of the soCal idea for DARM calibration).


Measurement and Inferences:

  • I sent 4 excitation signals to C1:SUS-ITMX_LSC_EXC, with 30 cts at 31 Hz, 200 cts at 197 Hz, 600 cts at 619 Hz and 1000 cts at 1069 Hz.
  • These were sent simultaneously using the compose function in python awg.
  • The XARM was locked to the main laser, and the alignment was optimized with ASS.
  • The Xend Green laser was locked to XARM and alignment was optimized.
    • Sidenote: GTRX is now normalized to give 1 at near maximum power.
    • Green lasers can be locked with script instead of toggling.
    • Script can be called from sitemap->ALS->! Toggle Shutters->Lock X Green
    • Script is present at scripts/ALS/lockGreen.py.
  • C1:ALS-BEATX_FINE_PHASE_OUT_HZ_DQ was measured for 60s.
  • Also, measured C1:LSC-XARM_OUT_DQ and C1:SUS-ITMX_LSC_OUT_DQ.
  • Attachment 1 shows the measured beatnote spectrum with the excitations on, in units of m/rtHz.
  • It also shows the residual displacement contributions (output referred) of XARM_OUT and ITMX_LSC_OUT, referred to the same point in the state-space model.
    • Note that XARM_OUT and ITMX_LSC_OUT (the excitation signal) add coherently in reality, hence the beatnote spectrum at each excitation frequency is lower than both of them.
    • The remaining task is to figure out how to calculate the calibration constant for the ITMX actuation from this information.
    • I need more time to understand the mixing of XARM_OUT and ITMX_LSC_OUT at the XARM length node of the control loop.
    • The beatnote signal tells us the actual motion of the arm length, not how much ITMX would have actuated if the arm were not locked.
  • Attachment 2 has the A, B, C, D matrices of the full state-space model used. These were fed to the python controls package to get transfer functions from one point to another in this MIMO system.
    • Note that here I used the calibration of XARM_OUT we measured earlier in 16127.
    • On second thought, maybe I should first send the excitation to ETMX_LSC_EXC. Then I can just measure ETMX_LSC_OUT, which includes XARM_OUT due to the lock, and use that to get the calibration of the ETMX actuation directly.

Attachment 1: SingleArmActCalwithIRALSBeat.pdf
Attachment 2: stateSpaceModel.zip
16179 | Thu Jun 3 17:35:31 2021 | Anchal | Summary | IMC | Fixed medm button

I fixed the PSL shutter button on the Shutters summary page C1IOO_Mech_Shutter.adl. Now the PSL switch changes the C1:PSL-PSL_ShutterRqst channel. Earlier it was C1:AUX-PSL_ShutterRqst, which doesn't do anything.

 

Attachment 1: C1IOO_Mech_Shutters.png
16197 | Thu Jun 10 14:01:36 2021 | Anchal | Summary | AUX | Xend Green Laser PDH OLTF measurement loop algebra

Attachment 1 shows the closed loop of the Xend green laser arm PDH lock. Free running laser noise is injected at the laser head, after the PZT actuation, as $\eta$. The PDH error signal at the output of the mixer is fed to a gain-1 SR560 used as a summing junction here. Used in 'A-B' mode, the B port is used for sending in the excitation $\nu_e e^{st}$, where $s = i\omega$.

We have access to three measurement ports, marked $\alpha$ at the output of the mixer, $\beta$ at the output of the SR560, and $\gamma$ at the PZT out monitor port of the uPDH box. From the loop algebra, we get the following:

$$\left[ (\alpha - \nu_e)\, K(s)A(s) + \eta \right] C(s)D(s) = \alpha$$

$$\Rightarrow\ (\alpha - \nu_e)\, G_{OL}(s) + \eta\, C(s)D(s) = \alpha\,,$$

where $G_{OL}(s) = C(s)D(s)K(s)A(s)$ is the open loop transfer function of the loop.

$$\Rightarrow\ \alpha = \eta\, \frac{C(s)D(s)}{1 - G_{OL}(s)} \;-\; \nu_e\, \frac{G_{OL}(s)}{1 - G_{OL}(s)}$$

$$\Rightarrow\ \beta = \eta\, \frac{C(s)D(s)}{1 - G_{OL}(s)} \;-\; \nu_e\, \frac{1}{1 - G_{OL}(s)}$$

$$\Rightarrow\ \gamma = \eta\, \frac{1}{A(s)} \frac{G_{OL}(s)}{1 - G_{OL}(s)} \;-\; \nu_e\, \frac{K(s)}{1 - G_{OL}(s)}$$

So the measurement of $G_{OL}(s)$ can be done in the following two ways (not a complete set):

  1. $G_{OL}(s) \approx \frac{\alpha}{\beta} = \dfrac{G_{OL}(s) - \frac{\eta C(s)D(s)}{\nu_e}}{1 - \frac{\eta C(s)D(s)}{\nu_e}}$, if the excitation amplitude is large enough that $\frac{\eta C(s)D(s)}{\nu_e} \ll 1$ over all frequencies.
    • In this method, however, note that the SR785 takes the ratio of the unsuppressed excitation at $\alpha$ to the suppressed excitation at $\beta$.
    • If the closed loop gain (suppression) $1/(1 - G_{OL}(s))$ is too large, the excitation signal might drop below the noise floor of the SR785 while measuring $\beta$.
    • This then appears as a flat response in the transfer function.
    • This happened to us when we tried to measure the transfer function this way: below a few hundred Hz, the measurement became flat at around 40 dB.
    • Increasing the excitation amplitude where the suppression is large should ideally work. We even tried the Auto level reference option on the SR785.
    • But the PDH loop loses lock as soon as we put in an excitation above 35 mV at this point in the loop.
  2. $\frac{G_{OL}(s)}{K(s)} \approx \frac{\alpha}{\gamma} = \dfrac{G_{OL}(s) - \frac{\eta C(s)D(s)}{\nu_e}}{K(s)\left(1 - \frac{\eta C(s)D(s)}{\nu_e}\right)}$, if the excitation amplitude is large enough that $\frac{\eta C(s)D(s)}{\nu_e} \ll 1$ over all frequencies.
    • In this method, channel 1 (the denominator) on the SR785 remains high in amplitude throughout the measurement, avoiding the suppression-below-noise-floor issue above.
    • We can easily measure the feedback transfer function $K(s)$ with the loop open. Multiplying the two measurements should then give us an estimate of the open loop transfer function (sketched below this list).
    • This is what we did in 16194. But we still could not increase the excitation amplitude beyond 35 mV at the injection point and got a noisy measurement.
    • Yesterday we checked the coherence of the excitation signal with the three measurement points $\alpha$, $\beta$, $\gamma$, and it was 1 throughout the frequency region of the measurement for excitation amplitudes above 20 mV.
    • So as of now, we are not sure why our signal to noise was so poor in the lower-frequency part of the measurement.
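
For completeness, a small sketch of how the two measurements in method 2 combine; the arrays are placeholders standing in for complex TF data exported on a shared frequency vector:

    import numpy as np

    freqs = np.logspace(1, 4, 200)
    tf_alpha_over_gamma = np.ones_like(freqs, dtype=complex)   # measured alpha/gamma (= G_OL/K), placeholder
    tf_K = np.ones_like(freqs, dtype=complex)                  # separately measured K(s), placeholder

    # Multiply the two measurements to estimate the OLTF
    G_ol = tf_alpha_over_gamma * tf_K
    mag_db = 20 * np.log10(np.abs(G_ol))
    phase_deg = np.angle(G_ol, deg=True)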
Attachment 1: AUX_PDH_LOOP.pdf
16200 | Mon Jun 14 18:57:49 2021 | Anchal | Update | AUX | Xend is unbearably hot; green laser is losing lock in tens of seconds

Working at Xend with a mask on has become unbearable. It is very hot there, and I would really like us to fix this issue.


Today, the Xend green laser was just unable to hold lock for longer than tens of seconds. The longest I saw it hold lock was about 2 minutes. I couldn't find anything obviously wrong with it. Attached are the noise spectra of the error and control points. The control point spectrum matches the typical free running laser noise well.

Are the few peaks above 10 kHz in the error point spectrum worrisome? I need to think more about it, in a cooler place, to make sure.

I wanted to take a high frequency spectrum of the error point to make sure that higher harmonics of the 250 kHz modulation frequency are not leaking into the PDH box after demodulation. However, the lock could not be maintained long enough to take this final measurement. I'll try again tomorrow morning; it is generally cooler in the mornings.


This post is just an update on what's happening. I need to work more to get some meaningful inferences about this loop.

Attachment 1: XAUX_PDH_Err_In_ASD.pdf
Attachment 2: XAUX_PZT_Out_Mon_ASD.pdf
16214 | Fri Jun 18 14:53:37 2021 | Anchal | Summary | PEM | Temperature sensor network proposal

I propose we set up a temperature sensor network as described in attachment 1.

Here there are two types of units:

  • BASE-GATEWAY
    • Holds the processor that talks to the network through the Modbus TCP/IP protocol.
    • This unit itself has a temperature sensor in it. It is powered by a power adaptor or PoE from the switch.
    • Each base unit can have at most 2 extended temperature probes (ENV-TEMP).
  • ENV-TEMP
    • This is just a temperature probe with no other capabilities.
    • It is powered via PoE from the BASE-GATEWAY unit.

The proposal is:

  • To put 2 ENV-TEMP sensors (#1 and #2) along the Y-arm, at the end and midway. These are powered and read through a BASE-GATEWAY (#A) at the vertex. The BASE-GATEWAY (#A) will serve as the temperature sensor for the vertex.
  • To put one ENV-TEMP (#3) at the X-end, powered and read by another BASE-GATEWAY (#B) at the midpoint of the X-arm.
  • Both BASE-GATEWAYs are connected by ethernet cables to the network switch. That's it.

These sensors can be configured over the network by going to their assigned IP addresses. I'm not sure at the moment how to configure the db files to get them to write to slow EPICS channels (a polling sketch is given below).
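
Until the db-file route is figured out, one stopgap could be a script polling the sensors over Modbus TCP; a minimal sketch, assuming something like pymodbus — the IP address, register address, and scaling are hypothetical, not from the datasheet:

    from pymodbus.client import ModbusTcpClient

    client = ModbusTcpClient("192.168.113.240")   # hypothetical BASE-GATEWAY IP
    client.connect()

    # Hypothetical register map: temperature in 0.1 degC units at register 0
    rr = client.read_holding_registers(address=0, count=1)
    temp_c = rr.registers[0] / 10.0
    print(f"Vertex temperature: {temp_c:.1f} C")
    client.close()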

We will have an unused port on the BASE-GATEWAY (#B) which can be used to run another temperature sensor, maybe at an important rack, PSL table or somewhere else.

In the future, if more sensors are required, there are expansion (network-switch-like) options for the BASE-GATEWAY and daisy-chain options for the probes.



Edit Fri Jun 18 16:28:13 2021 :

See this [wiki page](https://wiki-40m.ligo.caltech.edu/Physical_Environment_Monitoring/Thermometers) for updated plan and final quote.

Attachment 1: 40mTempSensors.pdf
16222 | Wed Jun 23 09:05:02 2021 | Anchal | Update | SUS | MC lock acquired back again

The MC was unable to acquire lock because the WFS offsets had been cleared to zero at some point, leaving the MC too misaligned to catch lock again. In such cases, one wants the WFS to start accumulating offsets as soon as a minimal lock is attained, so that the mode cleaner can be aligned automatically. So I did the following, which worked:

  • Changed C1:IOO-WFS_TRIG_WAIT_TIME (the delay in the WFS trigger) from 3 s to 0 s.
  • Reduced C1:IOO-WFS_TRIGGER_THRESH_ON (the switch-on threshold) from 5000 to 1000.
  • Then, as soon as a TEM00 mode was locked with poor efficiency, the WFS loops started aligning the optics to bring it back to lock.
  • After a robust lock was acquired, I restored the two settings I had changed (a script sketch follows below).
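
A minimal sketch of scripting this recovery with pyepics, assuming channel access works from a workstation; the 300 s wait is arbitrary:

    import time
    from epics import caget, caput

    # Let the WFS trigger engage immediately and on a weak TEM00 lock
    old_wait = caget('C1:IOO-WFS_TRIG_WAIT_TIME')
    old_thresh = caget('C1:IOO-WFS_TRIGGER_THRESH_ON')
    caput('C1:IOO-WFS_TRIG_WAIT_TIME', 0)
    caput('C1:IOO-WFS_TRIGGER_THRESH_ON', 1000)

    time.sleep(300)   # wait for the WFS loops to walk the alignment back

    # Restore the nominal trigger settings once the lock is robust
    caput('C1:IOO-WFS_TRIG_WAIT_TIME', old_wait)
    caput('C1:IOO-WFS_TRIGGER_THRESH_ON', old_thresh)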
Quote:

 


At the end, since the MC has trouble catching lock after opening the PSL shutter, I tried burt restoring the ioo to 2021/Jun/17/06:19/c1iooepics.snap, but the problem persists

 

16231 | Wed Jun 30 15:31:35 2021 | Anchal | Summary | Optical Levers | Centered optical levers on ITMY, BS, PRM and ETMY

When both arms were locked, we found that the ITMY optical lever was very off-center. This seems to have happened after the c1susaux rebooting we did on June 17th. I opened the ITMY table and realigned the OpLev beam to the center while the arm was locked. I repeated this process for BS, PRM and ETMY. I did PRM because I knew we had been keeping its OpLev off; the reason was clear once I opened the table: the OpLev reflection beam was hitting the PD box instead of the PD. After correcting it, I was able to switch on the PRM OpLev loops and saw normal functioning.

16232 | Wed Jun 30 18:44:11 2021 | Anchal | Summary | LSC | Tried fixing ETMY QPD

I worked at the Yend station, trying to get the ETMY QPD to work properly. When I started, only one of the 4 quadrants (quadrant #3) was seeing any light. By just adjusting the beam splitter that reflects some light off to the QPD, I was able to get some light onto quadrant #2. However, no amount of steering would show any light in the other quadrants.

The only reason I could think of is that the incoming beam gets partially clipped, as it seems to be hitting the beam splitter near its top edge. For this to work properly, a mirror upstream needs to be adjusted, which would change the alignment on the TRY photodiode. Without light on the TRY photodiode there is no lock, and without lock there is no light, so one can't steer this beam without losing lock.

I tried one trick: I changed the YARM lock trigger to the POY DC signal. I got the lock going even with TRY covered by a beam finder card. However, this lock was still a bit finicky and would lose lock very frequently. It didn't seem worth potentially breaking the YARM locking system for the ETMY QPD without running it by anyone, and this late in the evening. So I reset everything to how it was (except the beam splitter that reflects light to the ETMY QPD; that now has equal light falling on quadrants #2 and #3).

The settings I temporarily changed were:

  • C1:LSC-TRIG_MTRX_7_10 changed from 0 to -1 (uses POY DC as trigger)
  • C1:LSC-TRIG_MTRX_7_13 changed from 1 to 0 (stops using TRY DC as trigger)
  • C1:LSC-YARM_TRIG_THRESH_ON changed from 0.3 to -22
  • C1:LSC-YARM_TRIG_THRESH_OFF changed from 0.1 to -23.6
  • C1:LSC-YARM_FM_TRIG_THRESH_ON changed from 0.5 to -22
  • C1:LSC-YARM_FM_TRIG_THRESH_OFF changed from 0.1 to -23.6

All of these were manually reverted to their previous values at the end.

 

16236 | Thu Jul 1 16:55:21 2021 | Anchal | Summary | Optical Levers | Fixed Centering optical levers PRM

This was a mistake. When the arms are locked, PRM is misaligned by setting a -800 offset in the PIT DOF of PRM. The OpLev is set up to function in the normal state, not in this misaligned configuration. I undid my changes today by switching off the offset, realigning the OpLev to the center, and then restoring the single arm locked state. The PRM OpLev loops are off now.

Quote:

PRM because I've known that we have been keeping its OpLev off. The reason was clear once I opened the table. The oplev reflection beam was hitting the PD box instead of the PD. After correcting, I was able to switch on PRM opLev loops and saw normal functioning.

 

16242 | Fri Jul 9 15:39:08 2021 | Anchal | Summary | ALS | Single Arm Actuation Calibration with IR ALS Beat [Correction]

I did this analysis again by doing the demodulation on 5 s time segments of the 60 s excitation signal. The major difference earlier was that I was not summing up the sine-cosine multiplied signals, so the associated error was much larger. If I simply multiply the whole beatnote signal with a digital LO created at the excitation frequency, divide it into 12 segments of 5 s each, sum them up individually, and then take the mean and standard deviation, I get
$$\frac{6.88 \pm 0.05}{f^2}\ \mathrm{nm/cts}$$
as opposed to $\frac{7.32 \pm 0.03}{f^2}\ \mathrm{nm/cts}$ calculated earlier using the MICH signal by Gautam in 13984.

Attachment 1 shows the scatter plot for the complex calibration factors found for the 12 segments.

My aim in the previous post, however, was to get a time series of the complex calibration factor from which I could take a noise spectral density measurement of the calibration. I'll still look into how to do that. I'll have to add a low pass filter to integrate the signal; then the noise spectrum up to the low pass pole frequency would be available. But what would this noise spectrum really mean? I still have to think a bit about that. I'll put up another post soon. (A sketch of the segmented demodulation is below.)
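
A minimal sketch of the segmented demodulation described above, with placeholder data standing in for the real beatnote channel:

    import numpy as np

    fs, f_exc, T = 2048, 619.0, 60           # placeholder rate; excitation near 619 Hz
    t = np.arange(0, T, 1 / fs)
    beat = np.random.randn(t.size)           # stand-in for the calibrated beatnote signal [Hz]

    # Demodulate with a digital LO at the excitation frequency
    demod = beat * np.exp(-2j * np.pi * f_exc * t)

    # Split into 12 segments of 5 s, average each, then take mean and std.
    # The factor of 2 recovers the single-sided amplitude of the line.
    segs = 2 * demod.reshape(12, -1).mean(axis=1)
    print(abs(segs.mean()), segs.std())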

Quote:

We attempted to simulate "oscillator based realtime calibration noise monitoring" in offline analysis with python. This helped us find a factor of sqrt(2) that we were missing earlier in 16171. We measured C1:ALS-BEATX_FINE_PHASE_OUT_HZ_DQ when the X-ARM was locked to the main laser and the Xend green laser was locked to the XARM. An excitation signal of amplitude 600 was sent at 619 Hz to C1:ITMX_LSC_EXC.

Signal analysis flow:

  • The C1:ALS-BEATX_FINE_PHASE_OUT_HZ_DQ channel is calibrated to give the beatnote frequency in Hz. But we are interested in the fluctuations of this value at the excitation frequency, so the beatnote signal is first high passed with a 50 Hz cut-off. This value can be reduced a lot in a realtime system; we only took 60 s of data and had to remove the first 2 seconds to remove transients, so we didn't reduce this cut-off further.
  • The I and Q demodulated beatnote signals are combined to get a complex beatnote signal amplitude at the excitation frequency.
  • This signal is divided by the excitation amplitude in cts and multiplied by the square of the excitation frequency to get the calibration factor for ITMX in units of nm/cts/Hz^2.
  • The noise spectrum of the absolute value of the calibration factor is plotted in Attachment 1, along with its RMS. The calibration factor was detrended linearly so that the DC value was removed before taking the spectrum.
  • So Attachment 1 is the spectrum of the noise in the calibration factor when measured with this method. The shaded region is the 15.865% - 84.135% percentile region around the solid median curves.

We got a value of $\frac{7.3 \pm 3.9}{f^2}\ \mathrm{nm/cts}$. The calibration factor in use is $\frac{7.32}{f^2}\ \mathrm{nm/cts}$ from 13984.

A next step could be to budget this noise while we set up some way of generating this calibration factor in realtime using oscillators in an FE model. Calibrating the actuation of a single optic in a single arm is easy, so this is a good test setup for getting a noise budget of this calibration method.

 

Attachment 1: ITMX_calibration_With_ALS_Beat.pdf
16261 | Tue Jul 27 23:04:37 2021 | Anchal | Update | LSC | 40 meter party

[ian, anchal, paco]

After our second attempt at locking PRFPMI tonight, we tried to restore the XARM and YARM locks to IR by clicking IFO_CONFIGURE > Restore XARM (POX) and IFO_CONFIGURE > Restore YARM (POY), but the arms did not lock. The green lasers were locked to the arms at maximum power, so the relative alignment of each cavity was OK. We were also able to lock PRMI using IFO_CONFIGURE > Restore PRMI carrier.

This was very weird to us. We were pretty sure that the alignment was correct, so we decided to check the POX/POY signal chain. There was essentially no signal coming in at POX11, and there was a -100 offset on it. We could see some PDH signal on POY11, but not enough to catch the locks.

We tried running IFO_CONFIGURE>LSC OFFSETS to cancel out any dark current DC offsets. The changes made by the script are shown in attachment 1.

We went to check the tables and found no light visible on beam finder cards at POX11 or POY11. We found that ITMX was stuck on one of the coils. We unstuck it using the shaking method. The OPLEVs on ITMX after this could not be switched on, as the OPLEV servos were railing at their limits. But when we ran Restore XARM (POX) again, they started working fine. Something is done by this script that we are not aware of.

We're stopping here. We still cannot lock either of the single arms.


Wed Jul 28 11:19:00 2021 Update:

[gautam, paco]

Gautam found that the restoring of POX/POY failed to restore the whitening filter gains on POX11/POY11. These are meant to be restored to 30 dB and 18 dB for POX11 and POY11 respectively, but were set to 0 dB, to the detriment of any POX/POY triggering/locking. The reason these are lowered is to avoid saturating the speakers during lock acquisition. Yesterday, burt-restore didn't work because we restored c1lscepics.snap, but the said gains are actually in c1lscaux.snap. After manually restoring the POX11 and POY11 whitening filter gains, gautam ran the LSCOffsets script. The XARM and YARM locked quickly after we restored these settings.

The root of our issue may be that we didn't run the CARM & DARM watch script (which can be accessed from the ALS/Watch Scripts in medm). Gautam added a line to the Transition_IR_ALS.py script to run the watch script instead.

Attachment 1: Screenshot_2021-07-27_22-19-58.png
Screenshot_2021-07-27_22-19-58.png
  16264   Wed Jul 28 17:10:24 2021 AnchalUpdateLSCSchnupp asymmetry

[Anchal, Paco]

I redid the measurement of the Schnupp asymmetry today and found it to be 3.8 \pm 0.9 cm.


Method

  • One of the arms is misaligned, both at the ITM and the ETM.
  • The other arm is locked and aligned using ASS.
  • The SRCL oscillator's output is redirected to the ETM of the chosen arm.
  • The demodulation of AS55_Q at the SRCL oscillator frequency is configured (phase corrected) so that all of the signal appears in C1:CAL-SENSMAT_SRCL_AS55_Q_DEMOD_I_OUT.
  • The rotation angle of the AS55 RFPD demodulation is scanned, and C1:CAL-SENSMAT_SRCL_AS55_Q_DEMOD_I_OUT is averaged over 10 s after waiting 5 s to let the transients pass.
  • These data are used to find the zero crossing of the AS55_Q signal when light is coming from one particular arm only.
  • The same is repeated for the other arm.
  • The difference between the zero-crossing phase angles is twice the phase accumulated by a 55 MHz signal in travelling the length difference between the arm cavities, i.e. the Schnupp asymmetry.

I measured a phase difference of 5 \pm 1 degrees between the two paths.
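As a quick consistency check, converting this phase difference to a length asymmetry (a short python computation; the nominal 55 MHz modulation frequency is assumed):

import numpy as np

c = 299792458.0           # speed of light (m/s)
f_mod = 55e6              # nominal 55 MHz modulation frequency
dphi = np.deg2rad(5.0)    # measured zero-crossing phase difference
ddphi = np.deg2rad(1.0)   # its uncertainty

# dphi = 2 * (2*pi*f_mod/c) * L_sch, so:
L_sch = dphi * c / (4 * np.pi * f_mod)
dL_sch = ddphi * c / (4 * np.pi * f_mod)
print(f"{L_sch * 100:.1f} +/- {dL_sch * 100:.1f} cm")   # -> 3.8 +/- 0.8 cm

The phase uncertainty alone gives about 0.8 cm; the quoted 0.9 cm presumably includes additional contributions.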

The uncertainty in this measurement is much larger than in gautam's measurement in 15956. I'm not sure yet why, but I will look into it.

 

Quote:

I used the Valera technique to measure the Schnupp asymmetry to be \approx 3.5 \, \mathrm{cm}, see Attachment #1. The data points are points, and the zero crossing is estimated using a linear fit. I repeated the measurement 3 times for each arm to see if I get consistent results - seems like I do. Subtle effects like possible differential detuning of each arm cavity (since the measurement is done one arm at a time) are not included in the error analysis, but I think it's not controversial to say that our Schnupp asymmetry has not changed by a huge amount from past measurements. Jamie set a pretty high bar with his plot which I've tried to live up to. 

 

Attachment 1: Lsch.pdf
Lsch.pdf
  16268   Tue Aug 3 20:20:08 2021 AnchalUpdateOptical LeversRecentered ETMX, ITMX and ETMY oplevs at good state

Late elog. Original time 08/02/2021 21:00.

I locked both arms and ran ASS to reach the optimum alignment. ETMY PIT was > 10 urad, ITMX PIT was > 10 urad, and ETMX PIT was < -10 urad. Everything else was OK, with absolute values less than 10 urad. I recentered these three.

Then I locked PRMI, ran ASS on PRCL and MICH, and checked the BS and PRM alignment. These were also below 10 urad in absolute value.

  16270   Thu Aug 5 14:59:31 2021 AnchalUpdateGeneralAdded temperature sensors at Yend and Vertex too

I've added the other two temperature sensor modules at the Y end (on 1Y4, IP: 192.168.113.241) and at the vertex (on 1X2, IP: 192.168.113.242). I've updated the martian host table accordingly. From inside the martian network, one can point a browser at the IP address to see the temperature sensor status. These sensors can be set to trigger alarms and send emails/SMS etc. if the temperature goes out of a defined range.

I feel something is off, though. The vertex sensor shows a temperature of ~28 degrees C, the Xend says 20 degrees C, and the Yend says 26 degrees C. I believe these sensors might need calibration.

The remaining tasks are the following:

  • Modbus TCP solution:
    • If we get this right, it will be the easiest solution.
    • We just need to add these sensors as streaming devices in some slow EPICS machine's .cmd file and add the temperature-sensing channels in a corresponding database file.
  • Python workaround:
    • Might be faster, but dirty.
    • We run a python script on megatron which requests temperature values every second or so from the IP addresses and writes them to soft EPICS channels.
    • We would still need to create the soft EPICS channels for this and add them to the framebuilder data acquisition list.
    • An even shorter workaround for the near future could be to just write the temperature every 30 min to a log file in some location.

[anchal, paco]

We made a script at scripts/PEM/temp_logger.py and ran it on megatron. The script uses the requests package to query the latest sensor data from the three sensors every 10 minutes as JSON and logs it accordingly. This is not a permanent solution.
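For reference, a minimal sketch of such a poller (the JSON endpoint path and the 'temperature' field name are assumptions, not the sensors' documented API):

import time
import requests

SENSORS = {'Yend': '192.168.113.241', 'vertex': '192.168.113.242'}  # the Xend sensor would be added similarly

while True:
    stamp = time.strftime('%Y-%m-%d %H:%M:%S')
    for name, ip in SENSORS.items():
        try:
            # hypothetical endpoint; replace with the sensor's real JSON URL
            data = requests.get(f'http://{ip}/data.json', timeout=5).json()
            print(f"{stamp} {name} {data.get('temperature')}")
            # a soft EPICS channel could be written here instead, e.g. with pyepics caput
        except requests.RequestException as err:
            print(f'{stamp} {name} query failed: {err}')
    time.sleep(600)  # poll every 10 minutes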

  16271   Fri Aug 6 13:13:28 2021 AnchalUpdateBHDc1teststand subnetwork now accessible remotely

c1teststand subnetwork is now accessible remotely. To log into this network, one needs to do the following:

  • Log into nodus or pianosa. (This will only work from these two computers)
  • ssh -CY controls@192.168.113.245
  • Password is our usual workstation password.
  • This will log you into c1teststand network.
  • From here, you can log into fb1, chiara, c1bhd, and c1sus2, which are all part of the teststand subnetwork.

Just to document the IT work I did: making this connection was a bit more involved than usual.

  • The martian subnetwork is created by a NAT router which connects only nodus to the outside GC network; all computers within the network have IP addresses 192.168.113.xxx with a subnet mask of 255.255.255.0.
  • The cloned teststand network was also running on the same IP address scheme, mostly because fb1 and chiara are clones in this network. So every computer in this network also had IP addresses 192.168.113.xxx.
  • I set up a NAT router to connect to the martian network, forwarding ssh requests to the c1teststand computer. My NAT router creates a separate subnet with IP addresses 10.0.1.xxx and subnet mask 255.255.255.0, gated through 10.0.1.1.
  • However, the issue is that from c1teststand, two networks are now accessible which have the same IP addresses 192.168.113.xxx. So when you try to ssh, it always searches its local c1teststand subnetwork instead of routing through the NAT router to the martian network.
  • To work around this, I had to manually provide IP routes from c1teststand to two of the computers (nodus and pianosa) in the martian network. This is done by:
    ip route add 192.168.113.200 via 10.0.1.1 dev eno1
    ip route add 192.168.113.216 via 10.0.1.1 dev eno1
  • This gives c1teststand a specific path for ssh requests to/from these computers in the martian network.
  16273   Mon Aug 9 10:38:48 2021 AnchalUpdateBHDc1teststand subnetwork now accessible remotely

I had to add the following two lines in the /etc/network/interfaces file to make the special IP routes persistent even after reboot:

post-up ip route add 192.168.113.200 via 10.0.1.1 dev eno1
post-up ip route add 192.168.113.216 via 10.0.1.1 dev eno1

  16282   Wed Aug 18 20:30:12 2021 AnchalUpdateASSFixed runASS scripts

Late elog: Original time of work Tue Aug 17 20:30 2021


I locked the arms yesterday remotely and tried running the runASS.py scripts (generally run by clicking the Run ASS buttons on the IFO OVERVIEW screen or the ASC screen). We have known for a few weeks that this script stopped working for some reason. It would start the dithering and optimize the alignment, but then would fail to freeze the state and save the alignment.

I found that caget('C1:LSC-TRX_OUT') and caget('C1:LSC-TRY_OUT') were not working on any of the workstations. This is weird, since caget was able to acquire these fast channel values earlier, and we had seen this script work for about a month without any issue.

Anyway, to fix this, I just changed the channel name to 'C1:LSC-TRY_OUT16' in the step where the script checks at the end whether the arm has indeed been aligned. It was only this step that was failing. Now the script is working fine, and I tested it on both arms. On the Y arm, I misaligned the arm by adding a bias in yaw, changing C1:SUS-ITMY_YAW_OFFSET from -8 to 22. The script was able to align the arm back.
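For the record, the failing step amounts to something like the following (a sketch assuming the pyepics interface; the threshold and averaging are illustrative, not the script's actual values):

import time
from epics import caget

def arm_aligned(chan='C1:LSC-TRY_OUT16', threshold=0.9, n_avg=10):
    # average the slow (16 Hz) transmission readback and compare to a threshold
    samples = []
    for _ in range(n_avg):
        samples.append(caget(chan))
        time.sleep(0.1)
    return sum(samples) / len(samples) > threshold

Reading the slow _OUT16 version of the channel avoids the fast test-point acquisition that caget was failing on.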

  16283   Thu Aug 19 03:23:00 2021 AnchalUpdateCDSTime synchronization not running

I tried to read a bit and understand the NTP synchronization implementation on the FE computers. I'm quite sure that 'NTP synchronized' should read 'yes' in the output of timedatectl on these computers if timesyncd were running correctly. As Koji reported in 15791, this is not the case. I logged into c1lsc, c1sus and c1ioo and saw that the RTC has drifted from the software clocks too, which would not happen if NTP synchronization were active. This would mean that almost certainly, if the computers are rebooted, the synchronization will be lost and the models will fail to come online.

My current findings are the following (this should be documented in wiki once we setup everything):

  • nodus is running an NTP server using chronyd. One can check the configuration of this NTP server in /etc/chronyd.conf.
  • fb1 is running an NTP server using ntpd that follows nodus and an IP address 131.215.239.14. This can be seen in /etc/ntp.conf.
  • There are no comments to describe what this other server (131.215.239.14) is. Does the GC network have an NTP server too?
  • c1lsc, c1sus and c1ioo all have systemd-timesyncd.service running with configuration file in /etc/systemd/timesyncd.conf.
  • The configuration file sets Servers=ntpserver, but echo $ntpserver produces nothing (blank) on these computers, and I've been unable to find any place where ntpserver is defined.
  • In chiara (our name server), the name server file /etc/hosts does not have any entry for ntpserver either.
  • I think the problem might be that these computers are unable to find the ntpserver as it is not defined anywhere.

The solution to this issue could be as simple as just defining ntpserver in the name server list. But I'm not sure if my understanding of this issue is correct. Comments/suggestions are welcome for future steps.

 

  16285   Fri Aug 20 00:28:55 2021 AnchalUpdateCDSTime synchronization not running

I added ntpserver as a known host name for address 192.168.113.201 (fb1's address where ntp server is running) in the martian host list in the following files in Chiara:

/var/lib/bind/martian.hosts
/var/lib/bind/rev.113.168.192.in-addr.arpa

Note: a host name called ntp was already defined at 192.168.113.11 but I don't know what computer this is.

Then, I restarted the DNS on chiara by doing:

sudo service bind9 restart

Then I logged into c1lsc and c1ioo and ran following:

controls@c1ioo:~ 0$ sudo systemctl restart systemd-timesyncd.service

controls@c1ioo:~ 0$ sudo systemctl status systemd-timesyncd.service -l
● systemd-timesyncd.service - Network Time Synchronization
   Loaded: loaded (/lib/systemd/system/systemd-timesyncd.service; enabled)
   Active: active (running) since Fri 2021-08-20 07:24:03 UTC; 53s ago
     Docs: man:systemd-timesyncd.service(8)
 Main PID: 23965 (systemd-timesyn)
   Status: "Idle."
   CGroup: /system.slice/systemd-timesyncd.service
           └─23965 /lib/systemd/systemd-timesyncd

Aug 20 07:24:03 c1ioo systemd[1]: Starting Network Time Synchronization...
Aug 20 07:24:03 c1ioo systemd[1]: Started Network Time Synchronization.
Aug 20 07:24:03 c1ioo systemd-timesyncd[23965]: Using NTP server 192.168.113.201:123 (ntpserver).
Aug 20 07:24:35 c1ioo systemd-timesyncd[23965]: Using NTP server 192.168.113.201:123 (ntpserver).
controls@c1ioo:~ 0$ timedatectl
      Local time: Fri 2021-08-20 07:25:28 UTC
  Universal time: Fri 2021-08-20 07:25:28 UTC
        RTC time: Fri 2021-08-20 07:25:31
       Time zone: Etc/UTC (UTC, +0000)
     NTP enabled: yes
NTP synchronized: no
 RTC in local TZ: no
      DST active: n/a

The same output is shown on c1lsc too. The 'NTP synchronized' flag in the output of the timedatectl command did not change to yes, and the RTC is still 3 seconds ahead of the local clock.

Then I went to c1sus to see what the status output was before restarting the timesyncd service. I got the following output:

controls@c1sus:~ 0$ sudo systemctl status systemd-timesyncd.service -l
● systemd-timesyncd.service - Network Time Synchronization
   Loaded: loaded (/lib/systemd/system/systemd-timesyncd.service; enabled)
   Active: active (running) since Tue 2021-08-17 04:38:03 UTC; 3 days ago
     Docs: man:systemd-timesyncd.service(8)
 Main PID: 243 (systemd-timesyn)
   Status: "Idle."
   CGroup: /system.slice/systemd-timesyncd.service
           └─243 /lib/systemd/systemd-timesyncd

Aug 20 02:02:18 c1sus systemd-timesyncd[243]: Using NTP server 192.168.113.201:123 (ntpserver).
Aug 20 02:36:27 c1sus systemd-timesyncd[243]: Using NTP server 192.168.113.201:123 (ntpserver).
Aug 20 03:10:35 c1sus systemd-timesyncd[243]: Using NTP server 192.168.113.201:123 (ntpserver).
Aug 20 03:44:43 c1sus systemd-timesyncd[243]: Using NTP server 192.168.113.201:123 (ntpserver).
Aug 20 04:18:51 c1sus systemd-timesyncd[243]: Using NTP server 192.168.113.201:123 (ntpserver).
Aug 20 04:53:00 c1sus systemd-timesyncd[243]: Using NTP server 192.168.113.201:123 (ntpserver).
Aug 20 05:27:08 c1sus systemd-timesyncd[243]: Using NTP server 192.168.113.201:123 (ntpserver).
Aug 20 06:01:16 c1sus systemd-timesyncd[243]: Using NTP server 192.168.113.201:123 (ntpserver).
Aug 20 06:35:24 c1sus systemd-timesyncd[243]: Using NTP server 192.168.113.201:123 (ntpserver).
Aug 20 07:09:33 c1sus systemd-timesyncd[243]: Using NTP server 192.168.113.201:123 (ntpserver).

This actually shows that the service was able to find ntpserver correctly at 192.168.113.201 even before I changed the name server file in chiara. So I'm retracting the changes made to the name server; they are probably not required.

The timesyncd.conf configuration files are read-only even with sudo. I tried changing the permissions, but that did not work either. Maybe these files are not correctly configured: the man page of timesyncd says to use the field 'NTP' to list the NTP servers, while our files use the field 'Servers'. But since we are not getting any error message, I don't think this is the issue here.
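For reference, the man page's form of the configuration (assuming the host name ntpserver resolves through our DNS) would simply be:

[Time]
NTP=ntpserver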

I'll look more into this problem.

  16286   Fri Aug 20 06:24:18 2021 AnchalUpdateCDSTime synchronization not running

I read on some Stack Exchange post that the 'NTP synchronized' indicator turns 'yes' in the output of the command timedatectl only when the RTC clock has been adjusted at some point. I also read that timesyncd does not make the adjustment if the time difference is too large, roughly more than 3 seconds.

So I logged into all the FE machines and ran sudo hwclock -w to synchronize the RTCs to the system clocks, and then waited to see if timesyncd makes any corrections to the RTC. It did not. A few hours later, I found the RTC clocks drifting away from the system clocks again. So even though the timesyncd service is running as it should, it is not performing time corrections for whatever reason.

Maybe we should try to use some other service?

Quote:
 

The 'NTP synchronized' flag in the output of the timedatectl command did not change to yes, and the RTC is still 3 seconds ahead of the local clock.

 

  16291   Mon Aug 23 22:51:44 2021 AnchalUpdateGeneralTime synchronization efforts

Related elog thread: 16286


I didn't really achieve anything but I'm listing what I've tried.

  • I now know that timesyncd isn't working because systemd-timesyncd is known to have issues when running on a read-only file system. In particular, the service does not have privileges to change the clock or drift settings at /run/systemd/clock or /etc/adjtime.
  • The workarounds to these problems are poorly rated/reviewed on Stack Exchange and require changing the /etc/systemd/timesyncd.conf file, which I'm unable to edit.
  • I know that Paco was able to change these files earlier, as the files are now changed and configured to follow a Debian NTP pool server, which won't work since the FEs do not have internet access. So the conf file needs to be restored to using ntpserver as the NTP server.
  • From system messages, ntpserver is recognized by the service, as shown in the second part of 16285. I really think the issue is in file permissions; the file /etc/adjtime has not been updated since 2017.
  • I got help from Paco on how to edit files for the FE machines. The FE machines' directories are exported from fb1:/diskless/root.jessie/
  • I restored the /etc/systemd/timesyncd.conf file to how it was before, with just the Servers=ntpserver line, and restarted the timesyncd service on all FEs, but the synchronization did not happen.
  • I tried a few suggestions from Stack Exchange, but none of them worked. The only rated solution creates a tmpfs directory outside the read-only filesystem and uses that to run timesyncd. So, in my opinion, timesyncd will never work on our diskless, read-only-filesystem FE machines.
  • One Arch Linux discussion ended with the questioner resorting to openntpd from the OpenBSD distribution. The user claimed that openntpd is simple enough that it can run NTP synchronization on a read-only file system.
  • Somewhat painfully, I 'kind of' installed the openntpd tool in the fb1:/diskless/root.jessie directory following directions from here. I had to manually add the user and group for the FEs (which I might not have done correctly). I was not able to get the openntpd daemon to start properly after some tries.
  • I restored everything back to how it was and restarted timesyncd on c1sus, even though it would not really do anything.
Quote:

This time no matter how we try to set the time, the IOPs do not run with "DC status" green. (We kept having 0x4000)

 

  16292   Tue Aug 24 09:22:48 2021 AnchalUpdateGeneralTime synchronization working now

Jamie told me to use chroot to log into the chroot jail of the Debian OS that is exported to the FEs and install ntp there. I took the following steps, at the end of which all FEs now have NTP synchronized.

  • I logged into fb1 through nodus.
  • chroot /diskless/root.jessie /bin/bash took me to the bash terminal for the Debian OS that is exported to all FEs.
  • Here, I ran sudo apt-get install ntp which ran without any errors.
  • I then edited /etc/ntp.conf, removed the default servers, and added the following server lines (the fb1 and nodus IP addresses):
    server 192.168.113.201
    server 192.168.113.200
  • I logged into each FE machine and ran following commands:
    sudo systemctl stop systemd-timesyncd.service; sudo systemctl status systemd-timesyncd.service;
    timedatectl; sleep 2;sudo systemctl daemon-reload;  sudo systemctl start ntp; sleep 2; sudo systemctl status ntp; timedatectl
    sudo hwclock -s
    • The first line ensures that systemd-timesyncd.service is not running anymore. I did not uninstall timesyncd and left its configuration file as it is.
    • The second line first shows the times of the local and RTC clocks, then reloads the daemon services to get ntp registered, then starts ntp.service and shows its status. Finally, the timedatectl command shows the synchronized clocks and that NTP synchronization has occurred.
    • The last line sets the local clock from the RTC clock. Even though this wasn't required, as I saw the clocks were already the same to the second, I just wanted a point where all the local clocks were synchronized to the NTP server.
  • Hopefully, this will resolve our issue of having to restart the models any time some glitch happens or when we need to update something in one of them.

Edit Tue Aug 24 10:19:11 2021:

I also disabled timesyncd on all FEs using sudo systemctl disable systemd-timesyncd.service

I've added this wiki page for summarizing the NTP synchronization knowledge.

  16295   Tue Aug 24 22:37:40 2021 AnchalUpdateGeneralTime synchronization not really working

I attempted to install chrony and run it on one of the FE machines. It didn't work, and in the process I also lost the working NTP client service on the FE computers. Some details follow:

  • I added the following two mirrors in the apt source list of root.jessie at /etc/apt/sources.list
    deb http://ftp.us.debian.org/debian/ jessie main contrib non-free
    deb-src http://ftp.us.debian.org/debian/ jessie main contrib non-free
  • Then I installed chrony in the root.jessie using
    sudo apt-get install chrony
    • I was getting an error: E: Can not write log (Is /dev/pts mounted?) - posix_openpt (2: No such file or directory). To fix this, I had to run:
      sudo mount -t devpts none "$rootpath/dev/pts" -o ptmxmode=0666,newinstance
      sudo ln -fs "pts/ptmx" "$rootpath/dev/ptmx"
    • Then, I had another error to resolve.
      Failed to read /proc/cmdline. Ignoring: No such file or directory
      start-stop-daemon: nothing in /proc - not mounted?
      To fix this, I had to exit to fb1 and run:
      sudo mount --bind /proc /diskless/root.jessie/proc
    • With these steps, chrony was finally installed, but I immediately saw an error message saying:
      Starting /usr/sbin/chronyd...
      Could not open NTP sockets
  • I figured this must be due to ntp running on the FE machines. I logged into c1iscex and stopped and disabled the ntp service:
    sudo systemctl stop ntp
    sudo systemctl disable ntp
    • I saw some error messages from the above command, as the FEs are read-only file systems:
      Synchronizing state for ntp.service with sysvinit using update-rc.d...
      Executing /usr/sbin/update-rc.d ntp defaults
      insserv: fopen(.depend.stop): Read-only file system
      Executing /usr/sbin/update-rc.d ntp disable
      update-rc.d: error: Read-only file system
    • So I went back to the chroot on fb1 and ran the two commands above that had failed:
      /usr/sbin/update-rc.d ntp defaults
      /usr/sbin/update-rc.d ntp disable
    • The last line gave the output:
      insserv: warning: current start runlevel(s) (empty) of script `ntp' overrides LSB defaults (2 3 4 5).
      insserv: warning: current stop runlevel(s) (2 3 4 5) of script `ntp' overrides LSB defaults (empty).
    • I ignored this and moved forward.
  • I copied the chronyd.service from nodus to the chroot in fb1 and configured it to use nodus as the server. Then I started the chronyd.service and checked it:

    sudo systemctl status chronyd.service
    but got the same NTP sockets issue:

    ● chronyd.service - NTP client/server
       Loaded: loaded (/usr/lib/systemd/system/chronyd.service; disabled)
       Active: failed (Result: exit-code) since Tue 2021-08-24 21:52:30 PDT; 5s ago
      Process: 790 ExecStart=/usr/sbin/chronyd $OPTIONS (code=exited, status=1/FAILURE)

    Aug 24 21:52:29 c1iscex systemd[1]: Starting NTP client/server...
    Aug 24 21:52:30 c1iscex chronyd[790]: Could not open NTP sockets
    Aug 24 21:52:30 c1iscex systemd[1]: chronyd.service: control process exited, code=exited status=1
    Aug 24 21:52:30 c1iscex systemd[1]: Failed to start NTP client/server.
    Aug 24 21:52:30 c1iscex systemd[1]: Unit chronyd.service entered failed state.

  • I tried a few things to resolve this, but couldn't get it to work. So I gave up on using chrony and decided to at least go back to the ntp service.

  • I stopped, disabled, and checked the status of chrony:
    sudo systemctl stop chronyd
    sudo systemctl disable chronyd
    sudo systemctl status chronyd
    This gave the output:

    ● chronyd.service - NTP client/server
       Loaded: loaded (/usr/lib/systemd/system/chronyd.service; disabled)
       Active: failed (Result: exit-code) since Tue 2021-08-24 22:09:07 PDT; 25s ago

    Aug 24 22:09:07 c1iscex systemd[1]: Starting NTP client/server...
    Aug 24 22:09:07 c1iscex chronyd[2490]: Could not open NTP sockets
    Aug 24 22:09:07 c1iscex systemd[1]: chronyd.service: control process exited, code=exited status=1
    Aug 24 22:09:07 c1iscex systemd[1]: Failed to start NTP client/server.
    Aug 24 22:09:07 c1iscex systemd[1]: Unit chronyd.service entered failed state.
    Aug 24 22:09:15 c1iscex systemd[1]: Stopped NTP client/server.

  • I went back to the fb1 chroot, removed the chrony package, and deleted the configuration files and systemd service files:
    sudo apt-get remove chrony

  • But when I started the ntp daemon service back on c1iscex, it gave an error:
    sudo systemctl restart ntp
    Job for ntp.service failed. See 'systemctl status ntp.service' and 'journalctl -xn' for details.

  • Status shows:

    sudo systemctl status ntp
    ● ntp.service - LSB: Start NTP daemon
       Loaded: loaded (/etc/init.d/ntp)
       Active: failed (Result: exit-code) since Tue 2021-08-24 22:09:56 PDT; 9s ago
      Process: 2597 ExecStart=/etc/init.d/ntp start (code=exited, status=5)

    Aug 24 22:09:55 c1iscex systemd[1]: Starting LSB: Start NTP daemon...
    Aug 24 22:09:56 c1iscex systemd[1]: ntp.service: control process exited, code=exited status=5
    Aug 24 22:09:56 c1iscex systemd[1]: Failed to start LSB: Start NTP daemon.
    Aug 24 22:09:56 c1iscex systemd[1]: Unit ntp.service entered failed state.

  • I tried to re-enable the ntp service with sudo systemctl enable ntp. I got similar read-only filesystem error messages as earlier.
    Synchronizing state for ntp.service with sysvinit using update-rc.d...
    Executing /usr/sbin/update-rc.d ntp defaults
    insserv: warning: current start runlevel(s) (empty) of script `ntp' overrides LSB defaults (2 3 4 5).
    insserv: warning: current stop runlevel(s) (2 3 4 5) of script `ntp' overrides LSB defaults (empty).
    insserv: fopen(.depend.stop): Read-only file system
    Executing /usr/sbin/update-rc.d ntp enable
    update-rc.d: error: Read-only file system

    • I went back to chroot in fb1 and ran:
      /usr/sbin/update-rc.d ntp defaults
      insserv: warning: current start runlevel(s) (empty) of script `ntp' overrides LSB defaults (2 3 4 5).
      insserv: warning: current stop runlevel(s) (2 3 4 5) of script `ntp' overrides LSB defaults (empty).
      and
      /usr/sbin/update-rc.d ntp enable

  • I came back to c1iscex and tried restarting the ntp service, but got the same error messages as above with exit code 5.

  • I checked c1sus; ntp was running there. I tested the configuration by restarting the ntp service, and it then failed with the same error message. So the remaining three FEs (c1lsc, c1ioo and c1iscey) have a running ntp service, but it will fail if it is ever restarted.

  • As a last try, I rebooted c1iscex to see if ntp would come back online nicely, but it didn't.

Bottom line: I went to try chrony on the FEs, and I ended up breaking the ntp client services on the computers as well. We have no NTP synchronization on any of the FEs.

Even though Paco and I are learning about the ntp and cds stuff, I think it's time we get help from someone with real experience. The lab has not been in a good state for far too long.

Quote:

tl;dr: NTP servers and clients were never synchronized, are not synchronizing even with ntp... nodus is synchronized but uses chronyd; should we use chronyd everywhere?

 

  16322   Mon Sep 13 15:14:36 2021 AnchalUpdateLSCXend Green laser injection mirrors M1 and M2 not responsive

While showing some green laser locking to Tega, I noticed that changing the PZT sliders for the M1/M2 angular positions on the Xend had no effect on the locked TEM01 or TEM00 mode. This is odd, as changing these sliders should increase or decrease the mode matching into these modes. I suspect the controls are not working correctly and the PZTs are either not powered up or not connected. We'll investigate this in the near future as priorities allow.

  16330   Tue Sep 14 17:22:21 2021 AnchalUpdateCDSAdded temp sensor channels to DAQ list

[Tega, Paco, Anchal]

We attempted to reboot fb1 daqd today to get the new temperature sensor channels recording. However, the FE models got stuck, apparently due to the reasons explained in 40m/16325. Jamie cleared /var/logs on fb1 so that the FEs could reboot. After this work, we were able to reboot the FE machines successfully and get the models running too. During the day, the FE machines were shut down and brought back manually, a couple of times for the c1iscex machine. The only change on fb1 is in /opt/rtcds/caltech/c1/chans/daq/C0EDCU.ini, where the new channels were added, along with some hacking by Jamie in the gpstime module (see 40m/16327).
