ID 16997 | Wed Jul 13 12:49:25 2022 | Paco | Summary | SUS | SUS frozen

[Paco, JC, Yuta]

This morning, while investigating the source of a burning smell, we turned off the c1SUS 1X4 power strip powering the Sorensens. After this, we noticed the MC1 REFL beam was not on the camera, and in general the other vertex SUS were misaligned, even though JC had aligned the IFO in the morning to nearly optimal arm cavity flashing. After a c1susaux modbusIOC service restart and a burt restore, the problem persisted.

We started to debug the SUS rack chain for PRM, since its oplev beam was still close to aligned and could be used as a sensor. The first odd thing we noticed was that no matter how hard we "kicked" PRM, we saw no motion on the oplev. We repeatedly kicked the UL coil and looked at the coil driver inputs and outputs, and also verified that the eurocard had DC power, which it did. Somehow, disconnecting the acromag inputs didn't affect the medm screen values, which made us suspect something was wrong with these ADCs.


Because all the slow channels were in a frozen state, we tried restarting c1susaux and the acromag chassis and this fixed the issue.

ID 17007 | Fri Jul 15 19:13:22 2022 | Paco | Summary | LSC | FPMI with REFL/AS55 demod phase adjust

[Yuta, Paco]

  • We first zeroed the offsets in ASDC, AS55, REFL55, POX11, and POY11 with the PSL shutter closed.
    • After this, we checked the offsets with only ITMX aligned. Some of the RFPDs had ~2 counts of offset, which indicates some RFAM of the sidebands, but we decided not to tune the Marconi frequencies since the offsets were small enough.
  • We went over the demod phases for AS55, REFL55, POX11, and POY11.
    • For POX11/POY11, we first simply minimized the Q signal with each of XARM/YARM locked individually. The new values were
      • C1:LSC-POX11_PHASE_R = 106.991
      • C1:LSC-POY11_PHASE_R = -12.820
    • Then we misaligned XARM (removing the MICH fringe at the ASDC port with an ITMX yaw offset), locked YARM using AS55_Q and REFL55_I, and found the demod phases that minimized AS55_I and REFL55_Q. The new values were
      • C1:LSC-AS55_PHASE_R = -65.9586
      • C1:LSC-REFL55_PHASE_R = -78.6254
    • Repeating the above, but now misaligning YARM with an ITMY yaw offset and locking XARM with AS55_Q and REFL55_I, we found the demod phases that minimized AS55_I and REFL55_Q. The new values were
      • C1:LSC-AS55_PHASE_R = -61.4361
      • C1:LSC-REFL55_PHASE_R = -71.0434
  • From the difference of the demod phases above, we measured the Schnupp asymmetry between X and Y. We repeated the measurement three times to estimate the error (a quick consistency check follows after this list).
    • The optimal demod phase difference between the X arm and Y arm, for both AS55 and REFL55, was measured to be -4.5 +/- 0.1 deg, which corresponds to lx-ly = 3.39 +/- 0.05 cm (Marconi frequency: 11.066195 MHz).
  • We measured the gain ratio between AS55_Q and POX11/POY11 to be -0.5.
  • We measured the gain ratio between REFL55_I and POX11/POY11 to be -2.5.
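As a quick consistency check of the quoted asymmetry, here is a minimal sketch. It assumes the relevant sideband is the 5th harmonic of the Marconi frequency (~55.33 MHz) and that the demod phase difference maps to the one-way length difference via dphi = 4*pi*f_mod*dl/c (round-trip propagation of the RF sideband):

  import numpy as np
  c = 299792458.0                      # m/s
  f_mod = 5 * 11.066195e6              # Hz, 55 MHz sideband (5x Marconi frequency)
  dphi = np.deg2rad(4.5)               # measured demod phase difference
  dl = dphi * c / (4 * np.pi * f_mod)  # one-way Schnupp asymmetry
  print(f"lx - ly = {100*dl:.2f} cm")  # ~3.39 cm, consistent with the quoted value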

After this, we locked DARM, CARM, and MICH using POX11_I, POY11_I, and AS55 error signals respectively, actuating on ETMX, MC2, and BS, with NO TRIGGERS (but FM triggers were on for the boosts as usual). In this configuration, FM5 is used for lock acquisition, and FM1, FM2, FM3, and FM6 are turned on by FM triggers. FM4 was not used. We also noticed:

  • CARM FM6 "BounceRoll" is slightly different from YARM FM6 "Bounce". The absence of the roll resonant gain actually makes CARM easier to control; we just had to use the YARM filter to lock it.
  • When CARM is controlled, we often just kick ETMX to bring it near resonance, since the frequency noise drops and we would otherwise have to wait a long time.
ID 17012 | Mon Jul 18 16:39:07 2022 | Paco | Summary | LSC | FPMI locking procedure using REFL55 and AS55

[Yuta, Paco]

In summary, we locked FPMI using REFL55_I, REFL55_Q, and AS55_Q. The key to success was to mix POX11_I and POY11_I in the right way to emulate CARM/DARM, and to find out the correct demodulation phase for AS55.


Procedure

  1. Close PSL shutter and zero offsets in AS55, REFL55, POX11, POY11, and ASDC
    • For ASDC, run python3 resetOffsets.py -c C1:LSC-ASDC_IN1; otherwise use the zero-offsets buttons on the I and Q inputs from the RFPD medm screen.
  2. Lock XARM/YARM using POX/POY to tune demodulation phase.
    • Today, the demod phase in POX11 changed to 104.801 deg, and POY11 to -11.256 deg.
  3. XARM and YARM are used in the following configuration
    • INMAT
      • 0.5 * POX11_I - 0.5 * POY11_I --> XARM
      • 0.5 * POX11_I + 0.5 * POY11_I --> YARM
      • REFL55_Q --> MICH (** this should be turned on after POX11/POY11)
    • LSC Filter gains
      • XARM = 0.012
      • YARM = 0.012
      • MICH = +40 (note the sign flip from last time)
    • OUTMAT
      • XARM --> 0.5 * ETMX - 0.5 * ETMY
      • YARM --> MC2
      • MICH --> BS
    • UGFs (sanity check)
      • XARM (DARM) ~ 100 Hz
      • YARM (CARM) ~ 200 Hz
      • MICH (MICH) ~ 40 Hz
  4. Run MICHOpticalGainCalibration.ipynb to see if ASDC vs REFL55_Q looks nice (ellipse in the XY plot), and find any residual offset in REFL55_Q.
    • If the plot doesn't look nice in this regard, the IFO needs to be aligned.
  5. Sensing matrix for CARM/DARM and MICH.
    • With the DARM, CARM and MICH lines on, verify the demod error signals look ok both in mag and phase.
    • For example, we found that CARM error signals were correctly represented by either 0.5 * POX11_I + 0.5 * POY11_I or 0.5 * REFL55_I.
    • Similarly, we found that DARM error signal was correctly represented by either 0.5 * POX11_I - 0.5 * POY11_I or 2.5 * AS55_Q.
    • To find this, we minimized CARM content in AS55_Q, as well as CARM content in REFL55_Q.
  6. We acquired the lock by re-configuring the error point as below (a short offline sketch of this mixing follows after the sensing matrix):
    • INMAT
      • 0.5*REFL55_I --> YARM (CARM)
      • 2.5 * AS55_Q --> XARM (DARM)
    • During the hand-off trials, we repeatedly ran the sensing matrix and UGF measurements while stopping at various intermediate mixed error points to check how the error signal calibrations changed if at all.
      • Attachment #1 shows the DARM OLTF using POX/POY (blue), only with CARM handoff (green), and after DARM handoff (red)
      • Attachment #2 shows the CARM OLTF using POX/POY (blue), only with CARM handoff (green), and after DARM handoff (red)
      • Attachment #3 shows the MICH OLTF using POX/POY (blue), only with CARM handoff (green), and after DARM handoff (red)
    • The sensing matrix after handoff is below:
Sensing matrix with the following demodulation phases:
{'AS55': 192.8, 'REFL55': 95.63177865911078, 'POX11': 104.80089727128349, 'POY11': -11.256509422276006}

Sensors                   DARM                     CARM                     MICH
C1:LSC-AS55_I_ERR_DQ      5.09e-02 (89.6761 deg)   2.03e-01 (-114.513 deg)  1.28e-04 (-28.9254 deg)
C1:LSC-AS55_Q_ERR_DQ      4.78e-02 (88.7876 deg)   3.61e-03 (-68.7198 deg)  8.34e-05 (-39.193 deg)
C1:LSC-REFL55_I_ERR_DQ    5.18e-02 (-92.2555 deg)  1.20e+00 (65.2507 deg)   1.15e-04 (-102.027 deg)
C1:LSC-REFL55_Q_ERR_DQ    1.81e-04 (59.0854 deg)   1.09e-02 (-114.716 deg)  1.77e-05 (-23.6485 deg)
C1:LSC-POX11_I_ERR_DQ     8.51e-02 (91.2844 deg)   4.77e-01 (67.1709 deg)   7.97e-05 (-72.5252 deg)
C1:LSC-POX11_Q_ERR_DQ     2.63e-04 (114.584 deg)   1.32e-03 (-113.505 deg)  2.10e-06 (118.146 deg)
C1:LSC-POY11_I_ERR_DQ     1.58e-01 (-88.9295 deg)  6.16e-01 (67.6098 deg)   8.71e-05 (172.73 deg)
C1:LSC-POY11_Q_ERR_DQ     2.89e-04 (-89.1114 deg)  1.09e-03 (70.2784 deg)   3.77e-07 (110.206 deg)
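Below is a minimal offline sketch of the error-point mixing from steps 3 and 5-6, assuming the DQ channel time series have already been fetched into numpy arrays (the random arrays here are placeholders, not real data):

  import numpy as np

  # placeholders standing in for the fetched DQ channel time series
  pox_i    = np.random.randn(16384)   # C1:LSC-POX11_I_ERR_DQ
  poy_i    = np.random.randn(16384)   # C1:LSC-POY11_I_ERR_DQ
  as55_q   = np.random.randn(16384)   # C1:LSC-AS55_Q_ERR_DQ
  refl55_i = np.random.randn(16384)   # C1:LSC-REFL55_I_ERR_DQ

  # POX/POY emulation of DARM/CARM used for initial lock (step 3)
  darm_poxpoy = 0.5 * pox_i - 0.5 * poy_i
  carm_poxpoy = 0.5 * pox_i + 0.5 * poy_i

  # final error points after the handoff (step 6), with the measured gain factors
  darm = 2.5 * as55_q
  carm = 0.5 * refl55_i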

Lock gpstimes:

  1. [1342220242, 1342220260]
  2. [1342220420, 1342220890]
  3. [1342221426, 1342221574]
  4. [1342222753, 1342223230]

Sensitivity estimate (NANB)

Using diaggui, we look at the AS55_Q error point and the DARM control point (C1:LSC-XARM_OUT). We roughly calibrate the error point using the sensing matrix element and the actuation gain at the DARM oscillator frequency: 4.78e-2 / (10.91e-9 / 307.880^2). The control point is calibrated with a 0.95 Hz SUS pole. Attachment #4 shows the sensitivity estimate.
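A sketch of that calibration arithmetic (all numbers from this entry; reading the expression as a counts/m error-point calibration is our interpretation):

  f_osc    = 307.880              # Hz, DARM oscillator frequency
  sens_el  = 4.78e-2              # AS55_Q sensing matrix element (counts per actuation count)
  act_gain = 10.91e-9 / f_osc**2  # m per actuation count at f_osc (1/f^2 pendulum scaling)
  cal = sens_el / act_gain        # error-point calibration in counts/m
  print(f"{cal:.3e} counts/m")    # ~4.15e+11 counts/m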

Attachment 1: DARM_07_18_2022_FMPI.pdf
Attachment 2: CARM_07_18_2022_FPMI.pdf
Attachment 3: MICH_07_18_2022_FPMI.pdf
Attachment 4: fpmi_darm_nb_2022_07.pdf
ID 17021 | Wed Jul 20 11:58:45 2022 | Paco | Summary | General | Jenne laser kaput?

[Paco, Yehonathan, JC]

We were trying to set up the Jenne laser to characterize the response of three 1811s that Yehonathan is using for his WOPA experiment (in QIL). We hooked up a ~5 VDC power supply to the bias tee and looked for any DC response in the REF PD, using a DB9 breakout board and a DB9 cable, and saw some current being drawn. The DC current was a bit too high (500 mA), so we turned the DC voltage off, and realized the VDC power was reversed, probably along the DB9 cable, which we hadn't checked beforehand. After we flipped the power supply leads and turned the power back on, we could no longer see any current even though the voltage was now right (or was it???). We would like to debug this laser and continue using it if it still works (!), but there is negligible documentation either here or in the wiki, so if there are any known places to look it would be helpful to know them.

ID 17022 | Wed Jul 20 14:12:07 2022 | Paco | Summary | General | Jenne laser kaput!

[Koji, Yehonathan, Paco]

Koji pointed out that this laser was always driven with a current driver (which was not nearby). After finding it on one of the rolling carts, we hooked up the system, but the laser driver displayed an open circuit near the usual 20 mA operating point. We therefore have to conclude that this laser is no more. We will look for a reasonable replacement.

Quote:

[Paco, Yehonathan, JC]

We were trying to set up the Jenne laser to characterize the response of three 1811s that Yehonathan is using for his WOPA experiment (in QIL). We hooked up a ~5 VDC power supply to the bias tee and looked for any DC response in the REF PD, using a DB9 breakout board and a DB9 cable, and saw some current being drawn. The DC current was a bit too high (500 mA), so we turned the DC voltage off, and realized the VDC power was reversed, probably along the DB9 cable, which we hadn't checked beforehand. After we flipped the power supply leads and turned the power back on, we could no longer see any current even though the voltage was now right (or was it???). We would like to debug this laser and continue using it if it still works (!), but there is negligible documentation either here or in the wiki, so if there are any known places to look it would be helpful to know them.

 

ID 17024 | Wed Jul 20 18:07:52 2022 | Paco | Update | BHD | BHD MICH test

[Paco, Yuta, JC]

We did some easy tests of the BHD readout in preparation for BHD MICH. With the arm cavities and LO beam misaligned, but MICH aligned, we measured the transfer function from C1:LSC-DCPD_A_OUT to C1:LSC-DCPD_B_OUT to get a rough estimate of the gain balance: 1.8 * DCPD_A = DCPD_B (a sketch of this estimate follows below). We then locked MICH using REFL55_Q and looked at

  • A=C1:LSC-DCPD_A_OUT
  • B=C1:LSC-DCPD_B_OUT
  • 1.8 * A - B (which we encoded using C1:LSC-PRCL_A_IN1)
  • 1.8 * A + B (which we encoded using C1:LSC-PRCL_B_IN1)

namely the DCPD BHD signals. After turning the MICH_OSC on (2000 gain @ 311.1 Hz), we took some power spectra under the following three configurations:

  1. LO misaligned, no MICH offset.
  2. LO overlap, no MICH offset.
  3. LO overlap and MICH offset.

For 1., the expectation was that since the LO is misaligned and the AS port is dark, we would get no signal. In 2., both A and B might see some incoherent signal, but still no MICH. Finally, in 3., all signals should see MICH, including A-B. Attachment #1 shows measurements 1, 2, and 3 (offset = -5.0). With increasing offset values, the BHD MICH signals increased as well; discussion to follow.
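Here is a minimal sketch of how the gain-balance estimate can be made offline, assuming the two DCPD time series are available as numpy arrays at a sampling rate fs (the rate and the random placeholder data are assumptions); the transfer function magnitude at the 311.1 Hz MICH line gives the A-to-B balance (~1.8 here):

  import numpy as np
  from scipy import signal

  fs = 16384.0                        # Hz, assumed fast-channel sampling rate
  a = np.random.randn(int(fs) * 60)   # placeholder for C1:LSC-DCPD_A_OUT
  b = np.random.randn(int(fs) * 60)   # placeholder for C1:LSC-DCPD_B_OUT

  f, Pab = signal.csd(a, b, fs=fs, nperseg=2**14)
  _, Paa = signal.welch(a, fs=fs, nperseg=2**14)
  tf = Pab / Paa                      # transfer function estimate A -> B
  k = np.abs(tf[np.argmin(np.abs(f - 311.1))])  # balance gain at the MICH line
  bhd_diff = k * a - b                # balanced homodyne combinations
  bhd_sum  = k * a + b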

Attachment 1: BHD_MICH_OSC.pdf
ID 17030 | Mon Jul 25 09:05:50 2022 | Paco | Summary | General | Testing 950nm laser found in trash pile

[Paco, Yehonathan]

==== Late elog from Friday ====

Koji provided us with a QFLD-950-3S (QPHOTONICS) salvaged from Aidan's junk pile (LD is alive according to him). We tested the Jenne laser setup with this just to decide if we should order another one, and it worked.

The laser driver anode and cathode pins (8/9 and 4/5, respectively) on the rear DB9 port of the ILX Lightwave LDX-3412 driver were connected to the corresponding anode and cathode pins in the laser package (5 and 9; note the numbers are reversed between driver and laser). Then, interlock pins 1 and 2 on the driver were shorted to enable operation. This is all illustrated in Attachments #1-2.

After setting a current limit of 27.6 mA in the driver, we slowly increased the actual current to ~19 mA until we could see light on a beam card. We can go ahead and get a 1060 nm replacement.

Attachment 1: PXL_20220722_234600124.jpg
Attachment 2: PXL_20220722_234551918.jpg
ID 17037 | Tue Jul 26 20:54:08 2022 | Paco | Update | BHD | BHD MICH test - LO phase control

[Yuta, Paco]


TL;DR Successfully controlled LO phase, and did BHD-MICH readout with various MICH offsets and LO phases.


Today we implemented DCPD-based LO phase control. First, we remeasured the balancing gain at 311.1 Hz (the MICH oscillator frequency) and combined C1:HPC-DCPD_A_OUT with C1:HPC-DCPD_B_OUT to produce the balanced homodyne error signal (A-B). We fed this error signal to C1:HPC-LO_PHASE_IN1, and for the main loop filters we simply recycled the LSC-MICH loop filters FM2 through FM5 (we also copied FM8, but didn't end up using it much). Then, we verified the LO phase can be controlled by actuating on either LO1 or LO2. For LO2, we added an oscillator in the HPC LOCKINS at 318.75 Hz (we kept this on at 1000 counts for the measurements below).

The LO phase control was achieved with a loop gain in the range 10-30 (we used 20), no offset, and FM4 and FM5 engaged. FM2 can be added as a boost, but we usually skipped FM3. Then, we went through a set of measurements similar to the ones described in a previous elog. A key difference with respect to the earlier measurements is that we locked MICH using AS55_Q (as opposed to REFL55_Q). This allowed us to reach higher MICH offsets without losing lock. After turning on the MICH oscillator at 3000 counts, we looked at:

  1. LO misaligned + MICH at dark fringe (offset = -21).
    • Here, we don't expect to see any MICH signal and indeed we don't, except for a small residual peak from perhaps a MICH offset or slightly imbalanced PDs.
  2. LO aligned, but uncontrolled + MICH at dark fringe (offset = -21).
    • Here we would naively expect MICH to show up in A-B, but because of the uncontrolled LO phase, we mostly see the noise baseline (mostly from LO RIN? ...see measurement 3) under which this signal is probably buried. Indeed, the LO fringe increased the noise in A, B, and A-B, but not in A+B. This is nice.
  3. LO aligned, but uncontrolled + MICH with dc readout (offset = +50).
    • Here we expected the MICH signal to show up due to the large offset, and we can indeed see it in A, B, and A+B, but not in A-B. Nevertheless, we see almost exactly the same noise level even though we allow some AS light into the BHD readout, so maybe the noise observed in the A-B channel in measurements 2 and 3 is mostly LO RIN. This needs further investigation...
  4. LO aligned, controlled at no offset + MICH with dc readout (offset = +50).
    • In general, here we expected to see a noise reduction in the A-B channel since the LO fringe is stable, and a MICH signal should appear. Furthermore, since the LO phase is under control, we expect the LO2 oscillator line to appear, which it does for this and the following measurements. Because of the relative freedom, we tried this measurement in two cases:
      1. When feeding back to LO1
        • We actually see MICH in the A-B channel, as expected, after the noise level dropped by a factor of ~5. We also observed small sidebands +/- 1 Hz away from the MICH peak, probably due to local damping in either the LO or AS paths.
      2. When feeding back to LO2
        • We also see MICH here, with a slightly better drop in noise (relative to feeding back to LO1). Sidebands persisted here, but at +/- 2 Hz.
  5. LO aligned, controlled (offset = 10) + MICH with dc readout (offset = +50). *
    • Here, we expected the A-B MICH content to increase dramatically, and indeed it does after a little tuning of the LO phase. The noise level decreased slightly because the LO phase noise is reduced around the optimal point.
  6. LO aligned, controlled (offset = 20) + MICH with dc readout (offset = +30). *
    • Here, we naively expected the A+B MICH content to decrease but A-B to remain constant. To see this we tried to keep the balance between the offsets, but this was hard. We don't really see much of this effect, so this also needs further investigation. As long as we control the LO phase using the DCPDs, the offsets tend to reduce the error signal, so we will have a harder time.

* For these measurements we actuated on LO2 to keep the LO phase under control.

Note that the color code above corresponds to the traces shown in Attachment #1.


What's next?

  • Alignment of LO and AS might be far from optimized, so it should be tried more seriously.
  • What's the actual LO power? How does it compare with AS power at whatever MICH offsets?
  • Try audio dither LO phase control.
    • With MICH offset.
    • Without MICH offset, double demod (after the dolphin fix)
Attachment 1: 20220726_BICHD.pdf
ID 17102 | Wed Aug 24 12:02:24 2022 | Paco | Update | SUS | ITMX SUS is sus UL glitches?

[Yehonathan, Paco]

This morning, while attempting to align the IFO to continue with noise budgeting, we noted the XARM lock was not stable and showed glitches in C1:LSC-TRX_OUT (arm cavity transmission). Inspecting the SUS screens, we found the ULSEN rms ~6 times higher than for the other coils, so we opened an ndscope with the four face OSEM signals and overlaid the XARM transmission. We immediately noticed the ULSEN input is noisy, jumping around randomly, with the bigger glitches correlated with the arm cavity transmission glitches. This is shown in Attachment #1.


Signal chain investigation

We'll do a full signal-chain investigation of the ITMX SUS electronics to try to narrow down the issue, but it seems the glitches come and go... Is this from the gold satamp box? ...

Attachment 1: ITMX_UL_badness_08242022.png
ID 17104 | Thu Aug 25 15:24:06 2022 | Paco | HowTo | Electronics | RFSoC 2x2 board -- fandango

[Paco, Chris Stoughton, Leo -- remote]

This morning Chris came over to the 40m lab to help us get the RFSoC board going. After checking out our setup, we decided to do a very basic series of checks to see if we can at least get the ADCs to run coherently (independently of the DACs). For this I borrowed the Marconi 2023B from inside the lab and set its output to 1.137 GHz, 0 dBm. Then, I plugged it into ADC1 and just ran the usual spectrum analyzer notebook on the rfsoc jupyter lab server. Attachments #1-2 show the screen-captured PSDs for ADCs 0 and 1, respectively, with the 1137 MHz peaks clearly visible.

The fast ADCs are indeed reading our input signals.


Before this simple test, we actually reached out to Leo at Fermilab for some remote assistance in building up a minimal working firmware. For this, Chris started a new Vivado project on his laptop and realized the rfsoc 2x2 board files are not included in it by default. To add them, we had to go into Tools > Settings and add the 2020.1 Vivado Xilinx board-shop repository path for the rfsoc2x2 v1.1 files. After a little struggling, uninstalling, reinstalling, and restarting Vivado, we got into the actual overlay design. There, with Leo's assistance, we dropped in the Zynq MPSoC core (this includes the main interface drivers for the rfsoc 2x2 board). We then dropped in an RF converter IP block, which we customized to use the right PLL settings. In the System Clocking tab we changed the Reference Clock to 409.6 MHz (the default was 122.88 MHz). This was not straightforward, as the default sampling rate of 2.00 GSPS is not an integer multiple of that reference, so we also had to update it to 4.096 GSPS. Then, we saw that the maximum available Clock Out option was 256 MHz (we need >= 409.6 MHz), so Leo suggested we drop in a Clocking Wizard block to provide a 512 MHz clock input for the rfdc. The final rfdc settings are captured in Attachment #3. The Clocking Wizard was configured in its Output Clocks tab to provide a Requested Output Freq of 512 MHz; its final settings are captured in Attachment #4. Finally, we connected the blocks as shown in Attachment #5.
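The clock-plan arithmetic, for reference (the 8-samples-per-fabric-clock figure is our inference from these numbers, not something read off the tool):

  f_ref    = 409.6e6   # Hz, rfdc PLL reference clock
  f_samp   = 4.096e9   # Hz, ADC sampling rate; 10 x f_ref, an integer multiple
  f_fabric = 512e6     # Hz, Clocking Wizard output fed to the rfdc
  assert f_samp % f_ref == 0
  print(f_samp / f_fabric)  # 8.0 parallel samples per fabric clock cycle (inferred)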

We will continue with this design tomorrow.

Attachment 1: adc0_1137MHz.png
Attachment 2: adc1_1137MHz.png
Attachment 3: rfdc_PLLsettings.png
Attachment 4: clockingwiz_settings.png
Attachment 5: blockIPdiag.png
ID 17133 | Tue Sep 6 17:39:40 2022 | Paco | Update | SUS | LO1 LO2 AS1 AS4 damping loop step responses

I tuned the local damping gains for LO1, LO2, AS1, and AS4 by looking at step responses in the DOF basis (i.e. POS, PIT, YAW, and SIDE). The procedure was:

  1. Grab an ndscope with the error point signals in the DOF basis, e.g. C1:SUS-LO1_SUSPOS_IN1_DQ
  2. Apply an offset to the relevant DOF using the alignment slider offset (or coil offset for the SIDE DOF) while being careful not to trip the watchdog. The nominal offsets found for this tuning are summarized below:
Alignment/coil step sizes:
       POS   PIT   YAW   SIDE
LO1    800   300   300   10000
LO2    800   300   400   -10000
AS1    800   500   500   20000
AS4    800   400   400   -10000
  3. Tune the damping gains until the DOF shows a residual Q of ~5 or more oscillations (a sketch of one step iteration follows below).
  4. The new damping gains are below for all optics and their DOFs, and Attachments #1-4 summarize the tuned step responses as well as the other (cross-coupled) DOFs.
Local damping gains:
       POS      PIT     YAW     SIDE
LO1    10.000   5.000   3.000   40.000
LO2    10.000   3.000   3.000   50.000
AS1    14.000   2.500   3.000   85.000
AS4    15.000   3.100   3.000   41.000

Note that during this test, FM5 was populated for all these optics with a BounceRoll filter (notches at 16.6 and 23.7 Hz), in addition to the Cheby (HF rolloff) and 0.0:30 filters.
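A minimal sketch of one step iteration using pyepics. The offset channel naming (C1:SUS-LO1_PIT_OFFSET) is an assumption (the SIDE step would go to a coil offset instead), and only the slow error-point value is polled here; the DQ data for the plots would be fetched from frames afterwards:

  import time
  import epics

  optic, dof, step = 'LO1', 'PIT', 300       # step size from the table above
  chan = f'C1:SUS-{optic}_{dof}_OFFSET'      # assumed offset channel name
  nominal = epics.caget(chan)
  epics.caput(chan, nominal + step)          # apply the step; mind the watchdog
  resp, t0 = [], time.time()
  while time.time() - t0 < 60:               # watch the ringdown for ~1 min
      resp.append(epics.caget(f'C1:SUS-{optic}_SUS{dof}_IN1'))
      time.sleep(1 / 16)
  epics.caput(chan, nominal)                 # restore the alignment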

Attachment 1: LO1_Step_Response_Test_2022-09-06_17-19.pdf
Attachment 2: LO2_Step_Response_Test_2022-09-06_17-30.pdf
Attachment 3: AS1_Step_Response_Test_2022-09-06_17-53.pdf
Attachment 4: AS4_Step_Response_Test_2022-09-06_18-16.pdf
ID 17142 | Thu Sep 15 21:12:53 2022 | Paco | Update | BHD | LO phase "dc" control

Locked the LO phase with a MICH offset = +91. The LO is at mid-fringe (locked using the A-B zero crossing), so it is far from being "useful" for any readout, but we can at least look at residual noise spectra.

I spent some time playing with the loop gains, filters, and overall lock acquisition, and established a quick TF template at Git/40m/measurements/BHD/HPC_LO_PHASE_TF.xml

So far, it seems that actuating on the LO phase through LO2 POS requires 1.9 times more strength than through LO1 (with the same "A-B" dc sensing). With FM4 and FM5 engaged, actuating on LO2 with a filter gain of 0.4 closes the loop robustly. Then, FM3 and FM6 can be enabled and the gain stepped up to 0.5 without problem. The measured UGF (Attachment #1) was ~20 Hz. It can be increased to 55 Hz, but then the loop quickly becomes unstable. I added FM1 (boost) to the HPC_LO_PHASE bank but didn't get to try it.

The noise spectra (Attachment #2) are still uncalibrated... but have been saved under Git/40m/measurements/BHD/HPC_residual_noise_spectra.xml

Attachment 1: lophase_oltf.pdf
Attachment 2: lophase_noise_spectra.pdf
ID 17143 | Mon Sep 19 17:02:57 2022 | Paco | Summary | General | Power Outage 220916 -- restored all

Restore lab

[Paco, Tega, JC, Yehonathan]

We followed the instructions here. There were no major issues, apart from the fb1 ntp server sync taking a long time after rebooting once.


ETMY damping

[Yehonathan, Paco]

We noticed that ETMY had too much RMS motion when the OpLevs were off. We played with it a bit and noticed two things: the Cheby4 filter was on for SUS_POS, and the limiter on ULCOIL was on with a limit of 0. We turned both off.

We did some damping tests and observed that the PIT and YAW motion were overdamped. We tuned the filter gains as follows:

SUSSIDE_GAIN 1250 -> 50

SUSPOS_GAIN 200 -> 150

SUSYAW_GAIN 60 -> 30

These actions seem to have made things better.

ID 17145 | Tue Sep 20 07:03:04 2022 | Paco | Summary | General | Power Outage 220916 -- restored all

[JC, Tega, Paco ]

I would like to mention that during the vacuum startup, after the AUX pump was turned on, Tega and I walked away while the pressure decreased. While we were away, valves opened on their own. Nobody was near the VAC desktop during this. I asked Koji if this might be an automatic startup, but he said the valves shouldn't open unless they are explicitly told to do so. Has anyone encountered this before?

Quote:

Restore lab

[Paco, Tega, JC, Yehonathan]

We followed the instructions here. There were no major issues, apart from the fb1 ntp server sync taking a long time after rebooting once.


ETMY damping

[Yehonathan, Paco]

We noticed that ETMY had too much RMS motion when the OpLevs were off. We played with it a bit and noticed two things: the Cheby4 filter was on for SUS_POS, and the limiter on ULCOIL was on with a limit of 0. We turned both off.

We did some damping tests and observed that the PIT and YAW motion were overdamped. We tuned the filter gains as follows:

SUSSIDE_GAIN 1250 -> 50

SUSPOS_GAIN 200 -> 150

SUSYAW_GAIN 60 -> 30

These actions seem to have made things better.

 

ID 17150 | Wed Sep 21 17:01:59 2022 | Paco | Update | BHD | BH55 RFPD installed - part I

[Radhika, Paco]

Optical path setup

We realized the DCPD-B beam path was already using a 95:5 beamsplitter to steer the beam, so we are repurposing the 5% pickoff for a 55 MHz RFPD. For the RFPD we are using a gold RFPD labeled "POP55 (POY55)" which was on the large optical table near the vertex. We decided to test it in situ because the PD test setup is currently offline.

Radhika used a Y1-1025-45S mirror to steer the B-beam path into the RFPD, but a lens should be added next in the path to focus the beam spot onto the PD's sensitive area. The current path is illustrated in Attachment #1.

We removed some unused OPLEV optics to make room for the RFPD box; these were moved to the optics cabinet along the Y-arm [Attachment #2].

 


[Anchal, Yehonathan]

PD interfacing and connections

In parallel to setting up the optical path on the ITMY table, we repurposed a DB15 cable from a PD interface board in the LSC rack to the RFPD in question. Then, an SMA cable was routed from the RFPD RF output to an "UNUSED" I&Q demod board in the LSC rack. Luckily, we also found a terminated REFL55 LO port, so we can draw our demod LO from there. There are a few free ADC inputs (14, 15, 20, 21) after the WF2 and WF3 whitening filter interfaces.


Next steps

  • Finish alignment of BH55 beam to RFPD
  • Test RF output of RFPD once powered
  • Modify LSC model, rebuild and restart
Attachment 1: IMG_3760.jpeg
Attachment 2: IMG_3764.jpeg
ID 15857 | Wed Mar 3 12:00:58 2021 | Paco, Anchal | HowTo | IMC | MC_F ASD

[Paco, Anchal]

- Saved BURT backup in /users/anchal/BURTsnaps/
- Copied the existing mode cleaner noise budget code from /users/rana/mat/mc. Will work on this from home to convert it into the new pynb way.

Get baseline IMC measurements (passive):
- MC_F:
  - What is MC_F? Let's find out.
  - On the MC_F cal window titled 'C1IOO-MC_FREQ', we toggled the ON/OFF switch off and back on again.
  - Using diaggui, we measured the ASD of the MC_F channel in units of counts/rtHz.

[Rana, Paco]

- Using diaggui, measured the ASD from a template (under /users/Templates) and overlaid the 1/f noise of the NPRO (Attachment 1)

[Anchal, Paco]

- WFS Master
  - Went through the schematic and tried to understand what is happening.
  - Accidentally switched on MC WF relief (python 3). A bunch of things were printed to a terminal for a while, and then we Ctrl-C'd it.
  - The only change we noticed was a slight increase in WFS1 Yaw, and a corresponding decrease in WFS1 Pitch, WFS2 Pitch, and WFS2 Yaw.
  - We need to find out what this script does.


Future work:

  • Create an automated script for taking the MC_F_DQ spectrum and comparing it against a reference trace.
  • Use pynb to create a noise budget for mode cleaner.
  • Identify excess noise between 10-40 Hz.
  • Configure output matrix in WFS Master to reduce the noise. Automate this process as well.
Attachment 1: 20210303_MC_F_Spectrum.pdf
Attachment 2: 20210303_MC_F_Spectrum.tar.gz
ID 15861 | Thu Mar 4 10:54:12 2021 | Paco, Anchal | Summary | LSC | POY11 measurement, tried to lock Green Yend laser

[Paco, Anchal]

- First ran burtgooey as last time.

- Installed pyepics on base environment of donatella

ASS XARM:
- Clicked on ON in the drop down of "! More Scripts" below "! Scripts XARM" in C1ASS.adl
- Clicked on "Freeze Outputs" in the same menu after some time.
- Noticed that the sensing and output matrices of ASS for XARM and YARM look very different. The reason is probably that the YARM outputs include the four TT1/TT2 PIT/YAW DOFs instead of BS PIT/YAW as on the XARM. What are these TT1/2?

(Probably unrelated, but the MC unlocked and kept trying to lock for about 10 minutes, eventually attaining lock.)

Locking XARM:
- From scripts/XARM we ran lockXarm.py from outside any conda environment using python command.
- Weirdly, we see that YARM is locked??? But XARM is not. Maybe this script is old.
- C1:LSC-TRY-OUTPUT went to around 0.75 (units unknown) while C1:LSC-TRX-OUTPUT is fluctuating around 0 only.

POY11 Spectrum measurement when YARM is locked:
- Created our own template as we couldn't find an existing one in users/Templates.
- Template file and data in Attachment 2.
- It is interesting to see that most of the noise is in the I quadrature, mostly between 10 and 100 Hz.
- Given that the arm is supposed to be much calmer than the MC, this noise should be mostly due to mode cleaner noise.
- We are not sure what units C1:LSC-POY11_I_ERR_DQ has, so the Y scale is shown without units.


Trying to lock Green YEND laser to YARM:
- We opened the Green Y shutter.
- We ensured that when the temperature slider of green Y is moved up, the beatnote goes up.
- ARM was POY locked from previous step.
- Ran script scripts/YARM/Lock_ALS_YARM.py from outside any conda environment using python command.
- This locked green laser but unlocked the YARM POY.

Things moving around:
- Last step must have made all the suspension controls unstable.
- We see PRM and SRM QPDs moving a lot.
- Then we did burt restore to /opt/rtcds/caltech/c1/burt/autoburt/today/08:19/*.snap to go back to the state before we started changing things today.

[Paco left for vaccine appointment]

- However, the unstable state didn't change after the restore. I see a lot of movement in ITMX/Y, and now in PRM and BS too; movement in WFS1 and MC2T as well.
 - I closed the PSL shutter as well, to hopefully disengage any loops that are still running unstably.
 - But at this point, it seems that the optics are just oscillating and need time to come back to rest. Hopefully we didn't cause too much harm today :(.
 


My guess on what happened:

  • Our use of Lock_ALS_YARM.py probably created an unstable configuration in the LSC matrix and was the start of the issue.
  • On seeing PRM fluctuate so much, we thought we should just burt restore everything. But that was a hammer of a solution.
  • This hammer probably changed the suspension position values suddenly, causing an impulse to all the optics. So everything started oscillating.
  • Now the MC WFS is waiting for the MC to lock before it stabilizes the mode cleaner, but the MC autolocker is unable to lock because the optics are oscillating. A chicken-and-egg issue.
  • I'm not aware of how to manually restore the state now. My only guess is that if we wait a few hours, everything should calm down enough that the MC can be locked and the WFS servo can be switched on.
Attachment 1: 20210304_POY11_Spectrum_YARMLocked.pdf
Attachment 2: 20210304_POY11_Spectrum_YARMLocked.tar.gz
ID 15862 | Thu Mar 4 11:59:25 2021 | Paco, Anchal | Summary | LSC | Watchdog tripped, Optics damped back

Gautam came in and noted that the optics' damping watchdogs had been tripped by a magnitude >5 earthquake somewhere off the coast of Australia. So, under guided assistance, we manually damped the optics as follows:

  • Using the scripts/SUS/reEnableWatchdogs.py script we re-enabled all the watchdogs.
  • Everything except SRM was restored to stable state.
  • Then we clicked on SRM in SUS -> Watchdogs, disabled the OpLevs, and shut down the watchdog.
  • We temporarily changed the watchdog threshold to 1000 to allow damping.
  • We enabled all the coil outputs manually, then enabled the watchdog by clicking on Normal.
  • Once SRM was damped, we shut down the watchdog, brought the threshold back to 215, and restarted it.

Gautam also noticed that the MC autolocker had been turned OFF by me (Anchal); we turned it back on and the MC engaged the lock again. All good, no harm done.

ID 15877 | Mon Mar 8 12:01:02 2021 | Paco, Anchal | Summary | training | Investigate how-to XARM locking

[Paco, Anchal]

- Started the zoom stream; thanks to whoever installed it!
- Spent some time trying to understand how anything we did last Thursday led to the sensing matrix change, but still cannot figure it out.
- Tracking back through our actions: at ~10:30 we ran burt restore with the 08:19/*.snap files, and in the absence of a better suspect, we blame that action for now.

# ARM locking??
- Reading (not running) the scripts/XARM/lockXarm.py script to understand the workflow. It is pretty confusing that the result last time was to lock the Y arm.
- It looks like this script was a copy of lockYarm.py and was never updated (there's a chance we ran it for the first time last Thursday).
- *Is there a script to lock the arms?* Or should we write one? To write one, we first attempt a manual procedure;
    1. No need to change RFPD InMTRX
    2. All filters inputs / outputs are enabled 
    3. Outputs from XARM and YARM in the Output matrix are already going to ETMX and ETMY
      - Maybe we can have the ARM lock engage by changing the MC directly?
    4. Change C1:SUS-MC2_POS_OFFSET from -38 to -0, and enable C1:SUS-MC2_POS_OFFSET_ON
    5. Manually scan MC2_POS_OFFSET to 250 (nothing happens), then -250, then back to -38 (WFS1 PIT and YAW changed a little, but then returned to their nominal values)
      - Or maybe we need to provide the right gain...
    6. Disabled C1:SUS-MC2_POS_OFFSET_ON (back to nominal state)
    7. Look into manually changing C1:LSC-XARM_GAIN;
      From the command line using python:
      >> import epics
      >> ch_name = 'C1:LSC-XARM_GAIN'
      >> epics.caput(ch_name, 0.155) # nominal = 0.150
      - Could be unrelated, but we noted a slow spike on C1:PSL-FSS_PCDRIVE (definitely from before we changed anything)
      - Still nothing is happening
    8. Changed the gain to 0.175, then back to 0.150, no effect... then 0.2, 0.3 ...
      - Stop and check SUS_Watchdogs (should not have changed?) and everything remains nominal
      - Revert all changes symmetrically.
      - Could we have missed enabling FM1?
      - Briefly lost MC lock, but it came back on its own (probably unrelated)

- Wrap it up for the day. In summary; no harm done to our knowledge.

ID 15884 | Tue Mar 9 10:57:06 2021 | Paco, Anchal | Summary | IMC | XARM lock and POX spectra

[Paco, Anchal]

- Upon arrival, MC is locked, and we can see light in MON5 (PRM) (usually dark).

# XARM locking
- Read through "XARM POX" script (path='/cvs/cds/rtcds/caltech/c1/burt/c1configure/c1configureXarm')
- Before running the script, we noticed the PRM watchdog was down, so we manually repeated the procedure from last time, but saw more swinging even with the watchdog active.
- Ran a reEnablePRMWatchdogs.py script (a copy of reEnableWatchdogs.py with optics=['PRM']), which had the same effect.
- We manually disable the watchdog to recover the state we first encountered, and wait for the beam in MON5 to come to rest.
    - The question is; is it fine to lock Xarm with PRM watchdog down?
    - To investigate this, we look at the effect of the offset on the unwatchdog-PRM.
    - Manually change 'PRM_POS_OFFSET' to 200, and -800 (which is the value used in the script) with no effect on the PRM swinging.
- Moving on, run IFO > CONFIGURE > ! (X Arm) > RESTORE XARM (XARM POX), and ... success.

# MC-POX noise spectra
- With XARM locked, open diaggui and take spectra for C1:LSC-POX11_I_ERR_DQ, C1:LSC-POX11_Q_ERR_DQ, C1:IOO-MC_F_DQ
- Lost XARM lock while we were figuring out unit conversions...
    - Assuming 2.631e-13 m/count (elog 6941) and using the 37.79 m arm length and 1064.1 nm wavelength, we get a calibration factor of 2.631e-13 * c / (2*L*lambda) ~ 0.9809 Hz/count (see the quick check below).
    - (FAQ?, how to find/compute/measure the correct calibration factors?)
- Relock XARM, retake spectra. Attachment 1 has plots of the POX11_I/Q_ERR_DQ spectra (cts/rtHz; we couldn't find the relevant calibration) and MC_F_DQ (Hz/rtHz, referring to elog 15576; we couldn't get the units to show on the y scale).
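Quick check of the quoted calibration number (a one-liner sketch, numbers from this entry):

  c, L, lam = 299792458.0, 37.79, 1064.1e-9   # m/s, m, m
  m_per_count = 2.631e-13                     # m/count, from elog 6941
  print(m_per_count * c / (2 * L * lam))      # ~0.9809 Hz/count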

# MC-POY noise spectra (attempt)
- Now, run IFO > CONFIGURE > ! (Y Arm) > RESTORE YARM (YARM POY), and XARM locks (why?)
    - Could PRM watchdog being down be the cause? 
- Try C1ASS > (YARM) ! More Scripts > ON, and looked at YARM PIT/YAW striptool. 
- C1ASS > (YARM) ! Freeze Outputs, then OFF
- Go back to IFO > CONFIGURE > ! (Y Arm) > Align YARM  (ASS ON: Unfreeze), try running this then Freeze, then OFF Zero Outputs.
- Try RESTORE YARM (POY) again, still not working.
- Try RESTORE YARM ALS, then try again after opening the shutter, but also fail to lock AUX.
    - Is the PRM WD behind some evil misalignment? Will move forward with XARM bc it is happy.

# ARM locking
- Attempted the IFO > CONFIGURE > ! (X Arm) > RESTORE Xarm (XARM ALS) but green failed to lock and we lost XARM lock.
- Try to recover XARM lock... success. It's nice to have a (repeatable) checkpoint.
- Attempt YARM lock. Not successful. It just seems like the lock Triggers are not raised (misalignment?)
    - From C1SUS_ETMY, try changing the bias "C1:SUS-ETMY_YAW_OFFSET" manually to reduce the OPLEV_YERROR. Changed from -47 to -57.
    - Retry YARM lock script... no luck
    - From C1SUS_PRM, try changing the bias "C1:SUS-PRM_PIT_OFFSET" manually to reduce OPLEV errors. Changed from 34 to 22 with no effect, then realized the coil outputs are disabled because the WD is down...
    - So we do the following BIAS changes "C1:SUS-PRM_PIT_OFFSET" = 34 > 770 and "C1:SUS-PRM_YAW_OFFSET" = 134 > -6
    - Enable all Coil Outputs, turn WD to Normal, turn OPLEVs ON, (this time the beam does not swing like crazy).
    - Fine tune BIASes "C1:SUS-PRM_PIT_OFFSET" = 770 > 805  and "C1:SUS-PRM_YAW_OFFSET" = -6 > 65
        - Saw YARM locking briefly, then unlocking, but we stopped once the OPLEV_ERRs no longer overloaded (from magnitudes > 50 to ~ 40).
- Retry YARM lock... no luck
    - From C1SUS_ETMY, try changing the bias "C1:SUS-ETMY_PIT_OFFSET" from -1 to 6. 

Stop for the day. Leave XARM locked, MC locked. 

Attachment 1: 20210309_POX11_Spec_XARMLocked.pdf
Attachment 2: 20210309_XARM_Locked.tar.gz
ID 15893 | Wed Mar 10 11:46:22 2021 | Paco, Anchal | Summary | IMC | IMC free swinging prep

[Paco, Anchal]

# Initial State
- MC is locked. The PRM monitor shows some oscillations.
- POP monitor shows light flashing once in a while.
- AS monitor shows one beam along with some other flashing beam around it.
- The PRM watchdog is tripped and shut down. Everything else is normal except for an overload on the SRM OpLevs.
- Donatella got a mouse promotion

# Reenabling PRM watchdog:
- The custom reEnablePRMWatchdog.py has been deleted.
- Tried enabling the coil outputs manually and switching watchdog to Normal.
- Again saw large fluctuations like yesterday.
- Probably still the same issue: the calculated actuations to the coils are in the range -600 to -900 and give an impulse to the optics when suddenly turned on.
- Waiting for PRM to damp down a little.
- Today we plan to change the position bias on PRM C1:SUS-PRM_POS_OFFSET instead of changing biases in pitch and yaw.
- Changing C1:SUS-PRM_POS_OFFSET from 0 to +/-100 without enabling the coils, it seems the upper and lower coils are anticorrelated with just a position change. So going back to changing pitch.
- Changing C1:SUS-PRM_PIT_OFFSET from 0 -> 780. Switched on watchdog to normal.
- PRM damped down. OpLev errors are also within range.
- Enabled both OpLevs.

# Try locking Y-Arm
- IFO>CONFIGURE>YARM>Restore YARM (POY) using Donatella. Saw a bunch of python error messages complaining about being unable to find some python 2 files; closed it with Ctrl-C after it got stuck.
- Tried running it on Pianosa, the script ran without error but Y-Arm didn't lock.

# Try locking X-Arm
- IFO>CONFIGURE>XARM>Restore XARM (POX) on Donatella. Again a bunch of OSError messages; Donatella is not configured properly to run scripts.
- Tried running it on Pianosa: the script ran without error, but X-Arm didn't lock.
- This might mean that both arms are misaligned, or the BS/PRM is misaligned.
- Moved around C1:SUS-PRM_PIT_OFFSET and C1:SUS-PRM_YAW_OFFSET to see if the transmitted light is misaligned. Both arms are set to acquire lock if possible. No luck.

# Hypothesis: The Arm cavity is not aligned within itself (ITM-ETM)
- Will try to lock the X-Arm with green light while tuning ETMX. Hopefully the BS and ITM are aligned, so that once we align ETMX to get a green lock, the IR will also lock from the other side.
- Ran IFO>CONFIGURE>XARM>Restore XARM (ALS) on Pianosa. No lock; moved on to tuning the ETMX pitch and yaw offsets. Nothing changed; brought them back to the same values.

[Rana joined, Anchal moved to Rossa from Pianosa]

# Moving on to IMC suspensions characterization:
- Closed the PSL shutter; to our surprise, the MC was still locked. We thought this would take away any light from the IMC, but it doesn't. Maybe the IFO Overview needs to show the schematic in a way where this can't happen: "no light from any laser entering the MC, but it still is locked with a resonating field inside."
- Shut the IMCR shutter (hoping that would unlock the IMC); still nothing happened.
- Tried shutting the PSL shutter from Rossa; nothing happened to the MC lock.
- Closed the shutter from IOO>Lock MC>Close PSL, and this unlocked the IMC. Found out that this shutter channel is C1:PSL-PSL_ShutterRqst, while the one from sitemap>Shutter>PSL changes C1:AUX-PSL_ShutterRqst. Some clarification on these medm screens would be nice.
- Disabled the MC autolocked from IOO>Lock MC screen (C1:IOO-MC_LOCK_ENABLE).
- Checked the scripts/SUS/freeswing.py to understand how kick is delivered and optic is left to swing freely.
- Next, we are looking at the C1SUS_MC1 screen to understand what channels to read during data acquisition.
- In the sensor matrix, we see an INMON for each sensor, which is probably the raw counts data from the OSEMs. Rana mentioned that OSEM data comes out in units of microns. These are C1:SUS-MC1_ULSEN_OUTPUT (and so on for UR, LL, LR, SD).

- In prep for finishing, recovered the autolocker by first opening the PSL mechanical shutter, then re-enabling the autolocker. The IMC lock didn't immediately recover, and we saw some fuzz on the PSL-FSS_FAST trace, so we closed the shutter again, waited a minute, then re-opened it and the MC caught its lock.
 

ID 15897 | Wed Mar 10 15:35:25 2021 | Paco, Anchal | Summary | IMC | IMC free swinging experiment set to trigger at 5:00 am

A tmux session named "MCFreeSwingTest" is running on Rossa. It runs scripts/SUS/freeSwingMC.py (also attached), which will trigger at 5:00 am: after shutting the PSL shutter and disabling the MC autolocker, it will impart a 30000-count kick to MC1, MC2, and MC3, let them swing freely for 1050 s, and repeat 15 times to allow some averaging. At the end, it will undo all of its changes and switch the IMC autolocker back on. The script restores everything if it fails at any point or a Ctrl-C is detected.
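A minimal sketch of that kick logic, assuming pyepics and using the shutter/autolocker channels named elsewhere in this log; kicking through the POS offset is an assumption (the real script may kick individual coils):

  import time
  import epics

  optics = ['MC1', 'MC2', 'MC3']
  KICK, T_FREE = 30000, 1050                       # counts, seconds

  epics.caput('C1:IOO-MC_LOCK_ENABLE', 0)          # disable the MC autolocker
  epics.caput('C1:PSL-PSL_ShutterRqst', 0)         # close the PSL shutter
  for opt in optics:
      epics.caput(f'C1:SUS-{opt}_POS_OFFSET', KICK)  # assumed kick channel
  time.sleep(0.5)
  for opt in optics:
      epics.caput(f'C1:SUS-{opt}_POS_OFFSET', 0)   # remove the kick
  time.sleep(T_FREE)                               # let the optics swing freely
  epics.caput('C1:PSL-PSL_ShutterRqst', 1)         # restore everything
  epics.caput('C1:IOO-MC_LOCK_ENABLE', 1)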

Attachment 1: freeSwingMC.py.zip
ID 15902 | Thu Mar 11 08:13:24 2021 | Paco, Anchal | Update | SUS | IMC First Free Swing Test failed due to typo, restarting now

[Paco, Anchal]

The triggered code ran at 5:00 am today, but a last-minute change I made yesterday to increase the number of repetitions had an error and caused the script to exit, putting everything back to normal. So when we came in this morning, we found the mode cleaner locked continuously after one free-swing attempt at 5:00 am. I've fixed the script and ran it for 2 hours starting at 8:10 am. Our plan is to get at least some data to play with while we are here. If the duration is not long enough, we'll try to run this again tomorrow morning. The new script is running in the same tmux session 'MCFreeSwingTest' on Rossa.

10:13 the script finished and IMC recovered lock.

Thu Mar 11 10:58:27 2021

The test ran successfully, with the mode cleaner optics coming back to normal at the end. We wrote some scripts to read and analyze the data; more will come in future posts. No other changes were made to the systems today.

ID 15912 | Fri Mar 12 11:44:53 2021 | Paco, Anchal | Update | training | IMC SUS diagonalization in progress

[Paco, Anchal]

- Today we spent the morning shift debugging the SUS input matrix diagonalization. The MC stayed locked for most of the 4 hours we were here, and we didn't really touch any controls.

ID 15919 | Mon Mar 15 08:55:45 2021 | Paco, Anchal | Summary | training | (no subject)

[Paco, Anchal]

  • Found IMC locked upon arrival.
  • Since "allegra" was set up as an additional workstation, we tried using it but discovered the monitor ist kaput. For the sake of debugging, we tested VGA and DVI inputs and even the monitor lying around (also labeled "allegra") with no luck. So <ssh> it is for now.

IMC Input sensing matrix

  • Rana joined us and asked us to use Rossa for now so that we can sit socially distantly.
  • Attaching some intermediate results of our analysis as a pdf, and a zip file containing all the code we used.
  • We used channels C1:SUS-MC1_ULSEN_OUTPUT (16 Hz channels) and so on, which might not be the correct way to do it; as Rana pointed out today, we should have used channels like C1:SUS-MC1_SENSOR_UL etc.
  • For the input matrix calculation, we used the TF-estimate method (as mentioned in 4886) to calculate the sensing matrix, inverted it, and normalized all rows by the maximum-absolute-value element (we tried a few other normalizations with no better results).
  • We found the peak frequencies by fitting Lorentzians to the sensor data rotated by the input matrix currently in the system. We also tried doing this directly on the sensor data (UL for POS, UR for PIT, LR for YAW and SD for SIDE, as seemed to be the case in the old matlab codes), with no different results.
  • The fitted peak frequencies, Q, and amplitude values are in fittedPeakFreqs.yml in the attached zip.
Attachment 1: IMC_InputMatrixDiagonalization.pdf
Attachment 2: inputMatrixCalculationMC.tar
Attachment 3: freeSwingMC.py.tar
Attachment 4: SUSfreeswing_1299514263.txt.tar
ID 15926 | Tue Mar 16 19:13:09 2021 | Paco, Anchal | Update | SUS | First success in Input Matrix Diagonalization

After jumping through a few hoops, we have one successful result in diagonalizing the input matrices for MC1, MC2, and MC3.


Code:

  • Attachment 2 contains the code. For now, we can only guarantee it to work on Donatella in the conda base environment. Our code is present in scripts/SUS/InMatCalc.
  • We mostly follow the steps mentioned in 4886 and the matlab codes in scripts/SUS/peakFit.
  • The data is first multiplied by the currently used input matrix to get time series in the DOF basis (POS, PIT, YAW, SIDE).
  • Then, the peak frequencies of each resonance are identified.
  • For these results, we did not attempt to fit the peaks with Lorentzians and simply took the maximum of the PSD to get the peak positions. This only works if the current input matrix is good enough; we still have to adjust some parameters so that our fitting code works reliably.
  • A TF estimate of each sensor w.r.t. the UL sensor is taken, and the values around the peak frequencies are averaged to get the sensing matrix (a sketch of these steps follows after this list).
  • This matrix is normalized along the DOF axis (columns in our case) and then inverted.
  • After inversion, another normalization is done along the DOF axis (now rows).
  • Finally, we plot a comparison of the ASDs in the DOF basis using the current input matrix and using our calculated (diagonalizing) input matrix.
  • You can see in Attachment 1 that after diagonalization each DOF shows a resonance at only its own resonance frequency, while earlier there was some mixing.
  • The absolute scale of the calculated DOFs might have changed, and we need to calibrate them or apply appropriate gain factors in the DOF-basis filter chains.
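A minimal numpy/scipy sketch of these steps, assuming the OSEM sensor time series are available as a (5, N) array ordered UL, UR, LR, LL, SD and that the peak frequencies have already been identified (the placeholder data and frequencies below are not real):

  import numpy as np
  from scipy import signal

  fs = 16.0                                   # Hz, slow sensor channel rate
  sens = np.random.randn(5, 2**16)            # placeholder OSEM data (UL, UR, LR, LL, SD)
  f_peaks = [0.97, 0.75, 0.88, 0.99]          # Hz, placeholder POS/PIT/YAW/SIDE peaks

  def sensing_matrix(sens, f_peaks, fs, df=0.01, nper=2**14):
      M = np.zeros((len(sens), len(f_peaks)))
      f, Puu = signal.welch(sens[0], fs=fs, nperseg=nper)
      for i, x in enumerate(sens):
          _, Pux = signal.csd(sens[0], x, fs=fs, nperseg=nper)
          tf = Pux / Puu                      # TF estimate of sensor i w.r.t. UL
          for j, f0 in enumerate(f_peaks):
              band = (f > f0 - df) & (f < f0 + df)
              M[i, j] = np.real(tf[band].mean())  # average around each peak
      return M

  M = sensing_matrix(sens, f_peaks, fs)
  M /= np.abs(M).max(axis=0)                  # normalize along the DOF axis (columns)
  inmat = np.linalg.pinv(M)                   # invert (pseudo-inverse: 5 sensors, 4 DOFs)
  inmat /= np.abs(inmat).max(axis=1, keepdims=True)  # normalize rows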

Next steps:

  • We'll complete our scripts and make them more general, to be usable for any optic.
  • We'll combine them into one single script which can be called from medm.
  • In parallel, we'll start from step 2 in 15881.
  • Anything else that folks can suggest on our first result: did we actually do it, or are we fooling ourselves?
Attachment 1: IMC_InputMatrixDiagonalization.pdf
Attachment 2: InMatCalcScripts.zip
ID 15928 | Wed Mar 17 09:05:01 2021 | Paco, Anchal | Configuration | Computers | 40m Control Room Changes
  • Switched the positions of allegra and donatella.
  • While doing so, the HDMI cable previously used by donatella snapped. We replaced it with an unused cable we found connected at only one end to rossa. We should get more HDMI cables in case that cable was serving some other purpose.
  • Paco bought a bluetooth speaker/mic that is placed in front of allegra, with its USB adapter connected to the iMac's keyboard at the bottom. With the new camera installed, the 40m video call environment is now complete.
  • Again, we have placed allegra's monitor as a placeholder, but it is not working; we will need a new monitor whenever allegra is to be used.
ID 15937 | Thu Mar 18 09:18:49 2021 | Paco, Anchal | Update | SUS | Testing of new input matrices with new data

[Paco, Anchal]

Since the newly generated matrices were created from the measurement made last time, they are of course going to work well for it. We need to test with new, independent data to see if they work in general.


  • We ran scripts/SUS/InMatCal/freeSwingMC.py for 1 repetition with a free-swinging duration of 1050 s in the tmux session FreeSwingMC on Rossa. Started at GPS: 1300118787.
  • Thu Mar 18 09:24:57 2021: The script ended successfully. The IMC is locked back again. Killing the tmux session.
  • Attached are the results of the 1-kick test: the time series data and the ASDs of the DOFs calculated using the existing input matrix and our calculated input matrix.
  • The existing matrix was already pretty good, except maybe for the SIDE DOF, which our diagonalization improved.

[Paco]

After Anchal left for his test, I took the time to set up the iMac station so that Stephen (and others) can remote desktop into it to use Omnigraffle. For this, I enabled the remote login and remote management settings under "Sharing" in "System Settings". These should allow authenticated ssh-ing and remote desktopping, respectively. The password is the same one currently stored in the secrets.

Quickly tested from my laptop (OS: linux, RDP client = remmina + VNC protocol) and it worked. Hopefully Stephen can get it to work too.

Attachment 1: MC_Optics_Kicked_Time_Series_1.pdf
Attachment 2: TEST_Input_Matrix_Diagonalization.pdf
ID 15943 | Fri Mar 19 10:49:44 2021 | Paco, Anchal | Update | SUS | Trying coil actuation balance

[Paco, Anchal]

  • We decided to try coil actuation balancing after seeing some posts from Gautam about the same on PRM and ETMY.
  • We used diaggui to send a swept-sine excitation to C1:SUS-MC3_ULCOIL_EXC and read it back at C1:SUS-MC3_ASCPIT_IN1. The idea was to make transfer function measurements similar to 15880.
  • We first tried taking the transfer function with excitation amplitudes of 1, 10, 50, and 200 with the damping loops on (swept from 10 to 100 Hz logarithmically in 20 points).
  • We found no meaningful measurement; it looked like we were just measuring noise.
  • We concluded that this was probably because the damping loops were suppressing the excitation.
  • So we decided to switch off damping and retry.
  • We switched off: C1:SUS-MC3_SUSPOS_SW2, C1:SUS-MC3_SUSPIT_SW2, C1:SUS-MC3_ASCPIT_SW2, C1:SUS-MC3_ASCYAW_SW2, C1:SUS-MC3_SUSYAW_SW2, and C1:SUS-MC3_SUSSIDE_SW2.
  • We repeated the above measurements, going up in excitation amplitude as 1, 10, 20. We saw the oscillation going out of ULCOIL, but the swept sine couldn't measure any meaningful transfer function to C1:SUS-MC3_ASCPIT_IN1. So we decided to just stop; we are probably doing something wrong.

Trying to go back to the same state:

  • We switched on: C1:SUS-MC3_SUSPOS_SW2, C1:SUS-MC3_SUSPIT_SW2, C1:SUS-MC3_ASCPIT_SW2, C1:SUS-MC3_ASCYAW_SW2, C1:SUS-MC3_SUSYAW_SW2, and C1:SUS-MC3_SUSSIDE_SW2.
  • But C1:SUS-MC3_ASCYAW_INMON had accumulated an offset of about 600 and was disrupting the alignment. We switched off C1:SUS-MC3_ASCYAW_SW2, hoping the offset would go away once the optic was damped with the OSEM sensors alone, but it didn't.
  • Even after minutes, the offset in C1:SUS-MC3_ASCYAW_INMON kept increasing and went beyond the 2000-count limit set in the C1:IOO-MC3_YAW filter bank.
  • We tried to unlock the IMC and lock it back again, but the offset persisted.
  • We tried adding a bias in the YAW DOF by increasing C1:SUS-MC3_YAW_OFFSET; while this somewhat reduced the C1:SUS-MC3_ASCYAW_INMON offset, it was misaligning the optic and the lock was lost. So we brought the bias back to 0.
  • We tried to track down where the offset is coming from. In C1IOO_WFS_MASTER.adl, we opened the WFS2_YAW filter bank to see if the sensor is indeed reading the increasing offset.
  • It is quite weird that C1:IOO-WFS2_YAW_INMON is just oscillating, but the output of this WFS2_YAW filter bank has a slowly increasing offset.
  • We tried zeroing the gain and setting it back to 0.1, to see if some hold function was causing it, but that was not the case. The output went back to a large negative offset and kept increasing.
  • We don't know what else to do. Only this one WFS YAW output is increasing; everything else is at a normal level with no increasing offset or peculiar behavior.
  • We are leaving C1:SUS-MC3_ASCYAW_SW2 off, as it is disrupting the IMC lock.

[Jon walked in, asked him for help]

  • Jon suggested doing a burt restore on the IOO channels.
  • We used (selected through burtgooey):
    burtwb -f /opt/rtcds/caltech/c1/burt/autoburt/snapshots/2021/Mar/19/08:19/c1iooepics.snap -l /tmp/controls_1210319_113410_0.write.log -o /tmp/controls_1210319_113410_0.nowrite.snap -v
  • No luck, the problem persists.
ID 15951 | Mon Mar 22 11:57:21 2021 | Paco, Anchal | Update | SUS | Trying coil actuation balance

[Paco, Anchal]

  • For MC coil balancing we will use the ASC (i.e. WFS) error signals since there are no OPLEV inputs (are there OPLEVs at all?).

Test MC1

  • Using the SUS screen LockIns the plan is to feed excitation(s) through the coil outputs, and look at the ASC(Y/P) error signals.
  • A diaggui xml template was saved in /users/Templates/SUS/MC1-actDiag.xml which was based on /users/Templates/SUS/ETMY-actDiag.xml
  • Before running the measurement, we of course wanted to plug in our new input matrix, so we ran /scripts/SUS/InMatCalc/writeMatrix.py, only to find that it tripped the MC1 watchdog.
    • The SIDE input seemed to rail the most, but we just followed the procedure of temporarily increasing the watchdog max threshold to allow the damping to act and then restoring it.
    • This happened because in the latest iteration of our code we followed advice from the matlab code to ensure the SIDE OSEM -> SIDE DOF matrix element remains positive, but we found that the MC1 SIDE gain (C1:SUS-MC1_SUSSIDE_GAIN) was set to -8000 (instead of a positive value like all the other suspensions).
    • So we decided to try our new input matrix with a positive gain value of 8000 at C1:SUS-MC1_SUSSIDE_GAIN, and we were able to stabilize the optic and acquire lock, but...
    • We saw that the WFS YAW dof started accumulating an offset and disturbing the lock (much like last Friday). We disabled the ASC input button (C1:SUS-MC1_ASCYAW_SW2).
    • This made the lock stable and the IMC autolocker was able to lock, but the offset kept increasing (see Attachment 1).
    • After some time, the offset began to settle exponentially toward a steady-state value of around -3000.
  • We wrote back the old matrix values and changed C1:SUS-MC1_SUSSIDE_GAIN back to -8000, but the ASCYAW offset remained. We're leaving the input disabled again, as we don't know how to fix this. Hopefully it will come back to a small value on its own later in the day like last time (Gautam simply re-enabled the ASCYAW input and it worked).

Test MC3

  • Defeated by MC1, we moved to MC3.
  • Here, the gain value for C1:SUS-MC3_SUSSIDE_GAIN was already positive (+500) so it could directly take our new matrix.
  • We switched off the watchdog, loaded the new matrix, and switched the watchdog back on.
  • The IMC lock was slightly disturbed but survived. There was no unusual activity in the WFS sensor values; however, we saw that the SIDE coil output was slowly accumulating an offset.
  • So we switched off the watchdog before it could trip itself, wrote back the old matrix, and reinstated the status quo.
  • This suggests we need to carefully revisit our recent normalization changes and produce new input matrices that keep the system stable in practice, not just on paper with offline data.
Attachment 1: 210322_MC1_ASCY.pdf
Attachment 2: NewandOldMatrices.tar.gz
  15954   Mon Mar 22 19:07:50 2021 Paco, AnchalUpdateSUSTrying coil actuation balance

We found that the following protocol works for changing the input matrices to new ones (a scripted sketch of the sequence is shown after the list):

  • Shut the PSL shutter C1:PSL-PSL_ShutterRqst. Switch off IMC autolocker C1:IOO-MC_LOCK_ENABLE.
  • Switch off the watchdog, C1:SUS-MC1_LATCH_OFF.
  • Update the new matrix. (In the case of MC1, we need to change the sign of C1:SUS-MC1_SUSSIDE_GAIN for the new matrix.)
  • Switch the watchdog back on, which enables all the coil outputs. Confirm that the optic is damped with just the OSEM sensors.
  • Switch on IMC autolocker C1:IOO-MC_LOCK_ENABLE and open PSL shutter C1:PSL-PSL_ShutterRqst.
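
A minimal scripted sketch of this sequence, assuming pyepics and that writing 0/1 toggles the shutter, autolocker, and watchdog channels the same way as the MEDM buttons do (the matrix write itself is left schematic):

import time
import epics

def swap_input_matrix(optic, matrix_elements):
    """matrix_elements: dict mapping matrix channel name -> new value."""
    epics.caput('C1:PSL-PSL_ShutterRqst', 0)        # shut the PSL shutter
    epics.caput('C1:IOO-MC_LOCK_ENABLE', 0)         # switch off the IMC autolocker
    epics.caput('C1:SUS-%s_LATCH_OFF' % optic, 0)   # switch off the watchdog
    for chan, value in matrix_elements.items():
        epics.caput(chan, value)                    # write the new matrix
        # (for MC1, also flip the sign of C1:SUS-MC1_SUSSIDE_GAIN here)
    epics.caput('C1:SUS-%s_LATCH_OFF' % optic, 1)   # watchdog back on -> coils enabled
    time.sleep(60)                                  # confirm OSEM-only damping settles
    epics.caput('C1:IOO-MC_LOCK_ENABLE', 1)         # autolocker back on
    epics.caput('C1:PSL-PSL_ShutterRqst', 1)        # open the PSL shutter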

We repeated this for MC2 as well and were able to lock. However, we could not do the same for MC3: it became unstable as soon as the cavity was locked, i.e. the WFS loops were making the lock unstable. The instability was different in different attempts, but we didn't try more times as we had to go.


Coil actuation balancing:

  • We set the LOCKIN1 and LOCKIN2 oscillators at 10.5 Hz and 13.5 Hz with amplitudes of 10 counts.
  • We wrote PIT, YAW and butterfly actuation vectors (see the attached text files, and the idealized sketch after this list) on LOCKIN1 and LOCKIN2 for MC1.
  • We measured C1:SUS-MC1_ASCYAW_IN1 and C1:SUS-MC1_ASCPIT_IN1 and compared them against the case with no excitation.
  • We repeated the above steps for MC2, except that we did not use LOCKIN2. LOCKIN2 was found to be already on, at an oscillator frequency of 0.03 Hz with an amplitude of 500 counts, fed to all coils with a gain of 1 (so it was effectively moving the position DOF at 0.03 Hz). When we changed it, it turned back on after we re-enabled the autolocker, so we guess it must be set by some background script and must be important, so we made no changes here. But what is it for?
  • We have gotten some good data for MC1 and MC2 to ponder next.
  • MC1 showed no cross coupling at all, while MC2 showed significant cross coupling between PIT and YAW.
  • Both MC1 and MC2 did not show any cross coupling between butterfly actuation and PIT/YAW dof.
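
For reference, an idealized version of these actuation vectors (the real, coil-balanced ones are in the attached text files; the [UL, UR, LL, LR] coil ordering is an assumption to check against the output matrix screen):

import numpy as np

# Naive (perfectly balanced) actuation vectors on the four face coils,
# ordered [UL, UR, LL, LR].
PIT = np.array([+1., +1., -1., -1.])        # top coils vs bottom coils
YAW = np.array([+1., -1., +1., -1.])        # left coils vs right coils
BUTTERFLY = np.array([+1., -1., -1., +1.])

# Butterfly is orthogonal to both angular vectors, so on a perfectly
# balanced optic it produces no PIT or YAW; any angular signal seen while
# driving it is a direct measure of residual coil imbalance.
assert PIT @ BUTTERFLY == 0 and YAW @ BUTTERFLY == 0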

In other news, the IOO channels died!

  • Right in front of us, the MEDM channels starting with C1:IOO died. See Attachment 8.
  • We are not sure why this happened, but we have reported everything we did above.
  • This happened around the time we were ready to switch the IMC autolocker back on and open the shutter. But now these channels are dead.
  • All optics were restored with old matrices and settings and are damped in good condition as of now.
  • IMC should lock back as soon as someone can restart the EPICS channels and switch on C1:IOO-MC_LOCK_ENABLE and C1:PSL-PSL_ShutterRqst.
Attachment 1: 20210322_MC1_CoilBalancePIT.pdf
Attachment 2: 20210322_MC1_CoilBalanceYAW.pdf
Attachment 3: 20210322_MC1_CoilBalanceBUTT.pdf
Attachment 4: 20210322_MC2_CoilBalancePIT.pdf
Attachment 5: 20210322_MC2_CoilBalanceYAW.pdf
Attachment 6: 20210322_MC2_CoilBalanceBUTT.pdf
Attachment 7: 20210322_IMC_CoilBalance.tar.gz
Attachment 8: image-e6019a14-9cf3-45f7-8f2c-cc3d7ad1c452.jpg
  15955   Tue Mar 23 09:16:42 2021 Paco, AnchalUpdateComputersPower cycled C1PSL; restored C1PSL

So actually, it was the C1PSL channels that had died. We did the following to get them back:

  • We went to this page and tried the telnet procedure, but it was unable to find the host.
  • So we followed the next piece of advice: we went to the 1X1 rack and manually hard shut off the C1PSL computer by holding down the power button until the LEDs went off.
  • We waited 5-7 seconds and switched it back on.
  • By the time we were back in the control room, the C1PSL channels were back online.
  • The mode cleaner, however, was struggling to stay locked; it kept going in and out of lock.
  • So we followed the next piece of advice and did a burt restore, which ran the following command:
    burtwb -f /opt/rtcds/caltech/c1/burt/autoburt/snapshots/2021/Mar/22/17:19/c1psl.snap -l /tmp/controls_1210323_085130_0.write.log -o /tmp/controls_1210323_085130_0.nowrite.snap -v 
  • Now the mode cleaner was locked, but we found that the input switches of the C1:IOO-WFS1_PIT and C1:IOO-WFS2_PIT filter banks were off, which meant that only the YAW sensors were in the loop.
  • We went back to dataviewer and checked when these channels were shut off. See the attachments for time series.
  • It seems this happened yesterday, March 22nd, near 1:00 pm (20:00:00 UTC). We can't find any mention of anyone doing this on the elog, and we left by 12:15 pm.
  • So we shut down the PSL shutter (C1:PSL-PSL_ShutterRqst) and switched off MC autolocker (C1:IOO-MC_LOCK_ENABLE).
  • Switched on C1:IOO-WFS1_PIT_SW1 and C1:IOO-WFS2_PIT_SW1.
  • Turned back on PSL shutter (C1:PSL-PSL_ShutterRqst) and MC autolocker (C1:IOO-MC_LOCK_ENABLE).
  • Mode cleaner locked back easily and now is keeping lock consistently. Everything looks normal.
Attachment 1: MCWFS1and2PITYAW.pdf
Attachment 2: MCWFS1and2PITYAW_Zoomed.pdf
  15961   Thu Mar 25 11:46:31 2021 Paco, AnchalUpdateSUSMC2 Coil Balancing updates

Proof-of-principle

  • We excited PIT and YAW dofs using LOCKIN1 in MC2 on Monday.
  • We analyzed these data with the simple analysis explained in the python notebook of Attachment 1 (also present at /users/anchal/20210323_AnalyszingCoilActuationBalance/).
  • Basically, we estimated the 2x2 cross-coupling matrix from actuated DOF to sensed DOF, inverted it, and applied the inverse to the output matrix to undo the cross coupling (see the sketch after this list).
  • Attachments 2 and 3 show how well we did at undoing the cross coupling.
  • The ratio of 13.5 Hz peaks shows how much coupling is still present.
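
A toy version of that 2x2 step (the coupling numbers are made up; in the real analysis they come from the peak heights in the sensed-DOF spectra):

import numpy as np

# Measured coupling matrix: rows are sensed (PIT, YAW),
# columns are actuated (PIT, YAW).  Numbers are placeholders.
M = np.array([[1.00, 0.08],
              [0.12, 1.00]])

out_old = np.eye(2)                   # old PIT/YAW output-matrix columns (placeholder)
out_new = out_old @ np.linalg.inv(M)  # post-multiply by M^-1 to undo the coupling

print(np.round(M @ out_new, 3))       # ~identity: each excitation now appears
                                      # only in its own sensed DOF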

Going towards 3x3 Coil balancing:

  • In a conversation with Rana yesterday, we understood that we can use MC_F data as POS sensing data out of the loop.
  • So today we repeated the excitation measurements, exciting the POS, PIT and YAW dofs from LOCKIN1 on MC2 and measuring C1:IOO-MC_F, C1:SUS-MC2_ASCPIT_IN1 and C1:SUS-MC2_ASCYAW_IN1.
  • Data from MC_F is converted into units of um using factor 9.57e-8 um/Hz.
  • We changed the excitation amplitude in order to see cross coupling peaks when they were not visible with low excitation.
  • The data were measured with the newly calculated input matrix loaded, which according to our calculations diagonalizes the OSEM sensing matrix.

Some major changes:

  • We found that C1:SUS-MC2_ASCPIT_IN1 showed a broadband increase in noise today by a factor of about 100 in the 0-20 Hz range, relative to Monday.
  • We were not sure why this changed from our 22nd March measurement.
  • We checked whether the gain values in the loops had changed in the last 3 days, but they hadn't.
  • Then we realized that the WFS1_PIT and WFS2_PIT switches that we turned ON on Tuesday were the only changes made to the loop.
  • We turned C1:IOO-WFS1_PIT_SW1 and C1:IOO-WFS2_PIT_SW1 back OFF. This brought the noise level of C1:SUS-MC2_ASCPIT_IN1 back down to what it was on Monday.

 

Attachment 1: CoilActuationBalancing.ipynb.tar.gz
Attachment 2: MC2_CoilBalancePITnorm_excSamePIT.pdf
Attachment 3: MC2_CoilBalanceYAWnorm_excSameYAW.pdf
Attachment 4: 20210325_IMC_CoilBalance.tar.gz
  15970   Fri Mar 26 11:54:37 2021 Paco, AnchalUpdateSUSMC2 Coil Balancing updates

[Paco, Anchal]

  • Today we spent the morning testing the scripts under ~/c1/scripts/SUS/OutMatCalc/ that automate the procedure (which we had been doing by hand) and catch any of the "bad" behavior instances that we have identified. In such instances, the script restores the IMC state smoothly.
  • After some testing and debugging, we managed to get some data for MC2 using ~/c1/scripts/SUS/OutMatCalc/getCrossCouplingData.py
  15972   Mon Mar 29 10:44:51 2021 Paco, AnchalUpdateSUSMC2 Coil Balancing updates

We ran the coil balancing procedure 4 times while iterating through the output matrix optimization.

Attachment 1, pages 1 to 4, shows the progression of the cross coupling from the current output matrix (the theoretical ideal) to the latest iteration. We plot the sensed-DOF ASDs, which we used to determine the cross coupling, with different excitations fed using LOCKIN1: a 13 Hz oscillation of 200 counts amplitude along the vector defined in the output matrix. That means that when we change the output matrix, in subsequent tests we also change the excitation direction along with it.

Unfortunately, we don't see very good optimization over the iterations. While some peaks go down in sensed PIT and sensed POS (through MC_F), we instead see an increase in cross coupling in sensed YAW.


Scripts:

  • For running the tests, we used the script scripts/SUS/OutMatCalc/crossCoupleTest.py and wrote commanding scripts in /users/anchal/20210329_MC2_TestingNewOutMat.
  • The optimization code is in scripts/SUS/OutMatCalc/outMatOptimize.py.
  • The code reads the sensed-DOF data using nds2 and calculates the cross spectral density among the sensed DOFs at the excitation frequencies.
  • This is normalized by the power spectral density of the reference data (no excitation) and the power spectral density of the position data to create a TF estimate.
  • The real part of the sensing matrix thus created is inverted.
  • The inverse matrix is first normalized along each row by its diagonal element (to get 1 on the diagonal) and then multiplied by the previous output matrix to create the new output matrix.
  • Reading the code is probably the best way to understand this algorithm; a condensed sketch of the core steps is below.
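
A condensed sketch of those steps, using a standard CSD/PSD transfer-function estimate (the sample rate, averaging length, and placeholder matrix are made up; the real code fetches data with nds2 and handles the normalization as described above):

import numpy as np
from scipy.signal import csd, welch

def tf_at_line(exc, sensed, fs, f_line, nperseg=None):
    """TF estimate CSD(exc, sensed)/PSD(exc) evaluated at the excitation line."""
    nperseg = nperseg or int(32 * fs)
    f, Pxy = csd(exc, sensed, fs=fs, nperseg=nperseg)
    _, Pxx = welch(exc, fs=fs, nperseg=nperseg)
    k = np.argmin(np.abs(f - f_line))
    return Pxy[k] / Pxx[k]

# With the 3x3 sensing matrix S (actuated POS/PIT/YAW -> sensed POS/PIT/YAW)
# filled from such estimates, the matrix update described above is:
S = np.eye(3)                   # placeholder for the measured sensing matrix
C = np.linalg.inv(S.real)       # invert the real part
d = np.diag(C).copy()
C = C / d[:, None]              # normalize each row by its diagonal element
out_new = np.eye(3) @ C         # previous output matrix (here: identity) times correction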
Attachment 1: MC2OutMatCrossCouple_Old-to-It3.pdf
Attachment 2: 20210329_MC2_CrossCoupleTest.tar.gz
  16233   Thu Jul 1 10:34:51 2021 Paco, AnchalSummaryLSCETMY QPD fixed

Paco worked on aligning the beam splitter to get light onto the ETMY QPD and succeeded in centering it without any other changes to the settings.

  16238   Tue Jul 6 10:47:07 2021 Paco, AnchalUpdateIOORestored MC

MC was unlocked and struggling to recover this morning due to misguided WFS offsets. In order to recover from this kind of issue, we

  1. Cleared the bogus WFS offsets
  2. Used the MC alignment sliders to change MC1 YAW from -0.9860 to -0.8750 until we saw the lowest order mode transmission on the video monitor.
  3. With MC Trans sum at around ~ 500 counts, we lowered the C1:IOO-WFS_TRIGGER_THRESH_ON from 5000 to 500, and the C1:IOO-WFS_TRIGGER_MON from 3.0 to 0.0 seconds and let the WFS integrators work out some nonzero angular control offsets.
  4. Then, the MC Trans sum increased to about 2000 counts but started oscillating slowly, so we restored the delayed loop trigger from 0.0 to 3.0 seconds and saw the MC Trans sum reach its nominal value of ~ 14000 counts over a few minutes.

The MC is now restored and the plan is to let it run for a few hours so the offsets converge; then run the WFS relief script.

  5418   Thu Sep 15 16:45:59 2011 PaulUpdateSUSITMY and SRM Oplev status

Today I worked on getting the ITMY and SRM oplevs back into working order. I aligned the SRM path back onto the QPD. I put excitations on ITMY and SRM in pitch and yaw and observed the beams at the QPDs to check for clipping. They looked free of clipping.

 
Measurements of the beam power at various points:
 
Straight after the laser - 7.54mW
After the BS in the SRM path - 1.59mW
After the BS in the ITMY path - 3.24mW
Incident on the SRM QPD - 0.03mW
Incident on the ITMY QPD - 0.25mW
 
Counts registered from the QPD sum channels:
 
SRM QPD SUM dark count - 1140
SRM QPD SUM bright count - 3250
 
ITMY QPD SUM dark count - 150
ITMY QPD SUM bright count - 12680
 
The power incident on the SRM QPD seems very low compared with the ITMY QPD. Is the SRM mirror coating not very reflective for the He-Ne laser? There are some back reflections from the lenses, which we should be careful of, to avoid scattering.
  5422   Thu Sep 15 18:24:54 2011 PaulUpdateSUSITMY and SRM Oplev current status - comparison with ITMY

Just to find out where we are currently, I plotted the ITMY and SRM oplev spectra along with the ETMY oplev spectra. ETMY seems to be very good, so comparing against it is useful to see how much we have to improve. The SRM power spectrum is around 2 orders of magnitude higher than ETMY's over pretty much the whole measurement band. The ITMY power spectrum is not as bad as the SRM's above about 60 Hz. The next thing to do is to check the dark noise levels of the ITMY and SRM QPDs.

Attachment 1: oplev_spectra_comparison.pdf
  5423   Thu Sep 15 18:31:27 2011 PaulUpdateSUSITMY and SRM Oplev current status - comparison with ITMY

Quote:

Just to find out where we are currently, I plotted the ITMY and SRM oplev spectra along with the ETMY oplev spectra. ETMY seems to be very good, so comparing against it is useful to see how much we have to improve. The SRM power spectrum is around 2 orders of magnitude higher than ETMY's over pretty much the whole measurement band. The ITMY power spectrum is not as bad as the SRM's above about 60 Hz. The next thing to do is to check the dark noise levels of the ITMY and SRM QPDs.

 The title of this post should of course have been " ... - comparison with ETMY" not " ... - comparison with ITMY"

  5427   Thu Sep 15 22:26:32 2011 PaulUpdateSUSITMY Oplev QPD dark noise PSD

I took a dark noise measurement for the ITMY QPD, for comparison with measurements of the oplev noise later on. Initially I was plotting the data from test points after multiplication by the oplev matrix (i.e. OLPIT_IN1 / OLYAW_IN1), but found that the dark noise level seemed higher than the bright noise level (!?). Kiwamu realised that this is because at that test point the data has already been divided by QPD SUM, making the dark noise level appear greater than the bright level, since QPD SUM is much smaller for the dark measurements. The way around this was to record the direct signals from each quadrant before the division. I took a power spectrum of the dark noise from each quadrant, added them in quadrature, and divided by QPD SUM at the end to get an uncalibrated PSD. Next I will convert these into equivalent pitch and yaw noise spectra. Calibrating the plots in radians per root Hz requires some specific knowledge of the oplev path, so I won't do this until I have adjusted the path.
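
A sketch of that recipe (the sample rate and averaging length are assumptions):

import numpy as np
from scipy.signal import welch

def dark_asd(quadrants, mean_sum, fs=2048.0):
    """quadrants: four raw quadrant time series recorded before the division
    by SUM; mean_sum: average QPD SUM during the measurement.  Returns the
    frequency vector and an uncalibrated amplitude spectral density."""
    total_psd = 0.0
    for q in quadrants:
        f, psd = welch(q, fs=fs, nperseg=int(16 * fs))
        total_psd = total_psd + psd              # independent quadrants: PSDs add,
    return f, np.sqrt(total_psd) / mean_sum      # i.e. ASDs add in quadrature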

Attachment 1: ITM_dark_QPD_PSD.pdf
  5429   Fri Sep 16 00:08:30 2011 PaulUpdateSUSITMY Oplev QPD dark and bright noise spectra

I tried again at plotting the ITMY QPD noise spectra for dark and bright operation. Before, we had the strange situation where the dark noise seemed higher, but Kiwamu noticed this was caused by the division by SUM upstream of the testpoint I was looking at. This time I tried just multiplying by the measured SUM for bright and dark to normalise the spectra against each other. The results look more reasonable now; the dark noise is lower than the bright noise, for a start! However, the dark noise spectrum now doesn't look the same as the one I showed in my previous post.

Attachment 1: ITMY_oplev_dark_noise_vs_bright_noise.pdf
  5432   Fri Sep 16 14:03:53 2011 PaulUpdateSUSSRM oplev QPD noise measurement

I checked the dark and bright noise of the SRM oplev QPD. The SRM QPD has a rather high dark SUM level of 478 counts. The dark noise for the SRM QPD looked a little high in the plot against the bright noise (see the first attachment), so I plotted it together with the ITMY QPD dark noise (see the second attachment). It seems that the SRM QPD has a much higher dark noise level than the ITMY one! In case anyone is wondering, to make these traces I record the data from the pitch and yaw test points, then multiply by the SUM (to correct for the fact that the test point signal has already been divided by SUM). I will check the individual quadrants of the SRM QPD to see if one in particular is very noisy. If so, we/I should probably fix it.

Attachment 1: SRM_oplev_dark_noise_vs_bright_noise.pdf
Attachment 2: SRM_ITMY_QPD_dark_noise_comparison.pdf
  5436   Fri Sep 16 16:34:54 2011 PaulUpdateSUSITMY SRM oplev telescope plan

I've calculated a suitable collimating telescope for the ITMY/SRM oplev laser, based on the specs for the soon-to-arrive 2mW laser (model 1122/P) available here: http://www.jdsu.com/ProductLiterature/hnlh1100_ds_cl_ae.pdf

Based on the fact that the 'beam size' value and 'divergence angle' value quoted don't match up, I am assuming that the beam radius value of 315um is _not_ the waist size, but rather the beam size at the output coupler. From the divergence angle I calculated a 155um waist (zR = 12cm). This gives the quoted beam size of about 316um at a distance of 8.5" from the waist. This makes me think that the output coupler is curved and the waist is at the back of the laser, or at least 8.5" from the output coupler.
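
A quick check of that arithmetic, assuming the datasheet divergence of 1.3 mrad is the far-field half-angle:

import numpy as np

lam = 632.8e-9                  # HeNe wavelength [m]
theta = 1.3e-3                  # assumed far-field half-angle divergence [rad]

w0 = lam / (np.pi * theta)      # far-field relation: theta = lambda / (pi * w0)
zR = np.pi * w0**2 / lam        # Rayleigh range
z = zR * np.sqrt((315e-6 / w0)**2 - 1)   # distance where w(z) reaches 315 um

print(f"w0 = {w0*1e6:.0f} um, zR = {zR*100:.0f} cm, z(315um) = {z/0.0254:.1f} in")
# -> w0 ~ 155 um, zR ~ 12 cm, z ~ 8.3 in, consistent with the numbers above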

The collimating telescope gives a waist of size 1142um (zR = 6.47m) at a distance of 1.427m from the original laser waist, using the following lens combo:

L1: f = -0.15m @ 0.301m
L2: f = +0.30m @ 0.409m

This should be fine to get a small enough spot size (1-2mm) on the QPDs.

 

Attachment 1: ITMY_SRM_telescope.png
  5437   Fri Sep 16 17:09:07 2011 PaulUpdateSUSITMX oplev plan

I just drew a basic picture of how the ITMX oplev path could be reworked to minimise the number of optics in the path. The only possible problem with this might be the turning mirror onto ITMX getting in the way of the collimating lenses; that should be easy to solve, though. Does anyone know if there is an ITMX pick-off beam I should be careful to avoid?

Attachment 1: ITMX_oplev_plan.png
  5442   Fri Sep 16 22:11:21 2011 PaulUpdateSUSITMY transfer function

First of all I moved the lenses on the ITMY/SRM oplev path to get a smaller spot size on the QPDs. I couldn't get the beam analyzer to work though, so I don't know quite how successful this was. The software brought up the error "unable to connect to framegrabber" or something similar. I don't think the signal from the head was being read by the software. I will try to get the beam analyzer working soon so that we can characterize the other oplev lasers and get decent spot sizes on the QPDs. I searched the elog for posts about the analyzer, and found that it has been used recently, so maybe I'm just doing something wrong in using it. 

After this I measured the transfer function for ITMY oplev yaw. I did a swept sine excitation of ITMY in yaw with an amplitude of 500, and recorded the OSEM yaw values and the oplev yaw values. This should show a flat response, as both the QPD and the OSEMs should have a flat frequency response in the measurement band. This measurement should therefore just yield a calibration from OSEM yaw to oplev yaw. If the OSEM yaw values were already calibrated in radians, we would then immediately have a calibration from oplev yaw values to radians. However, as far as I'm aware, there is no calibration factor available from OSEM yaw values to radians. Anyway, the TF I measured did not appear to be very flat (see the attached plot). Kiwamu suggested I check the coherence between the OSEM measurements and the oplev QPD measurements - if the coherence is less than 1, the TF is not reliable. Indeed the coherence was poor for this measurement, probably because at frequencies above the pendulum frequency the excitation amplitude of 500 was not enough to cause a measurable change in the optic angle. So the attached plot is not very useful yet, but I learned something while making it.
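
For next time, a sketch of that coherence check (the sample rate and coherence threshold are assumptions; low-coherence bins are masked rather than trusted):

import numpy as np
from scipy.signal import coherence, csd, welch

def tf_with_coherence(osem_yaw, oplev_yaw, fs=2048.0, nperseg=None, c_min=0.9):
    """OSEM->oplev TF estimate; bins with coherence below c_min are masked."""
    nperseg = nperseg or int(8 * fs)
    f, Pxy = csd(osem_yaw, oplev_yaw, fs=fs, nperseg=nperseg)
    _, Pxx = welch(osem_yaw, fs=fs, nperseg=nperseg)
    _, Cxy = coherence(osem_yaw, oplev_yaw, fs=fs, nperseg=nperseg)
    tf = Pxy / Pxx
    tf[Cxy < c_min] = np.nan        # don't trust low-coherence bins
    return f, tf, Cxy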

 

Attachment 1: ITMY_osem_to_oplev_TF.pdf
  5443   Fri Sep 16 22:51:52 2011 PaulUpdateSUSCalibration plan for the oplevs

In order to estimate the amount of noise that the oplevs are injecting into the GW channel, we first need to calibrate the oplev signals in terms of angular motion of the optic. I said in my previous post that there wasn't a calibration factor from OSEM values to radians, but I found that Kakeru had estimated this in 2009 - see entry 1413. However, Kakeru found that it was quite a rough estimate, and that it didn't agree well with his calibrated oplev values. He does quote the 2V/mm calibration factor for the OSEM readings, though - does anyone know the provenance of this factor? I searched for OSEM calibration and found nothing.

 
Kiwamu and Suresh suggested a way to calibrate the oplevs without needing to calibrate the OSEMs in the way that Kakeru describes in entry 1413. In fact, this should give a calibration for the OSEMs _and_ the oplevs. The method should be as follows (a numerical sketch of step 2 follows the list):
 
1) Change the coil driver DC values to tip or tilt the optic. Measure the resulting change in spot position at a known distance from the optic, perhaps just using a ruler. Record the spot position and OSEM values for each coil driver value. This will definitely require a smaller spot size, so I'll implement the new telescopes first.
 
2) Knowing the length of the lever arm from the optic to the spot measurement position, we can calibrate the OSEM values to radians.
 
3) We can now put the beam onto the oplev QPD, and either change the coil driver values again in the same way (but over a smaller range), or excite the test mass in pitch or yaw, this time measuring both the OSEM values and the oplev QPD values. Since we can already convert from OSEM values to radians, we can now convert from oplev values to radians too.
 
4) I should be careful to consider the input sensing matrices for both the OSEMs and the oplevs in these measurements. Should I divide those out of the calibration, so that if they change the calibration factor doesn't change too?
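
A numerical sketch of step 2 (all numbers are made up; the one piece of physics is that a mirror tilt of theta steers the reflected beam by 2*theta, so a spot displacement x at lever arm L means theta = x/(2L)):

import numpy as np

L = 1.0                                                  # optic-to-ruler lever arm [m]
spot_m = np.array([-6.1, -3.0, 0.1, 3.1, 6.0]) * 1e-3    # measured spot positions [m]
osem_counts = np.array([-121., -60., 1., 61., 119.])     # OSEM yaw readings [counts]

theta = spot_m / (2 * L)                                 # optic angle [rad]
rad_per_count = np.polyfit(osem_counts, theta, 1)[0]     # step 2): OSEM calibration
print(f"OSEM calibration ~ {rad_per_count:.2e} rad/count")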
  5457   Mon Sep 19 12:23:30 2011 PaulUpdateSUSITMY and SRM oplev beam size reduced + next steps

I replaced the lenses that were there with a -150mm lens followed by a +250mm lens. This gave a significantly reduced beam size at the QPDs. With the beam analyzer up and running it should be possible to optimize this later this afternoon. Next I will remove the SRM QPD from the path and make measurements of the beam spot position movement and corresponding OSEM values for different DC mirror offsets. I will then repeat the process for ITMY.

  5458   Mon Sep 19 13:13:10 2011 PaulUpdateSUSITMY oplev available for use: SRM not for the moment

 I've got the bench set up for the measurement of the beam spot change with DC SRM alignment offsets. The ITMY oplev is aligned and fine to use, but the SRM one isn't until further notice (probably a couple of hours).

  5460   Mon Sep 19 15:30:22 2011 PaulUpdateSUSSRM oplev OSEM yaw calibration curve

I made the first measurements towards the oplev calibration: calibrating the oplev in SRM YAW. The measurements seemed fine; I had a range of -1.5 to 1.5 in SRM DC alignment before clipping on mirrors on the oplev bench became a problem. This was plenty to get a decent fit of spot position against DC alignment value - see the attached plot. The fitted gradient was -420um per oplev yaw count. I calculated the oplev yaw values as UL+LL-UR-LR. Pitch next.
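
For reference, the quadrant combinations in play here (the PIT combination and the SUM normalization of the test points are assumptions based on the earlier QPD entries):

def oplev_signals(UL, UR, LL, LR):
    """Oplev signals from the four quadrant readings."""
    total = UL + UR + LL + LR
    yaw = (UL + LL) - (UR + LR)       # left minus right, as used for this fit
    pit = (UL + UR) - (LL + LR)       # top minus bottom (assumed)
    return pit / total, yaw / total, total   # pit/yaw test points are SUM-normalized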

Attachment 1: SRM_YAW_calib_curve.png