  17104   Thu Aug 25 15:24:06 2022   Paco   HowTo   Electronics   RFSoC 2x2 board -- fandango

[Paco, Chris Stoughton, Leo -- remote]

This morning Chris came over to the 40m lab to help us get the RFSoC board going. After checking out our setup, we decided to do a very basic series of checks to see if we can at least get the ADCs to run coherently (independent of the DACs). For this I borrowed the Marconi 2023B from inside the lab and set its output to 1.137 GHz, 0 dBm. Then, I plugged it into ADC1 and ran the usual spectrum analyzer notebook on the rfsoc jupyter lab server. Attachments #1-2 show the screen-captured PSDs for ADCs 0 and 1 respectively, with the 1137 MHz peaks clearly visible.

The fast ADCs are indeed reading our input signals.
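
For reference, a minimal offline version of this check, assuming the raw samples can be pulled into Python (the sample rate, the capture step, and the array names here are placeholders, not the actual rfsoc notebook):

    import numpy as np
    from scipy import signal

    fs = 4.096e9                          # assumed ADC sample rate [Hz]
    # adc_counts = ...                    # raw samples captured from ADC1 (hypothetical)
    adc_counts = np.random.randn(2**18)   # placeholder so the sketch runs stand-alone

    # Welch PSD, then locate the strongest bin and compare against the 1.137 GHz input
    f, psd = signal.welch(adc_counts, fs=fs, nperseg=2**14)
    print(f"peak at {f[np.argmax(psd)]/1e6:.1f} MHz (expect ~1137 MHz for the Marconi input)")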


Before this simple test, we reached out to Leo at Fermilab for some remote assistance on building up our minimal working firmware. For this, Chris started a new Vivado project on his laptop and realized the rfsoc 2x2 board files are not included in it by default. To add them, we had to go into Tools > Settings and add the 2020.1 Vivado Xilinx shop board repository path for the rfsoc2x2 v1.1 files. After a bit of struggling, uninstalling, reinstalling, and restarting Vivado, we managed to get into the actual overlay design. There, with Leo's assistance, we dropped in the Zynq MPSoC core (this includes the main interface drivers for the rfsoc 2x2 board). We then dropped in an RF data converter IP block, which we customized to use the right PLL settings. In the System Clocking tab, the Reference Clock was changed to 409.6 MHz (the default was 122.88 MHz). This was not straightforward, as the default sampling rate of 2.00 GSPS is not an integer multiple of that reference, so we also had to update the sampling rate to 4.096 GSPS. Then we saw that the maximum available Clock Out option was 256 MHz (we need >= 409.6 MHz), so Leo suggested we drop in a Clocking Wizard block to provide a 512 MHz clock input for the rfdc. The final rfdc settings are captured in Attachment #3. The Clocking Wizard was added and configured on its Output Clocks tab to provide a Requested Output Freq of 512 MHz; its final settings are captured in Attachment #4. Finally, we connected the blocks as shown in Attachment #5.
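
As a quick sanity check of the clock plan (numbers taken from the text above; the integer-ratio reasoning is our reading of the constraints, not Xilinx documentation):

    # RFDC clock-plan sanity check (values from this entry)
    ref_clk = 409.6e6     # reference clock into the RF data converter [Hz]
    fs_adc  = 4.096e9     # ADC sampling rate [Hz]
    clk_wiz = 512.0e6     # Clocking Wizard output fed to the rfdc [Hz]

    print(fs_adc / ref_clk)     # 10.0 -> sampling rate is an integer multiple of the reference
    print(clk_wiz >= 409.6e6)   # True -> satisfies the >= 409.6 MHz clock-out requirement
    print(fs_adc / clk_wiz)     # 8.0  -> samples per clock on the output stream (assumed interpretation)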

We will continue with this design tomorrow.

Attachment 1: adc0_1137MHz.png
Attachment 2: adc1_1137MHz.png
Attachment 3: rfdc_PLLsettings.png
Attachment 4: clockingwiz_settings.png
Attachment 5: blockIPdiag.png
  17105   Thu Aug 25 16:05:51 2022   Yehonathan   Update   SUS   Trying to fix some SUS

I tried to lock the Y/X arms to take some noise budget data. However, we noticed that TRX/Y were oscillating coherently together (by tens of percent), meaning some input optics, most likely PR2/3, are swinging. There was no way I could do noise budgeting in this situation.

I set out to debug these optics. First, I noticed that the side motion of PR2 is very weakly damped.

The gain of the side damping loop (C1:SUS-PR2_SUSSIDE_GAIN) was increased from 10 to 150, which seems to have fixed the issue. Attachment 1 shows the current step response of the PR2 DOFs. The residual Qs look good but there are still some cross-couplings, especially when kicking POS. Need to do some balancing there.

PR3 fixing was less successful in the beginning. I increased the following gains:

C1:SUS-PR3_SUSPOS_GAIN: 0.5 -> 30

C1:SUS-PR3_SUSPIT_GAIN: 3 -> 30

C1:SUS-PR3_SUSYAW_GAIN: 1 -> 30

C1:SUS-PR3_SUSSIDE_GAIN: 10 -> 50

But the residual Q was still > 10. Then I checked the input matrix and noticed that UL->PIT is -0.18 while UR->PIT is 0.39. I changed UL->PIT (C1:SUS-PR3_INMATRIX_2_1) to +0.18. Now the Q is about 7. I continued optimizing the gains.

Was able to increase C1:SUS-PR3_SUSSIDE_GAIN: 50 -> 100.

Attachment 2 shows the step response of PR3. The change to the input matrix entry was very ad hoc; it would probably be good to run a systematic tuning. I have to leave now, and the IFO is in a very misaligned state; PR3/2 should be moved to bring it back.
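
For future reference, a minimal sketch of how a kick/step test like this could be scripted (the channel-name pattern follows this entry, but the kick size, record length, and the use of pyepics are assumptions rather than the procedure actually used):

    import time
    import numpy as np
    from epics import caget, caput                   # pyepics

    optic, dof, kick = "PR2", "SUSSIDE", 10000       # hypothetical kick size in counts
    offset_ch = f"C1:SUS-{optic}_{dof}_OFFSET"       # assumed offset channel naming
    err_ch    = f"C1:SUS-{optic}_{dof}_IN1"

    caput(offset_ch, caget(offset_ch) + kick)        # apply the step
    t0, data = time.time(), []
    while time.time() - t0 < 60:                     # record ~60 s of the error point
        data.append(caget(err_ch))
        time.sleep(0.1)
    caput(offset_ch, caget(offset_ch) - kick)        # remove the step
    print("residual rms:", np.std(data[-100:]))      # crude check that the ringdown has settled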

Attachment 1: PR2_Step_Response_Test_2022-08-25_16-23.pdf
Attachment 2: PR3_Step_Response_Test_2022-08-25_18-37.pdf
  17106   Thu Aug 25 16:39:31 2022   Cici   Update   General   I have learned the absolute basics of github

I have now added code/data to my github repository. (it's the little victories)

  17107   Fri Aug 26 12:46:07 2022   Cici   Update   General   Progress on fitting PZT resonances

Here is an update on how fitting the resonances is going - I've been modifying parameters by hand and seeing the effect on the fit. Still a work in progress. The magnitude is fitting pretty well; the phase is very confusing. I attempted vectfit again, but I can't constrain the number of poles and zeros with the code I have, and I still get a nonsensical output with 20 poles and 20 zeros. Here is a plot with my fit so far, and a zip file with my Moku data of the resonances and the code I'm using to plot.
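
A minimal sketch of a constrained fit - fitting magnitude and phase together with a fixed, small number of poles instead of letting vectfit pick 20 - is below. The model, initial guess, and placeholder data are assumptions for illustration, not the code in the zip file:

    import numpy as np
    from scipy.optimize import least_squares

    # f: frequency [Hz], H: measured complex PZT transfer function (placeholders here)
    f = np.logspace(4, 6, 500)
    H = 1.0 / (1 - (f / 3e5)**2 + 1j * f / (3e5 * 30))

    def model(p, f):
        k, f0, Q = p                                  # gain, resonance frequency, quality factor
        return k / (1 - (f / f0)**2 + 1j * f / (f0 * Q))

    def resid(p, f, H):
        r = model(p, f) - H                           # complex residual -> fits mag and phase at once
        return np.concatenate([r.real, r.imag])

    fit = least_squares(resid, x0=[1.0, 2.5e5, 20.0], args=(f, H))
    print(fit.x)                                      # recovered [k, f0, Q]

More resonances can be added as extra factors in the model while keeping the pole count fixed by hand.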

Attachment 1: PZT_fit.zip
Attachment 2: AUX_PZT_Actuator_narrow_fit_1.pdf
  17108   Fri Aug 26 14:05:09 2022   Tega   Update   Computers   rack reshuffle proposal for CDS upgrade

[Tega, Jamie]

Here is a proposal for what we would like to do in terms of reshuffling a few pieces of rack-mounted equipment for the CDS upgrade.

  • Frequency Distribution Amp - Move the unit from 1X7 to 1X6 without disconnecting the attached cables. Then disconnect power and signal cables one at a time to enable optimum rerouting for strain relief and declutter.  

 

  • GPS Receiver Tempus LX - Move the unit from 1X7 to 1X6 without disconnecting the attached cables. Then disconnect power and signal cables one at a time to enable optimum rerouting for strain relief and declutter.

 

  • PEM & ADC Adapter - Move the unit from 1X7 to 1X6 without disconnecting the attached cables. Disconnect the single signal cable from the rear of the ADC adapter to allow for optimum rerouting for strain relief.

 

  • Martian Network Switch - Make a note of all connections, disconnect them, move the switch to 1X7 and reconnect ethernet cables. 

 

  • MARTIAN NETWORK SWITCH CONNECTIONS

    PORT  LABEL                             PORT  LABEL
    1     Tempus LX (yellow, unlabeled)     13    FB1
    2     1Y6 HUB                           14    FB
    3     C0DCU1                            15    NODUS
    4     C1PEM1                            16
    5     RFM-BYPASS                        17    CHIARA
    6     MEGATRON/PROCYON                  18
    7     MEGATRON                          19    CISC/C1SOSVME
    8     BR40M                             20    C1TESTSTAND [blue/unlabelled]
    9     C1DSCL1EPICS0                     21    JetStar [blue/unlabelled]
    10    OP340M                            22    C1SUS [purple]
    11    C1DCUEPICS                        23    unknown [88/purple/goes to top-back rail]
    12    C1ASS                             24    unknown [stonewall/yellow/goes to top-front rail]

I believe all of this can be done in one go followed by CDS validation. Please comment so we can improve the plan. Should we move FB1 to 1X7 and remove old FB & JetStor during this work?

Attachment 1: Reshuffling proposal

Attachment 2: Front of 1X7 Rack

Attachment 3: Rear of 1X7 Rack

Attachment 4: Front of 1X6 Rack

Attachment 5: Rear of 1X6 Rack

Attachment 6: Martian switch connections

Attachment 1: rack_change_proposal.pdf
Attachment 2: IMG_20220826_131331042.jpg
Attachment 3: IMG_20220826_131153172.jpg
Attachment 4: IMG_20220826_131428125.jpg
Attachment 5: IMG_20220826_131543818.jpg
Attachment 6: IMG_20220826_142620810.jpg
  17109   Sun Aug 28 23:14:22 2022   Jamie   Update   Computers   rack reshuffle proposal for CDS upgrade

@tega This looks great, thank you for putting this together.  The rack drawing in particular is great.  Two notes:

  1. In "1X6 - proposed" I would move the "PEM AA + ADC Adapter" down lower in the rack, maybe where "Old FB + JetStor" are, after removing those units since they're no longer needed.  That would keep all the timing stuff together at the top without any other random stuff in between them.  If we can't yet remove Old FB and the JetStor then I would move the VME GPS/Timing chassis up a couple units to make room for the PEM module between the VME chassis and FB1.
  2. We'll eventually want to move FB1 and Megatron into 1X7, since it seems like there will be room there.  That will put all the computers into one rack, which will be very nice.  FB1 should also be on the KVM switch as well.

I think most of this work can be done with very little downtime.

  17110   Mon Aug 29 13:33:09 2022   JC   Update   General   Lab Cleanup

The machine shop looked like a mess this morning, so I cleaned it up. All power tools are now placed in the drawers in the machine shop. Let me know if there are any questions about where anything is placed.

Attachment 1: EDE63209-D556-41F1-9BF2-89CD78E3D7B7.jpeg
  17111   Mon Aug 29 15:15:46 2022   Tega   Update   Computers   3 FEs from LLO got delivered today

[JC, Tega]

We got the 3 front-ends from LLO today. The contents of each box are:

  1. FE machine
  2. OSS adapter card for connecting to I/O chassis
  3. PCI riser cards (x2)
  4. Timing Card and cable
  5. Power cables, mounting brackets and accompanying screws
Attachment 1: IMG_20220829_145533452.jpg
Attachment 2: IMG_20220829_144801365.jpg
  17112   Mon Aug 29 18:25:12 2022   Cici   Update   General   Taking finer measurements of the actuator transfer function

Took finer measurements of the x-arm aux laser actuator transfer function (10 kHz - 1 MHz, 1024 pts/decade) using the Moku.

--------------------------------------

I took finer measurements using the Moku by splitting the measurement into 4 sections (10-32 kHz (32 kHz ~ 10^4.5 Hz), 32-100 kHz, 100-320 kHz, 320-1000 kHz) and then stitching them together. I took 25 measurements of each (+ a bonus in case my counting was off), plotted them in the attached notebook, and calculated/plotted the standard deviation of the magnitude (normalized for DC offset). I could not upload the plots to the ELOG as PDFs, but they are in the .zip file.
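
A minimal sketch of the averaging/stdev step (the array shapes and the DC-normalization choice are assumptions; the real analysis lives in the attached notebook):

    import numpy as np

    # mags: (n_sweeps, n_freq) magnitude data from the repeated Moku sweeps (hypothetical)
    mags = np.abs(np.random.randn(25, 1024)) + 10.0   # placeholder so the sketch runs

    # normalize each sweep by its lowest-frequency value to remove sweep-to-sweep DC offset
    mags_norm = mags / mags[:, :1]

    mean_mag = mags_norm.mean(axis=0)
    std_mag  = mags_norm.std(axis=0, ddof=1)          # sample standard deviation per frequency bin
    print(std_mag.max())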

--------------------------------------

Next steps are to do the same stdev calculation for phase, which shouldn't take long, and to use the vectfit of this better data to create a PZT inversion filter.

Attachment 1: PZT_TF_fine.png
Attachment 2: PZT_TF_fine_mag_stdev.png
Attachment 3: ATF_fine.zip
  17113   Tue Aug 30 15:21:27 2022   Tega   Update   Computers   3 FEs from LHO got delivered today

[Tega, JC]

We received the remaining 3 front-ends from LHO today. They each have a timing card and an OSS host adapter card installed. We also received 3 dolphin DX cards. As with the previous packages from LLO, each box contains a rack mounting kit for the supermicro machine.

Attachment 1: IMG_20220830_144925325.jpg
Attachment 2: IMG_20220830_142307495.jpg
Attachment 3: IMG_20220830_143059443.jpg
  17114   Wed Aug 31 00:32:00 2022   Koji   Update   General   SOS and other stuff in the clean room

Salvage these (and any other things). Wrap and double-pack nicely. Put the labels. Store them and record the location. Tell JC the location.

Attachment 1: PXL_20220831_015137480.jpg
Attachment 2: PXL_20220831_015203941.jpg
Attachment 3: PXL_20220831_015230288.jpg
Attachment 4: PXL_20220831_015249451.jpg
Attachment 5: PXL_20220831_015424383.jpg
Attachment 6: PXL_20220831_015433113.jpg
  17115   Wed Aug 31 00:46:56 2022   Koji   Update   General   Vertex Lab area to be cleaned

As marked up in the photos.

 

Attachment 5: The electronics units removed. Cleaning half way down. (KA)

Attachment 6: Moved most of the units to 1X3B rack ELOG 17125 (KA)

Attachment 1: PXL_20220831_015758208.jpg
Attachment 2: PXL_20220831_015805113.jpg
Attachment 3: PXL_20220831_015816628.jpg
Attachment 4: PXL_20220831_015826533.jpg
Attachment 5: PXL_20220831_015844401.jpg
Attachment 6: PXL_20220831_015854055.jpg
Attachment 7: PXL_20220831_015903704.jpg
Attachment 8: PXL_20220831_015919376.jpg
Attachment 9: PXL_20220831_021708873.jpg
  17116   Wed Aug 31 01:22:01 2022   Koji   Update   General   Along the X arm part 1

 

Attachment 5: RF delay line was accommodated in 1X3B. (KA)

Attachment 1: PXL_20220831_015945610.jpg
Attachment 2: PXL_20220831_020024783.jpg
Attachment 3: PXL_20220831_020039366.jpg
Attachment 4: PXL_20220831_020058066.jpg
Attachment 5: PXL_20220831_020108313.jpg
Attachment 6: PXL_20220831_020131546.jpg
Attachment 7: PXL_20220831_020145029.jpg
Attachment 8: PXL_20220831_020203254.jpg
Attachment 9: PXL_20220831_020217229.jpg
  17117   Wed Aug 31 01:24:48 2022   Koji   Update   General   Along the X arm part 2

 

 

Attachment 1: PXL_20220831_020307235.jpg
Attachment 2: PXL_20220831_020333966.jpg
Attachment 3: PXL_20220831_020349163.jpg
Attachment 4: PXL_20220831_020355496.jpg
Attachment 5: PXL_20220831_020402798.jpg
Attachment 6: PXL_20220831_020411566.jpg
Attachment 7: PXL_20220831_020419923.jpg
Attachment 8: PXL_20220831_020439160.jpg
Attachment 9: PXL_20220831_020447841.jpg
  17118   Wed Aug 31 01:25:37 2022   Koji   Update   General   Along the X arm part 3

 

 

Attachment 1: PXL_20220831_020455209.jpg
Attachment 2: PXL_20220831_020534639.jpg
Attachment 3: PXL_20220831_020556512.jpg
Attachment 4: PXL_20220831_020606964.jpg
Attachment 5: PXL_20220831_020615854.jpg
Attachment 6: PXL_20220831_020623018.jpg
Attachment 7: PXL_20220831_020640973.jpg
Attachment 8: PXL_20220831_020654579.jpg
Attachment 9: PXL_20220831_020712893.jpg
  17119   Wed Aug 31 01:30:53 2022   Koji   Update   General   Along the X arm part 4

Behind the X arm tube

Attachment 1: PXL_20220831_020757504.jpg
Attachment 2: PXL_20220831_020825338.jpg
Attachment 3: PXL_20220831_020856676.jpg
Attachment 4: PXL_20220831_020934968.jpg
Attachment 5: PXL_20220831_021030215.jpg
  17120   Wed Aug 31 01:53:39 2022   Koji   Update   General   Along the Y arm part 1

 

 

Attachment 1: PXL_20220831_021118213.jpg
Attachment 2: PXL_20220831_021133038.jpg
Attachment 3: PXL_20220831_021228013.jpg
Attachment 4: PXL_20220831_021242520.jpg
Attachment 5: PXL_20220831_021258739.jpg
Attachment 6: PXL_20220831_021334823.jpg
Attachment 7: PXL_20220831_021351076.jpg
Attachment 8: PXL_20220831_021406223.jpg
Attachment 9: PXL_20220831_021426110.jpg
  17121   Wed Aug 31 01:54:45 2022   Koji   Update   General   Along the Y arm part 2
Attachment 1: PXL_20220831_021459596.jpg
Attachment 2: PXL_20220831_021522069.jpg
Attachment 3: PXL_20220831_021536313.jpg
Attachment 4: PXL_20220831_021544477.jpg
Attachment 5: PXL_20220831_021553458.jpg
Attachment 6: PXL_20220831_021610724.jpg
Attachment 7: PXL_20220831_021618209.jpg
Attachment 8: PXL_20220831_021648175.jpg
  17122   Wed Aug 31 11:39:48 2022   Yehonathan   Update   LSC   Updated XARM noise budget

{Radhika, Paco, Yehonathan}

For educational purposes we updated the XARM noise budget and added the POX11 calibrated dark noise contribution (attachment).

Attachment 1: Screenshot_2022-08-31_11-38-46.png
  17123   Wed Aug 31 12:57:07 2022   rana   Summary   ALS   control of ALS beat freq from command line - easy

The PZT sweeps that we've been making to characterize the ALS-X laser should probably be discarded - the DFD was not set up correctly for this during the past few months.

Since the DFD only had a peak-to-peak range of ~5 MHz, whenever the beat frequency drifted out of the linear range (~2-3 MHz), the data would have an arbitrary gain. Since the drift was actually more like 50 MHz, it meant that different parts of a single sweep could have an arbitrary gain and sign !!! This is not a good way to measure things.

I used an ezcaservo to keep the beat frequency fixed. The attached screenshot shows the command line. We read back the unwrapped beat frequency from the phase tracker and feed back on the PSL's NPRO temperature. During this, the lasers were not locked to any cavities (shutters closed, but servos not disabled).
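
A minimal pure-Python version of such a slow servo is sketched below; the channel names, gain, and loop cadence are illustrative assumptions (the actual lock used ezcaservo from the command line):

    import time
    from epics import caget, caput             # pyepics

    err_ch = "C1:ALS-BEATX_FINE_PHASE_OUT16"   # unwrapped phase-tracker readback (assumed name)
    act_ch = "C1:PSL-FSS_SLOWDC"               # PSL NPRO temperature (slow) actuator (assumed name)
    gain   = -1e-7                             # illustrative integrator gain

    setpoint = caget(err_ch)
    for _ in range(600):                       # run for ~10 minutes
        err = caget(err_ch) - setpoint
        caput(act_ch, caget(act_ch) + gain * err)   # simple integrator: dV = gain * error
        time.sleep(1.0)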

For the purposes of this measurement, I reduced the CAL factor in the phase tracker screen so that the reported FINE_PHASE_OUT is actually in kHz, rather than Hz, on this plot. So the green trace is moving by tens of MHz. When the servo is engaged, you can see the SLOWDC doing some action. We think the calibration of that channel is ~1 GHz/V, so 0.1 SLOWDC volts should be ~100 MHz. I think there's a factor of 2 missing here, but it's close.

As you can see in the top plot, even with the frequency stabilized by this slow feedback (-1000 to -600 seconds), the I & Q outputs are going through multiple cycles, and so they are unusable for even a non-serious measurement.

The only way forward is to use less delay in the DFD. I think Anchal has been busily installing the shorter cable (hopefully it's ~3-5 m long so that the linear range is larger; I think a 10 m cable is too long), and the sweeps taken later today should be more useful.

Attachment 1: Untitled.png
  17125   Wed Aug 31 16:11:37 2022   Koji   Update   General   Vertex Lab area to be cleaned

The analog electronics units piled up along the wall were moved into the 1X3B rack, which was basically empty (Attachments 1/2/4).

We had a couple of unused Sun machines. I salvaged the VMIC cards (RFM and fast fiber networking? for DAQ???) and gave them to Tega.
Attachment 3 shows the e-waste collected this afternoon.

Attachment 1: PXL_20220831_224457839.jpg
Attachment 2: PXL_20220831_224816960.jpg
Attachment 3: PXL_20220831_225408183.jpg
Attachment 4: PXL_20220901_021729652.jpg
  17126   Thu Sep 1 09:00:02 2022   JC   Configuration   Daily Progress   Locked both arms and aligned Op Levs

Each morning now, I am going to try to align and lock both arms. Along with that, sometime towards the end of each week, we should align the OpLevs. This is a good habit that should be practiced more often, not only by me. As for the Y arm, Yehonathan and I had to adjust the gain to 0.15 in order to stabilize the lock.

Attachment 1: Daily.pdf
Attachment 2: Daily.pdf
  17127   Fri Sep 2 13:30:25 2022   Ian MacMillan   Summary   Computers   Quantization Noise Calculation Summary

P. P. Vaidyanathan wrote a chapter in the book "Handbook of Digital Signal Processing: Engineering Applications" called "Low-Noise and Low-Sensitivity Digital Filters" (Chapter 5, pg. 359). I took a quick look at it and wanted to give some thoughts in case they are useful. The experts in the field would be Leland B. Jackson, P. P. Vaidyanathan, Bernard Widrow, and István Kollár. Widrow and Kollár wrote the book "Quantization Noise: Roundoff Error in Digital Computation, Signal Processing, Control, and Communications" (a copy of which is at the 40m). It is good that P. P. Vaidyanathan is at Caltech.

Vaidyanathan's chapter serves as a good introduction to the topic of quantization noise. He starts off with the basic theory, similar to my own document on the topic. From there, there are two main topics relevant to our goals.

The first interesting thing is the use of Error-Spectrum Shaping (pg. 387). I have never investigated this idea, but the general gist is that as poles and zeros move closer to the unit circle the SNR deteriorates, and this is a way of implementing error feedback that should alleviate that problem. See Fig. 5.20 for a full realization of a second-order section with error feedback.

The second starts on page 402 and is an overview of state-space filters, with an example of a state-space realization (Fig. 5.26). I also tested this exact realization a while ago and found that it was better than the direct form II filter but not as good as the current low-noise implementation that LIGO uses. This realization is very close to the current one, except that it uses one fewer addition block.

Overall I think it is a useful chapter. I like the idea of using some sort of error correction and I'm sure his other work will talk more about this stuff. It would be useful to look into.

One thought that I had recently is that if the quantization noise is uncorrelated between two different realizations, then connecting them in parallel and averaging their results (as shown in Attachment 1) may actually yield lower quantization noise. It would require double the computation power for filtering, but it may work. For example, using the current LIGO realization and the realization given in this book might yield a lower quantization noise. This would only work with two similarly low-noise realizations. Since we would effectively be averaging two independent noise samples instead of one, the variance would be cut in half, and the ASD would show a 1/√2 reduction if the two realizations have the same level of quantization noise. This is only beneficial if the realization with the higher quantization noise has less than about 1.7 times the noise of the one with the lower noise. I included a simple simulation to show this in the zip file in Attachment 2 for my own reference.
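
A minimal sketch of that averaging argument (idealized: two copies of the same filter with independent white noise standing in for the quantization error; this is not the LIGO realization or the simulation in the attached zip):

    import numpy as np
    from scipy import signal

    fs, n = 16384, 2**18
    x = np.random.randn(n)                             # common input signal
    sos = signal.butter(4, 100, fs=fs, output='sos')

    def noisy_filter(x, noise_rms=1e-6):
        # ideal filter output plus additive white noise modeling quantization error
        return signal.sosfilt(sos, x) + noise_rms * np.random.randn(len(x))

    y_ideal = signal.sosfilt(sos, x)
    y1, y2 = noisy_filter(x), noisy_filter(x)
    print(np.std(0.5 * (y1 + y2) - y_ideal) / np.std(y1 - y_ideal))   # ~1/sqrt(2) ~ 0.707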

Another thought that I had is that the transpose of this low-noise state-space filter (Fig. 5.26) or even of LIGO's current filter realization would yield even lower quantization noise because both of their transposes require one less calculation.

Attachment 1: averagefiltering.pdf
Attachment 2: AveragingFilter.py.zip
  17128   Fri Sep 2 15:26:42 2022   Yehonathan   Update   General   SOS and other stuff in the clean room

{Paco, Yehonathan}

The BHD optics box was put into the last clean cabinet along the X arm (Attachment 5).

OSEMs were double-bagged in a labeled box on the X-arm wire racks (Attachment 1).

SOS parts (wire clamps, winches, suspension blocks, etc.) were put in a box on the X-arm wire rack (Attachment 3).

2"->3" optic adapter parts were put in a box and stored on the X-arm wire rack (Attachment 3).

The magnet gluing parts box was labeled and stored on the X-arm rack (Attachment 2).

The TT SUS with their optics were stored on the flow bench at the X end. Note: one of the TT SUS was found unsuspended (Attachment 4).

InVac parts were double-bagged and stored in a labeled box on the X-arm wire rack (Attachment 2).

Attachment 1: osem_sus.jpeg
Attachment 2: magnet_gluing.jpeg
Attachment 3: 2-3inch_adapter_parts.jpeg
Attachment 4: TTs.jpeg
Attachment 5: BHD_optics.jpeg
  17129   Fri Sep 2 15:30:10 2022   Anchal   Update   General   Along the X arm part 1

[Anchal, Radhika]

Attachment 2: The custom cables that were part of the intermediate setup between the old and new electronics architectures were found.
These include:

  • 2 DB37 cables with custom wiring at their connectors to connect between vacuum flange and new Sat amp box, marked J4-J5 and J6-J7.
  • 2 DB15 to dual head DB9 (like a Hydra) cables used to interface between old coil drivers and new sat amp box.

A copy of these cables is in use for MC1 right now; the ones found here are spares. We put them in a cardboard box and marked the box appropriately.
The box is under the vacuum tube along the Y arm, near the center.

 

  17130   Fri Sep 2 15:35:19 2022   Anchal   Update   General   Along the Y arm part 2

[Anchal, Radhika]

The cables in the open USPS box were important cables that are part of the new electronics architecture. These are 3 ft D2100103 DB15F-to-DB9M reducer cables that go from the coil driver output (DB15M on the back) to the satellite amplifier coil driver input (DB9F on the front). They have been placed in a separate plastic box, labeled, and kept with the rest of the D-sub cable plastic boxes that are part of the upgrade wiring, behind the tube on the YARM across from 1Y2. I believe JC will eventually store these D-sub cable boxes together somewhere.

  17131   Fri Sep 2 15:40:25 2022   Anchal   Summary   ALS   DFD cable measurements

[Anchal, Yehonathan]

I laid down another temporary cable from the X end to 1Y2 (LSC rack) so we can also measure the Q output of the DFD box. Then, to get a quick measurement of these long cable delays, we used the Moku:GO in oscillator mode, sent 100 ns pulses at a 100 kHz rate from one end, and measured the timing of the reflected pulses to get an estimate of the round-trip delay. The far end of each long cable was shorted and then left open, for two sets of measurements.

I-Mon Cable delay: (955 +/- 6) ns / 2 = 477 +/- 3 ns

Q-Mon Cable delay: (535 +/- 6) ns / 2 = 267 +/- 3 ns

Note: We had been underestimating the delay in the I-Mon cable by about a factor of 2.

I also took the opportunity to measure the delay time of the DFD delay line. Since both ends of this cable were present locally, it made more sense to simply take a transfer function to get a clean delay measurement. This measurement resulted in a value of 197.7 +/- 0.1 ns. See the attached plot. Data and analysis here.
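
A minimal sketch of extracting the delay from such a transfer-function measurement (a pure delay has phase -2*pi*f*tau, so tau comes from a linear fit of the unwrapped phase vs. frequency; the arrays below are placeholders, not the Moku data):

    import numpy as np

    tau_true = 197.7e-9
    f = np.linspace(1e5, 5e6, 500)                # frequency points [Hz]
    H = np.exp(-2j * np.pi * f * tau_true)        # placeholder complex TF of the delay line

    phase = np.unwrap(np.angle(H))
    slope = np.polyfit(f, phase, 1)[0]            # d(phase)/df in rad/Hz
    print(f"delay = {-slope / (2 * np.pi) * 1e9:.1f} ns")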

Attachment 1: ICableOpenEnd_2022-09-02_3_08_34_PM.png
Attachment 2: ICableShortedEnd_2022-09-02_3_04_43_PM.png
Attachment 3: QCableOpenEnd_2022-09-02_3_09_49_PM.png
Attachment 4: QCableShortedEnd_2022-09-02_3_00_55_PM.png
Attachment 5: DFD_Delay_Measurement.pdf
  17132   Tue Sep 6 09:57:26 2022   JC   Summary   General   Lab cleaning

DB9 cables have been sorted and placed behind the Y arm. Long BNC cables and Ethernet cables have been stored under the Y arm.

Quote:

We held the lab cleaning for the first time since the campus reopening (Attachment 1).
Now we can use some of the desks for the people to live! Thanks for the cooperation.

We relocated a lot of items into the lab.

  • The entrance area was cleaned up. We believe that there is no 40m lab stuff left.
    • BHD BS optics was moved to the south optics cabinet. (Attachment 2)
    • DSUB feedthrough flanges were moved to the vacuum area (Attachment 3)
  • Some instruments were moved into the lab.
    • The Zurich instrument box
    • KEPCO HV supplies
    • Matsusada HV supplies
  • We moved the large pile of SUPERMICROs in the lab. They are around MC2 while the PPE boxes there were moved behind the tube around MC2 area. (Attachment 4)
  • We have moved PPE boxes behind the beam tube on XARM behind the SUPERMICRO computer boxes. (Attachment 7)
  • ISC/WFS left over components were moved to the pile of the BHD electronics.
    • Front panels (Attachment 5)
    • Components in the boxes (Attachment 6)

We still want to make some more cleaning:

- Electronics workbenches
- Stray setup (cart/wagon in the lab)
- Some leftover on the desks
- Instruments scattered all over the lab
- Ewaste removal

 

Attachment 1: 982146B2-02E5-4C19-B137-E7CC598C262F.jpeg
Attachment 2: 0FBB61AC-E882-458D-A891-7B11F35588FF.jpeg
  17133   Tue Sep 6 17:39:40 2022   Paco   Update   SUS   LO1 LO2 AS1 AS4 damping loop step responses

I tuned the local damping gains for LO1, LO2, AS1, and AS4 by looking at step responses in the DOF basis (i.e. POS, PIT, YAW, and SIDE). The procedure was:

  1. Grab an ndscope with the error point signals in the DOF basis, e.g. C1:SUS-LO1_SUSPOS_IN1_DQ
  2. Apply an offset to the relevant DOF using the alignment slider offset (or coil offset for the SIDE DOF) while being careful not to trip the watchdog. The nominal offsets found for this tuning are summarized below:
Alignment/Coil Step sizes:
         POS    PIT    YAW    SIDE
  LO1    800    300    300    10000
  LO2    800    300    400   -10000
  AS1    800    500    500    20000
  AS4    800    400    400   -10000
  3. Tune the damping gains until the DOF shows a residual Q with ~ 5 or more oscillations.
  4. The new damping gains are below for all optics and their DOFs, and Attachments #1-4 summarize the tuned step responses as well as the other DOFs (cross-coupled).
Local damping gains:
         POS       PIT      YAW      SIDE
  LO1    10.000    5.000    3.000    40.000
  LO2    10.000    3.000    3.000    50.000
  AS1    14.000    2.500    3.000    85.000
  AS4    15.000    3.100    3.000    41.000

Note that during this test, FM5 has been populated for all these optics with a BounceRoll (notches at 16.6, 23.7 Hz) filter, apart from the Cheby (HF rolloff) and the 0.0:30 filters.

Attachment 1: LO1_Step_Response_Test_2022-09-06_17-19.pdf
Attachment 2: LO2_Step_Response_Test_2022-09-06_17-30.pdf
Attachment 3: AS1_Step_Response_Test_2022-09-06_17-53.pdf
Attachment 4: AS4_Step_Response_Test_2022-09-06_18-16.pdf
  17134   Wed Sep 7 14:32:15 2022   Anchal   Update   SUS   Updated SD coil gain to keep all coil actuation signals digitally same magnitude

The coil driver actuation strength for face coils was increased by 13 times in LO1, LO2, SR2, AS1, AS4, PR2, and PR3.

After the change, the damping gains of POS, PIT, and YAW were reduced, but not SIDE's: the SIDE coil output filter module still requires a higher digital input to produce the same force at the output. This wouldn't be an issue until we try to correct for cross-coupling at the output matrix, where we would want similar order-of-magnitude coefficients for the SIDE coil as well. So I made the following changes so that the input to all coil output filters (face and side) has the same scale for these particular suspensions:

  • C1:SUS-LO1_SUSSIDE_GAIN 40.0 -> 3.077
    C1:SUS-AS1_SUSSIDE_GAIN 85.0 -> 6.538
    C1:SUS-AS4_SUSSIDE_GAIN 41.0 -> 3.154
    C1:SUS-PR2_SUSSIDE_GAIN 150.0 -> 11.538
    C1:SUS-PR3_SUSSIDE_GAIN 100.0 -> 7.692
    C1:SUS-LO2_SUSSIDE_GAIN 50.0 -> 3.846
    C1:SUS-SR2_SUSSIDE_GAIN 140.0 -> 10.769
  • C1:SUS-LO1_SDCOIL_GAIN -1.0 -> -13.0
    C1:SUS-AS1_SDCOIL_GAIN 1.0 -> 13.0
    C1:SUS-AS4_SDCOIL_GAIN -1.0 -> -13.0
    C1:SUS-PR2_SDCOIL_GAIN -1.0 -> -13.0
    C1:SUS-PR3_SDCOIL_GAIN -1.0 -> -13.0
    C1:SUS-LO2_SDCOIL_GAIN -1.0 -> -13.0
    C1:SUS-SR2_SDCOIL_GAIN -1.0 -> -13.0

In short, 10 cts of input to C1:SUS-AS1_ULCOIL now creates the same actuation on UL as 10 cts of input to C1:SUS-AS1_SDCOIL does on SD.
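
A minimal sketch of the bookkeeping for this change (the factor of 13 and the channel list come from this entry; scripting it with pyepics is an assumption, the changes above were the ones actually made):

    from epics import caget, caput   # pyepics

    optics = ["LO1", "AS1", "AS4", "PR2", "PR3", "LO2", "SR2"]
    factor = 13.0    # face-coil driver strength increase from the output-resistor change

    for opt in optics:
        side_gain = f"C1:SUS-{opt}_SUSSIDE_GAIN"
        sd_gain   = f"C1:SUS-{opt}_SDCOIL_GAIN"
        # divide the SIDE damping gain by 13 and absorb the 13 into the SD coil output gain,
        # so the digital signal into every coil filter module has the same scale
        caput(side_gain, caget(side_gain) / factor)
        caput(sd_gain,   caget(sd_gain) * factor)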


In reply to

Quote: http://nodus.ligo.caltech.edu:8080/40m/16802

[Koji, JC]

Coil Drivers LO2, SR2, AS4, and AS1 have been updated a reinstalled into the system. 

LO2 Coil Driver 1 (UL/LL/UR)now has R=100 // 1.2k ~ 92Ohm for CH1/2/3        Unit: S2100008

LO2 Coil Driver 2 (LR/SD)now has R=100 // 1.2k ~ 92Ohm for CH3                    Unit: S2100530

SR2 Coil Driver 1 (UL/LL/UR)now has R=100 // 1.2k ~ 92Ohm for CH1/2/3        Unit: S2100614

SR2 Coil Driver 2 (LR/SD)now has R=100 // 1.2k ~ 92Ohm for CH3                    Unit: S2100615

AS1 Coil Driver 1 (UL/LL/UR)now has R=100 // 1.2k ~ 92Ohm for CH1/2/3        Unit: S2100610

AS1 Coil Driver 2 (LR/SD)now has R=100 // 1.2k ~ 92Ohm for CH3                    Unit: S2100611

AS4 Coil Driver 1 (UL/LL/UR)now has R=100 // 1.2k ~ 92Ohm for CH1/2/3        Unit: S2100612

AS4 Coil Driver 2 (LR/SD)now has R=100 // 1.2k ~ 92Ohm for CH3                    Unit: S2100613

 

Quote: http://nodus.ligo.caltech.edu:8080/40m/16791

[JC Koji]

To give more alignment ranges for the SUS alignment, we started updating the output resistors of the BHD SUS coil drivers.
As Paco has already started working on LO1 alignment, we urgently updated the output Rs for LO1 coil drivers.
LO1 Coil Driver 1 now has R=100 // 1.2k ~ 92Ohm for CH1/2/3, and LO1 Coil Driver 2 has the same mod only for CH3. JC has taken the photos and will upload/update an elog/DCC.

We are still working on the update for the SR2, LO2, AS1, and AS4 coil drivers. They are spread over the workbench right now. Please leave them as they're for a while.
JC is going to continue to work on them tomorrow, and then we'll bring them back to the rack.

 

Quote: https://nodus.ligo.caltech.edu:8081/40m/16760

[Yuta Koji]

We took out the two coil driver units for PR3 and the incorrect arrangement of the output Rs were corrected. The boxes were returned to the rack.

In order to recover the alignment of the PR3 mirror, C1:SUS_PR3_SUSPOS_INMON / C1:SUS_PR3_SUSPIT_INMON / C1:SUS_PR3_SUSYAW_INMON were monitored. The previous values for them were {31150 / -31000 / -12800}. By moving the alignment sliders, the PIT and YAW values were adjusted to be {-31100 / -12700}. while this change made the POS value to be 52340.

The original gains for PR3 Pos/Pit/Yaw were {1, 0.52, 0.2}. They were supposed to be reduced to  {0.077, 0.04, 0.015}.
I ended up having the gains to be {0.15, 0.1, 0.05}. The side gain was also increased to 50.

----

Overall, the output R configuration for PR2/PR3 are summarized as follows. I'll update the DCC.

PR2 Coil Driver 1 (UL/LL/UR) / S2100616 / PCB S2100520 / R_OUT = (1.2K // 100) for CH1/2/3

PR2 Coil Driver 2 (LR/SD) / S2100617 / PCB S2100519 / R_OUT = (1.2K // 100) for CH3

PR3 Coil Driver 1 (UL/LL/UR) / S2100619 / PCB S2100516 / R_OUT = (1.2K // 100) for CH1/2/3

PR3 Coil Driver 2 (LR/SD) / S2100618 / PCB S2100518 / R_OUT = (1.2K // 100) for CH3

 

  17135   Thu Sep 8 11:54:37 2022   JC   Configuration   Lab Organization   Lab Organization

The arms in the 40m laboratory have now been sectioned off. Each arm has been divided into 15 sections. Along the Y arm, the sections are labelled "Section Y1 - Section Y15". For the X arm, they are labelled "Section X1 - Section X15". Anything changed or moved will now be noted in the elog with its appropriate section.

 

Below is an example of Section X6.

Attachment 1: 1A7026BC-82A9-49E9-BA22-1A700DFEC5D2.jpeg
Attachment 2: 2A904809-82F0-40C0-B907-B48C3A0E789E.jpeg
Attachment 3: CB4B8591-B769-454D-9A16-EE9176004099.jpeg
  17136   Thu Sep 8 12:01:02 2022   JC   Configuration   Lab Organization   Lab Organization

The floor cable cover has been changed out for a new one. This is in Section X11.

Attachment 1: F41AD1DA-29E9-4449-99CB-5F43AE527CA6_1_105_c.jpeg
Attachment 2: FF5F2CE8-85E8-4B6F-8F8A-9045D978F670.jpeg
  17137   Thu Sep 8 16:03:25 2022   Yehonathan   Update   LSC   Realignment, arm locking, gains adjustments

{Anchal, Yehonathan}

We came in this morning and the IMC was misaligned. The IMC was realigned and locked. This of course changed the input beam and sent us on a long alignment journey.

We first used the TTs to find the beam on the BHD DCPD/camera, since that path is only a single bounce off all the optics.

Then, PR2/3 were used to find the POP beam while keeping the BHD beam.

Unfortunately, that was not enough. The TTs and PRs have some degeneracy, which caused us to lose the REFL beam.

Realizing this, we went to the AS table to find the REFL beam. We found a ghost beam that deceived us for a while. Realizing it was a ghost beam, we moved TT2 in pitch, while keeping the POP DCPD high with the PRs, until we saw a new beam on the viewing card.

We kept aligning TT1/2, PR2/3 to maximize the REFL DCPD while keeping the POP DCPD high. We tried to look at the REFL camera but soon realized that the REFL beam is too weak for the camera to see.

At that point we already had some flashing in the arms (we centered the OpLevs in the beginning).

The arms were aligned and locked. We had some issues with the XARM not catching lock; we increased the gain and it locked almost immediately. To set the arm gains correctly we took OLTFs (attachment) and adjusted the XARM gain to 0.027 to put the UGF at 200 Hz.


Both arms locked with 200 Hz UGF from:

From GPS: 1346713049
    To GPS: 1346713300

From GPS: 1346713380
    To GPS: 1346714166

HEPA turned off:
From GPS: 1346714298
    To GPS: 1346714716

 

  17138   Tue Sep 13 14:12:03 2022   Yehonathan   Update   BHD   Trying LO phase locking again

[Paco, Yehonathan]

Summary:  We locked LO phase using the DC PD (A - B) error point without saturating the control point, i.e. not a "bang bang" control.


Some suspensions were improved so we figure we should go back to trying to lock the LO phase.

We misalign ETMs and lock MICH using AS55. We put a small MICH offset by putting C1:LSC-MICH_OFFSET = -80.

AS and LO beams were aligned to overlap by maximizing the BHD signal visibility.

BHD DCPDs were balanced by misaligning the AS beam and using the LO beam only.

We measured the transfer function between the DCPDs and found the coherence to be 1 at 1 Hz (because of seismic motion), so we measured the ratio between them to be 0.3 dB.

The AS beam is aligned again to overlap with the LO beam. For the work below, we use the largest MICH OFFSET we could apply before losing the lock, +90. This has the effect of increasing our optical gain.

We started using the HPC LOCK IN screen to dither POS on the different BHD SUS. We first started with AS1 (freq = 137.137 Hz, gain = 1000). The sensing matrix element was chosen accordingly (from the demodulated output) and fed to the LO_PHASE; because this affected the AS port alignment this was of course not the best choice. We moved over to LO2 (freq = 318.75 Hz, gain = 1000) but for the longest time couldn't see the dither line at the error point (A-B).

After this we added comb60 notch filters on the DCPD_A and DCPD_B input signals. We ended up just feeding the (A-B) error point to LO1 and trying to lock at mid-fringe, which succeeded without saturation. The gain of the LO_PHASE filter was set to 0.2 (previously 20; attributable to the newly unclipped LO beam intensity?), and again we only enabled FM4 and FM5 for this. After this, a dither line at 318.75 Hz finally appeared in the A-B error point! To be continued...

Attachment 1: Screenshot_2022-09-13_17-41-33.png
  17139   Wed Sep 14 15:40:51 2022   JC   Update   Treasure   Plastic Containers

There are brand new empty plastic containers located inside the shed that is outside in the cage. These can be used for organizing new equipment for CDS, or for cleanup after Wednesday meetings.

Attachment 1: B0DD2BEC-E57E-4AA2-8672-2ABB41F44F84.jpeg
  17140   Thu Sep 15 11:13:32 2022   Anchal   Summary   ASS   YARM and XARM ASS restored

With limited proof (it has only worked a few times so far, but robustly), I'm happy to report that ASS on the YARM and XARM is working now.


What is wrong?

The issue is that PR3 is not placed in the correct position in the chamber. It is offset enough that, to send a beam through the center of ITMY to ETMY, the beam has to reflect off the edge of PR3, leading to some clipping. Hence our usual ASS takes us to this point and results in a loss of transmission due to clipping.

Solution: We cannot solve this issue without moving PR3 inside the chamber. But meanwhile, we can find new spot positions on ITMY and ETMY, off center in YAW only, which allow us to mode match properly without clipping. This means there will be some YAW suspension-noise-to-length coupling in this cavity, but thankfully the YAW degree of freedom stays relatively calm compared to PIT or POS for our suspensions. Similarly, we need to allow for an offset in the ETMX beam spot position in YAW. We do not control the beam spot position on ITMX due to a lack of actuators to control all 8 DOFs involved in matching the input beam to the cavity, so instead I found an offset for the ITMX transmission error signal in YAW that works well.

I found these offsets (empirically) to be:

  • C1:ASS-YARM_ETM_YAW_L_DEMOD_I_OFFSET: 22.6
  • C1:ASS-YARM_ITM_YAW_L_DEMOD_I_OFFSET: 4.2
  • C1:ASS-XARM_ETM_YAW_L_DEMOD_I_OFFSET: -7.6
  • C1:ASS-XARM_ITM_YAW_T_DEMOD_I_OFFSET: 1

These offsets have been saved in the burt snap file used for running ASS.


Using ASS

I'll reiterate here procedure to run ASS.

  • Get the YARM locked to a TEM00 mode with at least 0.4 transmission on C1:LSC-TRY_OUT
  • Open sitemap->ASC->c1ass
  • Click ! Scripts YARM -> Striptool to open a striptool monitor for ASS error signals.
  • Click on ! Scripts YARM -> Dither ON to switch on the dither.
  • Wait for all error signals to settle around zero (this should also maximize the transmission, currently reaching ~1.1).
  • Click on ! Scripts YARM -> Freeze Offsets
  • Click on ! Scripts YARM -> Offload Offsets
  • Click on ! Scripts YARM -> Dither OFF.
  • Then proceed to the XARM. Get it locked to a TEM00 mode with at least 0.4 transmission on C1:LSC-TRX_OUT
  • Open sitemap->ASC->c1ass
  • Click ! Scripts XARM -> Striptool to open a striptool monitor for ASS error signals.
  • Click on ! Scripts XARM -> Dither ON to switch on the dither.
  • Wait for all error signals except C1:ASS-XARM_ITM_PIT_L_DEMOD_I_OUT16 and C1:ASS-XARM_ITM_YAW_L_DEMOD_I_OUT16 to settle around zero (this should also maximize the transmission, currently reaching ~1.1).
  • Click on ! Scripts XARM -> Freeze Offsets
  • Click on ! Scripts XARM -> Offload Offsets
  • Click on ! Scripts XARM -> Dither OFF.
  17141   Thu Sep 15 16:19:33 2022   Yehonathan   Update   LSC   POX-POY noise budget

Doing POX-POY noise measurement as a poor man's FPMI for diagnostic purposes. (Notebook in /opt/rtcds/caltech/c1/Git/40m/measurements/LSC/POX-POY/Noise_Budget.ipynb)

The arms were locked individually using POX11 and POY11. The optical gain was estimated by looking at the PDH signal of each arm: the slope was computed by taking the negative-peak-to-positive-peak counts and assuming that the arm length change between those peaks is lambda/(2*Finesse), where lambda = 1 um and the arm finesse is taken to be 450.

The XARM peak-to-peak count is ~ 850 while the YARM's is ~ 1100. This gives optical gains of 3.8e11 cts/m and 4.95e11 cts/m respectively.
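
For reference, here is the arithmetic that reproduces the quoted numbers. Note that it uses a full linear range of lambda/Finesse between the PDH peaks, i.e. twice the lambda/(2*Finesse) stated above; the exact factor of 2 depends on the linewidth convention, so treat this as a sketch rather than the definitive calibration:

    lam, finesse = 1e-6, 450          # wavelength [m], assumed arm finesse

    def optical_gain(pp_counts, dx=lam / finesse):
        # PDH slope ~ peak-to-peak counts over the assumed linear range dx
        return pp_counts / dx

    print(optical_gain(850))    # ~3.8e11 cts/m  (XARM)
    print(optical_gain(1100))   # ~5.0e11 cts/m  (YARM)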

Next, the ETMX/Y actuation TFs are measured (Attachments 1, 2) by exciting C1:LSC-ETMX/Y_EXC, measuring at C1:LSC-X/YARM_IN1_DQ, and calibrating with the optical gain.

Using these calibrations I plot the POX-POY (attachment 3) and POX+POY (attachment 4) total noise measurements using two methods:

1. Plotting the calibrated IN and OUT channels of XARM-YARM (blue and orange). Those two curves should cross at the UGF (200Hz in this case).

2. Plotting the calibrated XARM-YARM IN channels times 1-OLTF (black).

The UGF bump can be clearly seen above the true noise in those plots.

However, the POX+POY OUT channel looks too high for some reason, making the crossing frequency between the IN and OUT channels ~ 300 Hz. Not sure what is going on with this.

Next, I will budget this noise with the individual noise contributions.

Attachment 1: XARM_Actuation_Plot.pdf
Attachment 2: YARM_Actuation_Plot.pdf
Attachment 3: Sensitivity_Plot_1347315385.pdf
  17142   Thu Sep 15 21:12:53 2022   Paco   Update   BHD   LO phase "dc" control

Locked the LO phase with a MICH offset=+91. The LO is midfringe (locked using the A-B zero crossing), so it's far from being "useful" for any readout but we can at least look at residual noise spectra.

I spent some time playing with the loop gains, filters, and overall lock acquisition, and established a quick TF template at Git/40m/measurements/BHD/HPC_LO_PHASE_TF.xml

So far, it seems that actuating on the LO phase through LO2 POS requires 1.9 times more strength (with the same "A-B" dc sensing). With FM4 and FM5 engaged, actuating on LO2 with a filter gain of 0.4 closes the loop robustly. Then FM3 and FM6 can be enabled and the gain stepped up to 0.5 without problem. The measured UGF (Attachment #1) here was ~ 20 Hz. It can be increased to 55 Hz, but then the loop quickly becomes unstable. I added FM1 (boost) to the HPC_LO_PHASE bank but didn't get to try it.

The noise spectra (Attachment #2) are still uncalibrated... but have been saved under Git/40m/measurements/BHD/HPC_residual_noise_spectra.xml

Attachment 1: lophase_oltf.pdf
Attachment 2: lophase_noise_spectra.pdf
  17143   Mon Sep 19 17:02:57 2022   Paco   Summary   General   Power Outage 220916 -- restored all

Restore lab

[Paco, Tega, JC, Yehonathan]

We followed the instructions here. There were no major issues, apart from the fb1 ntp server sync taking a long time after rebooting once.


ETMY damping

[Yehonathan, Paco]

We noticed that ETMY had too much RMS motion when the OpLevs were off. We played with it a bit and noticed two things: a Cheby4 filter was on for SUS_POS, and the limiter on ULCOIL was on with a limit of 0. We turned both off.

We did some damping tests and observed that the PIT and YAW motions were overdamped. We tuned the filter gains as follows:

SUSSIDE_GAIN 1250->50

SUSPOS_GAIN 200->150

SUSYAW_GAIN 60->30

These actions seem to have made things better.

  17144   Mon Sep 19 20:21:06 2022   Tega   Update   Computers   1X7 and 1X6 work

[Tega, Paco, JC]


We moved the GPS network time server and the frequency distribution amplifier from 1X7 to 1X6, and the PEM AA, ADC adapter, and Martian network switch from 1X6 to 1X7. We also mounted the dolphin IX switch at the rear of 1X7 together with the DAQ and Martian switches. This cleared up enough space to mount all the front-ends; however, we found that the mounting brackets for the front-ends do not fit in the 1X7 rack, so I have decided to mount them on the upper part of the test stand for now while we come up with a fix for this problem. Attachments 1 to 3 show the current state of racks 1X6, 1X7 and the test stand.

 

Attachment 1: Front of racks 1X6 and 1X7

Attachment 2: Rear of rack 1X7

Attachment 3: Front of teststand rack


Plan for the remainder of the week

Tuesday

  • Setup the 6 new front-ends to boot off the FB1 clone.
  • Test the PCIe I/O cables by connecting them between the front-ends and the teststand I/O chassis one at a time to ensure they work
  • Then lay the fiber cables to the various I/O chassis.

Wednesday

  • Migrate the current models on the 6 front-ends to the new system.
  • Replace RFM IPC parts with dolphin IPC parts in c1rfm model running c1sus machine
  • Replace the RFM parts in c1iscex and c1iscey models
  • Drop the c1daf and c1oaf models from the c1isc machine, since the front-ends only have 6 cores
  • Build and install models

Thursday [CAN I GET THE IFO ON THIS DAY PLEASE?]

  • Complete any remaining model work
  • Connect all I/O chassis to their respective (new) front-end and see if we can start the models (need to think of a safe way to do this; should we disconnect the coil drivers before starting the models?)

Friday

  • Tie-up any loose ends
Attachment 1: IMG_20220919_204013819.jpg
Attachment 2: IMG_20220919_203541114.jpg
Attachment 3: IMG_20220919_203458952.jpg
  17145   Tue Sep 20 07:03:04 2022   Paco   Summary   General   Power Outage 220916 -- restored all

[JC, Tega, Paco ]

I would like to mention that during the vacuum startup, after the AUX pump was turned on, Tega and I walked away while the pressure was decreasing. While we were away, valves opened on their own. Nobody was near the VAC desktop during this. I asked Koji if this might be part of an automatic startup, but he said the valves shouldn't open unless they are explicitly told to do so. Has anyone encountered this before?

Quote:

Restore lab

[Paco, Tega, JC, Yehonathan]

We followed the instructions here. There were no major issues, apart from the fb1 ntp server sync taking long time after rebooting once.


ETMY damping

[Yehonathan, Paco]

We noticed that ETMY had to much RMS motion when the OpLevs were off. We played with it a bit and noticed two things: Cheby4 filter was on for SUS_POS and the limiter on ULCOIL was on at 0 limit. We turned both off.

We did some damping test and observed that the PIT and YAW motion were overdamped. We tune the gain of the filters in the following way:

SUSSIDE_GAIN 1250->50

SUSPOS_GAIN 200->150

SUSYAW_GAIN 60->30

These action seem to make things better.

 

  17146   Tue Sep 20 15:40:07 2022   yehonathan   Update   BHD   Trying doing AC lock

We resume the LO phase locking work. MICH was locked with an offset of 80 cts. LO and AS beams were aligned to maximize the BHD readout visibility on ndscope.

We lock the LO phase on a fringe (DC locking) actuating on LO1.

Attachment 1 shows the BHD readout (DCPD_A_ERR = DCPD_A - DCPD_B) spectrum with and without fringe locking while the LO2 line at 318 Hz is on. It can be seen that without the fringe locking the dither line is buried in the A-B noise floor. This is probably due to upconversion from the multiple fringing. We figured that trying to directly dither-lock the LO phase might be too tricky, since we cannot resolve the dither line when the LO phase is unlocked.

We tried to hand off the lock from the fringe lock to the AC lock in the following way: since the AC error signal reads the derivative of the BHD readout, it is least sensitive to the LO phase when the LO phase is locked on a dark fringe; therefore we offset the LO phase to realize a usable AC error signal. The LO phase offset is set to ~ 40 cts (the peak-to-peak count when the LO phase is uncontrolled is ~ 400 cts).

We looked at the "demodulated" signal of LO1, from which the fringe-locking error signal is derived (oscillator at 0 Hz frequency and 0 amplitude), and at the demodulated signal of LO2, where a ~ 700 Hz line is applied. We dithered the LO phase at ~ 50 Hz to create a clear signal in order to compare the two error signals. Although the 50 Hz signal was clearly seen in the fringe-lock error signal, it was completely unresolved in the LO2 demodulated signal, no matter how hard we drove the 700 Hz line and no matter what demodulation phase we chose. Interestingly, changing the demodulation phase shifted the noisy LO2 demodulated signal by some constant. Will post a picture later.

Could there be some problem with the modulation-demodulation model? We should check again, but I'm almost certain we saw the 700 Hz line with an SNR of ~ 100 in diaggui; even with the small LO offset, changes in the 700 Hz signal phase should have been clearly seen in the demodulated signal. Maybe we should also check that we see the 50 Hz sidebands around the 700 Hz line in diaggui to be sure.
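
One way to cross-check the demodulation path offline is to demodulate the recorded error point directly in Python; a sketch is below, where the sample rate, line frequency, and time series are placeholder assumptions rather than the actual DAQ data:

    import numpy as np
    from scipy import signal

    fs, f_line = 16384.0, 700.0                   # assumed DAQ rate and dither line frequency [Hz]
    t = np.arange(0, 60, 1 / fs)
    # err = ...                                   # recorded A-B error point time series (hypothetical)
    err = np.sin(2 * np.pi * f_line * t)          # placeholder so the sketch runs

    i = err * np.cos(2 * np.pi * f_line * t)      # mix down at the line frequency
    q = err * np.sin(2 * np.pi * f_line * t)
    b, a = signal.butter(4, 10, fs=fs)            # 10 Hz low pass after mixing
    I, Q = signal.filtfilt(b, a, i), signal.filtfilt(b, a, q)
    print(np.mean(np.sqrt(I**2 + Q**2)))          # ~ half the line amplitude; phase = arctan2(Q, I)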

Attachment 1: Screenshot_2022-09-20_15-38-44.png
  17147   Tue Sep 20 18:18:07 2022   Anchal   Summary   SUS   ETMX, ETMY damping loop gain tuning

[Paco, Anchal]

We performed local damping loop tuning for the ETMs today to ensure that all damping loops show a Q of 3-5. To do so, we applied a step on C1:SUS-ETMX/Y_POS/PIT/YAW_OFFSET, looked for oscillations in the error point of the damping loops (C1:SUS-ETMX/Y_SUSPOS/PIT/YAW_IN1), and increased or decreased the gains until we saw 3-5 oscillations before the error signal stabilized to a new value. For the side loop, we applied an offset at C1:SUS-ETMX/Y_SDCOIL_OFFSET and looked at C1:SUS-ETMX/Y_SUSSIDE_IN1. Following are the changes in damping gains:

C1:SUS-ETMX_SUSPOS_GAIN : 61.939   ->   61.939
C1:SUS-ETMX_SUSPIT_GAIN :   4.129     ->   4.129
C1:SUS-ETMX_SUSYAW_GAIN : 2.065     ->   7.0
C1:SUS-ETMX_SUSSIDE_GAIN : 37.953  ->   50

C1:SUS-ETMY_SUSPOS_GAIN : 150.0     ->   41.0
C1:SUS-ETMY_SUSPIT_GAIN :   30.0       ->   6.0
C1:SUS-ETMY_SUSYAW_GAIN : 30.0       ->   6.0
C1:SUS-ETMY_SUSSIDE_GAIN : 50.0      ->   300.0

 

  17148   Tue Sep 20 23:06:23 2022   Tega   Update   Computers   Setup the 6 new front-ends to boot off the FB1 clone

[Tega, Radhika, JC]

We wired the front-ends for power, DAQ, and martian network connections. We then moved the I/O chassis from the bottom of the rack to the middle, just above the KVM switch, so we can leave the top of the I/O chassis open for access to the ports of the OSS target adapter card for testing the extension fiber cables.

Attachment 1 (top -> bottom)

c1sus2

c1iscey

c1iscex

c1ioo

c1sus

c1lsc


When I turned on the test stand with the new front-ends, the power to 1X7 was cut off after a few minutes, due to overloading I assume. This brought down nodus, chiara and FB1. After Paco reset the tripped switch, everything came back without us actually doing anything, which is an interesting observation.


After this event, I moved the test stand power plug to the side wall rail socket. This seems fine so far. I then brought chiara (clone) and FB1 (clone) online. Here are some changes I made to get things going:

Chiara (clone)

  • Edited '/etc/dhcp/dhcpd.conf' to update the MAC address of the front-ends to match the new machines, then run
  • sudo service isc-dhcp-server restart
  • then restart front-ends
  • Edited '/etc/hosts' on chiara to include c1iscex and c1iscey as these were missing

 

FB1 (clone)

Getting the new front-ends booting off FB1 clone:

1. I found that the KVM screen was flooded with setup info about the dolphin cards on the LLO machines. This actually prevented login via the KVM switch for two of these machines. Strangely, one of them, 'c1sus', seemed to be fine (see Attachment 2), so I guessed this was because the dolphin network was already configured earlier when we were testing the dolphin communications. So I decided to configure the remaining dolphin cards. To do so, we do the following:

Dolphin Configuration:

1a. Ideally running

sudo /opt/DIS/sbin/dis_mkconf -fabrics 1 -sclw 8 -stt 1 -nodes c1lsc c1sus c1ioo c1iscex c1iscey c1sus2 -nosessions

should set up all the nodes, but this did not happen. In fact, I could no longer use the '/opt/DIS/sbin/dis_admin' GUI after running this operation and restarting the 'dis_networkmgr.service' via

sudo systemctl restart dis_networkmgr.service

so  I logged into each front-end and configured the dolphin adapter there using

sudo /opt/DIS/sbin/dis_config

After this I shut down FB1 (clone), because restarting it earlier didn't work, waited a few minutes, and then started it. Everything was fine afterward, although I am not quite sure what solved the issue, as I tried a few things and was just glad to see the problem go!

1b. I later found after configuring all the dolphin nodes that 2 of them failed the '/opt/DIS/sbin/dis_diag' test with an error message suggesting three possible issues of which one was 'faulty cable'. I looked at the units in question and found that swapping both cables with the remaining spares solved the problem. So it seems like these cables are faulty (need to double-check this). Attachment 3 shows the current state of the dolphin nodes on the front-ends and the dolphin switch.


2. I noticed that the NFS mount service for the mount points '/opt/rtcds' and '/opt/rtapps' in /etc/fstab exited with an error, so I ran 

sudo mount -a
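After that, a quick check that the NFS mounts actually came up (a sketch; the grep just matches the two mount points named above):

# Re-mount everything in /etc/fstab, then confirm the NFS mount points are present.
sudo mount -a
mount | grep -E "/opt/(rtcds|rtapps)" || echo "NFS mounts still missing"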

3. Edited '/etc/hosts' to include c1iscex and c1iscey, as these were missing.

 

Front-ends

To test the PCIe extension fiber cables that connect the front-ends to their respective I/O chassis, we run the following command (after booting the machine with the cable connected): 

controls@c1lsc:~$ lspci -vn | grep 10b5:3
    Subsystem: 10b5:3120
    Subsystem: 10b5:3101

If we see the output above, then both the cable and the OSS host card are fine (we know from previous tests that the OSS card on the I/O chassis is good). Since we only have one I/O chassis, we repeat the step above 8 times, also cycling through the six new front-ends as we go so that the installed OSS host adapter cards get tested at the same time. I was able to test 4 cables and 4 OSS host cards (c1lsc, c1sus, c1ioo, c1sus2), but the remaining results were inconclusive: they seem to suggest that 3 of the remaining 5 fiber cables are faulty, which would be unfortunate in itself, but I started to doubt the reliability of the test when I went back to check the 2 remaining OSS host cards with a cable that had passed earlier and it failed. After a few retries I decided to call it a day before I lost my mind, so these tests need to be redone tomorrow.
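A small wrapper that could make the pass/fail call more uniform on each front-end (a sketch; it simply counts the PLX subsystem lines that the working case above produces):

# Count the 10b5:3xxx subsystem entries that indicate a live link through
# the fiber cable to the I/O chassis (expect 2 in the working case above).
n=$(lspci -vn | grep -c "10b5:3")
if [ "$n" -ge 2 ]; then
    echo "PASS: OSS link detected ($n subsystem entries)"
else
    echo "FAIL: expected 2 entries (10b5:3120, 10b5:3101), found $n"
fi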

 

Note: We were unable to lay the cables today because these tests were not complete, so we are a bit behind the plan. We will see if we can catch up tomorrow.

 

Quote:

Plan for the remainder of the week

Tuesday

  • Setup the 6 new front-ends to boot off the FB1 clone.
  • Test PCIe I/O cables by connecting them btw the front-ends and teststand I/O chassis one at a time to ensure they work
  • Then lay the fiber cables to the various I/O chassis.

 

Attachment 1: IMG_20220921_084220465.jpg
IMG_20220921_084220465.jpg
Attachment 2: dolphin_err_init_state.png
dolphin_err_init_state.png
Attachment 3: dolphin_final_state.png
dolphin_final_state.png
  17149   Wed Sep 21 10:10:22 2022 YehonathanUpdateCDSC1SU2 crashed

{Yehonathan, Anchal}

Came in this morning to find that all the BHD optics watchdogs had tripped. The watchdogs were restored, but all the ADCs were stuck.

We realized that the C1SU2 model crashed.

We ran /scripts/cds/restartAllModels.sh to reboot all the machines. All the machines rebooted and were burt-restored to yesterday around 5 PM.

The optics seem to have returned to good alignment and are damped.

  17150   Wed Sep 21 17:01:59 2022 PacoUpdateBHDBH55 RFPD installed - part I

[Radhika, Paco]

Optical path setup

We realized the DCPD - B beam path was already using a 95:5 beamsplitter to steer the beam, so we are repurposing the 5% pickoff for a 55 MHz RFPD. For the RFPD we are using a gold RFPD labeled "POP55 (POY55)" which was on the large optical table near the vertex. We have decided to test this in-situ because the PD test setup is currently offline.

Radhika used a Y1-1025-45S mirror to steer the B-beam path into the RFPD, but a lens should be added next in the path to focus the beam spot onto the PD sensitive area. The current path is illustrated in Attachment #1.

We removed some unused OPLEV optics to make room for the RFPD box, and these were moved to the optics cabinet along Y-arm [Attachment #2].

 


[Anchal, Yehonathan]

PD interfacing and connections

In parallel to setting up the optical path on the ITMY table, we repurposed a DB15 cable from a PD interface board in the LSC rack to the RFPD in question. Then, an SMA cable was routed from the RFPD RF output to an "UNUSED" I&Q demod board on the LSC rack. Luckily, we also found a terminated REFL55 LO port, so we can draw our demod LO from there. There are a few free ADC inputs (14, 15, 20, 21) after the WF2 and WF3 whitening filter interfaces.


Next steps

  • Finish alignment of BH55 beam to RFPD
  • Test RF output of RFPD once powered
  • Modify LSC model, rebuild and restart
Attachment 1: IMG_3760.jpeg
IMG_3760.jpeg
Attachment 2: IMG_3764.jpeg
IMG_3764.jpeg
  17151   Wed Sep 21 17:16:14 2022 TegaUpdateComputersSetup the 6 new front-ends to boot off the FB1 clone

[Tega, JC]

We laid 4 out of 6 fiber cables today. The remaining 2 cables are for the I/O chassis at the vertex, so we will test those cables and lay them tomorrow. We were also able to identify the problems with the 2 supposedly faulty cables, which turned out not to be faulty after all. One of them had a small bend in the connector that I was able to straighten out with small pliers, and the other was a loose connection at the switch end. So there was no faulty cable, which is great! Chris wrote a matlab script that migrates all of the model files. I am going through them, i.e. looking at the CDS parameter block of each to check that all is well. The next task is to build and install the updated models. We also need to update the '/opt/rtcds' and '/opt/rtapps' directories to the latest versions in the 40m chiara system.

 

  17152   Thu Sep 22 19:51:58 2022 AnchalUpdateBHDBH55 LSC Model Updates - part II

I updated the following in the rtcds models and medm screens:

  • c1lsc
    • Added reading of ADC0_20 and ADC0_21 as demodulated BHD output at 55 MHz, I and Q channels.
    • Connected BH55_I and BH55_Q to phase rotation and creation of output channels.
    • Replaced POP55 with BH55 in the RFPD input matrix.
    • Sent BH55_I and BH55_Q over IPC to c1hpc.
    • Added the BH55 RFPD to the LSC screen, the RFPD input matrix, and the whitening box. Some work is still remaining.
  • c1hpc
    • Added receiving of BH55_I and BH55_Q.
    • Added BH55_I and BH55_Q to sensing matrix through filter modules. Now these can be used to control LO phase.
    • Added BH55 signals to the medm screen.
  • c1scy
    • Updated the SUS model to the new sus model that handles the data acquisition rates and also adds BIASPOS, BIASPIT and BIASYAW filter modules at the alignment sliders.

Current state:

  • All models built and installed without any issue or error.
  • On restarting all the models, I first noticed a 0x2000 error on c1lsc, c1scy and c1hpc, but these errors went away after restarting daqd on fb1 (a typical rebuild/restart sequence is sketched at the end of this entry).
  • The BH55 FM buttons are not yet connected to the antialiasing analog filter. This needs to be done and the medm screen updated accordingly.
  • The IPC from c1lsc to c1hpc is not working. On the sender side, it does not show any signal; this needs to be resolved.
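For reference, the typical rebuild/restart sequence looks something like the following (a sketch only; the rtcds helper subcommands are the standard ones, but the daqd service names on fb1 are written from memory and should be checked before use):

# On the front-end that hosts the model, e.g. c1lsc:
rtcds build c1lsc       # compile the model from the updated mdl file
rtcds install c1lsc     # install it under /opt/rtcds/caltech/c1/target
rtcds restart c1lsc     # restart the running model

# On fb1, restart the framebuilder daemons so the DAQ picks up the new channel list
# (service names are an assumption; check 'systemctl list-units | grep daqd' on fb1):
sudo systemctl restart daqd_dc daqd_fw daqd_rcv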
  17153   Thu Sep 22 20:57:16 2022 TegaUpdateComputersbuild, install and start 40m models on teststand

[Tega, Chris]

We built, installed and started all the 40m models on the teststand today. The configuration we employ is to connect c1sus to the teststand I/O chassis and use dolphin to send the timing to the other front-ends. To get this far, we encountered a few problems that were solved by doing the following:

0. Fixed the front-end timing sync to FB1 via NTP

1. Set the rtcds environment variable `CDS_SRC=/opt/rtcds/userapps/trunk/cds/common/src` in the file '/etc/advligorts/env'

2. Resolved chgrp error during models installation using sticky bits on chiara, i.e. `sudo chmod g+s -R /opt/rtcds/caltech/c1/target`

3. Replaced `sqrt` with `lsqrt` in `RMSeval.c` to eliminate a compilation error for c1ioo

4. Created a symlink for 'activateDQ.py' and 'generate_KisselButton.py' in '/opt/rtcds/caltech/c1/post_build'

5. Installed and configured dolphin for the new front-end 'c1shimmer'

6. Replaced 'RFM0' with 'PCIE' in the ipc file, '/opt/rtcds/caltech/c1/chans/ipc/C1.ipc'
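For item 6, a substitution of that sort can be done in place while keeping a backup of the original ipc file (a sketch; double-check the diff before restarting any model):

# Replace RFM0 with PCIE in the IPC definitions, keeping a .bak copy of the original.
sudo sed -i.bak 's/RFM0/PCIE/g' /opt/rtcds/caltech/c1/chans/ipc/C1.ipc
diff /opt/rtcds/caltech/c1/chans/ipc/C1.ipc.bak /opt/rtcds/caltech/c1/chans/ipc/C1.ipc | head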

 

We still have a few issues, namely:

1. The user models are not running smoothly: the CPU usage jumps to its maximum value every second or so.

2. c1omc seems to be unable to get its timing from its IOP model (issue resolved by changing the CDS block parameter 'specific_cpu' from 6 to 4, because the new FEs only have 6 cores, 0-5)

3. The need to load the `dolphin-proxy-km` kernel module and start the `rts-dolphin_daemon` service by hand whenever we reboot a front-end (see the sketch below)
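Until that is automated, the manual sequence after a reboot is presumably something like the following (a sketch; the module and service names are taken from item 3 above, and whether modprobe or the packaged init scripts are the proper way to load the module should be confirmed):

# Load the Dolphin proxy kernel module and start the RTS dolphin daemon after a reboot.
sudo modprobe dolphin-proxy-km                    # module name as written above; may differ in practice
sudo systemctl start rts-dolphin_daemon.service
systemctl status rts-dolphin_daemon.service --no-pager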

Attachment 1: dolphin_state_plus_c1shimmer.png
dolphin_state_plus_c1shimmer.png
Attachment 2: FE_status_overview.png
FE_status_overview.png
  17154   Fri Sep 23 10:05:38 2022 JCUpdateVACN2 Interlocks triggered

[Chub, Anchal, Tega, JC]

After replacing an empty tank this morning, I heard a hissing sound coming from the nozzle area. It turned out to be coming from the copper tubing, which was slightly broken; this was confirmed with soapy-water bubbles. The leak caused the N2 pressure to drop and the vacuum interlocks to be triggered. Chub and I went ahead, replaced the fitting, and connected everything back to normal. Anchal and Tega then used the Vacuum StartUp procedures to restore the vacuum to normal operation.

Adding screenshots as the pressure is decreasing now.

Attachment 1: Screenshot_2022-09-23_10-19-39.png
Screenshot_2022-09-23_10-19-39.png
Attachment 2: Screenshot_2022-09-23_10-22-45.png
Screenshot_2022-09-23_10-22-45.png