ID | Date | Author | Type | Category | Subject
  17165 | Thu Sep 29 18:01:14 2022 | Anchal | Update | BHD | BH55 LSC Model Updates - part IV

More model changes

c1lsc:

  • BH55_I and BH55_Q are now being read at ADC_0_14 and ADC_0_15. ADC_0_20 and ADC_0_21 are bad due to a faulty whitening filter board.
  • The whitening switch controls were also shifted accordingly.
  • The slow EPICS channels for the BH55 anti-aliasing switch and whitening switch were added in /cvs/cds/caltech/target/c1iscaux/C1_ISC-AUX_LSCPDs.db

c1mcs:

  • MC1, MC2, and MC3 are running on new suspension models now.

c1hpc:

  • DCPD_A and DCPD_B have been renamed to BHDC_A and BHDC_B, following the naming convention at other ports.
  • After the input summing matrix, the signals are now called BHDC_SUM and BHDC_DIFF.
  • BHDC_SUM and BHDC_DIFF can be used directly in the sensing matrix, bypassing the dither demodulation (to be used for DC locking).
  • BH55_I and BH55_Q are also sent for dither demodulation now (to be used in the double dither method, RF and audio).
  • SHMEM channel names to c1bac were changed.

c1bac:

  • Conformed with new SHMEM channel names from c1hpc
  17166 | Fri Sep 30 18:30:12 2022 | Anchal | Update | ASS | Model Changes

I updated the c1ass model today to use PR2 and PR3 instead of TT1 and TT2 for the YARM ASS. This required changes in c1su2 too. I have split c1su2 into c1su2 (LO1, LO2, AS1, AS4) and c1su3 (SR2, PR2, PR3). Now the two models are using 31 and 21 out of 60 CPU, which was earlier 55/60. All changes compiled correctly and have been uploaded. Models have been restarted and the MEDM screens have been updated.


Model changes

c1su2:

  • Everything related to SR2, PR2, and PR3 has been moved to c1su3.
  • The extra binary output channels are also distributed between c1su2 and c1su3. BO_4 and BO_5 have been moved to c1su3.

c1su3:

  • Added IPC receiving from ASS for PR2 and PR3

c1ass:

  • Inputs to TT1 and TT2 PIT and YAW filter modules have been terminated to ground.
  • The ASS outputs for YARM have been renamed to PR2 and PR3 from TT1 and TT2.
  • IPC sending blocks added to send PR2 and PR3 ASC signals to c1su3.

 


To do:

  • Update the YARM ASS output matrix to handle the change in coil driver actuation on PR2 and PR3 in comparison to TT1 and TT2.
  • Yuta suggested dithering PR2 and PR3 for input beam pointing for YARM alignment.
  17168 | Sat Oct 1 13:09:49 2022 | Anchal | Update | IMC | WFS turned on

I turned on WFS on IMC at:

PDT: 2022-10-01 13:09:18.378404 PDT
UTC: 2022-10-01 20:09:18.378404 UTC
GPS: 1348690176.378404

The following channels are being saved in frames at 1024 Hz rate:

  • C1:IOO-MC_TRANS_PIT_ERR (Same as C1:IOO-MC_TRANS_PIT_OUT)
  • C1:IOO-MC_TRANS_YAW_ERR (Same as C1:IOO-MC_TRANS_YAW_OUT)
  • C1:IOO-MC_TRANS_SUM_ERR (Same as C1:IOO-MC_TRANS_SUMFILT_OUT)

We can keep it running over the weekend as we will not use the interferometer. I'll keep an eye on it with occasional log-ins. We'll post the time when we switch it off again.


The IMC lost lock at:

UTC    Oct 03, 2022    01:04:16    UTC
Central    Oct 02, 2022    20:04:16    CDT
Pacific    Oct 02, 2022    18:04:16    PDT

GPS Time = 1348794274

The WFS loops kept running and thus took IMC to a misaligned state. Between the above two times, IMC was locked continuously with very brief lock loss events, and had all WFS loops running.

  17174 | Thu Oct 6 11:12:14 2022 | Anchal | Update | BHD | BH55 RFPD installation complete

[Yuta, Paco, Anchal]

BH55 RFPD installation was still not complete until yesterday because of a peculiar issue. As soon as we increased the whitening gain on this photodiode, we saw spikes coming in at around 10 Hz. The following events took place while debugging this issue:

  • We first thought that RFPD might be bad as we had just picked it up from what we call the graveyard table.
  • Paco fixed the bad connection issue at the RF out and we confirmed the RFPD transimpedance by testing it. See 40m/17159.
  • We tried changing the whitening filter board but that did not help.
  • We used BH55 RFPD to lock MICH by routing the demodulation board outputs to AS55 channels on WF2 board. We were able to lock MICH and increase whitening gain without the presence of any spikes. This ruled out any issue with RFPD.
  • Yuta and I tried swapping the whitening filter board but the problem persisted, which made us realize that the issue could be in the Acromag that writes the whitening gain for the BH55 RFPD.
  • We combed through the /cvs/cds/caltech/target/c1iscaux/C1_ISC-AUX_LSCPDs.db file to check if the whitening gain DAC channels are written twice, but that was not the case. However, changing the scan rate of the whitening gain output channel did change the rate at which the spikes were coming.
  • This proved that some other process was constantly writing zero to these outputs.
  • It turned out that all unused Acromag channels for c1iscaux are still defined and made to write 0 through the /cvs/cds/caltech/target/c1iscaux/C1_ISC-AUX_SPARE.db file. I don't think we need this spare file. If someone wants to use spare channels, they can quickly add them to the .db file and restart the modbusIOC service on c1iscaux; it takes less than 2 minutes. I vote to completely get rid of this file or at least not use it in the cmd file.
  • After removing the violating channels, the problem with BH55 RFPD is resolved.

The installation of BH55 RFPD is complete now.
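For future debugging of this kind of conflict, a quick scan like the following could flag record names that are defined in more than one of the c1iscaux .db files. This is a hypothetical helper (not something in the scripts repo), and in this particular case the clash was at the level of the Acromag output rather than the record name, so a similar scan over the OUT fields would be the more direct check:

import glob
import re
from collections import defaultdict

# Collect every EPICS record name defined in the c1iscaux .db files and
# report any name that appears in more than one file.
records = defaultdict(list)
for dbfile in glob.glob('/cvs/cds/caltech/target/c1iscaux/*.db'):
    with open(dbfile) as f:
        for line in f:
            m = re.match(r'\s*record\s*\(\s*\w+\s*,\s*"?([^",)]+)', line)
            if m:
                records[m.group(1).strip()].append(dbfile)

for name, files in sorted(records.items()):
    if len(set(files)) > 1:
        print(name, '->', sorted(set(files)))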

 

  17175 | Thu Oct 6 12:02:21 2022 | Anchal | Update | CDS | CDS Upgrade Plan

[Chris, Anchal]

Chris and I discussed our plan for the CDS upgrade, which amounts to moving the new FEs, the new chiara, and the new fb1 OS system to the martian network.


Preparation:

  • Chiara (clone) (will be called "New Chiara" henceforth) will be resynced to existing chiara to get all model and medm changes.
  • All models on New Chiara will be rebuilt, and reinstalled.
  • All running services on the existing chiara will be printed and stored for comparison later.
  • New Chiara's OS drive will be upgraded to Debian 10 and all services will be restored:
    • DHCP
    • DNS
    • NFS
    • rsync
  • The existing fb1 DAQ network card (10 Gbps Ethernet card) will be verified.
  • Make a list of all fb1 file system mounts and their UUIDs.

Upgrade plan:

Date: Fri Oct 7, 2022
Time: 11:00 am (After coffee and donuts)
Minimum required people: Chris, Anchal, JC (the more the merrier)

Steps:

  1. Ensure a snapshot of all channels is present from Oct 6th on New Chiara.
  2. Shutdown all machines:
    1. All slow computers (Except c1vac).
      Computer List: ssh into the computers and run:
      sudo systemctl stop modbusIOC.service
      sudo shutdown -h now
      1. c1susaux
      2. c1susaux2
      3. c1auxex
      4. c1auxey1
      5. c1psl
      6. c1iscaux
    2. All fast computers. Run on rossa:
      /cvs/cds/rtcds/caltech/c1/Git/40m/scripts/cds/stopAllModels.sh
      Disconnect left ethernet cables from the back of these computers.
    3. Power off all I/O chassis
    4. Swap the oneStop cables on all I/O chassis to fiber cables. On c1sus, connect the copper oneStop cable to teststand c1sus FE.
    5. Turn on all I/O chassis.
  3. Exchange chiaras.
    1. Connect old chiara to teststand network.
    2. Connect New Chiara to martian network.
    3. Turn on both old and new chiara.
    4. Ensure all services are running on New Chiara by comparing with the list made earlier during preparation.
  4. fb1.
    1. Move fb1(clone)'s OS drive into existing fb1 (on 1X6)
    2. Turn on fb1 (on 1X6).
    3. Ensure fb1 is mounting all it's file systems correctly.
  5. New FEs
    1. Connect the network switch for new FEs to martian network. Make sure that old chiara is not connected to this same switch.
    2. Turn on the new FEs. All models should start on boot in sequence.
    3. Check if all models have green lights.
  6. Burt restore using latest snapshot available.
  7. Perform tests:
    1. Check if all local damping loops are working as before.
    2. Check if all IPC channels are transmitting and receiving correctly.
    3. Check if IMC is able to lock.
    4. Try single arm locking
    5. Try MICH locking.
  8. Make contingency plan on how to revert to old system if something fails.
  17176 | Thu Oct 6 18:50:57 2022 | Anchal | Summary | BHD | BH55 meas diff angle estimation and LO phase lock attempts

[Yuta, Paco, Anchal]

BH55 meas diff

We estimated the meas diff angle for BH55 today by following this elog post. We used moku:lab Moku01 to send a 55 MHz tone to the PD input port of the BH55 demodulation board. Then we looked at the I_ERR and Q_ERR signals. We balanced the gain on the I channel to 1.16 to get the two signals to the same peak-to-peak height. Then we changed the meas diff angle to 91.97 to make the "bounding box" zero. Our understanding is that we just want the ellipse to lie along the x-axis.
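For reference, a minimal sketch (not the exact by-eye procedure we used) of how the gain imbalance and the rotation that flattens the I/Q ellipse could be estimated numerically from sampled I_ERR/Q_ERR data; the array names are hypothetical:

import numpy as np

# Given sampled BH55 I/Q data with the 55 MHz test tone applied, find the
# gain that equalizes the two quadratures and the rotation angle that lays
# the Lissajous ellipse along the x-axis (zero "bounding box" height).
def estimate_demod_angle(i_data, q_data):
    gain_q = np.ptp(i_data) / np.ptp(q_data)      # balance peak-to-peak heights
    q_bal = q_data * gain_q
    cov = np.cov(i_data, q_bal)
    # principal-axis angle of the I/Q ellipse; rotating by -theta zeros Q
    theta = 0.5 * np.arctan2(2 * cov[0, 1], cov[0, 0] - cov[1, 1])
    return gain_q, np.degrees(theta)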

We also aligned the beam input to BH55 a bit better. We used the single-bounce beam from the aligned ITMY as the reference.


LO phase lock with single RF demodulation

We attempted to lock LO phase with just using BH55 demodulated output.

Configuration:

  • ITMX, ETMs were significantly misaligned.
  • At BH port, overlapping beams are single bounce back from ITMY and LO beam.

We expected that we would be able to lock to 90 degrees LO phase just like in DC locking. But now we understand that we can't beat the light against its own phase-modulated sidebands.

The confusion happened because this scheme would work with the Michelson: at the Michelson dark port, amplitude modulation is generated at 55 MHz. We tried to do the same thing as was done for DC locking, with a single bounce first and then the Michelson, but we should have seen this beforehand. Lesson: always write down the expectation before attempting the lock.

 

  17178 | Fri Oct 7 22:45:15 2022 | Anchal | Update | CDS | CDS Upgrade Status Update

[Chris, Anchal, JC, Paco, Yuta]

Quote:

Steps:

  1. Ensure a snapshot of all channels is present from Oct 6th on New Chiara.
  2. Shutdown all machines:
    1. All slow computers (Except c1vac).
      Computer List: ssh into the computers and run:
      sudo systemctl stop modbusIOC.service
      sudo shutdown -h now
      1. c1susaux
      2. c1susaux2
      3. c1auxex
      4. c1auxey1
      5. c1psl
      6. c1iscaux
    2. All fast computers. Run on rossa:
      /cvs/cds/rtcds/caltech/c1/Git/40m/scripts/cds/stopAllModels.sh
      Disconnect left ethernet cables from the back of these computers.
    3. Power off all I/O chassis
    4. Swap the oneStop cables on all I/O chassis to fiber cables. On c1sus, connect the copper oneStop cable to teststand c1sus FE.
    5. Turn on all I/O chassis.
  3. Exchange chiaras.
    1. Connect old chiara to teststand network.
    2. Connect New Chiara to martian network.
    3. Turn on both old and new chiara.
    4. Ensure all services are running on New Chiara by comparing with the list made earlier during preparation.

We finished all steps up to step 3 without any issue. We restarted all workstations to get the new NFS mount from New Chiara. Some other machines in the lab might require a restart too if they use NFS mounts. Note: c1sus was initially connected using a fiber oneStop cable that tested OK with the teststand I/O chassis, but it still did not work with the c1sus chassis, and was reverted to a copper cable.


[Chris, Anchal, JC]

Quote:
  • fb1.
    1. Move fb1(clone)'s OS drive into existing fb1 (on 1X6)
    2. Turn on fb1 (on 1X6).
    3. Ensure fb1 is mounting all it's file systems correctly.

While doing step 4, we realized that all 8 drive bays in the existing fb1 are occupied by disks that are managed by a hardware RAID controller (MegaRAID). All 8 hard disks seem to be combined into a single logical volume, which is then partitioned and appears to the OS as a 2 TB storage device (/dev/sda for OS) and 23.5 TB storage device (/dev/sdb for frames). There was no free drive bay to install our OS drive from fb1 (clone), nor was there any already installed drive that we could identify as an "OS drive" and swap out, without affecting access to the frame data. We tried to boot fb1 with the OS drive from fb1 (clone) using multiple SATA to USB cables, but it was not detected as a bootable drive. We then tried to put the OS drive back in fb1 (clone) and use the clone machine as the 40m framebuilder temporarily, in order to work on booting up fb1 in parallel with the rest of the upgrade. We found that fb1 (clone) would no longer boot from the drive either, as it had apparently lost (or never possessed?) its grub boot loader. The boot loader was reinstalled from the debian 10 install thumbdrive, and then fb1 (clone) booted up and functioned normally, allowing the remainder of the upgrade to go forward.


[Chris, Jamie]

Jamie investigated the situation with the existing fb1, and found that there seem to be additional drive bays inside the fb1 chassis (not accessible from the front panel), in which the new OS disk could be installed and connected to open SATA ports on the motherboard. We can try this possible route to booting up fb1 and restoring access to past frames next week.


[Chris, Anchal]

Quote:
 

Steps:

  • New FEs
    • Connect the network switch for new FEs to martian network. Make sure that old chiara is not connected to this same switch.
    • Turn on the new FEs. All models should start on boot in sequence.
    • Check if all models have green lights.
  • Burt restore using latest snapshot available.
  • Perform tests:
    • Check if all local damping loops are working as before.
    • Check if all IPC channels are transmitting and receiving correctly.
    • Check if IMC is able to lock.

We carried out the rest of the steps up to step 7.3. We started all slow machines; some of them required reloading the daemons using:

sudo systemctl daemon-reload
sudo systemctl restart modbusIOC

We found that we were unable to ssh to c1psl, c1susaux, and c1iscaux. It turned out that chiara (clone) had a very outdated martian host table for the nameserver. Since Chris had introduced some new IPs for IPMI ports, the Dolphin switch, etc., we could not simply copy it back from the old chiara. So Chris used the diff command to go through all the changes and restored the DNS configuration.

We were able to burt restore to the Oct 7, 03:19 am point using the latest snapshot on New Chiara. All suspensions were being locally damped properly. We restarted megatron and optimus to get the NFS mounts. All docker services are running normally, the IMC autolocker is working, and the FF slow PID is working as well. The PMC autolocker is also working fine. megatron's autoburt cron job is running properly and resumed creating snapshots from 6:19 pm onwards.


Remaining things to do:

  • Test basic IFO locking
  • Resume BHD commissioning activities.
  • Chris and Jamie will work on transferring the framebuilder job to the real fb1. This would restore access to all past frames, which is not available right now.
  • Eventually, move the new FEs to 1X7 for a permanent move into the new CDS system.
  • After a few weeks of successful running, we can remove the old FEs from the racks along with the associated cables.
  17184 | Tue Oct 11 16:52:42 2022 | Anchal | Update | IOO | Renaming WFS channels to match LIGO site conventions

[Tega, Anchal]

c1mcs and c1ioo models have been updated to add new acquisition of data.


IOO:WFS channels

We found from https://ldvw.ligo.caltech.edu/ldvw/view that the following channels with "WFS" in them are acquired at the sites:

  • :IOO-WFS1_IP
  • :IOO-WFS1_IY
  • :IOO-WFS2_IP
  • :IOO-WFS2_IY

These are most probably the error signals of WFS1 and WFS2. At the 40m, we have the following channel names instead:

  • C1:IOO-WFS1_I_PIT_OUT
  • C1:IOO-WFS1_I_YAW_OUT
  • C1:IOO-WFS2_I_PIT_OUT
  • C1:IOO-WFS2_I_YAW_OUT

And similarly for the Q outputs. We also have the chosen quadrature signals (signals after the sensing matrix) at:

  • C1:IOO-WFS1_PIT_OUT
  • C1:IOO-WFS1_YAW_OUT
  • C1:IOO-WFS2_PIT_OUT
  • C1:IOO-WFS2_YAW_OUT

We added the following testpoints and are acquiring them at 1024 Hz:

  • C1:IOO-WFS1_IP  (same as C1:IOO-WFS1_I_PIT_OUT)
  • C1:IOO-WFS1_IY  (same as C1:IOO-WFS1_I_YAW_OUT)
  • C1:IOO-WFS2_IP  (same as C1:IOO-WFS2_I_PIT_OUT)
  • C1:IOO-WFS2_IY  (same as C1:IOO-WFS2_I_YAW_OUT)

IOO-MC_TRANS

For the transmission QPD at MC2, we found the following acquired channels at the sites:

  • :IOO-MC_TRANS_DC
  • :IOO-MC_TRANS_P
  • :IOO-MC_TRANS_Y

We created testpoints in c1mcs.mdl to add these channel names and acquire them. The following channels are now available at 1024 Hz:

  • C1:IOO-MC_TRANS_DC
  • C1:IOO-MC_TRANS_P
  • C1:IOO-MC_TRANS_Y

We started acquiring the following channels for the 6 error signals at 1024 Hz:

  • C1:IOO-WFS1_PIT_IN1
  • C1:IOO-WFS1_YAW_IN1
  • C1:IOO-WFS2_PIT_IN1
  • C1:IOO-WFS2_YAW_IN1
  • C1:IOO-MC2_TRANS_PIT_IN1
  • C1:IOO-MC2_TRANS_YAW_IN1

We started acquiring the following 6 control signals at 1024 Hz as well:

  • C1:IOO-MC1_PIT_OUT
  • C1:IOO-MC1_YAW_OUT
  • C1:IOO-MC2_PIT_OUT
  • C1:IOO-MC2_YAW_OUT
  • C1:IOO-MC3_PIT_OUT
  • C1:IOO-MC3_YAW_OUT

RXA: useful to know that you have to append "_DQ" to all of the channel names above if you want to find them with nds2-client.
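For example, a minimal nds2-client snippet for grabbing one of these channels with the "_DQ" suffix; the server hostname, port, and GPS times are assumptions and should be replaced with whatever NDS server actually serves the 40m frames:

import nds2

# Fetch 60 s of the newly acquired WFS1 pitch error signal.
conn = nds2.connection('nds40.ligo.caltech.edu', 31200)
bufs = conn.fetch(1349000000, 1349000060, ['C1:IOO-WFS1_IP_DQ'])
data = bufs[0].data                     # numpy array of samples
fs = bufs[0].channel.sample_rate        # sample rate reported by the server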


Other changes:

In order to get C1:IOO-MC_TRANS_[DC/P/Y], we had to get rid of the same-named EPICS output channels in the model. These were previously acquired at 16 Hz that way. We then updated the MEDM screens and the autolocker config file. For slow outputs of these channels, we are using C1:IOO-MC_TRANS_[PIT/YAW/SUMFILT]_OUTPUT now. We had to restart the daqd service for the changes to take effect. This can be done by sshing into fb1 and running:

sudo systemctl restart rts-daqd

There is now a convenient button in the FE overview status MEDM screen to restart the daqd service with a single click.

  17195 | Mon Oct 17 20:04:16 2022 | Anchal | Update | BHD | BH55 RF output amplified

[Radhika, Anchal]

We have added an RF amplifier to the output of BH55. See the MICH signal on BH55 outputs as compared to AS55 output on the attached screenshot.

Quote:

Next steps:

- Amplify the BH55 RF signal before demodulation to increase the SNR. In order to power an RF amplifier, we need to use a breakout board to divert some power from the DB15 cable currently powering BH55.

 


Details:

  • Radhika first tried to use a ZFL-500-HLN+ amplifier taken from the amplifier storage along the X-arm.
  • She used a DB15 breakout board to source the amplifier power from the PD interface cable.
  • However, she reported no signal at the output.
  • We found that the BH55 RFPD was not properly fixed to the optical table. We bolted it down properly and aligned the beam onto the photodiode.
  • We still did not see any RF output.
  • I took over from Radhika on this issue. I tested the transfer function of the amplifier using moku:lab and found that it was not amplifying at all.
  • I brought in a benchtop power supply and tested the amplifier by powering it directly. It drew 100 mA of current but showed no amplification in the transfer function, which was constant at -40 dB with or without the amplifier powered.
  • I took out another RF amplifier from the same storage, this time a ZFL-1000-LN. I tested it with both the benchtop power supply and the PD interface power source; it was working with 20 dB amplification.
  • I completed the installation and cable management. See photos attached.
  • I also took the opportunity to center the ITMY oplev.

Please throw away malfunctioning parts or label them malfunctioning before storing them with other parts. If we have to test each and every part before installation, it will waste too much of our time.

 

  17198 | Tue Oct 18 20:43:38 2022 | Anchal | Update | Optimal Control | IMC open loop noise monitor

The WFS loops had been running for the past 2 hours when I set the overall gain slider to zero at:

PDT: 2022-10-18 20:42:53.505256 PDT
UTC: 2022-10-19 03:42:53.505256 UTC
GPS: 1350186191.505256

The output values are fixed to a good alignment. IMC transmission is about 14100 counts right now. I'll turn on the loop tomorrow morning. Data from tonight can be used for monitoring the open loop noise.

  17199 | Wed Oct 19 09:48:49 2022 | Anchal | Update | Optimal Control | IMC open loop noise monitor

Turning WFS loops back on at:

PDT: 2022-10-19 09:48:16.956979 PDT
UTC: 2022-10-19 16:48:16.956979 UTC
GPS: 1350233314.956979

  17203 | Fri Oct 21 10:37:36 2022 | Anchal | Summary | BHD | BH55 phase locking efforts

After the amplifier was modified with a capacitor, we continued trying to lock the LO phase in quadrature with the AS beam. Following is a short summary of the efforts:

  • To establish some ground, we tested locking MICH using BH55_Q instead of AS55_Q. After amplification, BH55_Q is almost the same level in signal as AS55_Q and a robust lock was possible.
  • Then we locked the LO phase using BH55_Q (single RF sideband locking), which locks the homodyne phase angle to 90 degrees. We were able to successfully do this by turning on extra boost at FM2 and FM3 along with FM4 and FM5 that were used to catch lock.
  • We also tried locking in a single ITMY bounce configuration. This is a Mach-Zehnder interferometer with PR2 acting as the first beam splitter and BHDBS as the recombination beamsplitter. Note that we failed earlier at this attempt due to the busted demodulation board. This lock worked as well with single RF demodulation using BH55_Q.
  • The UGF achieved in the above configurations was ~15 Hz.
  • In between and after the above steps, we tried using audio dither + RF sideband, and double demodulation to lock the LO phase but it did not work:
    • We could see a good Audio dither signal at 142.7 Hz on the BH55_Q signal. SNR above 20 was seen.
    • However, on demodulating this signal and transferring all signal to C1:HPC-BH55_Q_DEMOD_I_OUT, we were unable to lock the LO phase.
    • Using xyplot tool, we tried to see the relationship between C1:HPC-BHDC_DIFF_OUT and C1:HPC-BH55_Q_DEMOD_I_OUT. The two signals, according to our theory, should be 90 degrees out of phase and should form an ellipse on XY plot. But what we saw was basically no correlation between the two.
    • Later, I tried one more thing. The comb60 filter on BH55 is not required when using audio dither with it, so I switched it off.
      • I turned off comb60 filters on both BH55_I and BH55_Q filter modules.
      • I set the audio dither to 120 Hz this time to utilize the entire 120 Hz region between 60 Hz and 180 Hz power line peaks.
      • I changed the demodulation low pass filter to 60 Hz Butterworth filter. I tried using 2nd order to lose less phase due to this filter.
      • These steps did not fetch me any different results than before, but I did not get a good time to investigate this further as we moved into CDS upgrade activities.
  17218 | Tue Nov 1 15:41:18 2022 | Anchal | Update | SUS | F2A filter design and trial on MC1

Following discussion in this elog thread (40/6004), I used the design of the F2A (force-to-angle (pitch)) decoupling filter described in DCC doc T010140. This document is very useful as it covers the overall control loops of a suspension, including sensor signal conditioning, damping filter shapes, force-to-pitch decoupling, pitch-to-position decoupling, and coil strength balancing. In the future, if people are working on suspension characterization and damping, this document is a good resource to read.


Force to Angle (Pitch) decoupling filter

The document addresses this problem with a first-principles calculation using the geometry of the single suspensions. As a first pass, I decided to use the design values of these geometric parameters to create one filter design for the upper coils and one for the lower coils. The parameters are listed in Table 2 of the document. I used the following:

Symbol | Description | Value used
L | Vertical distance from the suspension point to the wire take-off point | 247.1 mm
h | Pitch distance (distance of the wire take-off point above the center of mass) | 0.9 mm
l | L + h | 248 mm
D | Vertical distance from a magnet to the center of the optic | 24.7 mm
Q | Q value used in the poles of the filter (the doc says to use 1000) | 3, 5, 10
\omega_0 | Position resonant angular frequency, given by \sqrt{g/l} | 6.288 rad/s

Using the above parameters, we can define the F2A filter for the upper coils as:

T(s) = \frac{s^2 + \omega_0^2(1 + h/D)}{s^2 + s \omega_0 /Q + \omega_0^2}

and for the lower coils:

T(s) = \frac{s^2 + \omega_0^2(1 - h/D)}{s^2 + s \omega_0 /Q + \omega_0^2}

I used the design values listed in the table above and got the filters shown in Attachment 1 for the Q=3 case. I think this Q is higher than that of the other F2A filters I have seen, for example at ETMY, whose filters are shown in Attachment 2.

I tried turning on the MC1 F2A filters but the watchdog tripped in about 4 minutes. This was the case with the WFS turned off. Another trial also led to the same result. I tried this on MC2 and MC3 as well; all of them tripped after some time. See Attachment 3 for MC1 tripping on these filters.

I'll now try to use a lower Q filter.

  17219 | Tue Nov 1 17:17:27 2022 | Anchal | Update | SUS | Modified F2A filter design and trial on MC1

After a quick discussion with Yuta, we figured that when Peter Fritschel introduces a finite Q for the pole pair in this DCC doc T010140, he should have done the same for the zero pair as well; otherwise there is a notch at around 1 Hz. So I simply modified the filter design to use the same Q for both the zero pair and the pole pair and got the following transfer functions:

For the upper coils:

T(s) = \frac{s^2 + s\omega_0\sqrt{1 + h/D}/Q + \omega_0^2(1 + h/D)}{s^2 + s\omega_0/Q + \omega_0^2}

and for the lower coils:

T(s) = \frac{s^2 + s\omega_0\sqrt{1 - h/D}/Q + \omega_0^2(1 - h/D)}{s^2 + s\omega_0/Q + \omega_0^2}
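For illustration, a minimal scipy sketch of this modified filter pair using the design parameters from the previous entry (a sketch only, not the actual createF2Afilters.py):

import numpy as np
from scipy import signal

w0 = 6.288                 # POS resonance [rad/s], from the parameter table
h_over_D = 0.9 / 24.7      # h/D from the parameter table
Q = 3.0

def f2a_filter(sign):
    # sign = +1 for the upper coils, -1 for the lower coils
    a = 1 + sign * h_over_D
    num = [1, w0 * np.sqrt(a) / Q, w0**2 * a]   # zero pair with the same Q
    den = [1, w0 / Q, w0**2]                    # pole pair at the POS resonance
    return signal.TransferFunction(num, den)

upper = f2a_filter(+1)
lower = f2a_filter(-1)
f = np.logspace(-2, 1, 500)                     # Hz
w, mag, phase = signal.bode(upper, 2 * np.pi * f)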

Attachment 1 shows the new filter design. I tested this filter set on MC1 and the optic kept running as if nothing changed, which is at least a good sign. The next step would be to test whether this actually helps in reducing the POS->PIT coupling on MC1, maybe using the WFS signals.

The filters were added using this createF2Afilters.py script.

 

  17220 | Tue Nov 1 17:59:23 2022 | Anchal | Update | SUS | Added F2A filters on MC1, MC2, and MC3

I've cleared all old attempts at F2A filters on MC1, MC2, and MC3, and added the default F2A filter described in the last post. I added three such sets of filters, with Q=3, 7, and 10. I have turned on the Q=3 filter for all IMC optics right now. I'll set up a test of switching between the different Q filters in the future.

  17222 | Thu Nov 3 14:00:29 2022 | Anchal | Update | SUS | MC1 coil strengths balanced

I balanced the face coil strengths of MC1 using the following steps:

  • At all points, keep sum(abs(coil_gains)) = 4.
  • After reading the coil gains, remove the signs, do the operations below, and put the signs back before writing.
  • Butterfly to POS decoupling:
    • Drive the butterfly mode at 13 Hz using LOCKIN2 on MC1 and look at C1:IOO-MC_F_DQ for position fluctuations.
    • Subtract 0.05 times the BUT vector from the coil gains and see the effect on C1:IOO-MC_F_DQ using diaggui (exponential averaging of 5, BW = 1).
    • Use Newton-Raphson iteration from here to reach zero POS actuation when driving the butterfly mode (see the sketch below).
  • POS to PIT decoupling:
    • Drive LOCKIN2 in POS mode at 13 Hz and look for the PIT signal at C1:IOO-MC_TRANS_P_DQ using diaggui (exponential averaging of 5, BW = 1).
    • Subtract 0.05 times the PIT vector from the coil gains.
    • Use Newton-Raphson iteration from here to reach zero PIT actuation when driving POS.
  • POS to YAW decoupling:
    • Drive LOCKIN2 in POS mode at 13 Hz and look for the YAW signal at C1:IOO-MC_TRANS_Y_DQ using diaggui (exponential averaging of 5, BW = 1).
    • Subtract 0.05 times the YAW vector from the coil gains.
    • Use Newton-Raphson iteration from here to reach zero YAW actuation when driving POS.

By the end, I was able to see no actuation on POS when the butterfly mode was driven with 30000 counts of amplitude at 13 Hz, and no PIT or YAW actuation when POS was driven with 10000 counts at 13 Hz.
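For reference, a minimal sketch of one secant/Newton-Raphson update of the kind used in each decoupling step above. Here measure_coupling stands in for the diaggui measurement of the signed 13 Hz line amplitude in the relevant witness channel (e.g. MC_F for the butterfly -> POS step), and the UL/UR/LR/LL ordering of the mode vector is an assumption:

import numpy as np

BUT = np.array([+1.0, -1.0, +1.0, -1.0])      # butterfly correction vector (assumed ordering)

def rebalance_step(gains, mode_vec, measure_coupling, probe=0.05):
    signs = np.sign(gains)
    g = np.abs(gains)
    y0 = measure_coupling(signs * g)                        # coupling at current gains
    y1 = measure_coupling(signs * (g - probe * mode_vec))   # coupling after the 0.05 trial step
    s_zero = probe * y0 / (y0 - y1)   # linear (secant) estimate of the zero-coupling step
    g = g - s_zero * mode_vec
    g *= 4.0 / np.sum(np.abs(g))      # keep sum(|gains|) = 4
    return signs * g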

Final coil strengths found:

C1:SUS-MC1_ULCOIL_GAIN: -1.008
C1:SUS-MC1_URCOIL_GAIN: -0.98  
C1:SUS-MC1_LRCOIL_GAIN: -1.06
C1:SUS-MC1_LLCOIL_GAIN: -0.952

I used this notebook while doing the above work. It has a couple of functions that could be useful in future while doing similar balancing.

 

  17223 | Thu Nov 3 14:42:57 2022 | Anchal | Update | SUS | MC2 coil strengths balanced

Balanced MC2 coil strengths using the same method.

Final coil strengths found:

C1:SUS-MC2_ULCOIL_GAIN: 1.074
C1:SUS-MC2_URCOIL_GAIN: -0.979
C1:SUS-MC2_LRCOIL_GAIN: 0.97
C1:SUS-MC2_LLCOIL_GAIN: -0.976

  17225 | Thu Nov 3 16:22:11 2022 | Anchal | Update | SUS | MC3 coil strengths balanced

Balanced MC3 coil strengths using the same method.

Final coil strengths found:

C1:SUS-MC3_ULCOIL_GAIN: 0.942
C1:SUS-MC3_URCOIL_GAIN: -1.038
C1:SUS-MC3_LRCOIL_GAIN: 1.075
C1:SUS-MC3_LLCOIL_GAIN: -0.945

 

  17227 | Thu Nov 3 17:36:00 2022 | Anchal | Update | SUS | F2A filters on MC1, MC2, and MC3 set to test at 1 am
Quote:

 I'll setup some test of switching between different Q filters in future.

The F2A filters are set to be tested on the IMC optics. The script used is testF2AFilters.py. The script is running on rossa in a tmux session named f2aTest. It will trigger at 1 am, Nov 4th 2022. First the script will turn off all F2A filters on the IMC optics and wait for an hour; then it will try out the three F2A filter sets with different Q values, one at a time, for one hour each. So the test should last roughly 4 hours. The GPS timestamps will be written to a logfile that can be used later to read back the noise performance of the IMC with each filter. The script has a try-except failsafe to revert things to the original state if something fails. To stop the script from triggering, or to stop it during runtime, do the following on rossa:

tmux a -t f2aTest
ctrl+C
exit
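For reference, the overall structure of the test is roughly the following (a sketch only; the real testF2AFilters.py lives in the 40m scripts repo and switches the actual foton filter banks, which is left as a placeholder here):

import time

OPTICS = ['MC1', 'MC2', 'MC3']
FILTER_SETS = ['Q3', 'Q7', 'Q10']

def engage_f2a(optic, filter_set):
    """Placeholder: engage the named F2A filter set on 'optic' (None = all off)."""
    print(time.time(), optic, '->', filter_set)   # GPS/clock stamps go to the logfile

try:
    for optic in OPTICS:
        engage_f2a(optic, None)           # start with all F2A filters off
    time.sleep(3600)
    for filter_set in FILTER_SETS:        # one hour per Q value
        for optic in OPTICS:
            engage_f2a(optic, filter_set)
        time.sleep(3600)
finally:
    for optic in OPTICS:
        engage_f2a(optic, 'Q3')           # failsafe: revert to the nominal state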

 

  17228 | Thu Nov 3 20:07:01 2022 | Anchal | Summary | BHD | AS1 coil balancing required

[Anchal, Koji]

The LO phase lock that was achieved lasts only a short time, because as soon as a considerable POS offset is required on AS1, the POS-to-PIT coupling makes the AS-LO overlap go away. To fix this, we need to balance the coil outputs of at least AS1 and add the F2A filters too. To follow a method similar to the one used for the IMC optics, we need a sensor for the true PIT and YAW motion of AS1. Today, we looked into the possibility of installing a QPD in the BHD output path to use it for AS1, AS4, LO1, LO2, SR2, PR2 and PR3 coil strength tuning. We found a QPD which is mentioned in this elog. We found QPD interface boards set up for the old MCT and MC Refl QPDs (dating before 2008). We also found the old IP-POS QPD cable between 1Y2 and the BS oplev table. We pulled this cable from the BS oplev end up to the ITMY oplev table, put a new DB25 connector on the ribbon cable, and connected it to the QPD on the ITMY table. The following work still needs to be done:

  • Move the BHD port camera back a few inches, and the lens with it.
  • Put a beamsplitter in the beam going to this camera and align it to fall on the new QPD.
  • Connect the other end of the cable to the QPD interface board on 1Y2.
  • Take the LEMO outputs or IDC outputs from the QPD interface board to spare ADC inputs (maybe on the LSC I/O chassis or the SUS2 I/O chassis).
  • Make changes in RTS model to read this QPD input.
  • Enjoy balancing the coils on the 7 new suspensions.
  17231 | Mon Nov 7 11:24:05 2022 | Anchal | Update | SUS | F2A filters on MC1, MC2, and MC3 set to test at 1 am

This test was not successful as the IMC lost lock during the F2A filter trial. However, we do have one hour of data with all F2A filters turned off, between the following GPS times:

start gpstime: 1351584077

stop gpstime: 1351587677

After this GPS time, the F2A filters were turned ON for all IMC optics. After about 2000 seconds with no issues, the MC3 suspension suddenly rang up with 1 Hz oscillations around GPS time 1351590720. See Attachment 1 for noise spectra of the local damping error signals for MC3 before and after this event. See Attachment 2 for a time series of this event.

After this point, MC3 remained rung up and the IMC remained unlocked, so no WFS signals are meaningful after GPS time 1351590720.

I have seen this happen out of nowhere to MC3 today too, when the PSL shutter was closed and the only thing interacting with MC3 was the local damping loop. This suggests that some glitch event happens in MC3 which is not handled well by the F2A filter on it. The ringing goes down as soon as we turn OFF the F2A filter. The other optics show no such signs.

We'll do more tests in the future to figure out the issue. For now, the MC3 F2A filters are kept off. Maybe we need a custom filter for MC3 rather than the design-value default filter we are using right now. I'm attaching a foton bode plot of the MC3 F2A filters to verify that the correct filters are in place.

  17232 | Mon Nov 7 12:02:15 2022 | Anchal | Update | SUS | IMC test with PSL shutter closed

The following configurations were used this morning:

Start Time | Stop Time | HEPA | PSL Shutter | MC1 f2a | MC2 f2a | MC3 f2a | MC3 ringing | Notes
1351879503 (10:04:45 PST / 18:04:45 UTC) | 1351879797 (10:09:39 PST / 18:09:39 UTC) | ON | OFF | ON | ON | ON | NO | Found later that MC3 started ringing at 1351879797
1351879797 (10:09:39 PST / 18:09:39 UTC) | 1351881325 (10:35:07 PST / 18:35:07 UTC) | ON | OFF | ON | ON | ON | YES | This is the duration when MC3 was ringing
1351881325 (10:35:07 PST / 18:35:07 UTC) | 1351882257 (10:50:39 PST / 18:50:39 UTC) | OFF | OFF | ON | ON | ON | YES | We turned off the HEPA filter during this time; MC3 was still ringing.
1351882257 (10:50:39 PST / 18:50:39 UTC) | 1351882346 (10:52:08 PST / 18:52:08 UTC) | OFF | OFF | ON | ON | ON | NO | I noticed MC3 ringing and reset the f2a filter by turning it off, waiting for the optic to damp down, and restarting it.
1351882346 (10:52:08 PST / 18:52:08 UTC) | 1351883406 (11:09:48 PST / 19:09:48 UTC) | OFF | OFF | ON | ON | ON | YES | MC3 started ringing again soon after.
1351883406 (11:09:48 PST / 19:09:48 UTC) | 1351885490 (11:44:32 PST / 19:44:32 UTC) | OFF | OFF | ON | ON | OFF | NO | MC3 f2a filters turned off due to repeated failure.
1351885490 (11:44:32 PST / 19:44:32 UTC) | 1351887487 (12:17:49 PST / 20:17:49 UTC) | ON | OFF | ON | ON | OFF | NO | Turned the HEPA back ON for completeness of the data.

 

  17234 | Mon Nov 7 14:38:59 2022 | Anchal | Update | SUS | MC3 coil strengths rebalanced

I checked again today by sending an excitation at POS and reading back C1:IOO-MC_TRANS_P and C1:IOO-MC_TRANS_Y. I found that there was some POS->PIT and POS->YAW coupling remaining, which I removed by the same method. The new coil gains are:

C1:SUS-MC3_ULCOIL_GAIN: 0.942
C1:SUS-MC3_URCOIL_GAIN: -1.042
C1:SUS-MC3_LRCOIL_GAIN: 1.076
C1:SUS-MC3_LLCOIL_GAIN: -0.94

 

  17235 | Mon Nov 7 16:14:43 2022 | Anchal | Update | SUS | F2A filter with Q=1, 0.3 stable with MC3

I tried running the F2A filter with Q=1 for a few hours and with Q=0.3 for maybe 30 minutes on MC3 today, and both keep the suspension stable. So I'm going to put in the Q=0.3 filter at FM1, Q=0.7 at FM2, and Q=1 at FM3. I am setting up the test again for tonight with some modifications: now the separate filter sets will be tried one by one on the three different optics, so that we know the best Q filter for each optic. It is set to trigger at 1 am tonight in the tmux sessions f2aMC1Test, f2aMC2Test, and f2aMC3Test on rossa. To cancel or interrupt the test, do:

tmux a -t f2aMC1Test
ctrl+C
exit
tmux a -t f2aMC2Test
ctrl+C
exit
tmux a -t f2aMC3Test
ctrl+C
exit
  17236 | Mon Nov 7 17:10:41 2022 | Anchal | Summary | BHD | QPD installation seems like lost cause

The new QPD installation is turning out to be much harder than it originally seemed. After finding the cable, QPD, and interface board, when I tried to use the cable it seemed to be neither powered nor connected to the interface board at all. I tried both QPD ports on the QPD interface board (D990692); neither worked. I measured the output pins of the IDC-style connector on the interface board and they have the correct voltages at the correct pins. But when I connect this to our cable, go to the DB25 end of the cable, and check the voltages with a breakout board, I see nothing. The even pins, which are supposed to be connected to each other and to GND, are not even connected to each other. I pulled out the DB25 end of the cable, brought it close to the IDC end to do a direct continuity test, and this test failed too.

I even found another IDC end of a spare QPD cable hanging near 1Y2, but could not find the other end of this cable either.

So moving forward, we have the following options:

  • Assume the cable is bad and try to find another cable.
    • It is very hard to find these cables in the lab. Koji and I have already done one sweep.
  • Source 26 pin 2 row IDC female connector and make a ribbon cable ourselves.
    • We probably will need to buy this connector for this to work.
    • Downs has apparently thrown away all IDC connectors.
  • Use clean room QPD that does not use this interface.
    • The QPD used in clean room tests for suspension hanging used a different board.
    • This board is just lying on the floor, mounted on one slot of a big 6U chassis.
  • Use AS WFS
    • If used in its current position, it would not be useful for the BHD port or for tuning LO1, LO2, and AS4.
    • If taken to the ITMY oplev table, we will need to source the LO and other connections right at the PD head, as that is the design for these PDs.
  • Use GigE camera
    • We can replace the analog camera with a GigE camera on the BHD output.
    • We will need to revise the GigE camera code and MEDM screens for this, and run an ethernet cable to the ITMY oplev table.
  • Someone verify that the cable is indeed not working as I am seeing above. If I am wrong, I would be a happier person.
  17237 | Mon Nov 7 19:53:12 2022 | Anchal | Update | SUS | MC2 OSEMs calibrated using MC_F

MC2 OSEM outputs were calibrated today using MC_F to get the output values in microns. This was done using this diaggui file. We drove a sine wave at 13 Hz and 5000 cts at C1:SUS-MC1_BIASPOS_EXC. This signal is read at C1:IOO-MC_F and at C1:SUS-MC1_ULSEN_OUT and the similar OSEM output channels. The MC_F calibration in Hz is assumed to be correct. In diaggui, a calibration conversion of 4.8075e-14 m/Hz is added to convert the MC_F signal into meters. This is then used to calibrate the OSEM outputs, and the necessary gain changes were made in the cts2um filter module of all the face OSEM input filters. Following are the new gains:

  • UL 0.36 -> 0.28510
  • UR 0.36 -> 0.26662
  • LL 0.36 -> 0.34861
  • LR 0.36 -> 0.71575

Note that this measurement was done after the coil strengths for MC2 have been balanced in 40m/17223.
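For reference, the calibration arithmetic amounts to comparing the 13 Hz line height in the two signals. A minimal sketch, assuming time series mc_f (in Hz) and osem (in um, i.e. after the current cts2um gain) and an assumed sample rate:

import numpy as np
from scipy import signal

MC_F_CAL = 4.8075e-14        # m/Hz, MC_F frequency-to-length conversion
F_LINE = 13.0                # Hz, calibration line frequency
fs = 2048.0                  # Hz, assumed sample rate of the DQ channels

def line_height(x, fs, f_line):
    # ASD value at the bin nearest the calibration line
    f, pxx = signal.welch(x, fs=fs, nperseg=int(fs * 10))
    return np.sqrt(pxx[np.argmin(np.abs(f - f_line))])

def new_cts2um(old_gain, mc_f, osem):
    length_um = line_height(mc_f, fs, F_LINE) * MC_F_CAL * 1e6   # cavity length motion [um]
    osem_um = line_height(osem, fs, F_LINE)                      # motion reported by the OSEM [um]
    return old_gain * length_um / osem_um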

  17238 | Mon Nov 7 20:00:37 2022 | Anchal | Update | SUS | MC1 OSEMs output is weird

Following up, I tried to do this exercise with MC1 and MC3. While MC3 shows the expected minute corrections to the previous values, MC1 showed much larger corrections, which led me to investigate further. Koji suggested taking a transfer function between MC_F and the OSEM outputs for both MC1 and MC3 in the same way to see if something is different, and Koji was absolutely right. The MC1 MC_F-to-OSEM-output transfer function has a frequency-dependent value, with a slope of ~0.6. Very weird. I'm holding off on the OSEM calibration for both MC1 and MC3 until we understand better what is happening. See the attached transfer functions.

Reminder: MC1 is using the new satellite amplifier box, but the OSEM outputs are read through the single-ended PDMon outputs rather than the differential PD Output port, because the rest of the MC1 electronics is still last generation and its whitening board takes single-ended inputs.

  17241 | Tue Nov 8 10:23:42 2022 | Anchal | Update | SUS | MC3 damping loop step responses

I tuned the MC3 local damping gains by looking at step responses in the DOF basis. The same procedure was followed as described in 40m/17133. The gains were changed as follows:

Channel | Old Gain | New Gain
C1:SUS-MC3_SUSPOS_GAIN | 100 | 200
C1:SUS-MC3_SUSPIT_GAIN | 24 | 10
C1:SUS-MC3_SUSYAW_GAIN | 8 | 30
C1:SUS-MC3_SUSSIDE_GAIN | 125 | 75

Attachment 1 shows the step responses with the old gains and Attachment 2 shows the step responses with the new gains. There is considerable cross coupling from the SIDE OSEM and coil to the face DOFs (POS, PIT, YAW). I think the previously high SIDE gain was the culprit that started the ringing with the f2a filters.

I agree that the POS and SIDE step responses could look better, but this was the best I was able to achieve. Further attempts by others are most welcome.

I also verified running the f2a filter with Q=3, and it has been running stably with no ringing for the past few minutes. Longer-term behavior is yet to be seen.

  17242 | Tue Nov 8 10:35:26 2022 | Anchal | Update | SUS | IMC F2A test

This time the test was successful, but I have reverted the MC3 f2a filters back to Q=3, 7, and 10. The initial part of the test is still useful though. I'm attaching amplitude spectral density curves for the WFS control points and C1:IOO-MC_F_DQ in the different configurations. The shaded region spans the 15.865 to 84.135 percentile bounds of the PSD data, which corresponds to +/- 1 sigma for a Gaussian variable. Also note that in each decade of frequency the FFT bin width is different, such that each decade has 90 points (e.g., 0.1 Hz bin width for 1 Hz to 10 Hz data, 1 Hz bin width for 10 Hz to 100 Hz, and so on).

The WFS control points do not show any significant difference over most of the frequency band. The differences below 10 mHz are not averaged enough, as these were only 30-minute data segments.

The C1:IOO-MC_F_DQ channel also shows no significant difference from 0.1 Hz to 20 Hz. Between 20 and 100 Hz, the higher-Q filters resulted in slightly less noise, but the effect of the filters in this band should be nothing, so this could be coincidence, or maybe the system behaves better with higher-Q filters. In the lower frequency band, we should take more data to average more, after shortlisting some of these f2a filters. It seems like the MC1 Q=10 filter (red curve) performs very well. For MC2, there is no clear winner. I'm not sure why the MC2 Q=3 curve has a big offset in the low-frequency region; such things normally happen when a significant linear trend is present in the signal.

I'm not sure what other channels might be interesting to look at. Some input would be helpful.
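For reference, a minimal sketch of how such percentile bands can be formed (per-segment ASDs, then percentiles per frequency bin); the per-decade bin widths used in the actual plots are omitted here:

import numpy as np
from scipy import signal

def percentile_asd(data, fs, seg_len_s=60):
    # Split the data into non-overlapping segments, compute one ASD per
    # segment, then take the 15.865 / 50 / 84.135 percentiles per bin.
    nper = int(seg_len_s * fs)
    segs = [data[i:i + nper] for i in range(0, len(data) - nper + 1, nper)]
    asds = []
    for seg in segs:
        f, pxx = signal.welch(seg, fs=fs, nperseg=nper)
        asds.append(np.sqrt(pxx))
    asds = np.array(asds)
    return f, np.percentile(asds, [15.865, 50, 84.135], axis=0)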

  17245 | Tue Nov 8 18:12:01 2022 | Anchal | Update | SUS | MC2 OSEMs calibrated using MC_F

I reran this measurement at a low frequency of 0.1016 Hz. The following were the cts2um gain changes:

  • UL 0.28510 -> 0.30408(0.00038)
  • UR 0.26662 -> 0.28178(0.00027)
  • LL 0.34861  -> 0.38489(0.00049)
  • LR 0.71575 -> 0.80396(0.00681)

Edited AG, Wed Nov 9 12:17:12 2022: Uncertainties were added by taking the coherence of each channel and of MC_F with the excitation, using \sqrt{\frac{1-\gamma}{2 \gamma n_{avg}}} to get the fractional error in the ASD values used for the ratios (where \gamma is the coherence and n_{avg} is the number of averages, 5 in this case), adding the MC_F ASD fractional error to each sensor's fractional error, and finally multiplying by the ratios obtained above to get the error in the cts2um gain values.

RXA: I don't believe it. This is more accurate than the LIGO calibration of strain and also more accurate than the NIST calibration of laser power.

 

  17246 | Tue Nov 8 19:39:34 2022 | Anchal | Update | SUS | MC3 OSEMs calibrated using MC_F

I did the same measurement for MC3, with the one difference that the OSEMs report \sqrt{2} times more motion than the IMC cavity length change because the beam hits MC3 at 45 degrees. The following are the new cts2um gain values:

  • UL 0.36 -> 0.39827(0.00045)
  • UR 0.36 -> 0.33716(0.00038)
  • LL 0.36  -> 0.34469(0.00039)
  • LR 0.36 -> 0.33500(0.00038)

 

  17249 | Wed Nov 9 11:07:16 2022 | Anchal | Update | SUS | IMC test

The following configurations were used this morning:

Start Time | Stop Time | HEPA | PSL Shutter | WFS Loops
1352046006 (08:19:48 PST / 16:19:48 UTC) | 1352050278 (09:31:00 PST / 17:31:00 UTC) | OFF | OPEN | OFF
1352050393 (09:32:55 PST / 17:32:55 UTC) | 1352054538 (10:40:00 PST / 18:40:00 UTC) | OFF | OPEN | ON
1352054537 (10:41:59 PST / 18:41:59 UTC) | 1352058724 (11:51:46 PST / 19:51:46 UTC) | OFF | CLOSED | OFF

F2A filters with Q=10 (FM3) were turned on for all IMC optics.

  17251 | Wed Nov 9 20:01:38 2022 | Anchal | Update | SUS | MC1 OSEMs output is weird

I took coil-to-OSEM transfer functions for the MC1 OSEMs (LL, UR) today, and again the slope of the transfer function was -1.4 instead of the expected -2. I compared this with the MC3 coil-to-OSEM transfer function (LL), which correctly had a slope of -2. See Attachments 1 and 2 for the results. This measurement was taken with the PSL shutter closed and the local damping loops turned off.

As I mentioned earlier, MC1 is using the new satellite amplifier box (S2100029-v2), whose transfer function was measured by me in 40m/15776. Using this transfer function data and the foton 3:30 (FM1) filter, I tried to recreate the product transfer function expected if both filters are working correctly. Attachment 3 shows these transfer function plots. On top of this I overlaid the measured OSEM-to-position-displacement transfer function from 40m/17238, matching the magnitudes at 1 Hz. It is suspicious how nicely the measured transfer function overlays with the measured satellite amplifier transfer function, in both magnitude and phase. I'll investigate more tomorrow.

 

  17252 | Wed Nov 9 20:29:20 2022 | Anchal | Update | SUS | MC2 and MC3 set to undergo free swing test tonight

I have set up a free swing test for MC2 and MC3 to trigger at 1 am tonight. The test should last about 4.5 hrs, up to 5:30 am. It will close the PSL shutter, perform the test, and open the shutter afterwards. To cancel or interrupt the test, go to rossa and do:

tmux a -t FST
ctrl+c
exit

 

  17256 | Fri Nov 11 11:29:11 2022 | Anchal | Update | SUS | MC1 OSEMs output is weird

Late elog; original time Thursday, Nov 10 16:00 2022

MC1 is using a new satellite amplifier which has a whitening circuit on it with a 3 Hz zero and a 30 Hz pole. But to read out this signal we use the old whitening board, as it also serves as the interface board to the ADC. This is the D000210 Whitening and Interface Board. This board has a switchable whitening filter for which our RTS models supply GND as the switch input. It was not immediately clear to me whether the GND input to this switch means the whitening is ON or not.

I disconnected the inputs and outputs of the whitening board used for the MC1 OSEM PDs and used a moku:go to measure the transfer function for the UR channel. This confirmed that whitening is turned ON on this interface board as well, which means the MC1 OSEM signals are whitened twice while digitally we have been dewhitening only once. There are two possible solutions:

  • We turn on another identical dewhitening filter in the MC1 OSEM input filter modules (a 3:30 at FM3); a sketch of such a stage is shown below.
  • We can change the MC1 Simulink model to stop keeping whitening on by default.
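For illustration, a sketch of the extra dewhitening stage mentioned in the first option above, i.e. a digital filter that inverts an analog whitening stage with a 3 Hz zero and a 30 Hz pole. Whether the existing foton module named 3:30 is defined exactly this way is an assumption:

import numpy as np
from scipy import signal

# Dewhitening: zero at 30 Hz, pole at 3 Hz, normalized to unity gain at DC,
# which cancels an analog whitening stage with a 3 Hz zero and a 30 Hz pole.
f_zero, f_pole = 30.0, 3.0
dewhite = signal.ZerosPolesGain([-2 * np.pi * f_zero],
                                [-2 * np.pi * f_pole],
                                f_pole / f_zero)
f = np.logspace(-1, 3, 400)                      # Hz
w, mag, phase = signal.bode(dewhite, 2 * np.pi * f)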
  17263 | Sat Nov 12 21:59:24 2022 | Anchal | Frogs | Computer Scripts / Programs | FSS SLOW servo not running

I stopped the Docker PID script and started the old python script on megatron. Instructions on how to do this are here.

On optimus I ran:

sudo docker stop scripts_PID_FSS_Slow_1

On megatron I ran:

sudo systemctl enable FSSSlow
sudo systemctl restart FSSSlow

However, the daemon service keeps failing and restarting. So currently the FSSSlow is not running. I do not know how to debug this script.


On a side note, I tested the docker service by restarting it, and it was working. From the logs, it seems like it got stuck because it could not find the C1:IOO-MC_LOCK channel, which occurs when the c1psl EPICS servers fail or get stuck. The blinker on this script runs while the script is running, but it does not stop if the script gets stuck somewhere. If someone decides to use this script in the future, they would need to fix the error catching so that no reply from caget is treated as an error and the script restarts, rather than keep trying to get the channel value. Alternatively, the blinker implementation should be changed so that it indicates a stuck state.
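The suggested error handling would look something like this (a sketch using pyepics; the idea is that a timed-out caget raises, the script exits, and the service manager restarts it):

from epics import caget

def caget_or_die(channel, timeout=2.0):
    value = caget(channel, timeout=timeout)
    if value is None:   # pyepics returns None when the channel does not reply
        raise RuntimeError('No reply for ' + channel + '; exiting so the service restarts')
    return value

mc_lock = caget_or_die('C1:IOO-MC_LOCK')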

Quote:
 

 Whoever knows about this, please stop that Docker PID and we can just run the old python script on megatron.

 

  17281 | Thu Nov 17 16:48:07 2022 | Anchal | Frogs | Computer Scripts / Programs | FSS SLOW servo running now

I've moved the FSS Slow PID script to megatron, running through a systemd daemon. The script is working as expected right now. I've updated the megatron motd and the always-running scripts page here.

  17286 | Fri Nov 18 17:00:15 2022 | Anchal | Update | SUS | MC1 and MC3 OSEMs calibrated using MC_F

After the MC1 OSEM dewhitening was fixed, I calibrated the MC1 OSEM signals against MC_F using this notebook. A 0.1 Hz oscillation with an amplitude of 1000 cts was sent to MC1 LOCKIN2 and was kept on between 1352851381 and 1352851881. Then I read back the data from the DQ channels and performed a Welch estimate, with a standard deviation calculated from the different segments used. From this measurement, I arrive at the following cts2um gain values, which were changed in the MC1 filter file. The damping remained stable after the changes:

MC1:
UL: 0.09 -> 0.105(12)
UR: 0.09 -> 0.078(9)
LR: 0.09 -> 0.065(7)
LL: 0.09 -> 0.087(10)

I followed the same method for MC3 as well, to get more meaningful error bars. This measurement was done between 1352856980 and 1352857480 using this notebook. Here are the changes made:

MC3
UL: 0.39827 -> 0.509(57)
UR: 0.33716 -> 0.424(48)
LR: 0.335 -> 0.365(40)
LL: 0.34469 -> 0.376(43)

The larger error bars could be due to noisier MC3 OSEM outputs, as the satellite amplifier gain is lower there.

  17289 | Sun Nov 20 13:42:21 2022 | Anchal | Update | SUS | IMC test

I repeated this test for the following configuration:

  • PSL shutter closed at good IMC transmission
  • Offset value of 14000 written to C1:IOO-MC_TRANS_SUM_FILT_OFFSET
  • WFS loops ON but ASCPIT outputs of the optics were turned off.

The test ran for 4000 seconds between following timestamps:

start time: 1352878206
stop time: 1352882207

This script was used to run this test and can be used again in future to repeat the same test.

  17295 | Mon Nov 21 18:33:05 2022 | Anchal | Update | SUS | MC2 OSEMs calibrated using MC_F

Repeated this for MC2 using the error measurement technique mentioned in 40m/17286, using this notebook. The following are the cts2um gain changes:

UL: 0.30408 -> 0.415(47)
UR: 0.28178 -> 0.361(39)
LR: 0.80396 -> 0.782(248)
LL: 0.38489 -> 0.415(49)

I averaged 19 samples to get these values hoping to have reached systematic error limit. The errors did not change from a trial with 9 samples except for the LR OSEM.
  17298 | Tue Nov 22 10:29:31 2022 | Anchal | Summary | SUS | ITMY Coil Strengths Balanced

I followed this procedure to balance the coil strengths on ITMY. The position sensor was created by closing the PSL shutter so that the IR laser is free running and locking the green laser to the Y arm; this makes C1:ALS-BEATY_FINE_PHASE_OUT a position sensor for ITMY. The oplev channels C1:SUS-ITMY_OL_PIT_IN1 and C1:SUS-ITMY_OL_YAW_IN1 were used as the PIT and YAW sensors. Everything else followed the procedure. The coil gains were changed as follows:

C1:SUS-ITMY_ULCOIL_GAIN :   1.036 -> 1.061
C1:SUS-ITMY_URCOIL_GAIN : -1.028 -> -0.989
C1:SUS-ITMY_LRCOIL_GAIN :   0.930 -> 0.943
C1:SUS-ITMY_LLCOIL_GAIN : -1.005  -> -1.007

I used this notebook and this diaggui to do this balancing.

  17301 | Wed Nov 23 11:06:08 2022 | Anchal | Update | BHD | c1hpc and c1sus modified to add BS dither and demodulation option

c1hpc now has the option of dithering BS (sending an excitation to the BS LSC port on c1sus over IPC). This is available for demodulating the BHDC and BH55 signals. BS is also a possible feedback point; however, we would stick to using the LSC screen for any MICH locking.

c1sus underwent two changes. All suspension models were upgraded to the new suspension model (see 40m/16938 and 40m/17165). Now the channel data rates are set in the Simulink model, and the activateDQ script no longer does anything for any of the suspension models.

  17310 | Thu Nov 24 11:23:35 2022 | Anchal | Update | SUS | Low noise state

I've turned off HEPA fan and all lights at
PST: 2022-11-24 11:23:59.509949 PST
UTC: 2022-11-24 19:23:59.509949 UTC
GPS: 1353353057.509949

c1ioo model has been updated to acquire C1:IOO-MC2_TRANS_PIT_OUT  and C1:IOO-MC2_TRANS_YAW_OUT at 512 Hz rate.

I'll update when I turn the HEPA on again. I plan to turn it on for a few hours every day to keep the PSL enclosure clean.


Turned on HEPA again at:

PST: 2022-11-25 12:14:34.848054 PST
UTC: 2022-11-25 20:14:34.848054 UTC
GPS: 1353442492.848054

However this was probably not a low noise state due to vacuum disruption mentioned here.

  17311 | Thu Nov 24 15:37:45 2022 | Anchal | Update | ASC | IMC WFS output matrix diagonalization effort

I tried following the steps, and the method I was using converged to the same output matrix up to 2 decimal points, but there is still leftover cross coupling, as you can see in Attachment 1. With the new output matrix, the WFS loop can be turned on with the full overall gain of 1.


Changes:

  • I switched off +20dB FM2 on C1IOO-WFS1_PIT and increased gain C1:IOO-WFS1_PIT_GAIN from 0.1 to 1 to be uniform with other filters.
  • Output matrix change:
    • Old matrix:
      -2.   4.8 -7.3
       3.6  3.5 -2. 
       2.   1.  -6.8
    • New Matrix:
      3.44  4.22 -7.29
      0.75  0.92 -1.59
      3.41  4.16 -7.21
  • I think the main change that allowed the WFS loop to become stable was the 0,0 element sign change.

Method:

  • I made the overall gain C1:IOO-WFS_GAIN 0
  • Switched off the (0:0.8) FM3 on the PIT filter modules (IOO-WFS1_PIT, IOO-WFS2_PIT, IOO-MC2_TRANS_PIT)
  • Changed ramp time to 2 seconds on all these modules
  • Used offset of 10000 for WFS2 and MC2_TRANS, and 30000 for WFS1 (for some reason, response to WFS1 step was much lower than others)
  • Measured the following sensor channels
    • C1:IOO-WFS1_I_PIT_OUT
    • C1:IOO-WFS2_I_PIT_OUT
    • C1:IOO-MC_TRANS_PIT_OUT
  • First I took a 30 s average of these channels, then applied the offsets in the three modules one by one and recorded the step in each sensor.
  • Measured each step from the reference value taken before, and normalized each step to the DOF that was actually stepped to get a matrix.
  • Inverted this matrix and multiplied it with the existing output matrix. Made sure the column norms are the same as before and the column signs are the same as before (see the sketch below).
  • Repeated a few times.

Note: The standard deviation on the averages was very high even after averaging for 30 s. These data should be averaged after low-passing the high frequencies, but I couldn't find filter module MEDM screens for these signals, so I just proceeded with a simple average of the full-rate signal using the cdsutils avg command.
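For reference, a minimal sketch of the matrix update described in the list above. Here 'response' is the measured step-response matrix (rows: sensors, columns: stepped DOFs), and the column-norm and sign bookkeeping follows the convention mentioned above (the use of the 1-norm and of the first element's sign are assumptions):

import numpy as np

def update_output_matrix(old_output, response):
    new_output = old_output @ np.linalg.inv(response)
    for j in range(new_output.shape[1]):
        # keep the same column normalization as before
        scale = np.linalg.norm(old_output[:, j], 1) / np.linalg.norm(new_output[:, j], 1)
        new_output[:, j] *= scale
        # keep the same column sign convention as before
        if np.sign(new_output[0, j]) != np.sign(old_output[0, j]):
            new_output[:, j] *= -1
    return new_output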


Fri Nov 25 12:46:31 2022

The WFS loops are unstable again. This could be because the matrix balancing was done while the vacuum was disrupted. The above matrix does not work anymore.

  17312 | Fri Nov 25 12:15:46 2022 | Anchal | Update | VAC | Vacuum gate valves restored

I came in today to find that the PSL shutter was closed. I originally thought some shimmer observations were underway in the quiet state, but that was not the case. When I tried to open the shutter, it closed back again, indicating a hard compliance condition forcing it closed. This normally happens when the vacuum level is not sufficient, so I opened the vacuum screen and indeed all gate valves were closed. This most probably happened during this interlock trip. So the main volume was just slowly leaking and reached the millitorr level today.

Lesson for future: Always check vacuum status when interlock trips.


[Paco, Anchal]

Paco came by to help. We went to asia (the Asus laptop at the vacuum workstation) but could not open the MEDM screens or find the NFS-mounted files. The chiara change did something, and the NFS-mounted directories were not available on asia or c1vac. We rebooted asia and the NFS mount was working again. We can't simply restart c1vac because it runs the Acromag channels for the vacuum system and needs to be handled more carefully; that is a task for Monday.

After restarting asia, we opened the vacuum control MEDM screen and followed the vacuum pump-down instructions (mainly opening the gate valves, as the pumps were already on). Rule of thumb to keep in mind: do not open a valve between a turbo pump and a volume if the pressure differential is more than 3 orders of magnitude; saving the turbo pumps is the priority. Now the main volume is pumping down.

  17313 | Fri Nov 25 15:35:23 2022 | Anchal | Update | SUS | Low noise state

Turned off HEPA at:

PST: 2022-11-25 15:34:55.683645 PST
UTC: 2022-11-25 23:34:55.683645 UTC
GPS: 1353454513.683645

Turned on HEPA back at:

PST: 2022-11-28 11:14:31.310453 PST
UTC: 2022-11-28 19:14:31.310453 UTC
GPS: 1353698089.310453
 

  17315 | Mon Nov 28 11:15:23 2022 | Anchal | Update | CDS | Front ends DAC Kill (DK) got activated; restored FEs

[Anchal, Paco, Yehonathan, JC]

Last night at 9:15 pm PST (Nov 27, 2022) some kind of disruption happened to the FEs. See Attachment 1 for the changes in the FE state words of the IOP models. On c1lsc, c1sus and c1iscex, a change of 140 happened, i.e., the 2nd, 3rd and 7th bits of the FE word flipped, which I think are TIM, ADC and DAC KILL (DK). When we came in in the morning, the IMC suspensions were undamped and not responsive to coil kicks, the vertex suspensions were in the same state, and ETMX was too. The c1sus2 model was all in red.

To fix this, we restarted all rtcds models on all FEs by sshing into the computers and doing:

rtcds restart --all

Then we burt restored all models to the Nov 27, 3:19 am point by doing the following on rossa:

~>cd /opt/rtcds/caltech/c1/Git/40m/scripts/cds
cds (main)>./burtRestoreAndResetSUS.sh /opt/rtcds/caltech/c1/burt/autoburt/snapshots/2022/Nov/27/03:19

Note: this issue was previously seen when fb1 was restarted without shutting down the FEs, and once when the martian switch was disrupted while FE models were running.

I'm not sure why this happened this time, what caused it at 9:15 pm yesterday, and why only the c1lsc, c1sus and c1iscex models went to the DAC KILL state. This disruption should be investigated by the CDS upgrade team.

  17317 | Mon Nov 28 16:53:22 2022 | Anchal | Summary | BHD | F2A filters on LO1, LO2, AS1 and AS4

[Paco, Anchal]

I changed the script in /opt/rtcds/caltech/c1/Git/40m/scripts/SUS/outMatFilters/createF2Afilters.py to read the measured POS resonant frequencies stored in /opt/rtcds/caltech/c1/Git/40m/scripts/SUS/InMatCalc/resFreqs.yml instead of using the estimate sqrt(g/len). I then added Q = 3 F2A filters into FM1 output filter of LO1, LO2, AS1 and AS4 suspensions in anticipation of BHD locking scheme work.

  17320 | Mon Nov 28 20:14:27 2022 | Anchal | Update | ASC | AS WFS proposed path to IMC WFS heads

In Attachment 1, I give a plan for the proposed path of the AS beam into the IMC WFS heads, to use them temporarily as AS WFS. Paths are shown in orange for the existing MC REFL path, red for the existing AS path, cyan for the proposed AS path, and yellow for the existing IFO REFL path. We plan to overlap the AS beam onto the same path by installing the following new optics on the table:

  • M1 will be a new mirror mounted on a flipper mount reflecting 100% of AS beam to SW corner of the table.
  • M2 will be a new fixed mirror for steering the new AS beam path to match with MC WFS path.
  • M3 will be the existing beamsplitter used to pick off light for MC refl camera. We'll just mount this on a flipper so that it can be removed from the path. Precaution will be required to protect the CCD from high intensity MC reflection by putting on more ND filters.
  • The AS beam would need to be made approximately 1 mm in beam width. The required lenses for this would be placed between M1 and M2.

I request people to go through this plan and find out if there are any possible issues and give suggestions.


PS: Thanks JC for the photos. I got them from the foteee google photos. It would be nice if these were also put on the 40m wiki page for photos of the optical tables.


RXA: Looks good. I'm not sure if ND filters can handle the 1 W MC reflection, so perhaps add another flipper there. It would be good if you can measure the power on the WFS with a power meter so we know what to put there. Ideally we would match the existing power levels there or get into the 0.1-10 mW range.

  17322 | Tue Nov 29 15:32:32 2022 | Anchal | Update | BHD | c1hpc model updates to support double audio dither

Many changes have been made to c1hpc to support dual demodulation at audio frequencies. We moved away from the ASS style of lockin setup, as the number of connections and screens required would become very large. Instead, the demodulation is now done on a selected signal for a selected oscillator, and the demodulated signal can then be further demodulated with another selected oscillator. Please familiarize yourself with the new screen and test the new model. The previous version of the model is kept as a backup along with all its MEDM screens, so nothing is lost. As an example (shown in the screenshot), the AS1 and BS oscillators can be turned on, and the BHDC_DIFF signal can be demodulated first with the BS oscillator and then with the AS1 oscillator to get the signal.
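Conceptually, the dual demodulation amounts to the following (a sketch only, not the c1hpc RTS code; the dither frequencies, sample rate, and low-pass corner are made-up numbers):

import numpy as np
from scipy import signal

fs = 2048.0
f_bs, f_as1 = 313.0, 211.0                          # assumed dither frequencies
lp = signal.butter(2, 10.0, fs=fs, output='sos')    # 10 Hz low-pass after each stage

def demod(x, f_osc, t):
    # Mix with the oscillator and low-pass to keep the baseband component.
    return signal.sosfilt(lp, 2 * x * np.cos(2 * np.pi * f_osc * t))

t = np.arange(0, 10, 1 / fs)
bhdc_diff = np.random.randn(t.size)                 # stand-in for the real BHDC_DIFF signal
single = demod(bhdc_diff, f_bs, t)                  # demodulated once (BS dither)
double = demod(single, f_as1, t)                    # demodulated again (AS1 dither)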
