  17154   Fri Sep 23 10:05:38 2022   JC | Update | VAC | N2 Interlocks triggered

[Chub, Anchal, Tega, JC]

After replacing an empty tank this morning, I heard a hissing sound coming from the nozzle area. It turned out to be coming from the copper tubing: the tubing was slightly broken, which was confirmed with soapy-water bubbles. This caused the N2 pressure to drop and the Vac interlocks to be triggered. Chub and I went ahead and replaced the fitting and reconnected everything. Anchal and Tega have used the Vacuum StartUp procedures to restore the vacuum to normal operation.

Adding screenshots as the pressure is decreasing now.

Attachment 1: Screenshot_2022-09-23_10-19-39.png
Attachment 2: Screenshot_2022-09-23_10-22-45.png
  17155   Fri Sep 23 14:10:19 2022   Radhika | Update | BHD | BH55 RFPD installed - part I

[Radhika, Paco, Anchal]

I placed a lens in the B-beam path to focus the beam spot onto the RFPD [Attachment 1]. To align the beam spot onto the RFPD, Anchal misaligned both ETMs and ITMY so that the AS and LO beams would not interfere, and the PD output would remain at some DC level (not fringing). The RFPD response was then maximized by scanning over pitch and yaw of the final mirror in the beam path (attached to the RFPD).

Later Anchal noticed that there was no RFPD output (C1:LSC-BH55_I_ERR, C1:LSC-BH55_Q_ERR). I took out the RFPD and opened it up, and the RF OUT SMA to PCB connection wire was broken [Attachment 2]. I re-soldered the wire and closed up the box [Attachment 3]. After placing the RFPD back, we noticed spikes in C1:LSC-BH55_I_ERR and C1:LSC-BH55_Q_ERR channels on ndscope. We suspect there is still a loose connection, so I will revisit the RFPD circuit on Monday. 

Attachment 1: IMG_3766.jpeg
Attachment 2: IMG_3770.jpeg
Attachment 3: IMG_3773.jpeg
  17156   Fri Sep 23 18:31:46 2022   rana | Update | BHD | BH55 RFPD installed - part I

A design flaw in these initial LIGO RFPDs is that the SMA connector is not strain-relieved by mounting to the case. Since it is only mounted to the tin can, attaching/removing cables bends the connector, causing stress on the solder joint.

To get around this, for this gold box RFPD, connect the SMA connector to the PCB using an S-shaped squiggly wire. Don't use multi-strand wire: it is usually good, since it's more flexible, but in this case it affects the TF too much. Really, it would be best to use a coax cable, but a few-turn corkscrew or a pigtail of single-core wire should be fine to reduce the stress on the solder joint.

Quote:

Later Anchal noticed that there was no RFPD output (C1:LSC-BH55_I_ERR, C1:LSC-BH55_Q_ERR). I took out the RFPD and opened it up, and the RF OUT SMA to PCB connection wire was broken [Attachment 2]. I re-soldered the wire and closed up the box [Attachment 3]. After placing the RFPD back, we noticed spikes in C1:LSC-BH55_I_ERR and C1:LSC-BH55_Q_ERR channels on ndscope. We suspect there is still a loose connection, so I will revisit the RFPD circuit on Monday.
  17157   Fri Sep 23 19:04:12 2022   Anchal | Update | BHD | BH55 LSC Model Updates - part III

BH55

I further updated the LSC model today with the following changes:

  • BH55 whitening switch binary output signal is now routed to the correct place.
    • Switching FM1, which carries the dewhitening digital filter, will always switch on the corresponding analog whitening before the ADC input.
  • The whitening can be triggered using the LSC trigger matrix as well.
  • The ADC_0 input to the LSC subsystem is now a single input, and the channels are separated inside the subsystem.

The model was built and installed with no issues.

Further, the slow EPICS channels for the BH55 anti-aliasing switch and whitening switch were added in /cvs/cds/caltech/target/c1iscaux/C1_ISC-AUX_LSCPDs.db


IPC issue resolved

The IPC issue that we were facing earlier is resolved now. The BH55_I and BH55_Q signals after phase rotation are successfully reaching the c1hpc model, where they can be used to lock the LO phase. To resolve the issue, I had to restart all the models. I also power cycled the LSC I/O chassis during this restart, as Tega suspected that such a power cycle is required when adding new dolphin channels; there is no way to tell whether it was actually needed. The good news is that with the new CDS upgrade, restarting rtcds models will be much easier and more modular.


ETMY Watchdog Updated

[Anchal, Tega]

Since ETMY does not use an HV coil driver anymore, the watchdog on ETMY needs to be similar to those of the other new optics. We made these updates today. The ETMY watchdog now slowly ramps down the alignment offsets when it is tripped.

  17158   Fri Sep 23 19:07:03 2022   Tega | Update | Computers | Work to improve stability of 40m models running on teststand

[Chris, Tega]

Timing glitch investigation:

  • Moved the dolphin transmit node from c1sus to c1lsc because we suspect that the glitch might be coming from the c1sus machine (earlier, c1pem on c1sus was running faster than realtime).
  • Installed and started c1oaf to remove the shared memory IPC errors to/from the c1lsc model.
  • /opt/DIS/sbin/dis_diag gives two warnings on c1sus2
    • [WARN] IXH Adapter 0 - PCIe slot link speed is only Gen1
    • [WARN] Node 28 not reachable, but is an entry in the dishosts.conf file - c1shimmer is currently off, so this is fine.

DAQ network setup:

  • Added the DAQ ethernet MAC addresses and fixed IPv4 addresses for the front-ends to '/etc/dhcp/dhcpd.conf'
  • Added the fixed DAQ IPv4 address and port for all the front-ends to '/etc/advligorts/subscriptions.txt' for the `cps_recv` service
  • Edited '/etc/advligorts/master' to include all the IOP and user model '.ini' files in '/opt/rtcds/caltech/c1/chans/daq/' (containing the channel info) and the corresponding testpoint files in '/opt/rtcds/caltech/c1/target/gds/param/'
  • Created a systemd environment file for each front-end in '/diskless/root/etc/advligorts/' containing the arguments for the local data concentrator and DAQ data transmitter (`local_dc_args` and `cps_xmit_args`); see the sketch after this list. While we were facing the timing glitch issues, we staggered the start delays (-D waitValue) of the front-ends by setting each to the last number of its DAQ IP address, but we should probably set them back to zero to see if that has any effect.
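
A minimal sketch of what one of these environment files could contain (hypothetical: the only flag taken from this log is -D; the other values are placeholders, not the deployed configuration):

# /diskless/root/etc/advligorts/env for one front-end (hypothetical sketch)
local_dc_args=<options for the local data concentrator on this host>
cps_xmit_args=<data transmitter options> -D 7   # -D: start delay, last number of this host's DAQ IP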

Other:

  • Edited /etc/resolv.conf on fb1 and 'diskless/root' to enable name resolution (e.g. `host c1shimmer`), but the file gets overwritten on chiara for some reason

Issues:

  1. Frame writing is not working at the moment. It worked at some point in the past for a couple of days, but stopped working earlier today and we can't quite figure out why.
  2. We can't get data via diaggui or ndscope either. Again, we recall this working in the past, but are not sure why it has stopped now.
  3. The CPU load on c1su2 is too high, so we should split it into two models.
  4. We still get the occasional IPC glitch, both for shared memory and dolphin; see attachments.
Attachment 1: dolphin_state_all_green.png
Attachment 2: dolphin_state_IPC_glitch.png
  17159   Mon Sep 26 11:39:37 2022   Paco | Update | BHD | BH55 RFPD installed - part II

[Paco, Anchal]

We followed rana's suggestion for stress relief on the SMA joint in the BH55 RFPD that Radhika re-soldered. We used a single-core, pigtailed wire segment after cleaning up the solder joint on J7 (RF Out), and also soldered the SMA shield to the RF cage (see Attachment #1). This had a really good effect on the rigidity of the connection, so we moved back to the ITMY table.

We measured the TEST in to RF Out transfer function using the Agilent network analyzer, just to see the qualitative features (resonant gain at around 55 MHz and second harmonic suppression at around 110 MHz) shown in Attachment #2. We used a 10 kOhm series resistance in the test input path to calibrate the measured transimpedance in V/A. The RFPD has been installed on the ITMY table and connected to the PD interface box and IQ demod boards in the LSC rack as before.
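
As a cross-check of the arithmetic, a minimal sketch of that calibration (assuming the series resistor converts the drive voltage into a test photocurrent; the measurement values below are placeholders):

import numpy as np

R_series = 10e3                             # ohms, resistor in the TEST input path
f = np.logspace(6, np.log10(500e6), 201)    # Hz, measurement band
tf_meas = 0.1 * np.ones_like(f)             # placeholder TEST-in -> RF-out TF, in V/V

# The resistor sets I_test = V_test / R_series, so the transimpedance in V/A is:
Z = tf_meas * R_series
print(f"|Z| at {f[0]/1e6:.0f} MHz: {Z[0]:.0f} V/A")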

Measurement files

Attachment 1: PXL_20220926_175010061.MOTION.jpg
Attachment 2: BH55_Transimpedance_Measurement.pdf
  17160   Tue Sep 27 10:50:11 2022   Paco | Update | BHD | calibrated LO phase noise

Locked the LO phase to the ITMX single-bounce beam at the AS port, using the DCPD (A-B) error point and actuating on LO1 POS. For this, the gain was tuned from 0.6 to 4.0. A rough Michelson fringe calibration gives a counts-to-meters conversion of ~0.212 nm/count, and the OLTF looks qualitatively like the one in a previous measurement (~20 dB at 1 Hz, UGF = 30 Hz). The displacement was then converted to phase using lambda = 1e-6 m. I'm not sure what the requirement is on the LO phase (G1802014 says 1e-4 rad/rtHz at 1 Hz, but our requirement doc says 1 to 20 nrad/rtHz (rms?)); anyway, with this rough calibration we are still off in either case.
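
A sketch of that conversion chain (the 0.212 nm/count number is from the fringe calibration above; the factor of 2 for reflection off the actuated optic is my assumption and should be checked against the actual geometry):

import numpy as np

cal = 0.212e-9        # m/count, rough Michelson fringe calibration
lam = 1e-6            # m, the round value used above for the 1064 nm laser

def counts_to_rad(counts, geom=2.0):
    # geom=2 assumes near-normal reflection doubles the path length change
    return geom * 2 * np.pi * counts * cal / lam

print(counts_to_rad(1.0), "rad per count")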

The balancing gain is obvious at DC in the individual DCPD spectra, and the common mode rejection in the (A-B) signal is also appreciable. I'll keep working on refining this, and implementing a different control scheme.

Attachment 1: lo_phase_asd.pdf
  17161   Wed Sep 28 16:37:26 2022   Paco | Update | BHD | calibrated LO phase noise; update

[yuta, paco]

Update: the high-frequency (> 100 Hz) drop is of course not real; it comes from a 4th-order LP filter in the HPC demod I filter which I hadn't accounted for. Furthermore, we have gone through the calibration factors and corrected a factor of 2 in the optical gain. I also added the CLTF to show the in-loop and out-of-loop error respectively. The updated plot, though not final, is in Attachment #1.
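
A sketch of how the correction can be applied once the filter is known (the 4th-order Butterworth shape and 100 Hz corner below are placeholders; the real demod filter parameters should be read out of foton):

import numpy as np
from scipy import signal

f = np.logspace(-1, 3, 500)                     # Hz
asd = np.ones_like(f)                           # placeholder measured ASD

fc = 100.0                                      # Hz, assumed corner frequency
b, a = signal.butter(4, 2*np.pi*fc, btype='low', analog=True)
_, h = signal.freqs(b, a, worN=2*np.pi*f)

asd_corrected = asd / np.abs(h)                 # undo the roll-off above fc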

Attachment 1: lo_phase_asd.pdf
  17162   Wed Sep 28 19:15:56 2022   Koji | Update | General | Testing 950nm laser found in trash pile

I don't know what was wrong with the past setup, but the 950nm laser (QPHOTONICS QFLD-950-3S) just worked fine up to ~300MHz with basically the same setup.

A 20dB coupler picks off a small amount of the driving signal from the source output of the network analyzer; this was fed to CHR. The fiber-coupled New Focus PD RF output was connected to CHA.
The calibration of the response was done with the thru response (connecting the source signal to CHA via all the long cables).

Attachment 1 shows the response CHA/CHR. The output is somewhat flat up to 20MHz and rolls off towards 100MHz, but is still usable up to 500MHz as long as the normalization with the New Focus PD works.
The structure around 200MHz~300MHz changes with how the wires of the clips are arranged. I twisted and coiled them as shown and the notch disappeared. For the permanent setup we should keep the lines as short as possible and take care of the stray capacitance and inductance.

Attachment 2 shows the setup at the network analyzer side. Nothing special.

Attachment 3 shows the setup at the laser side. The DB9 connector on Jenne's laser has the negative output of the LD driver connected to the coax core and the positive output connected to the shield of the coax. Therefore the coax core (red clip) has to be connected to Pin 9 and the coax shield (black clip) to Pin 5.

Attachment 1: PXL_20220929_013850989.MP.jpg
Attachment 2: PXL_20220929_013859439.jpg
Attachment 3: PXL_20220929_013911125.jpg
  17163   Wed Sep 28 21:54:08 2022   Paco | Update | BHD | calibrated LO phase noise; update

Repeated the LO phase noise measurement, this time with the LO - ITMY single bounce, and with a couple of fixes Koji hinted at, including:

  1. The DEMOD angle was the missing piece! The previous error point showed lower noise than the individual DCPDs because the demodulation angle had not been checked. I corrected it so that the error point in LO_PHASE control was exactly equal to the LO-ITMY single bounce fringe. With this, the gain on the servo had to be adjusted from 4.00 to 0.12, still using FM4, FM5, and this time also FM8 (BLP600).
  2. Turned off the 60 Hz harmonics comb notches on the DCPDs; they are unnecessary.
  3. Acquired noise spectra down to 0.1 Hz, with 0.03 Hz bin width to increase resolution and identify resonant SUS noise near 1 Hz.

This time, after alignment, the fringe amplitude was 500 counts. Attachment #1 shows the updated plot with the calibrated noise spectra for the individual DCPD signals A and B, as well as their rms values. Attachment #2 shows the error point, with the in-loop and estimated out-of-loop spectra and their rms as well. The peak at ~240 Hz is quite noticeable in the error point time series and dominates the high-frequency rms noise. The estimated rms out-of-loop noise is ~9.2 rad, down to 100 mHz.
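
For reference, a sketch of the rms curves shown in the attachments: integrate the PSD from the top of the band down, so the rms at each frequency includes all the noise above it (the 1/f ASD here is a stand-in for the measured spectrum):

import numpy as np

def cumulative_rms(f, asd):
    # rms(f) = sqrt( integral from f to f_max of asd^2 df ), high-to-low
    var = np.flip(np.cumsum(np.flip(asd[:-1]**2 * np.diff(f))))
    return np.sqrt(np.concatenate([var, [0.0]]))

f = np.logspace(-1, 3, 1000)      # 0.1 Hz to 1 kHz
asd = 1e-1 / f                    # placeholder ASD in rad/rtHz
print(f"rms down to {f[0]:.1f} Hz: {cumulative_rms(f, asd)[0]:.2f} rad")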

Attachment 1: dcpd_phase_asd.pdf
Attachment 2: lo_phase_asd.pdf
  17164   Thu Sep 29 15:12:02 2022   JC | Update | Computers | Setup the 6 new front-ends to boot off the FB1 clone

[Jamie, Christopher, JC]

This morning we decided to label the fiber optic cables. While doing this, we noticed that the ends had different labels, 'Host' and 'Target'. It turns out the fiber optic cables are directional. Four out of the six cables were reversed. Luckily, the 1Y3 I/O chassis already has a spare cable laid (the cable we are currently using). Chris, Jamie, and I have begun reversing these cables to their correct orientation.

Quote:

[Tega, JC]

We laid 4 out of 6 fiber cables today. The remaining 2 cables are for the I/O chassis on the vertex, so we will test those cables and lay them tomorrow. We were also able to identify the problems with the 2 supposedly faulty cables, which are not in fact faulty. One of them had a small bend in the connector that I was able to straighten out with a small plier, and the other was a loose connection at the switch end. So there was no faulty cable, which is great! Chris wrote a matlab script that does the migration of all the model files. I am going through them, i.e. looking at the CDS parameter block to check that all is well. The next task is to build and install the updated models. We also need to update the '/opt/rtcds' and '/opt/rtapps' directories to the latest in the 40m chiara system.

  17165   Thu Sep 29 18:01:14 2022   Anchal | Update | BHD | BH55 LSC Model Updates - part IV

More model changes

c1lsc:

  • BH55_I and BH55_Q are now being read at ADC_0_14 and ADC_0_15. ADC_0_20 and ADC_0_21 are bad due to a faulty whitening filter board.
  • The whitening switch controls were also shifted accordingly.
  • The slow EPICS channels for the BH55 anti-aliasing switch and whitening switch were added in /cvs/cds/caltech/target/c1iscaux/C1_ISC-AUX_LSCPDs.db

c1mcs:

  • MC1, MC2, and MC3 are running on new suspension models now.

c1hpc:

  • DCPD_A and DCPD_B have been renamed to BHDC_A and BHDC_B following naming convention at other ports.
  • After the input summing matrix, the signals are called BHDC_SUM and BHDC_DIFF now.
  • BHDC_SUM and BHDC_DIFF can be used directly in the sensing matrix, bypassing the dither demodulation (to be used for DC locking)
  • BH55_I and BH55_Q are also sent for dither demodulation now (to be used in double dither method, RF and audio).
  • SHMEM channel names to c1bac were changed.

c1bac:

  • Conformed with new SHMEM channel names from c1hpc
  17166   Fri Sep 30 18:30:12 2022   Anchal | Update | ASS | Model Changes

I updated the c1ass model today to use PR2/PR3 instead of TT1/TT2 for YARM ASS. This required changes in c1su2 too. I have split c1su2 into c1su2 (LO1, LO2, AS1, AS4) and c1su3 (SR2, PR2, PR3). The two models now use 31 and 21 out of the 60 available (CPU time per cycle), whereas the single model was earlier at 55/60. All changes compiled correctly and have been uploaded. The models have been restarted and the MEDM screens have been updated.


Model changes

c1su2:

  • Everything related to SR2, PR2, and PR3 has been moved to c1su3.
  • The extra binary output channels are also distributed between c1su2 and c1su3. BO_4 and BO_5 have been moved to c1su3.

c1su3:

  • Added IPC receiving from ASS for PR2 and PR3

c1ass:

  • Inputs to TT1 and TT2 PIT and YAW filter modules have been terminated to ground.
  • The ASS outputs for YARM have been renamed to PR2 and PR3 from TT1 and TT2.
  • IPC sending blocks added to send PR2 and PR3 ASC signals to c1su3.

 


To do:

  • Update the YARM ASS output matrix to handle the change in coil driver actuation on PR2 and PR3 compared to TT1 and TT2.
  • Yuta suggested dithering PR2 and PR3 for input beam pointing for YARM alignment.
  17167   Fri Sep 30 20:18:55 2022   Paco | Update | BHD | LO phase noise with different actuation points

[Paco, Koji]

We took LO phase noise spectra actuating on four different optics: LO1, LO2, AS1, and AS4. The servo was not changed during this time (gain of 0.2), and we also took a noise spectrum without any light on the DCPDs. The plot is shown in Attachment #1, calibrated in rad/rtHz, along with the rms values for the different suspension actuation points. The best one appears to be AS1 from this measurement, and all the optics show the same 270 Hz (actually 268 Hz) resonant peak.


268 Hz noise investigation

Koji suspected the observed noise peak belongs to some servo oscillation, perhaps of mechanical origin, so we first monitored the amplitude in an exponentially averaging spectrum. The noise didn't really seem to change much, so we decided to try adding a bandstop filter around 268 Hz. After the filter was added in FM6, we turned it on and monitored the peak height as it began to fall slowly. We measured the half-decay time to be 264 seconds, which implies an oscillation with Q = 4.53 * f0 * tau ~ 3.2e5. This may or may not be mechanical, and further investigation might be needed, but if it is mechanical it might explain why the peak persisted in Attachment #1 even when we changed the actuation point. Anyway, we saw the peak drop ~20 dB after more than half an hour. After a while, we noticed the 536 Hz peak (its second harmonic) was persisting; even the third harmonic was visible.
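
The 4.53 factor follows from exponential ringdown: a mode with quality factor Q decays in amplitude as exp(-pi*f0*t/Q), so a half-decay time t_half gives Q = pi*f0*t_half/ln(2) ~ 4.53*f0*t_half. A quick check of the number above:

import numpy as np

f0, t_half = 268.0, 264.0             # Hz, s
Q = np.pi * f0 * t_half / np.log(2)
print(f"Q = {Q:.2e}")                 # ~3.2e5, as quoted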

So this may be LO1 violin mode & friends -

We should try to repeat this measurement after the oscillation has stopped: look at the spectra before we close the LO_PHASE control loop, then close it carefully with our violin output filter on, and move on to other optics to see if they also show this noise.

Attachment 1: BHDC_asd_actuation_point.png
  17168   Sat Oct 1 13:09:49 2022   Anchal | Update | IMC | WFS turned on

I turned on WFS on IMC at:

PDT: 2022-10-01 13:09:18.378404 PDT
UTC: 2022-10-01 20:09:18.378404 UTC
GPS: 1348690176.378404

The following channels are being saved in frames at 1024 Hz rate:

  • C1:IOO-MC_TRANS_PIT_ERR (Same as C1:IOO-MC_TRANS_PIT_OUT)
  • C1:IOO-MC_TRANS_YAW_ERR (Same as C1:IOO-MC_TRANS_YAW_OUT)
  • C1:IOO-MC_TRANS_SUM_ERR (Same as C1:IOO-MC_TRANS_SUMFILT_OUT)

We can keep it running over the weekend as we will not use the interferometer. I'll keep an eye on it with occasional log in. We'll post the time when we switch it off again.


The IMC lost lock at:

UTC    Oct 03, 2022    01:04:16    UTC
Central    Oct 02, 2022    20:04:16    CDT
Pacific    Oct 02, 2022    18:04:16    PDT

GPS Time = 1348794274

The WFS loops kept running and thus took the IMC to a misaligned state. Between the above two times, the IMC was locked continuously with only very brief lock loss events, and had all WFS loops running.

  17169   Mon Oct 3 08:35:59 2022   Tega | Update | IMC | Adding IMC channels to frames for NN test

[Rana]

For the upcoming NN test on the IMC, we need to add some more channels to the frames. Can someone please add the MC2 TRANS SUM, PIT, and YAW at 256 Hz, and then make sure they're in the frames?

And even though it's not working correctly, it would be good if someone could turn the MC WFS on for a little while. I'd just like to get some data to test some code. If it's easy to roughly close the loops, that would be helpful too.


[Anchal]

Currently, none of these channels are being written to frames. From the simulink model, it seems the channels:

  • C1:IOO-MC_TRANS_SUMFILT_OUT_DQ
  • C1:IOO-MC_TRANS_PIT_OUT_DQ
  • C1:IOO-MC_TRANS_YAW_OUT_DQ

are supposed to be DQed, but they are not present in the /opt/rtcds/caltech/c1/chans/daq/C1MCS.ini file. I tried simply adding these channels to the file and rerunning the daqd_ services, but that caused a 0x2000 error on the c1mcs model. In my attempt, I did not know what chnnum to give these channels, so I omitted it; maybe that is the issue.
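
For context, a sketch of what a C1MCS.ini entry for one of these channels would need to look like (field values here are illustrative; the chnnum is assigned by the RCG at install time, which is why adding a section by hand without it fails):

[C1:IOO-MC_TRANS_PIT_OUT_DQ]
acquire=1
datarate=256
datatype=4
chnnum=<assigned at model install; not something we can pick by hand>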

The only way I know to fix this is to make and install the c1mcs model again, which would bring these channels into the C1MCS.ini file. But we'll have to run activateDQ.py if we do that, and I am not totally sure it is in running condition right now. @Christopher Wipf do you have any suggestions?


[Rana]

aren't they all filtered? If so, perhaps we can choose whatever is the equivalent naming at the LIGO sites rather than roll our own again.

@Tega Edo can we run activateDQ.py or will that break everything now?


[Tega]

@Rana Adhikari Looking into this now.

@Anchal Gupta The only problem I see with activateDQ.py is the use of the old Python 2 print statement, i.e. print var instead of print(var). After fixing that, it runs OK and does not change the input INI files, as they already have the required channel names. I have created a temporary folder, /opt/rtcds/caltech/c1/chans/daq/activateDQtests/, populated with copies of the original INI files, a modified version of activateDQ.py that does not overwrite the original input files, and a script file difftest.sh that compares the input and output files, so we can test the functionality of activateDQ.py in isolation. Furthermore, looking through the code suggests that all is well. Can you look at what I have done to check that this is indeed the case? If so, your suggestion of rebuilding and installing the updated c1mcs model and running activateDQ.py afterward should do the trick.

I tested the code with:

cd /opt/rtcds/caltech/c1/chans/daq/activateDQtests/

./activateDQ.py

which creates output files with an _ prefix, for example _C1MCS.ini is the output file for C1MCS.ini, then I ran

./difftest

to compare all the input and corresponding output files.

Note that the channel names you are proposing would change after running activateDQ.py, i.e.

C1:IOO-MC_TRANS_SUMFILT_OUT_DQ -> C1:IOO-MC_TRANS_SUM_ERR

C1:IOO-MC_TRANS_PIT_OUT_DQ -> C1:IOO-MC_TRANS_PIT_ERR

C1:IOO-MC_TRANS_YAW_OUT_DQ -> C1:IOO-MC_TRANS_YAW_ERR
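
In other words, the postbuild script's effect on these channels amounts to a substitution like the sketch below (schematic only, not the script's actual code):

for c in ["C1:IOO-MC_TRANS_SUMFILT_OUT_DQ",
          "C1:IOO-MC_TRANS_PIT_OUT_DQ",
          "C1:IOO-MC_TRANS_YAW_OUT_DQ"]:
    # drop the FILT qualifier and swap the _OUT_DQ suffix for _ERR
    print(c, "->", c.replace("FILT", "").replace("_OUT_DQ", "_ERR"))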

My question is this: why aren't we using the correct channel names in the first place so that we have less work to do later on when we finally decide to stop using this postbuild script?


[Anchal]

Yeah, I found that these ERR channels are acquired and stored. I don't think we should do this either; I'm not sure what the original motivation for this change was. I tried commenting out this part of activateDQ.py and remaking and reinstalling c1mcs, but it seems that activateDQ.py is called as a postbuild script automatically on install, and it uses some other copy of the file, as my changes did not take effect and the DQ name change still happened.


[Tega]

Ah, we encountered the same puzzle as well. Chris found out that our models have `SCRIPT=activateDQ.py` embedded in the CDS parameter block description; see the attached image. We believe this is what triggers the postbuild call to `activateDQ.py`. As for the file location, modern rtcds would have it in /opt/rtcds/caltech/c1/post_build, but I am not sure where ours is located. I did a quick search but could not find it in the usual place, so I looked around for a bit and found this:

controls@rossa> find /opt/rtcds/userapps/ -name "activateDQ.py"

/opt/rtcds/userapps/trunk.bak/cds/c1/scripts/activateDQ.py

/opt/rtcds/userapps/tags/H2OAT_RCG2.5.1/cds/c1/scripts/activateDQ.py

/opt/rtcds/userapps/branches/StanfordGuardianDev/cds/c1/scripts/activateDQ.py

/opt/rtcds/userapps/branches/branch-2.3/cds/c1/scripts/activateDQ.py

/opt/rtcds/userapps/branches/branch-2.4/cds/c1/scripts/activateDQ.py

/opt/rtcds/userapps/trunk/cds/c1/scripts/activateDQ.py

My guess is the last one /opt/rtcds/userapps/trunk/cds/c1/scripts/activateDQ.py.

Maybe we can ask @Yuta Michimura since he wrote this script?

Anyway, we could also try removing SCRIPT=activateDQ.py from the CDS parameter block description to see if that stops the postbuild script call, but keep in mind that doing so would also stop the update of the OSEM and oplev channel names. At least this way we would know what script is being used, since we would have to run it by hand after every install (which is a bad idea).

controls@c1sus:~ 0$ env | grep script

CDS_SCRIPTS_PATH=:/opt/rtcds/userapps/release/cds/c1/scripts:/opt/rtcds/userapps/release/cds/common/scripts:/opt/rtcds/userapps/release/isc/c1/scripts:/opt/rtcds/userapps/release/isc/common/scripts:/opt/rtcds/userapps/release/sus/c1/scripts:/opt/rtcds/userapps/release/sus/common/scripts

It looks like the guess was correct. Note that in the newer version of rtcds, we can use `rtcds env` instead of `env` to see what is going on.

Attachment 1: Screen_Shot_2022-09-30_at_9.52.39_AM.png
  17170   Mon Oct 3 13:11:22 2022   Yehonathan | Update | BHD | Some comparison of LO phase lock schemes

I pushed a notebook and a Finesse model for comparing different LO phase locking schemes. The notebook is at https://git.ligo.org/40m/bhd/-/blob/master/controls/compare_LO_phase_locking_schemes.ipynb.

Here's a description of the Finesse modeling:

I use a 40m kat model, https://git.ligo.org/40m/bhd/-/blob/master/finesse/C1_w_initial_BHD_with_BHD55.kat, derived from the usual 40m kat file. There I added EOMs (in the spaces between the BS and ITMs, and in front of LO2) to simulate audio dithering. A PD was added at a 5% pickoff from one of the BHD ports to simulate the RFPD recently installed on the ITMY table.

First I find the nominal LO phase by shaking MICH and maximizing the BHD response as a function of the LO phase (attachment 1).

Then, I run another simulation where I shake the LO phase at some arbitrary frequency and measure the response at different demodulation schemes at the RFPD and at the BHD readout.

The optimal responses are found by using the 'max' keyword instead of specifying the demodulation phase; this uses the demodulation phase that maximizes the signal. For example, to extract the signal in the 2-RF-sideband scheme I use:

pd3 BHD55_2RF_SB $f1 max $f2 max $fs max nPickoffPDs

I plot these responses as a function of the LO phase relative to the nominal phase, divided by 90 degrees (Attachment 2). The schemes are:

1. 2 RF sidebands where 11MHz and 55MHz on the LO and AS ports are used.

2. Single RF sideband (11/55 MHz) together with the LO carrier. As expected, this scheme is useful only when trying to detect the amplitude quadrature.

3. Audio dithering MICH and using it together with one of the LO RF sidebands. The actuation strength is chosen by taking the BS actuation TF, 1e-11 m/cts * (50/f)**2, and using 10000 cts, giving an amplitude of ~3 nm for the ITMs.

For LO actuation I can use 13 times more actuation strength, because its coil drivers' output current is 13 times more than the old ones.

4. Double audio dithering of LO2+MICH detecting it directly at the BHD readout (attachment 3).

Without noise considerations, it seems like double audio dithering is by far the best option and audio+RF is the next best thing.

The next thing to do is to make some noise models in order to make the comparison more concrete.

This noise model will include input noises, residual MICH motion, and laser noise. Displacement noise will not be included, since it is the thing we want to detect.

Attachment 1: MICH_sens_vs_LO_phase.pdf
Attachment 2: LO_phase_sens_vs_LO_phase_RF.pdf
Attachment 3: LO_phase_sens_vs_LO_phase_double_audio.pdf
  17171   Mon Oct 3 15:19:05 2022   Paco | Update | BHD | LO phase noise and control after violin mode filters

[Anchal, Paco]

We started the day by taking a spectrum of C1:HPC-LO_PHASE_IN1, the BHD error point, and confirming the absence of the 268 Hz peaks believed to be violin modes of LO1. We then locked the LO phase by actuating on LO2, and then AS1. We couldn't get a stable loop with AS4 this morning. In all of these trials, we looked to see if the noise increased at 268 Hz or its harmonics, but luckily it didn't. We then decided to add the necessary output filters to avoid exciting these violin modes. The added filters are in the C1:SUS-LO1_LSC bank, slots FM1-3, and comprise bandstop filters at the first, second, and third harmonics observed previously (268, 536, and 1072 Hz); Bode plots of the foton transfer functions are shown in Attachment #1. We made sure we weren't adding too much phase lag near the UGF (~1 degree @ 30 Hz).
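
For the record, an equivalent design can be sketched in scipy (a stand-in for the foton filters, with illustrative notch widths/depths and an assumed 16384 Hz model rate; the real filters live in FM1-3 of C1:SUS-LO1_LSC):

import numpy as np
from scipy import signal

fs = 16384.0                                     # Hz, assumed model rate
sos = np.vstack([
    signal.ellip(4, 1, 40, [f0 - 2, f0 + 2],     # ~4 Hz-wide elliptic stop
                 btype='bandstop', fs=fs, output='sos')
    for f0 in (268.0, 536.0, 1072.0)])

w, h = signal.sosfreqz(sos, worN=2**14, fs=fs)   # check depth and phase lag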

We repeated the LO phase noise measurement by actuating on LO1, LO2, and AS1, and observed no noise peaks related to 268 Hz this time. The calibrated spectra are in Attachment #2. Now the spectra look very similar to one another, which is nice. The rms is still best when actuating with AS1.


[Paco]

After the above work ended, I tried enabling FM1-3 in the C1:HPC-LO_PHASE control filters. These filters boost the gain to suppress noise at low frequencies. I carefully enabled them while actuating on LO1, and managed to suppress the noise by another factor of 20 below the UGF of ~30 Hz. Attachment #3 shows a screenshot of the uncalibrated noise spectra for (1) unsuppressed (black, dashed), (2) suppressed with FM4-5 (blue, solid), and (3) boosted FM1-5 suppression (red).


Next steps:

  • Compare LO-ITMY and LO-ITMX single bounce noise spectra and MICH.
  • Compare DC locking scheme versus BH55 once it's working.
Attachment 1: filters_c1sus_lo1_lsc.png
Attachment 2: BHDC_asd_act.png
Attachment 3: boosted_lo_phase_control.png
  17172   Tue Oct 4 21:00:49 2022   Chris | Update | Computers | Failed takeover attempt with the new front ends

[Jamie, JC, Chris]

Today we made a failed attempt to take over the 40m hardware with the new front ends on the test stand.

As an initial test, we connected the new c1iscey to its I/O chassis using the OneStop fiber link. This went smoothly, so we tried to proceed with the rest of the system, which uncovered several problems. Consequently, we’ve reverted control back to the old front ends tonight, and will regroup and make another attempt tomorrow.

Status summary:

  • c1iscey worked on the first try
  • c1lsc worked, after we sorted out which of the two OneStop cables running to its rack we needed to use
  • c1sus2 sort of worked (its models have been crashing sporadically)
  • c1ioo had a busted OneStop cable, and worked after that was replaced
  • c1sus refused to work with the fiber OneStop cables (we tried several, including the known working one from c1ioo), but we jury-rigged it to run over a copper cable, after nudging the teststand rack a bit closer to the chassis
  • c1iscex refused to work with the fiber OneStop cables, and substituting copper was not an option, so we were stuck

There are various pathologies that we've seen with the OneStop interface cards in the I/O chassis. We don't seem to have the documentation for these cards, but our interpretive guesses are as follows:

  • When working, it is supposed to illuminate all the green LEDs along the top of the card, and the four next to the connector. In this state, you can run lspci -vt on the host, and see the various PLX/Contec/etc devices that populate the chassis.
  • When the cable is unplugged or bad, only four green LEDs illuminate on the card, and none by the connector. No devices from the chassis can be seen from the host.
  • On c1iscex and c1sus, when a fiber link is plugged in, it turns on all the LEDs along the top of the card, but the four next to the connector remain dark. We’re not sure yet what this is trying to tell us, but lspci finds no devices from the chassis, same as if it is unplugged.
  • Also, sometimes on c1iscex, no LEDs would illuminate at all (possibly the card was not seated properly).

Tomorrow, we plan to swap out the c1iscex I/O chassis for the one in the test stand, and see if that lets us get the full system up and running.

  17173   Thu Oct 6 07:29:30 2022   Chris | Update | Computers | Successful takeover attempt with the new front ends

[JC, Chris]

Last night’s CDS upgrade attempt succeeded in taking over the IFO. If the IFO users are willing, let’s try to run with it today.

The new system was left in control of the IFO hardware overnight, to check its stability. All looks OK so far.

The next step will be to connect the new FEs, fb1, and chiara to the martian network, so they’re directly accessible from the control room workstations (currently the system remains quarantined on the teststand network). We’ll also need to bring over the changes to models, scripts, etc that have been made since Tega’s last sync of the filesystem on chiara.

The previous elog noted a mysterious broken state of the OneStop link between FE and IO chassis, where all green LEDs light up on the OneStop board in the IO chassis, except the four next to the fiber link connector. This was seen on c1sus and c1iscex. It was recoverable last night on c1iscex, by fully powering down both FE computer and chassis, waiting a bit, and then powering up chassis and computer again. Currently c1sus is running with a copper OneStop cable because of the fiber link troubles we had, but this procedure should be tried to see if one of the fiber links can be made to work after all.

In order to string the short copper OneStop cable for c1sus, we had to move the teststand rack closer to the IO chassis, up against the back of 1X6/1X7. This is a temporary state while we prepare to move the FEs to their new rack. It hopefully also allows sufficient clearance to the exit door to pass the upcoming fire inspection.

At first, we connected the teststand rack’s power cables to the receptacle in 1X7, but this eventually tripped 1X7’s circuit breaker in the wall panel. Now, half of the teststand rack is on the receptacle in 1X6, and the other half is on 1X7 (these are separate circuits).

After the breaker trip, daqd couldn’t start. It turned out that no data was flowing to it, because the power cycle caused the DAQ network switch to forget a setting I had applied to enable jumbo frames on the network. The configuration has now been saved so that it should apply automatically on future restarts. For future reference, the web interface of this switch is available by running firefox on fb1 and navigating to 10.0.113.254.

When the FE machines are restarted, a GPS timing offset in /sys/kernel/gpstime/offset sometimes fails to initialize. It shows up as an incorrect GPS time in /proc/gps and on the GDS_TP MEDM screens, and prevents the data from getting timestamped properly for the DAQ. This needs to be looked at and fixed soon. In the meantime, it can be worked around by setting the offset manually: look at the value on one of the FEs that got it right, and apply it using sudo sh -c "echo CORRECT_OFFSET >/sys/kernel/gpstime/offset".

In the first ~30 minutes after the system came up last night, there were transient IPC errors, caused by drifting timestamps while the GPS cards in the FEs got themselves resynced to the satellites. Since then, timing has remained stable, and no further errors occurred overnight. However, the timing status is still reported as red in the IOP state vectors. This doesn’t seem to be an operational problem and perhaps can be ignored, but we should check it out later to make sure.

Also, the DAC cards in c1ioo and c1iscey reported FIFO EMPTY errors, triggering their DACKILL watchdogs. This situation may have existed in the old system and gone undetected. To bypass the watchdog, I’ve added the optimizeIO=1 flag to the IOP models on those systems, which makes them skip the empty FIFO check. This too should be further investigated when we get a chance.

  17174   Thu Oct 6 11:12:14 2022   Anchal | Update | BHD | BH55 RFPD installation complete

[Yuta, Paco, Anchal]

BH55 RFPD installation was still not complete until yesterday because of a peculiar issue. As soon as we increased the whitening gain on this photodiode, we saw spikes coming in at around 10 Hz. The following events took place while debugging this issue:

  • We first thought that the RFPD might be bad, as we had just picked it up from what we call the graveyard table.
  • Paco fixed the bad connection issue at the RF out, and we confirmed the RFPD transimpedance by testing it. See 40m/17159.
  • We tried changing the whitening filter board but that did not help.
  • We used the BH55 RFPD to lock MICH by routing the demodulation board outputs to the AS55 channels on the WF2 board. We were able to lock MICH and increase the whitening gain without any spikes appearing. This ruled out any issue with the RFPD.
  • Yuta and I tried swapping the whitening filter board but the problem persisted, which made us realize that the issue could be in the acromag that writes the whitening gain for the BH55 RFPD.
  • We combed through the /cvs/cds/caltech/target/c1iscaux/C1_ISC-AUX_LSCPDs.db file to check if the whitening gain DAC channels are written twice, but that was not the case. However, changing the scan rate of the whitening gain output channel did change the rate at which the spikes were coming.
  • This proved that some other process was constantly writing zero on these outputs.
  • It turned out that all unused acromag channels for c1iscaux are still defined and made to write 0 through the /cvs/cds/caltech/target/c1iscaux/C1_ISC-AUX_SPARE.db file (see the sketch after this list). I don't think we need this spare file. If someone wants to use spare channels, they can quickly add them to the db file and restart the modbusIOC service on c1iscaux; it takes less than 2 minutes. I vote to completely get rid of this file, or at least not use it in the cmd file.
  • After removing the violating channels, the problem with the BH55 RFPD is resolved.
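
For reference, a minimal sketch of the kind of record involved (the record name and scan period are illustrative, and the hardware-link fields are omitted): an output record with a periodic SCAN keeps rewriting its VAL to the DAC, which is exactly the collision described above.

# hypothetical spare analog-output record, not copied from the actual db
record(ao, "C1:ISC-SPARE_DAC_CH00")
{
    field(DESC, "unused spare DAC channel")
    field(SCAN, ".1 second")   # periodic scan: rewrites 0 every scan period
    field(VAL,  "0")
}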

The installation of BH55 RFPD is complete now.

 

  17175   Thu Oct 6 12:02:21 2022   Anchal | Update | CDS | CDS Upgrade Plan

[Chris, Anchal]

Chris and I discussed our plan for the CDS upgrade, which amounts to moving the new FEs, new chiara, and new fb1 OS system to the martian network.


Preparation:

  • Chiara (clone) (will be called "New Chiara" henceforth) will be resynced to existing chiara to get all model and medm changes.
  • All models on New Chiara will be rebuilt, and reinstalled.
  • All running services on the existing chiara will be printed and stored for comparison later.
  • New Chiara's OS drive will be upgraded to Debian 10 and all services will be restored:
    • DHCP
    • DNS
    • NFS
    • rsync
  • The existing fb1 DAQ network card (10 Gbps ethernet card) will be verified.
  • Make a list of all fb1 file system mounts and their UUIDs.

Upgrade plan:

Date: Fri Oct 7, 2022
Time: 11:00 am (After coffee and donuts)
Minimum required people: Chris, Anchal, JC (the more the merrier)

Steps:

  1. Ensure a snapshot of all channels is present from Oct 6th on New Chiara.
  2. Shutdown all machines:
    1. All slow computers (Except c1vac).
      Computer List: ssh into the computers and run:
      sudo systemctl stop modbusIOC.service
      sudo shutdown -h now
      1. c1susaux
      2. c1susaux2
      3. c1auxex
      4. c1auxey1
      5. c1psl
      6. c1iscaux
    2. All fast computers. Run on rossa:
      /cvs/cds/rtcds/caltech/c1/Git/40m/scripts/cds/stopAllModels.sh
      Disconnect left ethernet cables from the back of these computers.
    3. Power off all I/O chassis
    4. Swap the OneStop cables on all I/O chassis to fiber cables. On c1sus, connect the copper OneStop cable to the teststand c1sus FE.
    5. Turn on all I/O chassis.
  3. Exchange chiaras.
    1. Connect old chiara to teststand network.
    2. Connect New Chiara to martian network.
    3. Turn on both old and new chiara.
    4. Ensure all services are running on New Chiara by comparing with the list made earlier during preparation.
  4. fb1.
    1. Move fb1(clone)'s OS drive into existing fb1 (on 1X6)
    2. Turn on fb1 (on 1X6).
    3. Ensure fb1 is mounting all its file systems correctly.
  5. New FEs
    1. Connect the network switch for new FEs to martian network. Make sure that old chiara is not connected to this same switch.
    2. Turn on the new FEs. All models should start on boot in sequence.
    3. Check if all models have green lights.
  6. Burt restore using latest snapshot available.
  7. Perform tests:
    1. Check if all local damping loops are working as before.
    2. Check if all IPC channels are transmitting and receiving correctly.
    3. Check if IMC is able to lock.
    4. Try single arm locking
    5. Try MICH locking.
  8. Make a contingency plan for how to revert to the old system if something fails.