  40m Log, Page 1 of 341
ID   Date   Author   Type   Category   Subject
  17159   Mon Sep 26 11:39:37 2022   Paco   Update   BHD   BH55 RFPD installed - part II

[Paco, Anchal]

We followed rana's suggestion for stress relief on the SMA joint in the BH55 RFPD that Radhika resoldered. We used a single-core, pigtailed wire segment after cleaning up the solder joint on J7 (RF Out), and also soldered the SMA shield to the RF cage (see Attachment #1). This greatly improved the rigidity of the connection, so we moved back to the ITMY table.

We measured the TEST input to RF Out transfer function using the Agilent network analyzer, just to see the qualitative features (resonant gain around 55 MHz and second-harmonic suppression around 110 MHz) shown in Attachment #2. We used a 10 kOhm series resistor in the test input path to calibrate the measured transimpedance in V/A. The RFPD has been installed on the ITMY table and connected to the PD interface box and IQ demod boards in the LSC rack as before.
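The calibration step can be sketched as follows (a minimal sketch; the 10 kOhm value is from above, while the example gain is a placeholder, not a measured number): with a series resistor R in the test path, the injected current is I = V_test / R, so the transimpedance is just the measured V/V transfer function scaled by R.

```python
def transimpedance(tf_v_per_v, r_series=10e3):
    """Convert a measured TEST-in -> RF-out transfer function (V/V) to
    transimpedance (V/A), given the series resistor in the test path."""
    return tf_v_per_v * r_series

# e.g. a measured gain of 0.2 V/V at resonance implies 2 kOhm transimpedance
z_at_resonance = transimpedance(0.2)
```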

Measurement files

  17158   Fri Sep 23 19:07:03 2022   Tega   Update   Computers   Work to improve stability of 40m models running on teststand

[Chris, Tega]

Timing glitch investigation:

  • Moved the dolphin transmit node from c1sus to c1lsc because we suspect that the glitch might be coming from the c1sus machine (earlier, c1pem on c1sus was running faster than realtime).
  • Installed and started c1oaf to remove the shared-memory IPC error to/from the c1lsc model
  • /opt/DIS/sbin/dis_diag gives two warnings on c1sus2
    • [WARN] IXH Adapter 0 - PCIe slot link speed is only Gen1
    • [WARN] Node 28 not reachable, but is an entry in the dishosts.conf file - c1shimmer is currently off, so this is fine.

DAQ network setup:

  • Added the DAQ ethernet MAC address and fixed IPv4 address for the front-ends to '/etc/dhcp/dhcpd.conf'
  • Added the fixed DAQ IPv4 address and port for all the front-ends to '/etc/advligorts/subscriptions.txt' for the `cps_recv` service
  • Edited '/etc/advligorts/master' to include all the IOP and user model '.ini' files in '/opt/rtcds/caltech/c1/chans/daq/' containing channel info, and the corresponding testpoint files in '/opt/rtcds/caltech/c1/target/gds/param/'
  • Created a systemd environment file for each front-end in '/diskless/root/etc/advligorts/' containing the arguments for the local data concentrator and DAQ data transmitter (`local_dc_args` and `cps_xmit_args`). While we were facing timing glitches, we staggered the delay (-D waitValue) times of the front-ends by setting each one to the last octet of its DAQ IP address; we should probably set them back to zero to see if that has any effect.


  • Edited '/etc/resolv.conf' on fb1 and '/diskless/root' to enable name resolution (e.g. `host c1shimmer`), but the file gets overwritten on chiara for some reason
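The delay-staggering scheme described above (waitValue = last octet of the front-end's DAQ IP) can be sketched as follows; the IP address in the example is made up for illustration:

```python
def stagger_delay(daq_ip):
    """waitValue for a front-end's `-D` flag: the last octet of its DAQ IP
    address, so that no two front-ends transmit at the same offset."""
    return int(daq_ip.split(".")[-1])

# hypothetical address, for illustration only
delay = stagger_delay("10.0.1.104")
```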


Remaining issues:

  1. Frame writing is not working at the moment. It worked at some point in the past for a couple of days, but stopped working earlier today and we can't quite figure out why.
  2. We can't get data via diaggui or ndscope either. Again, we recall this working in the past, but are not sure why it has stopped working now.
  3. The CPU load on c1su2 is too high, so we should split it into two models.
  4. We still get the occasional IPC glitch, both for shared memory and dolphin, see attachments
  17157   Fri Sep 23 19:04:12 2022   Anchal   Update   BHD   BH55 LSC Model Updates - part III


I further updated the LSC model today with the following changes:

  • The BH55 whitening switch binary output signal is now routed to the correct place.
    • Switching on FM1, which carries the digital dewhitening filter, will always switch on the corresponding analog whitening before the ADC input.
  • The whitening can be triggered using the LSC trigger matrix as well.
  • The ADC_0 input to the LSC subsystem is now a single input, and the channels are separated inside the subsystem.

The model built and installed with no issues.

Further, the slow epics channels for BH55 anti-aliasing switch and whitening switch were added in /cvs/cds/caltech/target/c1iscaux/C1_ISC-AUX_LSCPDs.db

IPC issue resolved

The IPC issue that we were facing earlier is resolved now. The BH55_I and BH55_Q signals after phase rotation are successfully reaching the c1hpc model, where they can be used to lock the LO phase. To resolve this issue, I had to restart all the models. I also power cycled the LSC I/O chassis during this restart, as Tega suspected that such a power cycle is required when adding new dolphin channels, but there is no way to find out whether it was actually needed. The good news is that with the new CDS upgrade, restarting rtcds models will be much easier and more modular.

ETMY Watchdog Updated

[Anchal, Tega]

Since ETMY does not use the HV coil driver anymore, the watchdog on ETMY needs to be similar to the other new optics. We made these updates today. Now the ETMY watchdog slowly ramps down the alignment offsets when it is tripped.

  17156   Fri Sep 23 18:31:46 2022   rana   Update   BHD   BH55 RFPD installed - part I

A design flaw in these initial LIGO RFPDs is that the SMA connector is not strain relieved by mounting it to the case. Since it is only mounted to the tin can, when we attach/remove cables, it bends the connector, causing stress on the solder joint.

To get around this, for this gold box RFPD, connect the SMA connector to the PCB using an S-shaped squiggly wire. Don't use multi-strand wire: it is usually a good choice, since it's more flexible, but in this case it affects the TF too much. Really, it would be best to use a coax cable, but a few-turn cork-screw or pig-tail of single-core wire should be fine to reduce the stress on the solder joint.
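A rough way to see why the pigtail geometry matters at 55 MHz (a back-of-envelope sketch; the ~1 nH/mm rule of thumb and the 10 mm length are our assumptions, not measured values): even a short wire adds series inductance whose reactance is no longer negligible at RF, which is why the wire choice shows up in the TF.

```python
import math

def wire_impedance_ohms(length_mm, freq_hz, nh_per_mm=1.0):
    """|Z| = 2*pi*f*L for a short straight wire, using a ~1 nH/mm
    rule of thumb for its self-inductance (assumption)."""
    inductance_h = length_mm * nh_per_mm * 1e-9
    return 2 * math.pi * freq_hz * inductance_h

# a 10 mm pigtail at 55 MHz contributes a few ohms of series reactance
z = wire_impedance_ohms(10, 55e6)
```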




  17155   Fri Sep 23 14:10:19 2022   Radhika   Update   BHD   BH55 RFPD installed - part I

[Radhika, Paco, Anchal]

I placed a lens in the B-beam path to focus the beam spot onto the RFPD [Attachment 1]. To align the beam spot onto the RFPD, Anchal misaligned both ETMs and ITMY so that the AS and LO beams would not interfere, and the PD output would remain at some DC level (not fringing). The RFPD response was then maximized by scanning over pitch and yaw of the final mirror in the beam path (attached to the RFPD).

Later Anchal noticed that there was no RFPD output (C1:LSC-BH55_I_ERR, C1:LSC-BH55_Q_ERR). I took out the RFPD and opened it up, and the RF OUT SMA to PCB connection wire was broken [Attachment 2]. I re-soldered the wire and closed up the box [Attachment 3]. After placing the RFPD back, we noticed spikes in C1:LSC-BH55_I_ERR and C1:LSC-BH55_Q_ERR channels on ndscope. We suspect there is still a loose connection, so I will revisit the RFPD circuit on Monday. 

  17154   Fri Sep 23 10:05:38 2022   JC   Update   VAC   N2 Interlocks triggered

[Chub, Anchal, Tega, JC]

After replacing an empty tank this morning, I heard a hissing sound coming from the nozzle area. It turned out this was from the copper tubing: the tubing was slightly broken, and this was confirmed with soapy water bubbles. This caused the N2 pressure to drop and the Vac interlocks to be triggered. Chub and I went ahead, replaced the fitting, and connected this back to normal. Anchal and Tega used the Vacuum StartUp procedures to restore the vacuum to normal operation.

Adding screenshot as the pressure is decreasing now.

  17153   Thu Sep 22 20:57:16 2022   Tega   Update   Computers   build, install and start 40m models on teststand

[Tega, Chris]

We built, installed and started all the 40m models on the teststand today. The configuration we employ is to connect c1sus to the teststand I/O chassis and use dolphin to send the timing to the other front-ends. To get this far, we encountered a few problems that were solved by doing the following:

0. Fixed frontend timing sync to FB1 via ntp

1. Set the rtcds environment variable `CDS_SRC=/opt/rtcds/userapps/trunk/cds/common/src` in the file '/etc/advligorts/env'

2. Resolved the chgrp error during model installation by setting the setgid bit on chiara, i.e. `sudo chmod g+s -R /opt/rtcds/caltech/c1/target`

3. Replaced `sqrt` with `lsqrt` in `RMSeval.c` to eliminate compilation error for c1ioo

4. Created a symlink for 'activateDQ.py' and 'generate_KisselButton.py' in '/opt/rtcds/caltech/c1/post_build'

5. Installed and configured dolphin for new frontend 'c1shimmer'

6. Replaced 'RFM0' with 'PCIE' in the ipc file, '/opt/rtcds/caltech/c1/chans/ipc/C1.ipc'
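The `g+s` fix in step 2 works because files created inside a setgid directory inherit the directory's group rather than the creating user's primary group, so later chgrp calls become unnecessary. A minimal sketch of setting the bit from Python (illustration only, on a temporary directory, not the actual install path):

```python
import os
import stat
import tempfile

d = tempfile.mkdtemp()
# equivalent of `chmod g+s` on a directory: new files inherit its group
os.chmod(d, os.stat(d).st_mode | stat.S_ISGID)
assert os.stat(d).st_mode & stat.S_ISGID
```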


We still have a few issues namely:

1. The user models are not running smoothly: the CPU usage jumps to its maximum value every second or so.

2. c1omc seemed unable to get its timing from its IOP model (issue resolved by changing the CDS block parameter 'specific_cpu' from 6 to 4, because the new FEs only have 6 cores, 0-5)

3. We need to load the `dolphin-proxy-km` module and start the `rts-dolphin_daemon` service whenever we reboot a front-end
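One possible way to make item 3 automatic (a suggestion, untested here): systemd loads any kernel modules listed under /etc/modules-load.d/ at boot, so a one-line file should take care of the module, after which the `rts-dolphin_daemon` service could be enabled with systemctl so it starts once the module is present.

```
# /etc/modules-load.d/dolphin.conf -- modules listed here are loaded at boot
dolphin-proxy-km
```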

  17152   Thu Sep 22 19:51:58 2022   Anchal   Update   BHD   BH55 LSC Model Updates - part II

I updated the following in the rtcds models and medm screens:

  • c1lsc
    • Added reading of ADC0_20 and ADC0_21 as demodulated BHD output at 55 MHz, I and Q channels.
    • Connected BH55_I and BH55_Q to phase rotation and creation of output channels.
    • Replaced POP55 with BH55 in the RFPD input matrix.
    • Send BH55_I and BH55_Q over IPC to c1hpc
    • Added BH55 RFPD model in LSC screen, in RFPD input matrix, whitening box. Some work is still remaining.
  • c1hpc
    • Added receiving of BH55_I and BH55_Q.
    • Added BH55_I and BH55_Q to sensing matrix through filter modules. Now these can be used to control LO phase.
    • Added BH55 signals to the medm screen.
  • c1scy
    • Updated SUS model to new sus model that takes care of data acquisition rates and also adds BIASPOS, BIASPIT and BIASYAW filter modules at alignment sliders.

Current state:

  • All models built and installed without any issue or error.
  • On restarting all models, I first noticed a 0x2000 error on c1lsc, c1scy and c1hpc, but these errors went away after a daqd restart on fb1.
  • The BH55 FM buttons are not connected to the anti-aliasing analog filter. Need to do this and update the medm screen accordingly.
  • The IPC from c1lsc to c1hpc is not working. On the sender side, it does not show any signal, which needs to be resolved.
  17151   Wed Sep 21 17:16:14 2022   Tega   Update   Computers   Setup the 6 new front-ends to boot off the FB1 clone

[Tega, JC]

We laid 4 out of 6 fiber cables today. The remaining 2 cables are for the I/O chassis at the vertex, so we will test those cables and then lay them tomorrow. We were also able to identify the problems with the 2 supposedly faulty cables, which turned out not to be faulty: one of them had a small bend in the connector that I was able to straighten out with small pliers, and the other was a loose connection at the switch. So there was no faulty cable, which is great! Chris wrote a matlab script that migrates all the model files. I am going through them, i.e. looking at the CDS parameter block to check that all is well. The next task is to build and install the updated models. We also need to update the '/opt/rtcds' and '/opt/rtapps' directories to the latest in the 40m chiara system.


  17150   Wed Sep 21 17:01:59 2022   Paco   Update   BHD   BH55 RFPD installed - part I

[Radhika, Paco]

Optical path setup

We realized the DCPD - B beam path was already using a 95:5 beamsplitter to steer the beam, so we are repurposing the 5% pickoff for a 55 MHz RFPD. For the RFPD we are using a gold RFPD labeled "POP55 (POY55)" which was on the large optical table near the vertex. We have decided to test this in-situ because the PD test setup is currently offline.

Radhika used a Y1-1025-45S mirror to steer the B-beam path into the RFPD, but a lens should be added next in the path to focus the beam spot onto the PD-sensitive area. The current path is illustrated in Attachment #1.

We removed some unused OPLEV optics to make room for the RFPD box, and these were moved to the optics cabinet along Y-arm [Attachment #2].


[Anchal, Yehonathan]

PD interfacing and connections

In parallel to setting up the optical path on the ITMY table, we repurposed a DB15 cable from a PD interface board in the LSC rack to the RFPD in question. Then an SMA cable was routed from the RFPD RF output to an "UNUSED" I&Q demod board on the LSC rack. Luckily, we also found a terminated REFL55 LO port, so we can draw our demod LO from there. There are a few free ADC inputs (14, 15, 20, 21) after the WF2 and WF3 whitening filter interfaces.

Next steps

  • Finish alignment of BH55 beam to RFPD
  • Test RF output of RFPD once powered
  • Modify LSC model, rebuild and restart
  17149   Wed Sep 21 10:10:22 2022   Yehonathan   Update   CDS   C1SU2 crashed

{Yehonathan, Anchal}

Came in this morning to find that all the BHD optics watchdogs had tripped. Watchdogs were restored, but all the ADCs were stuck.

We realized that the C1SU2 model crashed.

We ran /scripts/cds/restartAllModels.sh to reboot all machines. All the machines rebooted and were burt-restored to yesterday around 5 PM.

The optics seem to have returned to good alignment and are damped.

  17148   Tue Sep 20 23:06:23 2022   Tega   Update   Computers   Setup the 6 new front-ends to boot off the FB1 clone

[Tega, Radhika, JC]

We wired the front-ends for power, DAQ and martian network connections. Then we moved the I/O chassis from the bottom of the rack to the middle, just above the KVM switch, so we can leave the top of the I/O chassis open for access to the ports of the OSS target adapter card for testing the extension fiber cables.

Attachment 1 (top -> bottom)







When I turned on the test stand with the new front-ends, after a few minutes the power to 1X7 was cut off, due to overloading I assume. This brought down nodus, chiara and FB1. After Paco reset the tripped switch, everything came back without us actually doing anything, which is an interesting observation.

After this event, I moved the test stand power plug to the side wall rail socket. This seems fine so far. I then brought chiara (clone) and FB1 (clone) online. Here are some changes I made to get things going:

Chiara (clone)

  • Edited '/etc/dhcp/dhcpd.conf' to update the MAC address of the front-ends to match the new machines, then run
  • sudo service isc-dhcp-server restart
  • then restart front-ends
  • Edited '/etc/hosts' on chiara to include c1iscex and c1iscey as these were missing


FB1 (clone)

Getting the new front-ends booting off FB1 clone:

1. I found that the KVM screen was flooded with setup info about the dolphin cards on the LLO machines. This actually prevented login via the KVM switch for two of these machines. Strangely, one of them, c1sus, seemed to be fine (see Attachment 2), so I guessed this was because its dolphin network was already configured earlier when we were testing the dolphin communications. So I decided to configure the remaining dolphin cards. To do so, we do the following:

Dolphin Configuration:

1a. Ideally running

sudo /opt/DIS/sbin/dis_mkconf -fabrics 1 -sclw 8 -stt 1 -nodes c1lsc c1sus c1ioo c1iscex c1iscey c1sus2 -nosessions

should set up all the nodes, but this did not happen. In fact, I could no longer use the '/opt/DIS/sbin/dis_admin' GUI after running this operation and restarting the 'dis_networkmgr.service' via

sudo systemctl restart dis_networkmgr.service

so I logged into each front-end and configured the dolphin adapter there using

sudo /opt/DIS/sbin/dis_config

After that, I shut down FB1 (clone) because restarting it earlier hadn't worked, waited a few minutes, and then started it. Everything was fine afterward, although I am not quite sure what solved the issue, as I tried a few things and was glad to see the problem go!

1b. After configuring all the dolphin nodes, I found that 2 of them failed the '/opt/DIS/sbin/dis_diag' test with an error message suggesting three possible issues, one of which was 'faulty cable'. I looked at the units in question and found that swapping both cables with the remaining spares solved the problem, so it seems these cables are faulty (need to double-check this). Attachment 3 shows the current state of the dolphin nodes on the front-ends and the dolphin switch.

2. I noticed that the NFS mount service for the mount points '/opt/rtcds' and '/opt/rtapps' in /etc/fstab exited with an error, so I ran 

sudo mount -a

3. Edited '/etc/hosts' to include c1iscex and c1iscey, as these were missing



To test the PCIe extension fiber cables that connect the front-ends to their respective I/O chassis, we run the following command (after booting the machine with the cable connected): 

controls@c1lsc:~$ lspci -vn | grep 10b5:3
    Subsystem: 10b5:3120
    Subsystem: 10b5:3101

If we see the output above, then both the cable and the OSS card are fine (we know from previous tests that the OSS card on the I/O chassis is good). Since we only have one I/O chassis, we repeat the step above 8 times, also cycling through the six new front-ends as we go, so that we also test the installed OSS host adapter cards. I was able to test 4 cables and 4 OSS host cards (c1lsc, c1sus, c1ioo, c1sus2), but the remaining results were inconclusive: they seem to suggest that 3 out of the remaining 5 fiber cables are faulty, which in itself would be unfortunate, but I found the reliability of the test itself to be in question when I went back to check the 2 remaining OSS host cards using a cable that had passed earlier, and it didn't pass. After a few retries, I decided to call it a day before I lose my mind. These tests need to be redone tomorrow.
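The pass/fail check above lends itself to a small parser (a sketch; the two subsystem IDs are taken from the `lspci -vn` output shown above, and on a front-end one would feed it the live command output, e.g. via `subprocess.run`):

```python
EXPECTED = {"10b5:3120", "10b5:3101"}

def oss_link_ok(lspci_text):
    """True if `lspci -vn` output lists both PLX subsystem IDs,
    i.e. both the fiber cable and the OSS card enumerated correctly."""
    found = {tok for line in lspci_text.splitlines()
             for tok in line.split() if tok in EXPECTED}
    return found == EXPECTED
```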


Note: We were unable to lay the cables today because these tests were not complete, so we are a bit behind the plan. We will see if we can catch up tomorrow.



Plan for the remainder of the week


  • Setup the 6 new front-ends to boot off the FB1 clone.
  • Test PCIe I/O cables by connecting them btw the front-ends and teststand I/O chassis one at a time to ensure they work
  • Then lay the fiber cables to the various I/O chassis.


  17147   Tue Sep 20 18:18:07 2022   Anchal   Summary   SUS   ETMX, ETMY damping loop gain tuning

[Paco, Anchal]

We performed local damping loop tuning for the ETMs today to ensure that each damping loop's OLTF has a Q of 3-5. To do so, we applied a step on C1:SUS-ETMX/Y_POS/PIT/YAW_OFFSET, looked for oscillations in the error point of the damping loops (C1:SUS-ETMX/Y_SUSPOS/PIT/YAW_IN1), and increased or decreased the gains until we saw 3-5 oscillations before the error signal settled to a new value. For the side loop, we applied an offset at C1:SUS-ETMX/Y_SDCOIL_OFFSET and looked at C1:SUS-ETMX/Y_SUSSIDE_IN1. Following are the changes in damping gains:
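The count-the-oscillations criterion maps onto a loop Q as follows (a rule-of-thumb sketch; the 5% settling threshold is our assumption): each cycle of a ringdown shrinks the envelope by exp(-pi/Q), so a loop tuned for Q of 3-5 rings for roughly 3-5 visible cycles before settling.

```python
import math

def visible_cycles(q, threshold=0.05):
    """Cycles before a ringdown envelope exp(-pi*n/Q) decays below
    `threshold` of its initial amplitude."""
    return math.ceil(q * math.log(1.0 / threshold) / math.pi)

def q_from_cycles(n, threshold=0.05):
    """Invert: the loop Q implied by seeing ~n oscillations after a step."""
    return math.pi * n / math.log(1.0 / threshold)
```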

C1:SUS-ETMX_SUSPOS_GAIN :  61.939 -> 61.939
C1:SUS-ETMX_SUSPIT_GAIN :   4.129 -> 4.129
C1:SUS-ETMX_SUSYAW_GAIN :   2.065 -> 7.0
C1:SUS-ETMX_SUSSIDE_GAIN : 37.953 -> 50

C1:SUS-ETMY_SUSPOS_GAIN : 150.0 -> 41.0
C1:SUS-ETMY_SUSPIT_GAIN :  30.0 -> 6.0
C1:SUS-ETMY_SUSYAW_GAIN :  30.0 -> 6.0
C1:SUS-ETMY_SUSSIDE_GAIN : 50.0 -> 300.0


  17146   Tue Sep 20 15:40:07 2022   yehonathan   Update   BHD   Trying doing AC lock

We resume the LO phase locking work. MICH was locked with an offset of 80 cts. LO and AS beams were aligned to maximize the BHD readout visibility on ndscope.

We lock the LO phase on a fringe (DC locking) actuating on LO1.

Attachment 1 shows BHD readout (DCPD_A_ERR = DCPD_A - DCPD_B) spectrum with and without fringe locking while LO2 line at 318 Hz is on. It can be seen that without the fringe locking the dithering line is buried in the A-B noise floor. This is probably due to multiple fringing upconversion. We figured that trying to directly dither-lock the LO phase might be too tricky since we cannot resolve the dither line when the LO phase is unlocked.

We try to hand off the lock from the fringe lock to the AC lock in the following way: since the AC error signal reads the derivative of the BHD readout, it is least sensitive to the LO phase when the LO phase is locked on a dark fringe; therefore we offset the LO phase to obtain a usable AC error signal. The LO phase offset is set to ~40 cts (the peak-to-peak count when the LO phase is uncontrolled is ~400 cts).
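Why the offset is needed can be seen in a toy model (a sketch with made-up dither amplitude and sampling parameters, not the actual loop settings): demodulating a dithered fringe at the dither frequency yields a signal proportional to the local slope of the fringe, which vanishes exactly at the fringe extremum.

```python
import numpy as np

FS = 16384          # sample rate (Hz), typical CDS rate
F_DITHER = 318.75   # dither line frequency from above (Hz)
N = 65536           # 4 s of data -> an integer number of dither cycles

def demod_amplitude(phi0, a=0.01):
    """Demodulated 1f amplitude of a cos(phi) fringe dithered about phi0.
    Approximately -a*sin(phi0): zero at the extremum, maximal mid-fringe."""
    t = np.arange(N) / FS
    p = np.cos(phi0 + a * np.sin(2 * np.pi * F_DITHER * t))
    return 2 * np.mean(p * np.sin(2 * np.pi * F_DITHER * t))
```

At phi0 = 0 (the fringe extremum) the demodulated amplitude is ~0 no matter how hard the line is driven, while at phi0 = pi/2 it is ~ -a, which is why a small LO offset makes the dither line visible at the error point.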

We look at the "demodulated" signal of LO1, from which the fringe locking error signal is derived (0 Hz demodulation frequency, 0 amplitude), and the demodulated signal of LO2, where a ~700 Hz line is applied. We dither the LO phase at ~50 Hz to create a clear signal in order to compare the two error signals. Although the 50 Hz signal was clearly seen in the fringe lock error signal, it was completely unresolved in the LO2 demodulated signal, no matter how hard we drove the 700 Hz line and no matter what demodulation phase we chose. Interestingly, changing the demodulation phase shifted the noisy LO2 demodulated signal by some constant. Will post a picture later.

Could there be some problem with the modulation-demodulation model? We should check again, but I'm almost certain we saw the 700 Hz line with an SNR of ~100 in diaggui; even with the small LO offset, changes in the 700 Hz signal phase should have been clearly seen in the demodulated signal. Maybe we should also check that we see the 50 Hz sidebands around the 700 Hz line in diaggui to be sure.

  17145   Tue Sep 20 07:03:04 2022   Paco   Summary   General   Power Outage 220916 -- restored all

[JC, Tega, Paco ]

I would like to mention that during the vacuum startup, after the AUX pump was turned on, Tega and I walked away while the pressure was decreasing. While we were away, valves opened on their own. Nobody was near the VAC desktop during this. I asked Koji if this may be an automatic startup, but he said the valves shouldn't open unless they are explicitly told to do so. Has anyone encountered this before?




  17144   Mon Sep 19 20:21:06 2022   Tega   Update   Computers   1X7 and 1X6 work

[Tega, Paco, JC]

We moved the GPS network time server and the frequency distribution amplifier from 1X7 to 1X6, and the PEM AA, ADC adapter and martian network switch from 1X6 to 1X7. We also mounted the dolphin IX switch at the rear of 1X7, together with the DAQ and martian switches. This cleared up enough space to mount all the front-ends; however, we found that the mounting brackets for the front-ends do not fit in the 1X7 rack, so I have decided to mount them on the upper part of the test stand for now while we come up with a fix for this problem. Attachments 1 to 3 show the current state of racks 1X6, 1X7 and the teststand.


Attachment 1: Front of racks 1X6 and 1X7

Attachment 2: Rear of rack 1X7

Attachment 3: Front of teststand rack

Plan for the remainder of the week


  • Setup the 6 new front-ends to boot off the FB1 clone.
  • Test PCIe I/O cables by connecting them btw the front-ends and teststand I/O chassis one at a time to ensure they work
  • Then lay the fiber cables to the various I/O chassis.


  • Migrate the current models on the 6 front-ends to the new system.
  • Replace RFM IPC parts with dolphin IPC parts in the c1rfm model running on the c1sus machine
  • Replace the RFM parts in the c1iscex and c1iscey models
  • Drop the c1daf and c1oaf models from the c1lsc machine, since the front-ends only have 6 cores
  • Build and install models


  • Complete any remaining model work
  • Connect all I/O chassis to their respective (new) front-ends and see if we can start the models (need to think of a safe way to do this; should we disconnect the coil drivers before starting the models?)


  • Tie-up any loose ends
  17143   Mon Sep 19 17:02:57 2022   Paco   Summary   General   Power Outage 220916 -- restored all

Restore lab

[Paco, Tega, JC, Yehonathan]

We followed the instructions here. There were no major issues, apart from the fb1 ntp server sync taking a long time after rebooting once.

ETMY damping

[Yehonathan, Paco]

We noticed that ETMY had too much RMS motion when the OpLevs were off. We played with it a bit and noticed two things: the Cheby4 filter was on for SUS_POS, and the limiter on ULCOIL was on with a limit of 0. We turned both off.

We did some damping tests and observed that the PIT and YAW motions were overdamped. We tuned the gains of the filters in the following way:


SUSPOS_GAIN 200->150


These actions seem to make things better.

  17142   Thu Sep 15 21:12:53 2022   Paco   Update   BHD   LO phase "dc" control

Locked the LO phase with a MICH offset = +91. The LO is mid-fringe (locked using the A-B zero crossing), so it's far from being "useful" for any readout, but we can at least look at residual noise spectra.

I spent some time playing with the loop gains, filters, and overall lock acquisition, and established a quick TF template at Git/40m/measurements/BHD/HPC_LO_PHASE_TF.xml

So far, it seems that actuating on the LO phase through LO2 POS requires 1.9 times more strength (with the same "A-B" dc sensing). After closing the loop with FM4 and FM5, actuating on LO2 with a filter gain of 0.4 closes the loop robustly. Then FM3 and FM6 can be enabled and the gain stepped up to 0.5 without problem. The measured UGF (Attachment #1) here was ~20 Hz. It can be increased to 55 Hz, but then the loop quickly becomes unstable. I added FM1 (boost) to the HPC_LO_PHASE bank but didn't get to try it.

The noise spectra (Attachment #2) are still uncalibrated... but have been saved under Git/40m/measurements/BHD/HPC_residual_noise_spectra.xml

  17141   Thu Sep 15 16:19:33 2022   Yehonathan   Update   LSC   POX-POY noise budget

Doing POX-POY noise measurement as a poor man's FPMI for diagnostic purposes. (Notebook in /opt/rtcds/caltech/c1/Git/40m/measurements/LSC/POX-POY/Noise_Budget.ipynb)

The arms were locked individually using POX11 and POY11. The optical gain was estimated by looking at the PDH signal of each arm: the slope was computed by taking the negative-peak to positive-peak counts and assuming that the arm length change between those peaks is lambda/(2*Finesse), where lambda = 1 um and the arm finesse is taken to be 450.

Xarm peak-to-peak counts is ~ 850 while Yarm's is ~ 1100. This gives optical gains of 3.8e11 cts/m and 4.95e11 cts/m respectively.

Next, the ETMX/Y actuation TFs were measured (Attachments 1, 2) by exciting C1:LSC-ETMX/Y_EXC, measuring at C1:LSC-X/YARM_IN1_DQ, and calibrating with the optical gain.

Using these calibrations I plot the POX-POY (attachment 3) and POX+POY (attachment 4) total noise measurements using two methods:

1. Plotting the calibrated IN and OUT channels of XARM-YARM (blue and orange). Those two curves should cross at the UGF (200Hz in this case).

2. Plotting the calibrated XARM-YARM IN channels times 1-OLTF (black).
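Method 2 amounts to undoing the loop suppression (a minimal sketch; the sign convention is assumed to match the "1-OLTF" expression above, i.e. closed-loop suppression of 1/(1-OLTF), and the toy numbers are placeholders):

```python
import numpy as np

def free_running_noise(in_loop_asd, oltf):
    """Undo loop suppression: the in-loop error-point spectrum times
    |1 - OLTF| recovers the open-loop (free-running) noise."""
    return np.asarray(in_loop_asd) * np.abs(1 - np.asarray(oltf))

# toy check: a loop with OLTF = -9 suppresses noise by 10x,
# so multiplying the in-loop spectrum by |1 - OLTF| restores it
```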

The UGF bump can be clearly seen above the true noise in those plots.

However, the POX+POY OUT channel looks too high for some reason, making the crossing frequency between the IN and OUT channels ~300 Hz. Not sure what is going on with this.

Next, I will budget this noise with the individual noise contributions.

  17140   Thu Sep 15 11:13:32 2022   Anchal   Summary   ASS   YARM and XARM ASS restored

Having seen it work only a few times so far (but robustly each time), I'm happy to report that ASS on YARM and XARM is working now.

What is wrong?

The issue is that PR3 is not placed in the correct position in the chamber. It is offset enough that, to send a beam through the center of ITMY to ETMY, the beam has to reflect off the edge of PR3, leading to some clipping. Hence our usual ASS takes us to this point and results in a loss of transmission due to clipping.

Solution: We cannot solve this issue without moving PR3 inside the chamber. But meanwhile, we can find new spot positions on ITMY and ETMY, off-center in the YAW direction only, which allow us to mode match properly without clipping. This means there will be some YAW suspension-noise-to-length coupling in this cavity, but thankfully the YAW degree of freedom stays relatively calm compared to PIT or POS for our suspensions. Similarly, we need to allow for an offset in the ETMX beam spot position in YAW. We do not control the beam spot position on ITMX, due to the lack of enough actuators to control all 8 DOFs involved in mode matching the input beam with a cavity; so instead I found the right offset for the ITMX transmission error signal in YAW that works well.

I found these offsets (found empirically) to be:


These offsets have been saved in the burt snap file used for running ASS.

Using ASS

I'll reiterate here procedure to run ASS.

  • Get YARM locked to the TEM00 mode with at least 0.4 transmission on C1:LSC-TRY_OUT
  • Open sitemap->ASC->c1ass
  • Click ! Scripts YARM -> Striptool to open a striptool monitor for the ASS error signals.
  • Click ! Scripts YARM -> Dither ON to switch on the dither.
  • Wait for all error signals to settle around zero (this should also maximize the transmission channel, currently reaching 1.1).
  • Click ! Scripts YARM -> Freeze Offsets
  • Click ! Scripts YARM -> Offload Offsets
  • Click ! Scripts YARM -> Dither OFF.
  • Then proceed to XARM. Get it locked to the TEM00 mode with at least 0.4 transmission on C1:LSC-TRX_OUT
  • Open sitemap->ASC->c1ass
  • Click ! Scripts XARM -> Striptool to open a striptool monitor for the ASS error signals.
  • Click ! Scripts XARM -> Dither ON to switch on the dither.
  • Wait for all error signals, except C1:ASS-XARM_ITM_PIT_L_DEMOD_I_OUT16 and C1:ASS-XARM_ITM_YAW_L_DEMOD_I_OUT16, to settle around zero (this should also maximize the transmission channel, currently reaching 1.1).
  • Click ! Scripts XARM -> Freeze Offsets
  • Click ! Scripts XARM -> Offload Offsets
  • Click ! Scripts XARM -> Dither OFF.
  17139   Wed Sep 14 15:40:51 2022   JC   Update   Treasure   Plastic Containers

There are brand new empty plastic containers located inside the shed that is outside in the cage. These can be used for organizing new equipment for CDS, or for cleanup after Wednesday meetings.

  17138   Tue Sep 13 14:12:03 2022   Yehonathan   Update   BHD   Trying LO phase locking again

[Paco, Yehonathan]

Summary:  We locked LO phase using the DC PD (A - B) error point without saturating the control point, i.e. not a "bang bang" control.

Some suspensions were improved so we figure we should go back to trying to lock the LO phase.

We misalign ETMs and lock MICH using AS55. We put a small MICH offset by putting C1:LSC-MICH_OFFSET = -80.

AS and LO beams were aligned to overlap by maximizing the BHD signal visibility.

BHD DCPDs were balanced by misaligning the AS beam and using the LO beam only.

We measured the transfer function between the DCPDs and found the coherence is 1 at 1 Hz (because of seismic motion), so we measured the ratio between them to be 0.3 dB.
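For reference (simple arithmetic, not from the entry): a 0.3 dB amplitude ratio corresponds to a ~3.5% gain imbalance between the two DCPDs.

```python
def db_to_ratio(db):
    """Convert an amplitude ratio in dB to a linear factor."""
    return 10 ** (db / 20)

imbalance = db_to_ratio(0.3)  # ~1.035, i.e. ~3.5% mismatch
```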

The AS beam was aligned again to overlap with the LO beam. For the work below, we used the largest MICH offset we could apply before losing lock, +90. This has the effect of increasing our optical gain.

We started using the HPC LOCK IN screen to dither POS on the different BHD SUS. We first started with AS1 (freq = 137.137 Hz, gain = 1000). The sensing matrix element was chosen accordingly (from the demodulated output) and fed to LO_PHASE; because this affected the AS port alignment, it was of course not the best choice. We moved over to LO2 (freq = 318.75 Hz, gain = 1000), but for the longest time couldn't see the dither line at the error point (A-B).

After this we added comb60 notch filters to the DCPD_A and DCPD_B input signals. We ended up just feeding the (A-B) error point to LO1 and trying to lock at mid fringe, which succeeded without saturation. The gain of the LO_PHASE filter was set to 0.2 (previously 20; attributable to the newly unclipped LO beam intensity?), and again we only enabled FM4 and FM5 for this. After this, a dither line at 318.75 Hz finally appeared in the A-B error point! To be continued...

  17137   Thu Sep 8 16:03:25 2022 YehonathanUpdateLSCRealignment, arm locking, gains adjustments

{Anchal, Yehonathan}

We came in this morning and the IMC was misaligned. The IMC was realigned and locked. This of course changed the input beam and sent us on a long alignment journey.

We first use the TTs to find the beam on the BHD DCPD/camera, since it is only a single bounce off all the optics.

Then, PR2/3 were used to find POP beam while keeping the BHD beam.

Unfortunately, that was not enough. The TTs and PRs have some degeneracy, which caused us to lose the REFL beam.

Realizing this, we went to the AS table to find the REFL beam. We found a ghost beam that deceived us for a while. Once we realized it was a ghost beam, we moved TT2 in pitch, while keeping the POP DCPD signal high with the PRs, until we saw a new beam on the viewing card.

We kept aligning TT1/2, PR2/3 to maximize the REFL DCPD while keeping the POP DCPD high. We tried to look at the REFL camera but soon realized that the REFL beam is too weak for the camera to see.

At that point we already had some flashing in the arms (we centered the OpLevs in the beginning).

Arms were aligned and locked. We had some issues with the X arm not catching lock; we increased the gain and it locked almost immediately. To set the arm gains correctly, we took OLTFs (Attachment) and adjusted the XARM gain to 0.027 to put the UGF at 200 Hz.
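The gain adjustment above amounts to dividing out the measured loop gain at the target frequency. A minimal sketch of that bookkeeping (the OLTF numbers below are made up for illustration; only the method reflects the entry):

```python
import numpy as np

def rescale_gain(freqs, oltf_mag, old_gain, target_ugf=200.0):
    """Return the servo gain that places the UGF at target_ugf.

    freqs    : measured frequency points [Hz]
    oltf_mag : open-loop TF magnitude measured with gain old_gain
    """
    # |G| scales linearly with the servo gain, so dividing by the
    # measured magnitude at the target frequency moves the unity
    # crossing there.
    mag_at_target = np.interp(target_ugf, freqs, oltf_mag)
    return old_gain / mag_at_target

# Illustrative numbers (not the measured OLTF): with a gain of 0.05
# the loop magnitude at 200 Hz is 1.85, so the gain must come down.
f = np.array([50.0, 100.0, 200.0, 400.0])
mag = np.array([10.0, 4.0, 1.85, 0.8])
new_gain = rescale_gain(f, mag, old_gain=0.05)
```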

Both arms locked with 200 Hz UGF:

From GPS: 1346713049
    To GPS: 1346713300

From GPS: 1346713380
    To GPS: 1346714166

HEPA turned off:
From GPS: 1346714298
    To GPS: 1346714716


  17136   Thu Sep 8 12:01:02 2022 JCConfigurationLab OrganizationLab Organization

The floor cable cover has been changed out for a new one. This is in Section X11.

  17135   Thu Sep 8 11:54:37 2022 JCConfigurationLab OrganizationLab Organization

The arms in the 40m laboratory have now been sectioned off. Each arm has been divided up into 15 sections. Along the Y arm, the section are labelled "Section Y1 - Section Y15". For the X arm, they are labelled "Section X1- Section X15". Anything changed or moved will now be updated into the elog with their appropriate section.


Below is an example of Section X6.

  17134   Wed Sep 7 14:32:15 2022 AnchalUpdateSUSUpdated SD coil gain to keep all coil actuation signals digitally same magnitude

The coil driver actuation strength for face coils was increased by 13 times in LO1, LO2, SR2, AS1, AS4, PR2, and PR3.

After the change, the damping gains for POS, PIT, and YAW were reduced accordingly, but the SIDE coil was left untouched, so its output filter module still requires a much higher digital input to produce the same force at the output. This wouldn't be an issue until we try to correct for cross coupling in the output matrix, where we would want coefficients of a similar order of magnitude for the SIDE coil as well. So I made the following changes so that the inputs to all coil output filters (face and side) have the same scale for these suspensions:

  • C1:SUS-LO1_SUSSIDE_GAIN 40.0 -> 3.077
    C1:SUS-AS1_SUSSIDE_GAIN 85.0 -> 6.538
    C1:SUS-AS4_SUSSIDE_GAIN 41.0 -> 3.154
    C1:SUS-PR2_SUSSIDE_GAIN 150.0 -> 11.538
    C1:SUS-PR3_SUSSIDE_GAIN 100.0 -> 7.692
    C1:SUS-LO2_SUSSIDE_GAIN 50.0 -> 3.846
    C1:SUS-SR2_SUSSIDE_GAIN 140.0 -> 10.769
  • C1:SUS-LO1_SDCOIL_GAIN -1.0 -> -13.0
    C1:SUS-AS1_SDCOIL_GAIN 1.0 -> 13.0
    C1:SUS-AS4_SDCOIL_GAIN -1.0 -> -13.0
    C1:SUS-PR2_SDCOIL_GAIN -1.0 -> -13.0
    C1:SUS-PR3_SDCOIL_GAIN -1.0 -> -13.0
    C1:SUS-LO2_SDCOIL_GAIN -1.0 -> -13.0
    C1:SUS-SR2_SDCOIL_GAIN -1.0 -> -13.0

In short, 10 cts of input to C1:SUS-AS1_ULCOIL now creates the same actuation on UL as 10 cts of input to C1:SUS-AS1_SDCOIL creates on SD.
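The rescaling above can be sketched in a few lines; the channel names and old gains are the ones listed in this entry, and the rounding reproduces the new values:

```python
# Face-coil driver strength went up by 13x, so the factor of 13 is
# moved into the SDCOIL gain and divided out of the SUSSIDE loop gain,
# keeping the overall side damping loop (and its digital input scale)
# consistent with the face coils.
FACTOR = 13.0

old_susside_gains = {
    "C1:SUS-LO1_SUSSIDE_GAIN": 40.0,
    "C1:SUS-AS1_SUSSIDE_GAIN": 85.0,
    "C1:SUS-AS4_SUSSIDE_GAIN": 41.0,
    "C1:SUS-PR2_SUSSIDE_GAIN": 150.0,
    "C1:SUS-PR3_SUSSIDE_GAIN": 100.0,
    "C1:SUS-LO2_SUSSIDE_GAIN": 50.0,
    "C1:SUS-SR2_SUSSIDE_GAIN": 140.0,
}

new_susside_gains = {ch: round(g / FACTOR, 3)
                     for ch, g in old_susside_gains.items()}
```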

In reply to

Quote: http://nodus.ligo.caltech.edu:8080/40m/16802

[Koji, JC]

Coil Drivers LO2, SR2, AS4, and AS1 have been updated and reinstalled into the system.

LO2 Coil Driver 1 (UL/LL/UR) now has R=100 // 1.2k ~ 92Ohm for CH1/2/3 (Unit: S2100008)

LO2 Coil Driver 2 (LR/SD) now has R=100 // 1.2k ~ 92Ohm for CH3 (Unit: S2100530)

SR2 Coil Driver 1 (UL/LL/UR) now has R=100 // 1.2k ~ 92Ohm for CH1/2/3 (Unit: S2100614)

SR2 Coil Driver 2 (LR/SD) now has R=100 // 1.2k ~ 92Ohm for CH3 (Unit: S2100615)

AS1 Coil Driver 1 (UL/LL/UR) now has R=100 // 1.2k ~ 92Ohm for CH1/2/3 (Unit: S2100610)

AS1 Coil Driver 2 (LR/SD) now has R=100 // 1.2k ~ 92Ohm for CH3 (Unit: S2100611)

AS4 Coil Driver 1 (UL/LL/UR) now has R=100 // 1.2k ~ 92Ohm for CH1/2/3 (Unit: S2100612)

AS4 Coil Driver 2 (LR/SD) now has R=100 // 1.2k ~ 92Ohm for CH3 (Unit: S2100613)


Quote: http://nodus.ligo.caltech.edu:8080/40m/16791

[JC Koji]

To give more alignment ranges for the SUS alignment, we started updating the output resistors of the BHD SUS coil drivers.
As Paco has already started working on LO1 alignment, we urgently updated the output Rs for LO1 coil drivers.
LO1 Coil Driver 1 now has R=100 // 1.2k ~ 92Ohm for CH1/2/3, and LO1 Coil Driver 2 has the same mod only for CH3. JC has taken the photos and will upload/update an elog/DCC.

We are still working on the update for the SR2, LO2, AS1, and AS4 coil drivers. They are spread over the workbench right now. Please leave them as they're for a while.
JC is going to continue to work on them tomorrow, and then we'll bring them back to the rack.


Quote: https://nodus.ligo.caltech.edu:8081/40m/16760

[Yuta Koji]

We took out the two coil driver units for PR3, and the incorrect arrangement of the output Rs was corrected. The boxes were returned to the rack.

In order to recover the alignment of the PR3 mirror, C1:SUS_PR3_SUSPOS_INMON / C1:SUS_PR3_SUSPIT_INMON / C1:SUS_PR3_SUSYAW_INMON were monitored. The previous values were {31150 / -31000 / -12800}. By moving the alignment sliders, the PIT and YAW values were adjusted to {-31100 / -12700}, while this change brought the POS value to 52340.

The original gains for PR3 Pos/Pit/Yaw were {1, 0.52, 0.2}. They were supposed to be reduced to {0.077, 0.04, 0.015}.
I ended up setting the gains to {0.15, 0.1, 0.05}. The side gain was also increased to 50.


Overall, the output R configuration for PR2/PR3 are summarized as follows. I'll update the DCC.

PR2 Coil Driver 1 (UL/LL/UR) / S2100616 / PCB S2100520 / R_OUT = (1.2K // 100) for CH1/2/3

PR2 Coil Driver 2 (LR/SD) / S2100617 / PCB S2100519 / R_OUT = (1.2K // 100) for CH3

PR3 Coil Driver 1 (UL/LL/UR) / S2100619 / PCB S2100516 / R_OUT = (1.2K // 100) for CH1/2/3

PR3 Coil Driver 2 (LR/SD) / S2100618 / PCB S2100518 / R_OUT = (1.2K // 100) for CH3


  17133   Tue Sep 6 17:39:40 2022 PacoUpdateSUSLO1 LO2 AS1 AS4 damping loop step responses

I tuned the local damping gains for LO1, LO2, AS1, and AS4 by looking at step responses in the DOF basis (i.e. POS, PIT, YAW, and SIDE). The procedure was:

  1. Grab an ndscope with the error point signals in the DOF basis, e.g. C1:SUS-LO1_SUSPOS_IN1_DQ
  2. Apply an offset to the relevant DOF using the alignment slider offset (or coil offset for the SIDE DOF) while being careful not to trip the watchdog. The nominal offsets found for this tuning are summarized below:
Alignment/coil step sizes (POS / PIT / YAW / SIDE)
LO1: 800 / 300 / 300 / 10000
LO2: 800 / 300 / 400 / -10000
AS1: 800 / 500 / 500 / 20000
AS4: 800 / 400 / 400 / -10000
  3. Tune the damping gains until the DOF shows a residual ringdown with ~5 or more oscillations.
  4. The new damping gains are listed below for all optics and their DOFs, and Attachments #1-4 summarize the tuned step responses as well as the other (cross-coupled) DOFs.
Local damping gains (POS / PIT / YAW / SIDE)
LO1: 10.0 / 5.0 / 3.0 / 40.0
LO2: 10.0 / 3.0 / 3.0 / 50.0
AS1: 14.0 / 2.5 / 3.0 / 85.0
AS4: 15.0 / 3.1 / 3.0 / 41.0

Note that during this test, FM5 has been populated for all these optics with a BounceRoll filter (notches at 16.6, 23.7 Hz), in addition to the Cheby (HF rolloff) and 0.0:30 filters.
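The "~5 or more oscillations" criterion translates to a rough Q estimate, since the step-response amplitude of a lightly damped mode decays by exp(-pi*n/Q) over n cycles. A small sketch (not part of the original tuning procedure, just the standard ringdown relation):

```python
import numpy as np

def q_from_ringdown(a0, a_n, n_cycles):
    """Estimate Q from a ringdown: a_n = a0 * exp(-pi * n / Q)
    for a lightly damped oscillator observed over n cycles."""
    return np.pi * n_cycles / np.log(a0 / a_n)

# If the residual oscillation decays to 1/e of its starting amplitude
# after ~5 cycles, the damped Q is about 5*pi ~ 16.
q = q_from_ringdown(1.0, np.exp(-1.0), 5)
```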

  17132   Tue Sep 6 09:57:26 2022 JCSummaryGeneralLab cleaning

DB9 Cables have been assorted and placed behind the Y-Arm. Long BNC Cables and Ethernet Cables have been stored under the Y-Arm. 


We held the lab cleaning for the first time since the campus reopening (Attachment 1).
Now we can use some of the desks for the people to live! Thanks for the cooperation.

We relocated a lot of items into the lab.

  • The entrance area was cleaned up. We believe that there is no 40m lab stuff left.
    • BHD BS optics was moved to the south optics cabinet. (Attachment 2)
    • DSUB feedthrough flanges were moved to the vacuum area (Attachment 3)
  • Some instruments were moved into the lab.
    • The Zurich instrument box
    • KEPCO HV supplies
    • Matsusada HV supplies
  • We moved the large pile of SUPERMICRO boxes within the lab. They are now around MC2, while the PPE boxes that were there were moved behind the tube in the MC2 area. (Attachment 4)
  • We have moved PPE boxes behind the beam tube on XARM behind the SUPERMICRO computer boxes. (Attachment 7)
  • ISC/WFS left over components were moved to the pile of the BHD electronics.
    • Front panels (Attachment 5)
    • Components in the boxes (Attachment 6)

We still want to make some more cleaning:

- Electronics workbenches
- Stray setup (cart/wagon in the lab)
- Some leftover on the desks
- Instruments scattered all over the lab
- Ewaste removal


  17131   Fri Sep 2 15:40:25 2022 AnchalSummaryALSDFD cable measurements

[Anchal, Yehonathan]

I laid down another temporary cable from the X end to 1Y2 (LSC rack) so we can also measure the Q output of the DFD box. Then, to get a quick measurement of these long cable delays, we used a Moku:Go in oscillator mode, sent 100 ns pulses at a 100 kHz rate from one end, and measured the time difference between the outgoing and reflected pulses to estimate the delay. The far end of each long cable was shorted and then left open for the two sets of measurements.

I-Mon Cable delay: (955 +/- 6) ns / 2 = 477 +/- 3 ns

Q-Mon Cable delay: (535 +/- 6) ns / 2 = 267 +/- 3 ns

Note: We were underestimating the delay in I-Mon cable by about a factor of 2.

I also took the opportunity to measure the delay of the DFD delay line. Since both ends of that cable were present locally, it made more sense to simply take a transfer function to get a clean delay measurement. This measurement yielded a value of 197.7 +/- 0.1 ns. See the attached plot. Data and analysis here.
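For reference, the delay-from-transfer-function method boils down to fitting the slope of the unwrapped phase, since a pure delay has phase -2*pi*f*tau. A minimal sketch with synthetic data (the 197.7 ns value is only used here to generate the test phase, not the measured dataset):

```python
import numpy as np

def delay_from_phase(freqs_hz, phase_rad):
    """Fit unwrapped transfer-function phase vs frequency; for a pure
    delay, phase = -2*pi*f*tau, so the fitted slope gives tau."""
    slope = np.polyfit(freqs_hz, np.unwrap(phase_rad), 1)[0]
    return -slope / (2.0 * np.pi)

# Synthetic check against the measured delay-line value
tau_true = 197.7e-9
f = np.linspace(1e5, 1e7, 200)
phase = -2.0 * np.pi * f * tau_true
tau_est = delay_from_phase(f, phase)
```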

  17130   Fri Sep 2 15:35:19 2022 AnchalUpdateGeneralAlong the Y arm part 2

[Anchal, Radhika]

The cables in the opened USPS box are important cables that are part of the new electronics architecture. These are 3 ft D2100103 DB15F-to-DB9M reducer cables that go between the coil driver output (DB15M on the back) and the satellite amplifier coil driver input (DB9F on the front). They have been placed in a separate plastic box, labeled, and kept with the rest of the D-sub cable plastic boxes that are part of the upgrade wiring, behind the tube on the Y arm across from 1Y2. I believe JC will eventually store these D-sub cable boxes together somewhere.

  17129   Fri Sep 2 15:30:10 2022 AnchalUpdateGeneralAlong the X arm part 1

[Anchal, Radhika]

Attachment 2: The custom cables that were part of the intermediate setup between the old and new electronics architectures were found.
These include:

  • 2 DB37 cables with custom wiring at their connectors, used to connect between the vacuum flange and the new sat amp box, marked J4-J5 and J6-J7.
  • 2 DB15-to-dual-DB9 (Hydra-like) cables used to interface between the old coil drivers and the new sat amp box.

A copy of these cables is in use for MC1 right now. These are spare cables. We put them in a cardboard box and marked the box appropriately.
The box is under the vacuum tube along Yarm near the center.


  17128   Fri Sep 2 15:26:42 2022 YehonathanUpdateGeneralSOS and other stuff in the clean room

{Paco, Yehonathan}

BHD Optics box was put into the x-arm last clean cabinet. (attachment 5)

OSEMs were double bagged in a labeled box on the x-arm wire racks. (attachment 1)

SOS Parts (wire clamps, winches, suspension blocks, etc.) were put in a box on the x-arm wire rack. (attachment 3)

2"->3" optic adapter parts were put in a box and stored on the xarm wire rack. (attachment 3)

Magnet gluing parts box was labeled and stored on the xarm rack. (attachment 2)

TT SUS with the optics were stored on the flow bench at the x end. Note: one of the TT SUS was found unsuspended. (attachment 4)

InVac parts were double bagged and stored in a labeled box on the x arm wire rack. (attachment 2)

  17127   Fri Sep 2 13:30:25 2022 Ian MacMillanSummaryComputersQuantization Noise Calculation Summary

P. P. Vaidyanathan wrote a chapter in the book "Handbook of Digital Signal Processing: Engineering Applications" called "Low-Noise and Low-Sensitivity Digital Filters" (Chapter 5, pg. 359). I took a quick look at it and wanted to give some thoughts in case they are useful. The experts in the field are Leland B. Jackson, P. P. Vaidyanathan, Bernard Widrow, and István Kollár. Widrow and Kollár wrote the book "Quantization Noise: Roundoff Error in Digital Computation, Signal Processing, Control, and Communications" (a copy of which is at the 40m). It is convenient that P. P. Vaidyanathan is at Caltech.

Vaidyanathan's chapter serves as a good introduction to the topic of quantization noise. He starts off with the basic theory, similar to my own document on the topic. From there, there are two main topics relevant to our goals.

The first interesting thing is Error-Spectrum Shaping (pg. 387). I have never investigated this idea, but the general gist is that as poles and zeros move closer to the unit circle the SNR deteriorates, and this technique implements error feedback that should alleviate the problem. See Fig. 5.20 for a full realization of a second-order section with error feedback.

The second starts on page 402 and is an overview of state-space filters, with an example of a state-space realization (Fig. 5.26). I tested this exact realization a while ago and found that it was better than the direct form II filter but not as good as the current low-noise implementation that LIGO uses. This realization is very close to the current one, except that it uses one less addition block.

Overall I think it is a useful chapter. I like the idea of using some sort of error correction and I'm sure his other work will talk more about this stuff. It would be useful to look into.

One thought that I had recently is that if the quantization noise is uncorrelated between two different realizations, then connecting them in parallel and averaging their results (as shown in Attachment 1) may actually yield lower quantization noise. It would require double the computation power for filtering, but it may work. For example, using the current LIGO realization and the realization given in this book might yield a lower quantization noise. This would only work with two similarly low-noise realizations. Since we would be averaging two uncorrelated noise samples instead of taking one, the noise variance would be cut in half, and the ASD would show a 1/√2 reduction for realizations with the same level of quantization noise. This is only beneficial if the noisier realization has less than about √3 ≈ 1.7 times the quantization noise of the quieter one. I included a simple simulation to show this in the zip file in Attachment 2 for my own reference.
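The averaging argument can be checked numerically by modeling the two quantization noise streams as independent zero-mean noise; a toy sketch (a simplified stand-in for the simulation in Attachment 2, not that code itself):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000

def averaged_noise_rms(a, b):
    """RMS of the averaged output noise, with the two realizations
    contributing independent noise of RMS a and b."""
    na = a * rng.standard_normal(N)
    nb = b * rng.standard_normal(N)
    return float(np.std((na + nb) / 2.0))

# Equal noise: averaging gives the expected 1/sqrt(2) reduction.
equal = averaged_noise_rms(1.0, 1.0)

# Break-even point: sqrt(a**2 + b**2)/2 < a  <=>  b < sqrt(3)*a,
# i.e. the noisier filter can be up to ~1.7x worse before averaging
# stops helping.
breakeven = averaged_noise_rms(1.0, np.sqrt(3.0))
```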

Another thought that I had is that the transpose of this low-noise state-space filter (Fig. 5.26), or even of LIGO's current filter realization, would yield even lower quantization noise because both of their transposes require one less calculation.

  17126   Thu Sep 1 09:00:02 2022 JCConfigurationDaily ProgressLocked both arms and aligned Op Levs

Each morning now, I am going to try to align and lock both arms. Along with that, sometime toward the end of each week, we should align the OpLevs. This is a good habit that should be practiced more often, and not only by me. As for the Y arm, Yehonathan and I had to adjust the gain to 0.15 in order to stabilize the lock.

  17125   Wed Aug 31 16:11:37 2022 KojiUpdateGeneralVertex Lab area to be cleaned

The analog electronics units piled up along the wall were moved into the 1X3B rack, which was basically empty. (Attachments 1/2/4)

We had a couple of unused Sun machines. I salvaged the VMIC cards (RFM and fast fiber networking? for DAQ???) and gave them to Tega.
Attachment 3 shows the e-waste collected this afternoon.

  17123   Wed Aug 31 12:57:07 2022 ranaSummaryALScontrol of ALS beat freq from command line -easy

The PZT sweeps that we've been making to characterize the ALS-X laser should probably be discarded - the DFD was not setup correctly for this during the past few months.

Since the DFD only had a peak-peak range of ~5 MHz, whenever the beat frequency drifted out of the linear range (~2-3 MHz), the data would acquire an arbitrary gain. Since the drift was actually more like 50 MHz, different parts of a single sweep could have an arbitrary gain and sign !!! This is not a good way to measure things.

I used ezcaservo to keep the beat frequency fixed. The attached screenshot shows the command line. We read back the unwrapped beat frequency from the phase tracker and feed back on the PSL NPRO temperature. During this, the lasers were not locked to any cavities (shutters closed, but servos not disabled).

For the purposes of this measurement, I reduced the CAL factor in the phase tracker screen so that the reported FINE_PHASE_OUT is in kHz rather than Hz on this plot. So the green trace is moving by tens of MHz. When the servo is engaged, you can see the SLOWDC taking action. We think the calibration of that channel is ~1 GHz/V, so 0.1 V on SLOWDC should be ~100 MHz. I think there's a factor of 2 missing here, but it's close.

As you can see in the top plot, even with the frequency stabilized by this slow feedback (-1000 to -600 seconds), the I & Q outputs go through multiple cycles, so they are unusable for even a rough measurement.

The only way forward is to use less delay in the DFD. I think Anchal has been busily installing this shorter cable (hopefully it's ~3-5 m long so that the linear range is larger; I think a 10 m cable is too long), and the sweeps taken later today should be more useful.
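For a rough sense of how cable length sets the DFD linear range: the mixer output of a delay-line discriminator varies as cos(2*pi*f*tau), so adjacent extrema are separated by 1/(2*tau) in frequency. A hedged sketch (the 0.66 velocity factor is an assumed typical value for solid-dielectric coax, not a measured number):

```python
# Speed of light and an assumed coax velocity factor (~0.66 is typical
# for solid-PE cable; the actual cable may differ).
C = 299_792_458.0
VELOCITY_FACTOR = 0.66

def linear_range_hz(cable_length_m):
    """Monotonic (extremum-to-extremum) frequency range of a delay-line
    discriminator with the given cable length: 1 / (2 * tau)."""
    tau = cable_length_m / (VELOCITY_FACTOR * C)
    return 1.0 / (2.0 * tau)

# The ~200 ns delay measured with the long cable corresponds to a
# ~2.5 MHz monotonic range, consistent with the linear range quoted
# above; a ~4 m cable would give roughly 25 MHz.
range_4m = linear_range_hz(4.0)
```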

  17122   Wed Aug 31 11:39:48 2022 YehonathanUpdateLSCUpdated XARM noise budget

{Radhika, Paco, Yehonathan}

For educational purposes, we updated the XARM noise budget and added the calibrated POX11 dark noise contribution (attachment).

  17121   Wed Aug 31 01:54:45 2022 KojiUpdateGeneralAlong the Y arm part 2
  17120   Wed Aug 31 01:53:39 2022 KojiUpdateGeneralAlong the Y arm part 1



  17119   Wed Aug 31 01:30:53 2022 KojiUpdateGeneralAlong the X arm part 4

Behind the X arm tube

  17118   Wed Aug 31 01:25:37 2022 KojiUpdateGeneralAlong the X arm part 3



  17117   Wed Aug 31 01:24:48 2022 KojiUpdateGeneralAlong the X arm part 2



  17116   Wed Aug 31 01:22:01 2022 KojiUpdateGeneralAlong the X arm part 1


Attachment 5: RF delay line was accommodated in 1X3B. (KA)

  17115   Wed Aug 31 00:46:56 2022 KojiUpdateGeneralVertex Lab area to be cleaned

As marked up in the photos.


Attachment 5: The electronics units removed. Cleaning half way down. (KA)

Attachment 6: Moved most of the units to 1X3B rack ELOG 17125 (KA)

  17114   Wed Aug 31 00:32:00 2022 KojiUpdateGeneralSOS and other stuff in the clean room

Salvage these (and any other things). Wrap and double-pack nicely. Put the labels. Store them and record the location. Tell JC the location.

  17113   Tue Aug 30 15:21:27 2022 TegaUpdateComputers3 FEs from LHO got delivered today

[Tega, JC]

We received the remaining 3 front-ends from LHO today. Each has a timing card and an OSS host adapter card installed. We also received 3 Dolphin DX cards. As with the previous packages from LLO, each box contains a rack mounting kit for the Supermicro machine.

  17112   Mon Aug 29 18:25:12 2022 CiciUpdateGeneralTaking finer measurements of the actuator transfer function

Took finer measurements of the X-arm AUX laser actuator transfer function (10 kHz - 1 MHz, 1024 pts/decade) using the Moku.


I took finer measurements using the Moku by splitting the measurement into 4 sections (10 - 32 kHz (~10^4.5 Hz), 32 - 100 kHz, 100 - 320 kHz, 320 - 1000 kHz) and then stitching them together. I took 25 measurements of each (+ a bonus in case my counting was off), plotted them in the attached notebook, and calculated/plotted the standard deviation of the magnitude (normalized for DC offset). I could not upload the results to the ELOG as .pdf, but the PDFs are in the .zip file.


Next steps are to do the same stdev calculation for the phase, which shouldn't take long, and to use a vectfit of this better data to create a PZT inversion filter.

  17111   Mon Aug 29 15:15:46 2022 TegaUpdateComputers3 FEs from LLO got delivered today

[JC, Tega]

We got the 3 front-ends from LLO today. The contents of each box are:

  1. FE machine
  2. OSS adapter card for connecting to I/O chassis
  3. PCI riser cards (x2)
  4. Timing Card and cable
  5. Power cables, mounting brackets and accompanying screws
  17110   Mon Aug 29 13:33:09 2022 JCUpdateGeneralLab Cleanup

The machine shop looked a mess this morning, so I cleaned it up. All power tools are now placed in the drawers in the machine shop. Let me know if there are any questions about where anything is placed.

  17109   Sun Aug 28 23:14:22 2022 JamieUpdateComputersrack reshuffle proposal for CDS upgrade

@tega This looks great, thank you for putting this together.  The rack drawing in particular is great.  Two notes:

  1. In "1X6 - proposed" I would move the "PEM AA + ADC Adapter" down lower in the rack, maybe where "Old FB + JetStor" are, after removing those units since they're no longer needed.  That would keep all the timing stuff together at the top without any other random stuff in between them.  If we can't yet remove Old FB and the JetStor then I would move the VME GPS/Timing chassis up a couple units to make room for the PEM module between the VME chassis and FB1.
  2. We'll eventually want to move FB1 and Megatron into 1X7, since it seems like there will be room there.  That will put all the computers into one rack, which will be very nice.  FB1 should also be on the KVM switch as well.

I think most of this work can be done with very little downtime.
