40m Log, Page 140 of 346
ID | Date | Author | Type | Category | Subject
16266 | Thu Jul 29 14:51:39 2021 | Paco | Update | Optical Levers | Recenter OpLevs

[yehonathan, anchal, paco]

Yesterday around 9:30 pm, we centered the BS, ITMY, ETMY, ITMX, and ETMX oplevs (in that order) on their respective QPDs by turning the last mirror before each QPD. We did this after running the ASS dither in the XARM/YARM configurations to use as the alignment reference, in preparation for PRFPMI lock acquisition, which we had to stop due to an earthquake around midnight.

16267 | Mon Aug 2 16:18:23 2021 | Paco | Update | ASC | AS WFS MICH commissioning

[anchal, paco]

We picked up AS WFS commissioning as daytime work, as suggested by gautam. Ultimately we want to commission this for the PRFPMI, but also for PRMI and MICH for completeness. MICH is the simplest, so we are starting there.

We started by restoring the MICH configuration and aligning the AS DC QPD (on the AS table) by zeroing C1:ASC-AS_DC_YAW_OUT and C1:ASC-AS_DC_PIT_OUT. Since the AS WFS gets the AS beam in transmission through a beamsplitter, we had to correct that beamsplitter's alignment to recenter the AS beam onto the AS110 PD (for this we looked at the signal on a scope).

We then adjusted the rotation (R) angles C1:ASC-AS_RF55_SEGX_PHASE_R and delay (D) angles C1:ASC-AS_RF55_SEGX_PHASE_D (where X = 1, 2, 3, 4 labels the segment) to rotate all the signal into the I quadrature. We found that this also optimized the PIT content on C1:ASC-AS_RF55_I_PIT_OUT and the YAW content on C1:ASC-AS_RF55_I_YAW_OUTMON, which is what we want anyway.

Finally, we set up simple integrators for these WFS on the C1ASC-DHARD_PIT and C1ASC-DHARD_YAW filter banks, with a pole at 0 Hz, a zero at 0.8 Hz, and a gain of -60 dB (similar to the MC WFS). However, when we closed the loop by actuating on the BS ASC PIT and ASC YAW inputs, it seemed like the ASC model outputs are not connected to the BS SUS model ASC inputs, so we may need to edit the model accordingly and restart it.
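As a sanity check on the loop shape, here is a minimal sketch of the integrator described above (pole at 0 Hz, zero at 0.8 Hz, -60 dB gain), assuming a standard s-domain zpk convention with the gain defined well above the zero; the actual foton gain convention may differ.

```python
import numpy as np
from scipy import signal

f_zero = 0.8              # Hz, zero frequency
gain = 10 ** (-60 / 20)   # -60 dB -> 1e-3

# G(s) = gain * (s + 2*pi*f_zero) / s : pure integrator below the zero,
# flat at ~ -60 dB well above it
sys = signal.ZerosPolesGain([-2 * np.pi * f_zero], [0.0], gain)

f = np.logspace(-2, 2, 500)                 # 0.01 Hz to 100 Hz
w, mag, phase = signal.bode(sys, w=2 * np.pi * f)

# Below the zero the magnitude rises as 1/f; above it, it flattens out.
print(f"gain at 100 Hz: {mag[-1]:.1f} dB")
```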

16272 | Fri Aug 6 17:10:19 2021 | Paco | Update | IMC | MC rollercoaster

[anchal, yehonatan, paco]

For whatever reason (i.e., we don't really know), the MC unlocked into a weird state at ~10:40 AM today. We first tried to find a likely cause, as we saw it couldn't recover by itself after ~40 min, so we decided to try a few things. First, we verified that no suspensions were acting weird by looking at the OSEMs on MC1, MC2, and MC3. After validating that the sensors were behaving normally, we moved on to the WFS. The WFS loops had been disabled the moment the IMC unlocked, as they should be. We then proceeded to the last resort of tweaking the MC alignment a bit, first with MC2 and then MC1 and MC3, in that order, to see if we could help the MC catch its lock. This didn't help much initially, and we paused at about noon.

At about 5 pm we resumed, since the IMC had remained locked on some higher-order mode (TEM01 by the looks of it). While looking at C1:IOO-MC_TRANS_SUMFILT_OUT on ndscope, we kept slowly shifting the MC2 yaw alignment slider (steps of ±0.01 counts) to help the right mode "hop". Once the right mode caught on, the WFS loops triggered and the IMC was restored. The transmission during this last stage is shown in Attachment #1.

Attachment 1: MC2_trans_sum_2021-08-06_17-18-54.png
16275 | Wed Aug 11 11:35:36 2021 | Paco | Update | LSC | PRMI MICH orthogonality plan

[yehonathan, paco]

Yesterday we discussed a bit about working on the PRMI sensing matrix.

In particular, we will start with the "issue" of non-orthogonality in MICH when actuated by BS + PRM. Yesterday afternoon we played a little with the oscillators and ran sensing lines in MICH and PRCL (gains of 50 and 5, respectively) during the times spanning [1312671582 -> 1312672300] and [1312673242 -> 1312677350] for PRMI carrier, and [1312673832 -> 1312674104] for PRMI sideband. Today we realized that we could have enabled the notchSensMat filter (in FM10), a notch filter exactly at the oscillator's frequency, and run with a lower gain to get a similar SNR. We want to investigate this in more depth anyway, so here is our tentative plan of action, which implies redoing these measurements:

Task: investigate orthogonality (or lack thereof) in the MICH when actuated by BS & PRM
1) Run sensing MICH and PRCL oscillators with PRMI Carrier locked (remember to turn NotchSensMat filter on).
2) Analyze data and establish the reference sensing matrix.
3) Write a script that performs steps 1 and 2 in a robust and safe way.
4) Scan the C1:LSC-LOCKIN_OUTMTRX, MICH to BS and PRM elements around their nominal values.
5) Scan the MICH and PRCL RFPD rotation angles around their nominal values.
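The orthogonality in step 2 can be quantified from the demodulated line responses: each DOF gives a response vector in sensor space, and the angle between the MICH and PRCL vectors measures how separable they are. A minimal sketch, with entirely placeholder numbers (the real values would come from the demodulated RFPD I/Q data):

```python
import numpy as np

# Rows: sensors (e.g. two RFPD quadratures); columns: DOFs (MICH, PRCL).
# These entries are illustrative placeholders, not measured values.
S = np.array([[1.0, 0.15],
              [0.30, 5.0]])

# Angle between the two DOF response vectors in sensor space:
mich, prcl = S[:, 0], S[:, 1]
cosang = mich @ prcl / (np.linalg.norm(mich) * np.linalg.norm(prcl))
angle = np.degrees(np.arccos(cosang))
print(f"MICH/PRCL separation: {angle:.1f} deg (90 deg = orthogonal)")
```

A separation well below 90 deg would be the non-orthogonality symptom we are after.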

We also talked about the possibility that the sensing matrix is strongly frequency dependent, such that measuring it at 311 Hz doesn't give an accurate estimate of it. Is it worthwhile to try to measure it at lower frequencies using an appropriate notch filter?

Wed Aug 11 15:28:32 2021 Updated plan after group meeting

- The problem may be in the actuators, since the orthogonality seems fine when actuating on ITMX/ITMY; we should instead focus on measuring the actuator transfer functions, for example using the OpLevs (with the same high-frequency excitation, since the OSEMs won't work above ~10 Hz).

16277 | Thu Aug 12 11:04:27 2021 | Paco | Update | General | PSL shutter was closed this morning

Thu Aug 12 11:04:42 2021 Arrived to find the PSL shutter closed. Why? Who? When? How? No elog, no fun. I opened it; the IMC is now locked, and the arms were restored and aligned.

16280 | Mon Aug 16 23:30:34 2021 | Paco | Update | CDS | AS WFS commissioning; restarting models

[koji, ian, tega, paco]

With the remote/local assistance of Tega/Ian last Friday, I made changes to the c1sus model, connecting the C1:ASC model outputs (found within a block in c1ioo) to the BS and PRM suspension inputs (pitch and yaw). Koji then reviewed these changes today and pointed out that no changes were actually needed, since the blocks were already in place and connected to the right ports; the model probably just hadn't been rebuilt...

So, today we ran "rtcds make" and "rtcds install" on the c1ioo and c1sus models (in that order), but the whole system crashed. We spent a great deal of time restarting the machines and their processes, and struggled quite a lot with setting the right dates to match the GPS times. What seemed to work in the end was to follow the format of the date on the fb1 machine and try to match the timing at the sub-second level. This is especially tricky when done by hand, so the whole task is tedious. We completed the reboot for almost all the models except c1oaf (which tends to make things crashy), since we won't need it right away for the tasks ahead. One potentially annoying issue we found while manually rebooting c1iscey: one of its network ports is loose (the ethernet cable won't click into place), and it appears to use this link to boot (!!), so for a while this machine just wasn't coming back up.

Finally, as we restored the suspension controls and reopened the shutters, we noticed a great deal of misalignment, to the point that no reflected beam was coming back to the RFPD table. So we spent some time verifying the PRM alignment and TT1 and TT2 (tip-tilts); it turned out to be mostly the latter pair that was responsible. We used the green beams to help optimize the XARM and YARM transmissions and were able to relock the arms. We ran ASS on them, and then aligned the PRM OpLevs, which also seemed off. This was done by giving a pitch offset to the input PRM oplev beam path and then correcting for it downstream (before the QPD). We also adjusted the BS OpLev at the end.

Summary: the ASC BS and PRM outputs are now built into the SUS models. Let the AS WFS loops be closed soon!

- Upon restarting the RTS:

• Date/time adjustment:
sudo date --set='xxxxxx'
• If the time on the CDS status medm screen for each IOP matches the fb1 local time, we ran
rtcds start c1x01
(or c1x02, etc.)
• Every time we restarted the IOPs, fb was restarted with
telnet fb1 8083 > shutdown
and mx_stream was restarted from the CDS screen, because these actions change the "DC" status.

- Today we succeeded once in restarting the vertex machines. However, the RFM signal transmission failed. So the two end machines were power cycled, as well as c1rfm, but this made all the machines go RED again. Hell...

- We checked the PRM oplev. The spot was around the center but was clipped, which confused us. Our conclusion was that the oplev was already like that before the RTS reboot.

16287 | Mon Aug 23 10:17:21 2021 | Paco | Summary | Computers | system reboot glitch

[paco]

At 09:34 PST I noted a glitch in the control room as the machines went down, except for c1ioo. Briefly, the video feeds disappeared from the screens, though the screens themselves didn't lose power. At first I thought this was some kind of power glitch, but upon checking with Jordan, it was most likely related to some system crash. Coming back to the control room, I could see the MC reflection beam swinging, but unfortunately all the FE models had come down. I noticed that the DAQ status channels were blank.

I ssh'd into c1ioo without a problem and ran "rtcds stop c1ioo c1als c1omc", then "rtcds restart c1x03" to do a soft restart. This worked, but the DAQ status was still blank. I then tried to ssh into c1sus and c1lsc without success; similarly, c1iscex and c1iscey were unreachable. I did a hard restart on c1iscex by switching it off, then its expansion chassis, then unplugging the power cords, then inverting these steps, and could then ssh into it from rossa. I ran "rtcds start c1x01" and saw the same blank DAQ status. I noticed the elog was also down... so nodus was also affected?

[paco, anchal]

Anchal got on zoom to offer some assistance. We discovered that fb1 and nodus were subject to some kind of system reboot at precisely 09:34. The "systemctl --failed" command on fb1 showed both the daqd_dc.service and rc-local.service as loaded but failed (inactive). Is it a good idea to try and reboot the fb1 machine? ... Anchal was able to bring the elog back up from nodus (ergo, this post).

[paco]

Although it probably needs the DAQ service on the fb1 machine to be up and running, I tried running the scripts/cds/rebootC1LSC.sh script. This didn't work. I tried running sudo systemctl restart daqd_dc on the fb1 machine without success. Running systemctl reset-failed "worked" for the daqd_dc and rc-local services on fb1, in the sense that they were no longer listed by systemctl --failed, but they remained inactive (dead) when running systemctl status on them. Following 15303, I succeeded in restarting the daqd services; it turned out I needed to manually start the open-mx and mx services on fb1. I re-ran the restartC1LSC script without success. The script fails because some machines need to be rebooted by hand.

16293 | Tue Aug 24 18:11:27 2021 | Paco | Update | General | Time synchronization not really working

tl;dr: The NTP servers and clients were never synchronized, and are not synchronizing even with ntp... nodus is synchronized but uses chronyd; should we use chronyd everywhere?

Spent some time investigating the ntp synchronization. In the morning, after Anchal set up all the ntp servers / FE clients, I tried restarting the RTS IOPs with no success. Later, with Tega, we tried the usual manual matching of the date between the c1iscex and fb1 machines, but we iterated over different n-second offsets from -10 to +10, also without success.

This afternoon, I tried debugging the FE and fb1 timing differences. For this, I inspected the ntp configuration file at /etc/ntp.conf both on fb1 and at /diskless/root.jessie/etc/ntp.conf (for the FE machines), and tried different combinations with and without nodus, with and without restrict lines, all while looking at the output of sudo journalctl -f on c1iscey. Every time I changed the ntp config file, I restarted the service using sudo systemctl restart ntp.service.

Looking through some online forums, people suggested basic pinging to see if the ntp servers were up (and broadcasting their times over the local network), but this failed to run (read-only filesystem). So I went into fb1 and ran sudo chroot /diskless/root.jessie/ /bin/bash to allow me to change file permissions. The test was first done with /bin/ping, which couldn't even open a socket (root access needed); after running chmod 4755 /bin/ping, I ssh'd into c1iscey and pinged the fb1 machine successfully. After this, I ran chmod 4755 /usr/sbin/ntpd so that the ntp daemon would have no problem reaching the server, in case this was blocking the synchronization. I exited the chroot shell and restarted the ntp daemon on c1iscey, but ntpstat still showed "unsynchronised" status.

I also learned that when running an ntp query with ntpq -p, if a client has succeeded in synchronizing its time to a server, an asterisk should be prepended to that server's line. This was not the case on any FE machine... and looking at fb1, this was also not true. Although the fb1 peers are correctly listed as nodus, the Caltech ntp server, and a broadcast (.BCST.) local-time server (meant to serve the FE machines), none appears to have synchronized... Going one level further, on nodus I checked the time synchronization servers by running chronyc sources; the output shows

controls@nodus|~> chronyc sources
210 Number of sources = 4
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^* testntp1.superonline.net      1  10   377   280  +1511us[+1403us] +/-   92ms
^+ 38.229.59.9                   2  10   377   206  +8219us[+8219us] +/-  117ms
^+ tms04.deltatelesystems.ru     2  10   377   23m    -17ms[  -17ms] +/-  183ms
^+ ntp.gnc.am                    3  10   377   914  -8294us[-8401us] +/-  168ms

I then ran chronyc clients to find out whether fb1 was listed (as I would have expected), but the output shows this --

Hostname                   Client    Peer CmdAuth CmdNorm  CmdBad  LstN  LstC
=========================  ======  ======  ======  ======  ======  ====  ====
501 Not authorised

So clearly chronyd succeeded in synchronizing nodus' time to whatever server it was pointed at, but downstream from there, neither fb1 nor any FE machine seems to be synchronizing properly. It may be as simple as figuring out the correct ntp configuration file, or switching to chronyd on all machines (for the sake of homogeneity?).

16298 | Wed Aug 25 17:31:30 2021 | Paco | Update | CDS | FB is writing the frames with a year old date

[paco, tega, koji]

After invaluable assistance from Jamie in fixing the one-year offset in the GPS time reported by cat /proc/gps, we managed to restart the real-time system correctly (while still manually synchronizing the front-end machine times). After this, we recovered the mode cleaner and were able to lock the arms without much fuss.

Nevertheless, Tega and I noticed some weird noise in C1:LSC-TRX_OUT which was not present in the YARM transmission, and which is present even in the absence of light (we unlocked the arms and still saw it on the ndscope, as shown in Attachment #1). It seems to affect the XARM and, in general, lock acquisition...

We took a quick spectrum with diaggui (Attachment #2), and it doesn't look normal; there seems to be broadband excess noise with a remarkable 1 kHz component. We will probably look into it in more detail.

Attachment 1: TRX_noise_2021-08-25_17-40-55.png
Attachment 2: TRX_TRY_power_spectra.pdf
16300 | Thu Aug 26 10:10:44 2021 | Paco | Update | CDS | FB is writing the frames with a year old date

[paco, ]

We went over to the X end to check what was going on with the TRX signal. We spotted that the ground terminal coming from the QPD was loosely touching the handle of one of the computers on the rack. When we detached it completely from the rack, the noise was gone (Attachment 1).

We taped this terminal so it doesn't touch anything accidentally. We don't know if this is the best solution, since it probably needs a stable voltage reference. In the Y end, those ground terminals are connected to the same point on the rack. The other ground terminals in the X end are just cut.

We also took the PSD of these channels (Attachment 2). The noise seems to be gone, but TRX is still a bit noisier than TRY. Maybe we should set up a proper ground for the X arm QPD?

We saw that the X end station ALS laser was off. We turned it on, and also the crystal oven, and re-enabled the temperature controller. Green light immediately appeared. We then worked to restore the ALS lock. After running XARM ASS we were unable to lock the green laser, so we went to the X end and moved the piezo X ALS alignment mirrors until we maximized the transmission in the right mode. We then locked the ALS beams on both arms successfully. It could very well be that the PZT offsets were reset by the power glitch. The XARM ALS still needs some tweaking; its level is ~25% of what it was before the power glitch.

Attachment 1: Screenshot_from_2021-08-26_10-09-50.png
Attachment 2: TRXTRY_Spectra.pdf
16303 | Mon Aug 30 17:49:43 2021 | Paco | Summary | LSC | XARM POX OLTF

Used diaggui to get the OLTF in preparation for optimal system identification / calibration. The excitation was injected at the control point of the XARM loop, C1:LSC-XARM_EXC. Attachment 1 shows the TF (red scatter) taken from 35 Hz to 2.3 kHz with 201 points. The swept-sine excitation had an envelope amplitude of 50 counts at 35 Hz, 0.2 counts at 100 Hz, and 0.2 counts at 200 Hz. The continuous purple line overlays the model for the OLTF, using all the digital control filters as well as a simple one-degree-of-freedom plant (a single pole at 0.99 Hz). Note the disagreement of the OLTF "model" at higher frequencies, which we may be able to improve upon using vector fitting.

Attachment 2 shows the coherence (part of this initial measurement was to identify an appropriately large frequency range where the coherence is good before we script it).
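A stripped-down sketch of the model side of this comparison, with the digital filters lumped into a flat placeholder gain (an oversimplification of the real filter banks; the loop gain value below is illustrative, not the measured one):

```python
import numpy as np

def cavity_pole(f, f_pole=0.99):
    """One-pole low-pass response of the simple 1-DOF plant (pole at 0.99 Hz)."""
    return 1.0 / (1.0 + 1j * f / f_pole)

f = np.logspace(np.log10(35), np.log10(2300), 201)   # same band as the sweep
loop_gain = 200.0                                    # placeholder flat gain
oltf = loop_gain * cavity_pole(f)

# Well above the pole the magnitude falls as 1/f, so the unity-gain
# frequency is roughly loop_gain * f_pole:
ugf = f[np.argmin(np.abs(np.abs(oltf) - 1.0))]
print(f"approx UGF: {ugf:.0f} Hz")
```

The real model would replace `loop_gain` with the product of the digital filter responses, actuator, and sensing gains; the residual at high frequency is what vector fitting would absorb.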

Attachment 1: XARM_POX_OLTF.pdf
Attachment 2: XARM_POX_Coh.pdf
16307 | Thu Sep 2 17:53:15 2021 | Paco | Summary | Computers | chiara down, vac interlock tripped

[paco, koji, tega, ian]

Today in the morning, the name server / network file system running on chiara failed. This caused the donatella/pianosa/rossa shell prompts to hang forever. It also made sitemap crash; even dropping into a bash shell and just listing files from some directory in the file system froze the computer. Remote ssh sessions on nodus showed the same symptoms.

A little after 1 pm, we started debugging this issue with help from Koji. He suggested we hook a monitor, keyboard, and mouse up to chiara, as it should still work locally even if something with the NFS (network file system) failed. We did this, and then tried for a while to unmount /dev/sdc1 from /home/cds (main file system) and mount /dev/sdb1 from /media/40mBackup (backup copy) so that they swap places. We had no trouble unmounting the backup drive, but only succeeded in unmounting the main drive with a "lazy" unmount, i.e. running "umount -l". Running "df" we could see that the disk space was 100% used, with only ~1 GB free, which may have been the cause of the issue. After swapping the disks by editing /etc/fstab, we rebooted chiara and recovered the shell prompts on all workstations, sitemap, etc., now served from the backup drive. We then started investigating what caused the main drive to fill up so quickly, and noted that, weirdly, its capacity was now at 85%, or about 500 GB less than before (after reboot and remount), so some large file was probably dumped onto chiara, freezing the NFS and causing the issue.

At this point we tried opening the PSL shutter to recover the IMC. The shutter would not open, and we suspected the vacuum interlock was still tripped... and indeed there was an uncleared error on the VAC screen. So, with Koji's guidance, we walked to c1vac near the HV station and did the following at ~5:13 PM -->

1. Open V4; apart from a brief pressure spike in PTP2, everything looked ok so we proceeded to
2. Open V1; P2 spiked briefly and then started to drop. Then, Koji suggested that we could
3. Close V4; but we saw P2 increasing by a factor of ~10 in a few seconds, so we
4. Reopened V4;

We made sure that P1a (the main vacuum pressure) was dropping, and before continuing we decided to look back at what the nominal vacuum state was, so we could try to restore it.

We are currently searching the two systems for differences to see if we can narrow down the culprit of the failure.

16313 | Thu Sep 2 21:49:03 2021 | Paco | Summary | Computers | chiara down, vac interlock tripped

[tega, paco]

We found the files that took up excess space in the chiara filesystem (see Attachment 1). They were error files from the summary pages, ~50 GB in size, located under /home/cds/caltech/users/public_html/detcharsummary/logs/. We manually removed them, then copied the rest of the summary page contents into the main file system drive (this is to preserve the backup before it gets deleted by the cron job at the end of today), and checked carefully to identify why these files were so large in the first place.

We then copied the /detcharsummary directory from /media/40mBackup into /home/cds to match the two disks.

Attachment 1: 2021-09-02_21-51-15.png
16320 | Mon Sep 13 09:15:15 2021 | Paco | Update | LSC | MC unlocked?

Came in at ~9 PT this morning to find the IFO "down". The IMC had lost its lock ~6 hours before, at about 03:00 AM. Nothing seemed like an obvious cause: there was no record of increased seismic activity, all suspensions were damped and no watchdog had tripped, and the pressure trends, similar to those in recent pressure incidents, show nominal behavior (Attachment #1). What happened?

Anyway, I simply tried reopening the PSL shutter, and the IMC caught its lock almost immediately. I then locked the arms and everything seems fine for now.

Attachment 1: VAC_2021-09-13_09-32-45.png
16329 | Tue Sep 14 17:19:38 2021 | Paco | Summary | PEM | Excess seismic noise in 0.1 - 0.3 Hz band

For the past couple of days, the 0.1 to 0.3 Hz RMS seismic noise along BS-X has increased. Attachment 1 shows the hour trend over the last ~10 days. We'll keep monitoring it, but one thing to note is how uncorrelated it seems to be from the other frequency bands. The vertical axis in the plot is in um/s.
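For reference, the trended quantity is a band-limited RMS (BLRMS) of the seismometer channel. A minimal sketch of the computation on synthetic data (the sample rate, channel, and amplitudes are placeholders; real data would come from frames/NDS):

```python
import numpy as np
from scipy import signal

np.random.seed(1)
fs = 16.0                        # Hz, slow seismic channel (assumed)
t = np.arange(0, 3600, 1 / fs)   # one hour of data

# Synthetic ground velocity in um/s: white noise plus a
# microseism-like component at 0.2 Hz
x = 0.1 * np.random.randn(t.size) + 0.5 * np.sin(2 * np.pi * 0.2 * t)

# Band-pass 0.1-0.3 Hz, then take the RMS of the filtered series
sos = signal.butter(4, [0.1, 0.3], btype="bandpass", fs=fs, output="sos")
xb = signal.sosfiltfilt(sos, x)
blrms = np.sqrt(np.mean(xb ** 2))
print(f"0.1-0.3 Hz BLRMS: {blrms:.3f} um/s")
```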

Attachment 1: SEIS_2021-09-14_17-33-12.png
16343 | Mon Sep 20 12:20:31 2021 | Paco | Summary | SUS | PRM and BS Angular Actuation transfer function magnitude measurements

[yehonathan, paco, anchal]

We attempted to find any symptoms of actuation problems in the PRMI configuration when actuated through BS and PRM.

Our logic was to check the angular (PIT and YAW) actuation transfer functions in the 30 to 200 Hz range by injecting appropriately (f^2-)enveloped excitations at the SUS-ASC EXC points and reading back using the SUS_OL (oplev) channels.

From the controls, we first restored the PRMI carrier to bring the PRM and BS to their nominal alignment, then disabled the LSC output (we don't need the PRMI to be locked), and then turned off the damping from the oplev control loops to avoid suppressing the excitations.

We used diaggui to measure the four transfer function magnitudes PRM_PIT, PRM_YAW, BS_PIT, and BS_YAW, shown below in Attachments #1 through #4. We used the oplev calibrations to plot the magnitude of the TFs in units of urad/count, and verified the nominal 1/f^2 scaling for all of them. The coherence was made as close to 1 as possible by adjusting the amplitude to 1000 counts, and is also shown below. A dip at 120 Hz is probably due to line noise. We are also assuming that the oplev QPDs have a relatively flat response over this frequency range.
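The 1/f^2 check amounts to fitting a power law to the measured magnitudes and confirming the exponent is close to -2. A sketch on synthetic data (the calibration constant and scatter level are placeholders, not our measured values):

```python
import numpy as np

np.random.seed(2)
f = np.logspace(np.log10(30), np.log10(200), 40)   # measurement band, Hz

# Fake free-mass-like actuation TF magnitude with 2% multiplicative scatter
k = 5e3                                            # urad/count * Hz^2, illustrative
mag = k / f**2 * (1 + 0.02 * np.random.randn(f.size))

# Power-law fit in log-log space: log|H| = log(k) + n * log(f)
n, logk = np.polyfit(np.log(f), np.log(mag), 1)
print(f"fitted exponent: {n:.2f} (expect ~ -2 for a free mass above resonance)")
```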

Attachment 1: PRM_PIT_ACT_TF.pdf
Attachment 2: PRM_YAW_ACT_TF.pdf
Attachment 3: BS_PIT_ACT_TF.pdf
Attachment 4: BS_YAW_ACT_TF.pdf
16352 | Tue Sep 21 11:13:01 2021 | Paco | Summary | Calibration | XARM calibration noise

Here are some plots from analyzing the C1:LSC-XARM calibration. The experiment is done with the XARM (POX) locked; a single line is injected at C1:LSC-XARM_EXC at f0, with an amplitude determined empirically using the diaggui and awggui tools. For the analysis detailed in this post, f0 = 19 Hz, amp = 1 count, and gain = 300 (anything larger in amplitude would break the lock, and anything lower in frequency would not show up because of loop suppression). Clearly, from Attachment #3 below, the calibration line can be detected with SNR > 1.

We read the test point right after the excitation, C1:LSC-XARM_IN2, which, in a simplified loop, carries the excitation suppressed by 1 - OLTF, where OLTF is the open-loop transfer function. The line is on for 5 minutes, and then we read for another 5 minutes with the excitation off to have a reference. Both the calibration and reference signal time series are shown in Attachment #1 (decimated by 8). The corresponding ASDs are shown in Attachment #2. Then, we demodulate at 19 Hz with a 30 Hz, 4th-order Butterworth LPF and get I and Q timeseries (shown in Attachment #3). Even though they look similar, Q is centered about 0.2 counts, while I is centered about 0.0. From these time series we can of course show the noise ASDs, also in Attachment #3.
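The decimate-demodulate-lowpass pipeline above can be sketched as follows, on synthetic data (line amplitude, noise level, and phase convention are placeholders; which quadrature carries the line depends on the local-oscillator phase):

```python
import numpy as np
from scipy import signal

np.random.seed(3)
fs = 16384.0
f0 = 19.0                                 # calibration line frequency, Hz
t = np.arange(0, 300, 1 / fs)             # 5 minutes of data
x = 0.2 * np.cos(2 * np.pi * f0 * t) + 0.05 * np.random.randn(t.size)

x = signal.decimate(x, 8)                 # decimate by 8 -> 2048 Hz
fs_d = fs / 8
td = np.arange(x.size) / fs_d

# Digital lock-in: mix down, then 30 Hz 4th-order Butterworth LPF
i_t = 2 * x * np.cos(2 * np.pi * f0 * td)
q_t = 2 * x * np.sin(2 * np.pi * f0 * td)
sos = signal.butter(4, 30.0, fs=fs_d, output="sos")
i_lp = signal.sosfiltfilt(sos, i_t)
q_lp = signal.sosfiltfilt(sos, q_t)

# With this phase convention the line amplitude shows up in I, not Q
print(f"I mean: {i_lp.mean():.3f}, Q mean: {q_lp.mean():.3f}")
```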

The ASD uncertainty bands in the last plot are statistical estimates and depend on the number of segments used in estimating the PSD. A thing to note is that the noise features surrounding the signal ASD around f0 are translated into the ASD of the demodulated signals, but now around DC. From Attachment #3, I see no difference in the noise spectra around the calibration line with and without the excitation. This is what I would have expected from a linear system; if there were a systematic contribution, I would expect it to show up at very low frequencies.
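A minimal sketch of how those statistical bands scale, using Welch averaging on white noise and the rough fractional 1-sigma error 1/sqrt(N_avg) per frequency bin (segment length and durations are placeholders):

```python
import numpy as np
from scipy import signal

np.random.seed(4)
fs = 2048.0
x = np.random.randn(int(600 * fs))        # 10 min of unit-variance white noise

nperseg = 4096                            # 2 s segments
f, pxx = signal.welch(x, fs=fs, nperseg=nperseg)
n_avg = x.size // (nperseg // 2) - 1      # ~ number of 50%-overlap averages
asd = np.sqrt(pxx)

rel_err = 1 / np.sqrt(n_avg)              # rough fractional band on each bin
band_lo, band_hi = asd * (1 - rel_err), asd * (1 + rel_err)

# Unit-variance white noise has a one-sided ASD of sqrt(2/fs):
print(f"median ASD: {np.median(asd):.4f}, expected {np.sqrt(2 / fs):.4f}")
```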

Attachment 1: XARM_signal_asd.pdf
Attachment 2: XARM_demod_timeseries.pdf
Attachment 3: XARM_demod_asds.pdf
Attachment 4: XARM_cal_0921_timeseries.pdf
16358 | Thu Sep 23 15:29:11 2021 | Paco | Summary | SUS | PRM and BS Angular Actuation transfer function magnitude measurements

[Anchal, Paco]

We had a second go at this with an increased number of averages (from 10 to 100) and higher excitation amplitudes (from 1000 to 10000 counts). We did this to try to reduce the relative uncertainty, à la Bendat and Piersol,

$\delta G / G = \frac{1}{\gamma \sqrt{n_{\rm avg}}}$

where $\gamma, n_{\rm avg}$ are the coherence and the number of averages, respectively. Before, this estimate had given us ~30% relative uncertainty; now it has improved to ~10%. The re-measured TFs are in Attachment #1. We did 4 sweeps for each optic (BS, PRM) and removed the 1/f^2 slope for clarity. We note a factor of ~4 difference in the magnitude of the coil-to-angle TFs between BS and PRM (the actuation strength of the BS is smaller).

For future reference:

With complex G, we get a complex error in G using the formula above. To get the uncertainty in magnitude and phase from the real and imaginary uncertainties, we do the following (assuming the noise in the real and imaginary parts of the measured transfer function is incoherent):
$G = \alpha + i\beta$

$\delta G = \delta\alpha + i\delta \beta$

$\delta |G| = \frac{1}{|G|}\sqrt{\alpha^2 \delta\alpha^2 + \beta^2 \delta \beta^2}$

$\delta(\angle G) = \frac{1}{|G|^2}\sqrt{\alpha^2 \delta\alpha^2 + \beta^2 \delta\beta^2} = \frac{\delta |G|}{|G|}$
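A Monte-Carlo cross-check of the $\delta|G|$ formula above, under the same assumption of independent Gaussian errors on the real and imaginary parts (the values of G and its errors are arbitrary test numbers):

```python
import numpy as np

np.random.seed(5)
alpha, beta = 3.0, 4.0            # G = 3 + 4j, so |G| = 5
d_alpha, d_beta = 0.10, 0.05      # real / imaginary 1-sigma errors

G_mag = np.hypot(alpha, beta)
# Analytic: delta|G| = sqrt(alpha^2 d_alpha^2 + beta^2 d_beta^2) / |G|
d_mag = np.sqrt(alpha**2 * d_alpha**2 + beta**2 * d_beta**2) / G_mag

# Monte Carlo: scatter the real and imaginary parts independently
samples = (alpha + d_alpha * np.random.randn(200_000)) \
          + 1j * (beta + d_beta * np.random.randn(200_000))
d_mag_mc = np.abs(samples).std()
print(f"analytic: {d_mag:.4f}, Monte Carlo: {d_mag_mc:.4f}")
```

The two agree to first order whenever the errors are small compared to |G|.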

Attachment 1: BS_PRM_ANG_ACT_TF.pdf
16363 | Tue Sep 28 16:31:52 2021 | Paco | Summary | Calibration | XARM OLTF (calibration) at 55.511 Hz

[anchal, paco]

Here is a demonstration of the methods leading to the single (X)arm calibration with its uncertainty budget. The steps towards this measurement are the following:

1. We put a single line excitation through C1:SUS-ETMX_LSC_EXC at 55.511 Hz, amp = 1 count, gain = 300 (ramptime = 10 s).
2. With the arm locked, we grab a long timeseries of the C1:LSC-XARM_IN1_DQ (error point) and C1:SUS-ETMX_LSC_OUT_DQ (control point) channels.
3. We assume the single-arm loop to have the four blocks shown in Attachment #1: A (actuator + sus), plant (mainly the cavity pole), D (detection + electronics), and K (digital control).
1. At this point, Anchal made a model of the single-arm loop including the appropriate filter coefficients and other parameters. See Attachments #2-3 for the split and total model TFs.
2. Our line actually probes a TF from point b (error point) to point d (control point). We multiplied our measurement by the model open-loop TF from b to d to get the complete OLTF.
3. Our initial estimate from documents and elogs got the overall loop shape correct, but it was off by an overall gain factor. This could be due to a wrong assumption about the RFPD transimpedance or the analog gains of the AA or whitening filters. We have absorbed this factor into the RFPD transimpedance, but this needs to be checked (if we really care).
4. We demodulate the decimated timeseries (final sampling rate ~2.048 kHz) into I & Q for both the b and d signals. From this and our model for K, we estimate the OLTF. Attachment #4 shows the timeseries of magnitude and phase.
5. Finally, we compute the ASD of the OLTF magnitude. We plot it in Attachment #5, together with the ASD of the XARM transmission (C1:LSC-TRX_OUT_DQ) times the OLTF, to estimate the optical gain noise ASD (this last step was a quick attempt at budgeting the calibration noise).
1. For each ASD we used N = 24 averages, from which we estimate rms (statistical) uncertainties, depicted by error bands ($\pm \sigma$) around the lines.

** Note: We ran the same procedure using dtt (diaggui) to validate our estimates at every point, as well as to check our SNR at b and d, before taking the ~3.5 hours of data.
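The core of step 4 reduces to complex arithmetic: the ratio of the demodulated d and b amplitudes gives the measured b-to-d response, which the model factor then maps to the full OLTF. A sketch with placeholder values (both the demodulated amplitudes and the model factor are illustrative, not our measured numbers):

```python
import numpy as np

# Complex demodulated amplitudes at the line frequency (placeholders)
b = 0.8 * np.exp(1j * 0.1)      # error point,   C1:LSC-XARM_IN1_DQ
d = 2.4 * np.exp(1j * -1.2)     # control point, C1:SUS-ETMX_LSC_OUT_DQ

H_bd = d / b                    # measured b -> d response at f_line

# Model factor mapping the b -> d measurement to the complete OLTF
# (magnitude/phase here are made-up placeholders)
K_model = 10 ** (35 / 20) * np.exp(1j * np.deg2rad(-150))
G_open = H_bd * K_model         # complete OLTF at the line frequency

print(f"|G| = {abs(G_open):.1f}, phase = {np.degrees(np.angle(G_open)):.0f} deg")
```

Repeating this per decimated sample is what produces the magnitude/phase timeseries of Attachment #4.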

Attachment 1: OLTF_Calibration_Scheme.jpg
Attachment 2: XARM_POX_Lock_Model_TF.pdf
Attachment 3: XARM_OLTF_Total_Model.pdf
Attachment 4: XARM_OLTF_55p511_Hz_timeseries.pdf
Attachment 5: Gmag_55p511_Hz_ASD.pdf
16369 | Thu Sep 30 18:04:31 2021 | Paco | Summary | Calibration | XARM OLTF (calibration) with three lines

[anchal, paco]

We repeated the same procedure as before, but with three different lines at 55.511, 154.11, and 1071.11 Hz. We overlay the OLTF magnitudes and phases with our latest model (which we have updated with Koji's help) and include the rms uncertainties as error bars in Attachment #1.

We also plot the noise ASDs of the calibrated OLTF magnitudes at the line frequencies in Attachment #2. These curves are created by calculating the power spectral density of the timeseries of OLTF values at the line frequencies, generated from the demodulated XARM_IN and ETMX_LSC_OUT signals. We have overlaid the TRX noise spectrum here as an attempt to see whether the noise measured in the values of G can be budgeted to optical gain fluctuations due to changing power in the arms. We multiplied the transmission ASD by the value of the OLTF at those frequencies, as the transfer function from normalized optical gain to the total transfer function value.

It is weird that the fluctuation in transmitted power at 1 mHz always crosses the total noise in the OLTF value for all calibration lines. This could be an artifact of our data analysis, though.

Even if the contribution of the fluctuating power is correct, there is remaining excess noise in the OLTF to be budgeted.

Attachment 1: XARM_OLTF_Model_and_Meas.pdf
Attachment 2: Gmag_ASD_nb_withTRX.pdf
16377 | Mon Oct 4 18:35:12 2021 | Paco | Update | Electronics | Satellite amp box adapters

[Paco]

I have finished assembling the 1U adapters from 8 to 5 DB9 connectors for the satellite amp boxes. One thing I had to "hack" was the corners of the front-panel end of the PCB. Because the PCB was a bit too wide, it wasn't really flush against the front panel (see Attachment #1), so I filed the corners back by ~3 mm and covered them with kapton tape to prevent contact between the ground planes and the chassis. After this, I made DB9 cables, connected everything in place, and attached it to the rear panel (Attachment #2). Four units are resting near the CAD machine (next to the bench area); see Attachment #3.

Attachment 1: pcb_no_flush.jpg
Attachment 2: 1U_assembly.jpg
Attachment 3: fourunits.jpg
16383   Tue Oct 5 20:04:22 2021 PacoSummarySUSPRM and BS Angular Actuation transfer function magnitude measurements

[Paco, Rana]

We had a look at the BS actuation. Along the way we created a couple of issues that we fixed. A summary is below.

1. First, we locked MICH. While doing this, we used the /users/Templates/ndscope/LSC/MICH.yml ndscope template to monitor some channels. I edited the yaml file to look at C1:LSC-ASDC_OUT_DQ instead of REFL_DC. Rana pointed out that C1:LSC-MICH_OUT_DQ (the MICH control point) had a big range (~ 5000 counts rms), which should not be the case.
2. We tried to investigate this by looking at the whitening / unwhitening filters, but all the slow EPICS channels were "white" (blank) on the MEDM screen. Looking under CDS/slow channel monitors, we realized that both c1iscaux and c1auxey were misbehaving, so we tried to telnet to c1iscaux without success. We therefore followed the recommended wiki procedure for hard-rebooting this machine. While inside the lab looking for it, we touched things around the 'rfpd' rack, and once we were back in the control room, we couldn't see any light on the AS port camera. The whitening filter MEDM screens were back up, though.
3. While Rana ssh'd into c1auxey to investigate its status and burt-restored the c1iscaux channels, we looked at trends to figure out whether anything had changed (for example TT1 or TT2), but this wasn't the case. We decided to go back inside to check the actual REFL beams and noticed the beam was grossly misaligned (clipping)... so we blamed it on the TTs and, again, went around and moved some stuff around the 'rfpd' rack. We didn't really connect or disconnect anything, but once we were back in the control room, light was coming from the AS port again. This is a weird mystery; we should try to reproduce it systematically and fix the actual issue.
4. We restored MICH and returned to the BS actuation problem. Here, we devised a scheme to inject noise at 310.97 Hz and 313.74 Hz. The choice is twofold: first, both lines lie above the MICH loop UGF (~150 Hz); second, they match the sensing-matrix OSC frequencies, which makes the comparison more direct.
5. We injected the two lines using the BS SUS LOCKIN1 and LOCKIN2 oscillators, so that we could probe two coils at once with the LSC loop closed, and read back using the C1:LSC-MICH_IN1_DQ channel. We excited with amplitudes of 1234.0 counts and 1254 counts respectively (to match the ~ 2 % difference in frequency) and noted that the magnitude response in UR was 10% larger than in UL, LL, and LR, which were close to each other at the 2% level.
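One way to read the amplitude choice (this is our assumption, not spelled out above): equal displacement drive through the free-mass 1/f^2 pendulum response above resonance requires the amplitudes to scale as the frequency ratio squared.

```python
# Sketch: matching drive amplitudes at two nearby line frequencies, assuming
# a free-mass 1/f^2 actuation response, so that equal displacement requires
# amplitudes in the ratio (f2/f1)^2. This reading is our assumption.
f1, f2 = 310.97, 313.74      # injection frequencies [Hz]
a1 = 1234.0                  # drive amplitude at f1 [counts]
a2 = a1 * (f2 / f1) ** 2     # ~1256 counts, close to the 1254 counts used
```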

[Paco]

After Rana left, I did a second pass at the BS actuation. I took TF measurements at the oscillator frequencies noted above using diaggui, and summarize the results below:

TF               UL (310.97 Hz)   UR (313.74 Hz)   LL (310.97 Hz)   LR (313.74 Hz)
Magnitude (dB)   93.20            92.20            94.27            93.85
Phase (deg)      -128.3           -127.9           -128.4           -127.5

This procedure should be repeated with PRM, using PRCL instead of MICH.
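For reference, the dB magnitudes in the table convert to linear actuation gains as follows. This is a quick sanity check of the coil-to-coil spread, not part of the original measurement.

```python
# Convert the measured coil TF magnitudes from dB to linear and compute the
# coil-to-coil spread relative to the mean (values from the table above).
import numpy as np

mags_db = {"UL": 93.20, "UR": 92.20, "LL": 94.27, "LR": 93.85}
mags = {k: 10 ** (v / 20) for k, v in mags_db.items()}
mean = np.mean(list(mags.values()))
spread_pct = {k: 100 * (v / mean - 1) for k, v in mags.items()}
```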

16429   Tue Oct 26 16:56:22 2021 PacoSummaryBHDPart I of BHR upgrade - Locked PMC and IMC

[Paco, Ian]

We opened the laser head shutter. Then, we scanned around the PMC resonance and locked it. We then opened the PSL shutter, touched the MC1, MC2 and MC3 alignment (mostly yaw), and managed to lock the IMC. The transmission peaked at ~ 1070 counts (typical is 14000 counts, so at 10% of PSL power we would expect a peak transmission of 1400 counts; there might still be some room for improvement). The lock was engaged at ~ 16:53; we'll see how long it lasts.

There should be IR light entering the BSC!!! Be alert and wear laser safety goggles when working there.

We should be ready to move forward into the TT2 + PR3 alignment.

16437   Thu Oct 28 16:32:32 2021 PacoSummaryBHDPart IV of BHR upgrade - Removal of BSC eastern optics

[Ian, Paco, Anchal]

We turned off the BSC oplev laser by turning the key counterclockwise. Ian then removed the following optics from the east end in the BSC:

• OM4-PJ (wires were disconnected before removal)
• GRX_SM1
• OM3
• BSOL1

We placed them in the center-front area of the XEND flow bench.

16444   Tue Nov 2 16:42:00 2021 PacoSummaryBHD1Y1 rack work

[paco, ian]

After the new 1Y0 rack was placed near the 1Y1 rack by Chub and Anchal, today we worked on the 1Y1 rack. We removed some rails from rack spaces ~ 25 - 30. We then drilled a pair of ~ 10-32 thru-holes on some L-shaped bars to help support the weight of the c1sus2 machine. The hole spacing was set to 60 cm; this number is not constant across all racks. Then, we mounted c1sus2. While doing this, Paco's knee accidentally pressed some of the video MUX box buttons (29 and 8 at least). We then opened the rack's side door to investigate the DC power strips on it before removing anything, and powered off the DC-33 supplies there. No connections were made yet, so we can keep building this rack.

When coming back to the control room, we noticed that 3 of the 4 analog video feeds for the test masses had gone down... why?

Next steps:

• Remove Sorensen (x5) power supplies from top of 1Y1 ... what are they actually powering???
• Make more bars to support heavy IO exp and acromag chassis.
• Make all connections (neat).

Update Tue Nov 2 18:52:39 2021

• After turning the Sorensens back on, the ETM/ITM video feed was restored. I will need to trace the power lines carefully before removing these supplies.
16453   Mon Nov 8 10:13:52 2021 PacoSummaryBHD1Y1 rack work; Sorensens removed

[Paco, Chub]

Removed all sorensen power supplies from this rack except for 12 VDC one; that one got pushed to the top of the rack and is still powering the cameras.

16455   Mon Nov 8 15:29:05 2021 PacoSummaryBHD1Y1 rack work; New power for cameras

[Paco, Anchal]

In reference to Koji's concern (see previous elog), we have completely removed the Sorensen power supplies from 1Y1. We added a 12 V / 2 A AC-to-DC power supply for the cameras and verified that it works. We stripped off all unused hardware from shutters and other power lines in the strips, and saved the relays and fuses.

We then mounted SR2, PR3, PR2 Sat Amps, 1Y1 Sat amp adapter, and C1SUS2 AA (2) and AI (3) boards. We made all connections we could make with the cables from the test stand, as well as power connections to an 18 VDC power strip.

16488   Tue Nov 30 17:11:06 2021 PacoUpdateGeneralMoved white rack to 1X3.5

[Paco, Ian, Tega]

We moved the white rack (formerly unused along the YARM) to a position between 1X3 and 1X4. For this task we temporarily removed the HEPAs near the enclosures, but have since restored them.

Attachment 1: IMG_8749.JPG
Attachment 2: IMG_8750.JPG
16499   Fri Dec 10 15:59:23 2021 PacoUpdateBHDFinished Coil driver (even serial number) units tests

[Paco, Anchal]

We have completed modifications and testing of the HAM Coil driver D1100687 units with serial numbers listed below. The DCC tree reflects these changes and tests (Run/Acq modes transfer functions).

SERIAL #     TEST result
S2100608     PASS
S2100610     PASS
S2100612     PASS
S2100614     PASS
S2100616     PASS
S2100618     PASS
S2100620     PASS
S2100622     PASS
S2100624     PASS
S2100626     PASS
S2100628     PASS
S2100630     PASS
S2100632     PASS
S2101648**   FAIL (Ch1, Ch3 run mode)
S2101650**   FAIL (Ch3 run mode)
S2101652**   PASS
S2101654**   PASS

** A fix had to be done on the DC power supply for these. The units' regulated power boards were not connected to the raw DC power, so the cabling had to be modified accordingly (see Attachment #1)

Attachment 1: dc_fail.jpg
16506   Tue Dec 14 19:29:42 2021 PacoUpdateBHD1Y0 rack work for LO1

[Paco]

Two coil drivers have been installed on 1Y0 (slots 6, 7, for LO1 SOS). All connections have been made from the DAC, AI board, DAC adapter, Coil driver, Sat Amp box. Then no SOS load installed, all return connections have been made from Sat Amp box, ADC adapter, AA board, and to ADC. We will continue this work tomorrow, and try to test everything before closing the loop for LO1 suspension.

16507   Wed Dec 15 13:57:59 2021 PacoUpdateComputersupgraded ubuntu on zita

[Paco]

Upgraded zita's ubuntu and restarted the striptool script.

16539   Mon Jan 3 12:05:08 2022 PacoUpdateBHD1Y0 rack work for LO2 AS1 AS4

[Paco, Anchal]

Continue working on 1Y0. Added coil drivers for LO2, AS1, AS4. Anchal made additional labels for cables and boxes. We lined up all cables, connected the different units and powered them without major events.

16540   Mon Jan 3 16:46:41 2022 PacoUpdateBHD1Y1 rack work for SR2, PR2, PR3

[Paco, Anchal]

Continued working on 1Y1 rack. Populated the 6 coil drivers, made all connections between sat amp, AA chassis, DAC, and ADC adapters for SR2, PR2, and PR3 suspensions. Powered all boxes and labeled them and cables where needed. Near the end, we had to increase the current limit on the positive rail sorensen (+18 V) from ~ 7 to > 8.0 Amps to feed all the instruments. We also increased the negative (-18 V) current limit proportionally.

We think we are ready for all the new SOS on this side electronics-wise.

16542   Tue Jan 4 18:27:23 2022 PacoUpdateBHDSOS assembly -- PR3

[yehonathan, paco, anchal]

We continued suspending PR3 today. Yehonathan and Paco suspended the thick optic in its adapter. After fixing the nominal height and undoing any residual roll angle (see Attachments #1-2 for pictures), we noticed a problem with the pitch angle, so we inserted the counterweights all the way in. Nevertheless, we soon found out that we needed to shift one of the two counterweights to the back side of the adapter (so one on each side) in order to tare the pitch angle. This is a new maneuver that may apply to other thick optics.

After roughly taring the pitch angle, we noted another issue. The wedge (~ 1 deg) on the optic made the protruding socket heads on the thick side bump against the lower clamp (not the earthquake stop tip itself). Attachments #4-5 show the before/after situation, which was solved provisionally by replacing the socket head screws with lower-profile (flat) head screws in situ. Again, this operation was highly delicate and specific to wedged thick optics, so we should keep it in mind for future SOS.

Another issue with the new thick optic adapters is that, for some reason, there is a recession in the upper backside of the adapter (attachment coming soon). This makes the upper back EQ stop too short to touch the adapter. We replaced it with a longer screw. When inserted, it doesn't really hit the back of the adapter; rather, it touches the corner of the recession, stopping the optic with friction.

While all this was happening, Anchal started mounting AS4 on its adapter. After one of the magnets broke off, he switched to another one and succeeded. This is the next target for suspension. We still need to check the orientation of the wedge. Furthermore, we started a gluing session in the afternoon to prepare as much as possible for further SOS during the week. 3 side magnets were glued to side blocks. 3 magnets were glued to 3 adapters that were missing 1 magnet each.

In the afternoon, Yehonathan and Paco set up the QPD and did all the usual balancing, and then Anchal took the free-swinging data, the results of which are shown in Attachment #3. The major peaks are located at 723 mHz, 953 mHz, and 1.05 Hz, very similar to the case of the thin optic adapters.
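A minimal sketch of how such peaks can be pulled out of free-swinging data is below, using synthetic data at the frequencies reported above; the actual analysis script may differ.

```python
# Sketch: locate suspension resonance peaks in a free-swinging spectrum.
# Synthetic data at the reported frequencies (723 mHz, 953 mHz, 1.05 Hz);
# sampling rate and thresholds are illustrative.
import numpy as np
from scipy.signal import welch, find_peaks

np.random.seed(0)
fs = 64.0
t = np.arange(0, 512, 1 / fs)
modes = [0.723, 0.953, 1.05]   # reported peak frequencies [Hz]
x = sum(np.sin(2 * np.pi * f * t + i) for i, f in enumerate(modes))
x += 0.05 * np.random.randn(t.size)

# Welch PSD, then pick peaks well above the noise floor
f_psd, P = welch(x, fs=fs, nperseg=8192)
peaks, _ = find_peaks(P, height=np.max(P) / 100)
peak_freqs = f_psd[peaks]
```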

Anchal progressed with OSEM installation and engraving, and Yehonathan glued the counterweight setscrew in place. After securing the EQ stops and wrapping the wires in foil, we declared PR3 ready to be installed.

Attachment 1: PR3_roll_balance.png
Attachment 2: PR3_magnet_height.png
Attachment 3: FreeSwingingSpectra.pdf
Attachment 4: PXL_20220104_231742123.jpg
Attachment 5: PXL_20220104_232809203.jpg
16559   Sat Jan 8 16:01:42 2022 PacoSummaryBHDPart IX of BHR upgrade - Placed LO2 filters

Added input filters, input matrix, damping filters, output matrix, coil filters, and copy the state over from ITMX into LO2 screen in anticipation for damping.

16563   Mon Jan 10 15:45:55 2022 PacoUpdateElectronicsITMY feedthroughs and in-vac cables installed - part I

The ITMY 10" flange with 10 DSUB-25 feedthroughs has been installed with the cables connected at the in-vac side.  This is the first of two flanges, and includes 5 cables ordered vertically in stacks of 3 & 2 for [[OMC-DCPDs, OMC-QPDs, OMC-PZTs/Pico]] and [[SRM1, SRM2]] respectively from right to left. During installation, two 12-point silver plated bolts were stripped, so Chub had to replace them.

16569   Tue Jan 11 10:23:18 2022 PacoUpdateElectronicsITMY feedthroughs and in-vac cables installed - part II

[Paco, Chub]

The ITMY 10" flange with 4 DSUB-25 feedthroughs has been installed with the cables connected at the in-vac side. This is the second of two flanges, and includes 4 cables ordered vertically in stacks of 2 & 2 for [[AS1-1, AS1-2, AS4-1, AS4-2]] respectively. No major incidents during this one, except maybe a note that all the bolts were extremely dirty and covered with gunk, so we gave a quick swipe with wet cloths before reinstalling them.

16574   Tue Jan 11 14:21:53 2022 PacoUpdateElectronicsBS feedthroughs and in-vac cables installed

[Paco, Yehonathan, Chub]

The BS chamber 10" flange with 4 DSUB-25 feedthroughs has been installed with the cables connected at the in-vac side. This is the second of two flanges, and includes 4 cables ordered vertically in stacks of 2 & 2 for [[LO2-1, LO2-2, PR3-1, PR3-2]] respectively.

16588   Fri Jan 14 14:04:51 2022 PacoUpdateElectronicsRFSoC 2x2 board arrived

The Xilinx RFSoC 2x2 board arrived right before the winter break, so this is kind of an overdue elog. I unboxed it, it came with two ~15 cm SMA M-M cables, an SD card preloaded with the ARM processor and a few overlay jupyter notebooks, a two-piece AC/DC adapter (kind of like a laptop charger), and a USB 3.0 cable. I got a 1U box, lid, and assembled a prototype box to hold this board, but this need not be a permanent solution (see Attachment #1). I drilled 4 thru holes on the bottom of the box to hold the board in place. A large component exceeds the 1U height, but is thin enough to clear one of the thin slits at the top (I believe this is a fuse of some sort). Then, I found a brand new front panel, and drilled 4x 13/32 thru holes in the front for SMA F-F connectors.

I powered the board, and quickly accessed its tutorial notebooks, including a spectrum analyzer and signal generators just to quickly check it works normally. The board has 2 fast RFADCs and 2 RFDACs exposed, 12 and 14 bit respectively, running at up to 4 GSps.

Attachment 1: PXL_20220114_211249499.jpg
16591   Tue Jan 18 14:26:09 2022 PacoSummaryBHDPart IX of BHR upgrade - Placed remaining filters SR2, PR3, PR2

[Anchal, Paco]

Added input filters, input matrix, damping filters, output matrix, coil filters, and copy the state over from LO1 into SR2, PR2, PR3 screens in anticipation for damping.

16592   Tue Jan 18 15:19:49 2022 PacoSummaryBHDPart IV of BHR upgrade - Replaced old SR3 with SOS SR2. OSEM tuning attempted.

[Tega, Anchal, Paco]

We started working on SR2 installation. Preliminary work involved

• Removing SR3 from the BS chamber. For whatever reason this optic was still installed. It got relocated to the west corner of the ETMX flow bench.

That was pretty much it. After identifying the cabling situation, we proceeded to bring SR2 from the cleanroom. The magnets and wires survived the move intact.

Connected the OSEMs one by one, starting from the top right and going left (Pin 1):

1st connector: LL -> UR -> UL

2nd connector: LR -> SD** (We had some trouble here: the first time we made the connection we didn't see any signal. After a brief review of the cables, the sat amp unit, the cables again with Koji, and the sat amp again, we found that a connection was missing on the front of the SR2 SatAmp box; after fixing it we saw the sensor signals.)

Loosening all OSEMs, taking them out, and noting the full-bright readings (with their half-light targets):

• SD: 29600 -> 14980
• LR:  21840 -> 10920
• UR:  25900 -> 12950
• LL: 28020  -> 14010
• UL: 25750 -> 12875
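The tuning targets follow from the readings above via the usual convention (we assume an OSEM is centered when the optic shadows half the LED light):

```python
# Half-light targets derived from the full-bright OSEM readings listed above,
# assuming the target is exactly half of the full-bright level.
bright = {"SD": 29600, "LR": 21840, "UR": 25900, "LL": 28020, "UL": 25750}
targets = {name: counts / 2 for name, counts in bright.items()}
# The LR/UR/LL/UL values listed above match these exactly; the SD entry
# records 14980, slightly above the exact half of 14800.
```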

After finishing the initial SD OSEM tuning, we moved on to UL and then UR, but noticed that UR could not drop to its target value of ~13000 counts, even when the OSEM face was < 1 mm from the adapter (see Attachments #1-2). Apart from becoming harder to push in, it became apparent that the dark level (full shadow) is not consistent with ~ 0 counts; is there an offset coming from the SatAmp? We quickly checked the OSEM by replacing it in situ with a known-working one from the cleanroom batch, but the issue persisted. We decided to stop here, as we suspect the SatAmp box has an issue.

Attachment 1: PXL_20220119_012954365.jpg
Attachment 2: PXL_20220119_013003522.jpg
16596   Wed Jan 19 12:56:52 2022 PacoSummaryBHDAS4 OSEMs installation

[Paco, Tega, Anchal]

Today, we started work on the AS4 SOS by checking the OSEM and cable. Swapping the connection preserved the failure (no counts), so we swapped the long OSEM for a short one that we knew was working, and this solved the issue. We proceeded to swap in a "yellow label" long OSEM and then noticed the top plate had issues with the OSEM threads. We took out the bolt and inspected its thread, and even borrowed the screw from the PR2 plate, but saw the same effect. Even using a silver-plated setscrew like the SD OSEM one resulted in trouble... We then decided to stop trying workarounds and took our time to carefully remove the UL and UR OSEMs, the top earthquake stops, and the top plate in situ. We continued the surgery by installing a new top plate borrowed from the clean room (the only difference being that its OSEM aperture barrels are teflon (?) rather than stainless). The operation was a success, and we moved on to OSEM installation.

After reaching a good place with the OSEM installation, where most sensors were at the 50% brightness level and we were happy with the damping action (!!), we fixed all EQ stops and proceeded to push the SOS to its nominal placement. Then, upon releasing the EQ stops, we found that the sensor readings had shifted.

16604   Thu Jan 20 16:42:55 2022 PacoUpdateBHDAS4 OSEMs installation - part 2

[Paco]

Turns out, the shifting was likely due to the table level. Because I didn't take care the first time to "zero" the level of the table as I tuned the OSEMs, the installation was b o g u s. So today I took time to,

a) Shift AS4 close to the center of the table.

b) Use the clean level tool to pick a plane of reference. To do this, I iteratively placed two counterweights (from the ETMX flow bench) in two locations on the breadboard such that the table was nominally balanced to some reference plane z0. The counterweight placement is of course temporary; as soon as we make further changes, such as final placement of the AS4 SOS or installation of AS1, their positions will need to change to recover z = z0.

c) Install OSEMs until I was happy with the damping. ** Here, I noticed the new suspension screens had been misconfigured (probably c1sus2 rebooted and we don't have any BURT backup), so I quickly restored the input and output matrices.

SUSPENSION STATUS UPDATED HERE

16620   Mon Jan 24 21:18:40 2022 PacoSummaryBHDPart IX of BHR upgrade - AS1 placed and OSEM tuned

[Paco]

AS1 was installed in the ITMY chamber today. For this I moved AS4 to its nominal final placement and clamped it down with a single dog clamp. Then, I placed AS1 near the center of the table, and quickly checked AS4 could still be damped. After this, I leveled the table using a heavier/lighter counterweight pair.

Once things were leveled, I proceeded to install the AS1 OSEMs. The LL, UL, and UR OSEMs had a bright level of 27000 counts, while SD and LR were at 29500 and 29900 respectively. After a while, I managed to damp all degrees of freedom near the half-bright (~14000 count) levels, and decided to stop.

UL 27000 -> 16000
UR 27000 -> 13800
LL 27000 -> 14600
LR 29900 -> 14900
SD 29500 -> 12900

Free swinging test set to trigger

AS1 is set to go through a free-swinging test at 3 am tonight. We have used this script (Git/40m/scripts/SUS/InMatCalc/freeSwing.py) reliably in the past, so we expect no issues; it has an error-catching block to restore all changes at the end of the test or if something goes wrong.

To access the test, on allegra, type:

tmux a -t AS1

Then you can kill the script if required with Ctrl-C; it will restore all changes while exiting.

SUSPENSION STATUS UPDATED HERE

16625   Thu Jan 27 08:27:34 2022 PacoSummaryBHDPart IX of BHR upgrade - AS1 inspection and correction

[Paco]

This morning, I went into the ITMY chamber to inspect AS1 after the free-swinging test failed. Indeed, as forecasted by Anchal, the top front EQ stop was slightly touching, which means AS1 was not properly installed before. I backed it off well beyond any chance of touching the optic, and did the same for all the other stops, most of which were already recessed significantly. Finally, the OSEM readings changed accordingly, indicating a pitched optic (the top front EQ stop had been slightly biasing the pitch angle), so I redid the installation until the levels were around the 14000-count region. After damping AS1 relatively quickly, I closed the ITMY chamber.

 Quote: For some reason the free swing test showed only one resonance peak (see attachment 1). This probably happened because one of the earthquake stops is touching the optic. Maybe after the table balancing, the table moved a little over its long relaxation time, and by the time the free swing test was performed at 3 am, one of the earthquake stops was touching the optic. We need to check this when we open the chamber next.

16628   Thu Jan 27 18:03:36 2022 PacoSummaryBHDPart III of BHR upgrade - Install PR2, balance, and attempt OSEM tuning

[Paco, Anchal, Tega]

After installing the short OSEMs into PR2, we moved it into the ITMX chamber. While Tega loaded some of the damping filters and other settings, we took time to balance the heavily tilted ITMX chamber. After running out of counterweights, Anchal had to go into the cleanroom and bring the SOS stands, two of which had to be stacked near the edge of the breadboard. Finally, we connected the OSEMs following the canonical order

LL -> UR-> UL

LR -> SD

but found that UR was reading -14000 counts. So we did a quick swap of the UR and UL sensors and verified that the OSEM itself works, just not on that channel... So it's time to debug the electronics (probably the PR2 Sat Amp?)...

PR2 Sat Amp preliminary investigation:

• The UR channel (CH1 or CH4 on the PR2 Sat Amp) gives a negative value as soon as an OSEM is connected.
• Without anything connected, the channel reports the usual 0V output.
• With the PDA inputs of PR2 Sat Amp shorted, we again see 0V output.
• So it seems that when non-zero photodiode current flows, the circuit sees a reversal of gain polarity, and the gain is roughly half the correct magnitude.
16630   Fri Jan 28 10:37:59 2022 PacoSummaryBHDPart III of BHR upgrade - PR2 OSEMs finalized, reinstall LO1 OSEMs

[Paco]

Thanks to Koji's hotfix of the PR2 SatAmp box last evening, this morning I was able to finish the OSEM installation for PR2, which is now fully damped. Then I realized that, with the extreme rebalancing done in the ITMX chamber, LO1 needed to be reinstalled, so I proceeded to do that. I verified that all its degrees of freedom remained damped.

I think all SOS are nominally damped, so we are 90% done with suspension installation!

SUSPENSION STATUS UPDATED HERE

16648   Mon Feb 7 09:00:26 2022 PacoUpdateGeneralScheduled power outage recovery

[Paco]

Started recovering from scheduled (Feb 05) power outage. Basically, time-reversing through this list.

== Office area ==

• Power martian network switches, WiFi routers on the north-rack.
• Power windows (CAD) machine on.

== Main network stations ==

• Power on nodus, try ping (fail).
• Power on network switches, try ping (success), try ssh controls@nodus.ligo.caltech.edu (success).
• Power on chiara to serve names for other stations, try ssh chiara (success).
• Power on fb1, try ping (success), try ssh fb1 (success).
• Power on paola (xend laptop), viviana (yend laptop), optimus, megatron.

== Control workstations ==

• Power on zita (success)
• Power on donatella (success)
• Power on allegra (fail)  **
• Power on pianosa (success)
• Power on rossa (success)
• From nodus, started elog (success).

== PSL + Vertex instruments ==

• Turn on newport PD power supplies on PSL table.
• Turn on TC200 temp controller (setpoint --> 36.9 C)
• Turn on two oscilloscopes in PSL table.
• Turn on PSL (current setpoint --> 2.1 A, other settings seem nominal)
• Turn on Thorlabs HV pzt supply.
• Turn on ITMX OpLev / laser instrument AC strip.

== YEND and XEND instruments ==

• Turn on XEND AUX pump (current setpoint --> 1.984 A)
• Turn on XEND AUX SHG oven (setpoint --> 37.1 C) (see green beam)
• Turn on XEND AUX shutter controller.
• Turn on DCPD supply and OpLev supply AC strip.
• Turn on YEND AUX pump (fail) *
• With the controller on STDBY, I tried setting up the current but got HD FAULT (or according to the manual this is what the head reports when the diode temperature is too high...)
• Upon power cycling the controller, even the controller display stopped working... YAUX controller + head died? maybe just the diode? maybe just the controller?
• I borrowed a spare LW125 controller from the PSL table (Yehonathan pointed me to it) and swapped it in.
• Got YEND AUX to lase with this controller, so the old controller is busted but at least the laser head is fine.
• Even saw SHG light. We switched the laser head off to "STDBY" (so it remains warm) and took the faulty controller out of there.
• Turn on YEND AUX SHG oven (setpoint --> 35.7 C)
• Turn on YEND AUX shutter controller.

== YARM Electronic racks ==

== XARM Electronic racks ==

* Top priority, this needs to be fixed.

** Non-priority, but to be debugged

16655   Wed Feb 9 16:43:35 2022 PacoUpdateGeneralScheduled power outage recovery - Locking mode cleaner(s)

[Paco, Anchal]

• We went in and measured the power after the power splitting HWP at the PSL table. Almost right before the PSL shutter (which was closed), when the PMC was locked we saw ~ 598 mW (!!)
• Checking back on the ESP300, it seems the channel was not enabled even though the right angle was punched in, so we enabled it.
• No change.
• The power adjustment MEDM screen is not really working...
• Going back to the controller, press HOME on the Axis 1 (our HWP) and see it go to zero...
• Now the power measured is ~ 78 mW.
• Not sure why the MEDM screen didn't really work (this needs to be fixed later)
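For context, the power-adjustment stage is (we believe) a rotating half-wave plate followed by a polarizer, so the transmitted power should follow a cos^2(2*theta) law. The model below is illustrative only; the offset, sign convention, and angles are assumptions, not measured values for our setup.

```python
# Minimal model of HWP-based power adjustment (assumed HWP + polarizer):
# P(theta) = P0 * cos^2(2*theta - phi0). Offset phi0 and orientation are
# illustrative; only the cos^2(2*theta) dependence is the physics here.
import numpy as np

def transmitted_power(P0, theta_deg, phi0_deg=0.0):
    """Power after a HWP at angle theta [deg] followed by a fixed polarizer."""
    return P0 * np.cos(np.radians(2 * theta_deg - phi0_deg)) ** 2

# In this model, the HWP angle that would attenuate 598 mW down to ~78 mW:
theta = 0.5 * np.degrees(np.arccos(np.sqrt(78.0 / 598.0)))
```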

We proceeded to align the MC optics because all the offsets in the MC_ALIGN screen were zeroed. After opening the PSL shutter, we used values from last year as a reference and tried to steadily recover the alignment. The IMC lock remains at large.

16669   Mon Feb 14 18:31:50 2022 PacoUpdateGeneralScheduled power outage recovery - IMC recovery progress

[Paco, Anchal, Tega]

We have been realigning the IMC since last Friday (02/11). Today we made some significant progress (still at high input power), but the IMC autolocker is unable to engage a stable mode lock. We have made some changes to reach this point, including re-centering the MC1 REFL beam on the CCD, centering the MC2 trans QPD (using flashes), and centering the MC REFL RFPD beam. The IMC is flashing to a peak transmission of > 50% of its max (which averaged near 14,000 counts in 2021), and all PDs seem to be working OK... We will keep the PSL shutter closed (especially with high input power) for now.
