[yehonathan, anchal, paco]
Yesterday around 9:30 pm, we centered the BS, ITMY, ETMY, ITMX, and ETMX oplevs (in that order) on their respective QPDs by turning the last mirror before each QPD. We did this after running the ASS dither for the XARM/YARM configurations to use as the alignment reference, in preparation for PRFPMI lock acquisition, which we had to stop due to an earthquake around midnight.
We picked up AS WFS commissioning for daytime work as suggested by gautam. In the end we want to commission this for the PRFPMI, but also for PRMI and MICH for completeness. MICH is the simplest, so we are starting there.
We started by restoring the MICH configuration and aligning the AS DC QPD (on the AS table) by zeroing C1:ASC-AS_DC_YAW_OUT and C1:ASC-AS_DC_PIT_OUT. Since the AS WFS gets the AS beam in transmission through a beamsplitter, we had to correct that beamsplitter's alignment to recenter the AS beam onto the AS110 PD (for this we looked at the signal on a scope).
We then checked the rotation (R) angles C1:ASC-AS_RF55_SEGX_PHASE_R and delay (D) angles C1:ASC-AS_RF55_SEGX_PHASE_D (where X = 1, 2, 3, 4 for the segments) to rotate all the signal into the I quadrature. We found that this optimized the PIT content on C1:ASC-AS_RF55_I_PIT_OUT and the YAW content on C1:ASC-AS_RF55_I_YAW_OUTMON, which is what we want anyway.
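For reference, here is a minimal offline sketch (not the actual RTS code; the signal is synthetic) of what this phase rotation does: given the demodulated I/Q timeseries of one segment, find the angle that rotates the line signal entirely into I.

import numpy as np

def iq_rotation_angle(I, Q):
    # For z = A(t)*exp(i*phi) with real A(t), mean(z**2) = <A**2>*exp(2i*phi),
    # so half its angle is the rotation that zeroes the Q quadrature.
    z = I + 1j * Q
    return 0.5 * np.angle(np.mean(z * z))

def rotate_into_I(I, Q):
    phi = iq_rotation_angle(I, Q)
    z = (I + 1j * Q) * np.exp(-1j * phi)
    return z.real, z.imag, np.rad2deg(phi)

# Synthetic check: a line injected 35 deg out of I comes back into I.
t = np.arange(0, 1, 1.0 / 2048)
A = np.sin(2 * np.pi * 7 * t)
I, Q = A * np.cos(np.deg2rad(35)), A * np.sin(np.deg2rad(35))
I_r, Q_r, phi = rotate_into_I(I, Q)
print(phi, np.std(Q_r) / np.std(I_r))   # ~35 deg, ~0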
Finally, we set up some simple integrators for these WFS on the C1ASC-DHARD_PIT and C1ASC-DHARD_YAW filter banks with a pole at 0 Hz, a zero at 0.8 Hz, and a gain of -60 dB (similar to the MC WFS). Nevertheless, when we closed the loop by actuating on the BS ASC PIT and ASC YAW inputs, it seemed like the ASC model outputs are not connected to the BS SUS model ASC inputs, so we might need to edit the models accordingly and restart them.
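The integrator shape is easy to sanity-check offline; a hedged sketch (assuming the -60 dB refers to the high-frequency gain, as for the MC WFS filters):

import numpy as np
import scipy.signal as sig

G = 10 ** (-60 / 20)                        # -60 dB
num = G * np.array([1.0, 2 * np.pi * 0.8])  # zero at 0.8 Hz
den = np.array([1.0, 0.0])                  # pole at 0 Hz (pure integrator)

f = np.logspace(-2, 2, 400)
_, H = sig.freqs(num, den, worN=2 * np.pi * f)
# |H| rises as 1/f below 0.8 Hz (integrator) and flattens at -60 dB above.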
[anchal, yehonatan, paco]
For whatever reason (i.e. we don't really know) the MC unlocked into a weird state at ~ 10:40 AM today. We first tried to find a likely cause as we saw it couldn't recover itself after ~ 40 min... so we decided to try a few things. First we verified that no suspensions were acting weird by looking at the OSEMs on MC1, MC2, and MC3. After validating that the sensors were acting normally, we moved on to the WFS. The WFS loops were disabled the moment the IMC unlocked, as they should. We then proceeded to the last resort of tweaking the MC alignment a bit, first with MC2 and then MC1 and MC3 in that order to see if we could help the MC catch its lock. This didn't help much initially and we paused at about noon.
At about 5 pm, we resumed since the IMC had remained locked to some higher order mode (TEM-01 by the looks of it). While looking at C1:IOO-MC_TRANS_SUMFILT_OUT on ndscope, we kept on shifting the MC2 Yaw alignment slider (steps = +-0.01 counts) slowly to help the right mode "hop". Once the right mode caught on, the WFS loops triggered and the IMC was restored. The transmission during this last stage is shown in Attachment #1.
Yesterday we discussed a bit about working on the PRMI sensing matrix.
In particular, we will start with the "issue" of non-orthogonality in MICH when actuated by BS + PRM. Yesterday afternoon we played a little with the oscillators and ran sensing lines in MICH and PRCL (gains of 50 and 5 respectively) in the times spanning [1312671582 -> 1312672300], [1312673242 -> 1312677350] for PRMI carrier and [1312673832 -> 1312674104] for PRMI sideband. Today we realized that we could have enabled the notchSensMat filter, a notch filter exactly at the oscillator's frequency in FM10, and run a lower gain to get a similar SNR. We want to investigate this in more depth anyway, so here is our tentative plan of action, which implies redoing these measurements:
Task: investigate orthogonality (or lack thereof) in the MICH when actuated by BS & PRM.
1) Run sensing MICH and PRCL oscillators with PRMI Carrier locked (remember to turn NotchSensMat filter on).
2) Analyze data and establish the reference sensing matrix.
3) Write a script that performs steps 1 and 2 in a robust and safe way.
4) Scan the C1:LSC-LOCKIN_OUTMTRX, MICH to BS and PRM elements around their nominal values.
5) Scan the MICH and PRCL RFPD rotation angles around their nominal values.
We also talked about the possibility that the sensing matrix is strongly frequency dependent, such that measuring it at 311 Hz doesn't give us an accurate estimate of it. Is it worthwhile to try and measure it at lower frequencies using an appropriate notch filter?
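If we do go to a lower line frequency, the loop will suppress the line harder, so a notch like notchSensMat at exactly the line frequency would be needed in the relevant control filter. A hedged sketch of such a notch (model rate and Q are assumptions; 311 Hz is the current line, but the same recipe works lower):

import scipy.signal as sig

fs = 16384.0        # assumed model rate
f_line = 311.0      # oscillator frequency (substitute the lower line here)
b, a = sig.iirnotch(f_line, Q=30, fs=fs)   # notch width ~ f_line / Q

f, H = sig.freqz(b, a, worN=2 ** 14, fs=fs)
# |H| ~ 1 everywhere except a deep dip at f_line, so the servo leaves the
# injected line alone while the rest of the control band is untouched.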
Wed Aug 11 15:28:32 2021 Updated plan after group meeting
- The problem may be in the actuators, since the orthogonality seems fine when actuating on the ITMX/ITMY, so we should instead focus on measuring the actuator transfer functions, using OpLevs for example (with the same high-frequency excitation; the OSEMs won't work above 10 Hz).
Thu Aug 12 11:04:42 2021 Arrived to find the PSL shutter closed. Why? Who? When? How? No elog, no fun. I opened it, IMC is now locked, and the arms were restored and aligned.
[koji, ian, tega, paco]
With the remote/local assistance of Tega/Ian last Friday, I made changes to the c1sus model by connecting the C1:ASC model outputs (found within a block in c1ioo) to the BS and PRM suspension inputs (pitch and yaw). Then Koji reviewed these changes today and pointed out that no changes were actually needed, since the blocks were already in place and connected to the right ports; the model probably just wasn't rebuilt...
So, today we ran "rtcds make" and "rtcds install" on the c1ioo and c1sus models (in that order), but the whole system crashed. We spent a great deal of time restarting the machines and their processes, and struggled quite a lot with setting up the right dates to match the GPS times. What seemed to work in the end was to follow the format of the date on the fb1 machine and try to match the timing to the sub-second level. This is especially tricky when performed by a human, so the whole task is tedious. We completed the reboot for almost all the models anyway, except for c1oaf (which tends to make things crashy), since we won't need it right away for the tasks ahead. One potentially annoying issue we found while manually rebooting c1iscey: one of its network ports is loose (the ethernet cable won't click in place) and it appears to use this link to boot (!!), so for a while this machine just wasn't coming back up.
Finally, as we restored the suspension controls and reopened the shutters, we noticed a great deal of misalignment, to the point that no reflected beam was coming back to the RFPD table. So we spent some time checking the PRM alignment and TT1 and TT2 (the tip-tilts), and it turned out to be mostly the latter pair that were responsible. We used the green beams to help optimize the XARM and YARM transmissions and were able to relock the arms. We ran ASS on them, and then aligned the PRM OpLev, which also seemed off. This was done by giving a pitch offset to the input PRM oplev beam path and then correcting for it downstream (before the QPD). We also adjusted the BS OpLev at the end.
Summary: the ASC BS and PRM outputs are now built into the SUS models. Let the AS WFS loops be closed soon!
Addenda by KA
- Upon the RTS restarting,
sudo date --set='xxxxxx'
rtcds start c1x01
telnet fb1 8083
- Today we succeeded once in restarting the vertex machines. However, the RFM signal transmission failed. So the end two machines were power cycled, as well as c1rfm, but this made all the machines RED again. Hell...
- We checked the PRM oplev. The spot was around the center but was clipped. This made us so confused. Our conclusion was that the oplev was like that before the RTS reboot.
At 09:34 PST I noted a glitch in the control room as the machines went down, except for c1ioo. Briefly, the video feeds disappeared from the screens, though the screens themselves didn't lose power. At first I thought this was some kind of power glitch, but upon checking with Jordan, it was most likely related to some system crash. Coming back to the control room, I could see the MC reflection beam swinging, but unfortunately all the FE models came down. I noticed that the DAQ status channels were blank.
I ssh'd into c1ioo with no problem and ran "rtcds stop c1ioo c1als c1omc", then "rtcds restart c1x03" to do a soft restart. This worked, but the DAQ status was still blank. I then tried to ssh into c1sus and c1lsc without success; similarly, c1iscex and c1iscey were unreachable. I went and did a hard restart on c1iscex by switching it off, then its expansion chassis, then unplugging the power cords, then inverting these steps, and could then ssh into it from rossa. I ran "rtcds start c1x01" and saw the same blank DAQ status. I noticed the elog was also down... so nodus was also affected?
Anchal got on zoom to offer some assistance. We discovered that the fb1 and nodus were subject to some kind of system reboot at precisely 09:34. The "systemctl --failed" command on fb1 displayed both the daqd_dc.service and rc-local.service as loaded but failed (inactive). Is it a good idea to try and reboot the fb1 machine? ... Anchal was able to bring elog back up from nodus (ergo, this post).
Although it probably needs the DAQ service on the fb1 machine to be up and running, I tried running the scripts/cds/rebootC1LSC.sh script. This didn't work. I tried running sudo systemctl restart daqd_dc from the fb1 machine without success. Running systemctl reset-failed "worked" for the daqd_dc and rc-local services on fb1, in the sense that they were no longer output by systemctl --failed, but they remained inactive (dead) when running systemctl status on them. Following 15303, I succeeded in restarting the daqd services; it turned out I needed to manually start the open-mx and mx services on fb1. I re-ran the rebootC1LSC script without success. The script fails because some machines need to be rebooted by hand.
tl;dr: NTP servers and clients were never synchronized, are not synchronizing even with ntp... nodus is synchronized but uses chronyd; should we use chronyd everywhere?
Spent some time investigating the NTP synchronization. In the morning, after Anchal set up all the NTP servers / FE clients, I tried restarting the RTS IOPs with no success. Later, with Tega, we tried the usual manual matching of the date between the c1iscex and fb1 machines, iterating over offsets of -10 to +10 seconds, also without success.
This afternoon, I tried debugging the FE and fb1 timing differences. For this, I inspected the ntp configuration files under /etc/ntp.conf on fb1 and /diskless/root.jessie/etc/ntp.conf (for the FE machines) and tried different combinations with and without nodus, with and without restrict lines, all while looking at the output of sudo journalctl -f on c1iscey. Every time I changed an ntp config file, I restarted the service using sudo systemctl restart ntp.service. Looking through some online forums, people suggested basic pinging to see if the ntp servers were up (and broadcasting their times over the local network), but this failed to run (read-only filesystem), so I went into fb1 and ran sudo chroot /diskless/root.jessie/ /bin/bash to allow me to change file permissions. The test was first done with /bin/ping, which couldn't even open a socket (root access needed); after running chmod 4755 /bin/ping, I ssh'd into c1iscey and pinged the fb1 machine successfully. After this, I ran chmod 4755 /usr/sbin/ntpd so that the ntp daemon would have no problem reaching the server, in case this was what was blocking the synchronization. I then exited the chroot shell and restarted the ntp daemon on c1iscey, but ntpstat still showed an unsynchronised status.

I also learned that when running an ntp query with ntpq -p, if a client has succeeded in synchronizing its time to a server, an asterisk should be prepended to that peer's line. This was not the case on any FE machine... and looking at fb1, this was also not true. Although the fb1 peers are correctly listed as nodus, the Caltech ntp server, and a broadcast (.BCST.) server from local time (meant to serve the FE machines), none appears to have synchronized... Going one level further, on nodus I checked the time synchronization servers by running chronyc sources; the output shows
controls@nodus|~> chronyc sources
210 Number of sources = 4
MS Name/IP address Stratum Poll Reach LastRx Last sample
^* testntp1.superonline.net 1 10 377 280 +1511us[+1403us] +/- 92ms
^+ 184.108.40.206 2 10 377 206 +8219us[+8219us] +/- 117ms
^+ tms04.deltatelesystems.ru 2 10 377 23m -17ms[ -17ms] +/- 183ms
^+ ntp.gnc.am 3 10 377 914 -8294us[-8401us] +/- 168ms
I then ran chronyc clients to find if fb1 was listed (as I would have expected) but the output shows this --
Hostname Client Peer CmdAuth CmdNorm CmdBad LstN LstC
========================= ====== ====== ====== ====== ====== ==== ====
501 Not authorised
So clearly chronyd succeeded in synchronizing nodus' time to whatever server it was pointed at, but downstream from there neither fb1 nor any FE machine seems to be synchronizing properly. It may be as simple as figuring out the correct ntp configuration file, or switching to chronyd on all machines (for the sake of homogeneity?).
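For the asterisk check above, something like this hedged helper (hostnames are examples) could poll every machine instead of doing it by hand:

import subprocess

def ntp_synced(host=None):
    # ntpq -p marks the peer the daemon actually synced to with a '*' in
    # the first (tally) column; no '*' anywhere means unsynchronised.
    cmd = ["ntpq", "-p"] if host is None else ["ssh", host, "ntpq", "-p"]
    out = subprocess.run(cmd, capture_output=True, text=True).stdout
    return any(line.startswith("*") for line in out.splitlines())

for m in ["fb1", "c1iscex", "c1iscey"]:   # example hostnames
    print(m, ntp_synced(m))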
[paco, tega, koji]
After invaluable assistance from Jamie in fixing the one-year offset in the GPS time reported by cat /proc/gps, we managed to restart the real-time system correctly (while still manually synchronizing the front-end machine times). After this, we recovered the mode cleaner and were able to lock the arms without much fuss.
Nevertheless, Tega and I noticed some weird noise in C1:LSC-TRX_OUT which was not present in the YARM transmission, and which is present even in the absence of light (we unlocked the arms and still saw it on the ndscope, as shown in Attachment #1). It seems to affect the XARM and, in general, lock acquisition...
We took a quick spectrum with diaggui (Attachment #2), but it doesn't look normal; there seems to be broadband excess noise with a remarkable 1 kHz component. We will probably look into it in more detail.
We went over to the X end to check what was going on with the TRX signal. We spotted that the ground terminal coming from the QPD was loosely touching the handle of one of the computers on the rack. When we detached it completely from the rack, the noise was gone (Attachment 1).
We taped this terminal so it doesn't touch anything accidentally. We don't know if this is the best solution, since it probably needs a stable voltage reference. In the Y end, those ground terminals are connected to the same point on the rack. The other ground terminals in the X end are just cut.
We also took the PSD of these channels (Attachment 2). The noise seems to be gone, but TRX is still a bit noisier than TRY. Maybe we should set up a proper ground for the X arm QPD?
We saw that the X end station ALS laser was off. We turned it on, turned on the crystal oven, and re-enabled the temperature controller. Green light immediately appeared. We are now working to restore the ALS lock. After running the XARM ASS we were unable to lock the green laser, so we went to the XEND and moved the X ALS piezo alignment mirrors until we maximized the transmission in the right mode. We then locked the ALS beams on both arms successfully. It very well could be that the PZT offsets were reset by the power glitch. The XARM ALS still needs some tweaking; its level is ~ 25% of what it was before the power glitch.
Used diaggui to get the OLTF in preparation for optimal system identification / calibration. The excitation was injected at the control point of the XARM loop, C1:LSC-XARM_EXC. Attachment 1 shows the TF (red scatter) taken from 35 Hz to 2.3 kHz with 201 points. The swept-sine excitation had an envelope amplitude of 50 counts at 35 Hz, 0.2 counts at 100 Hz, and 0.2 counts at 200 Hz. The purple continuous line overlays the model for the OLTF using all the digital control filters as well as a simple one-degree-of-freedom plant (single pole at 0.99 Hz). Note the disagreement of the OLTF "model" at higher frequencies, which we may be able to improve upon using vector fitting.
Attachment 2 shows the coherence (part of this initial measurement was to identify an appropriately large frequency range where the coherence is good before we script it).
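For completeness, a minimal sketch of the plant piece of that model (the digital filter chain itself comes from foton, so only the single-pole approximation is shown; names are placeholders):

import numpy as np

def plant(f, f_pole=0.99, dc_gain=1.0):
    # 1-DoF plant approximation: a single real pole at f_pole (Hz).
    return dc_gain / (1 + 1j * f / f_pole)

f = np.logspace(np.log10(35), np.log10(2300), 201)
G_model = plant(f)   # multiply by the foton filter chain + loop gains
# The measured OLTF diverges from this at high frequency, where the
# single-pole plant (and neglected delays) stop being a good description.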
[paco, koji, tega, ian]
Today in the morning, the name server / network file system running on chiara failed. This caused the donatella/pianosa/rossa shell prompts to hang forever. It also made sitemap crash, and even dropping into a bash shell and just listing files from some directory in the file system froze the computer. Remote ssh sessions on nodus had the same symptoms.
A little after 1 pm, we started debugging this issue with help from Koji. He suggested we hook a monitor, keyboard, and mouse onto chiara, as it should still work locally even if something with the NFS (network file system) failed. We did this, and then tried for a while to unmount /dev/sdc1 from /home/cds (main file system) and mount /dev/sdb1 from /media/40mBackup (backup copy) such that they swap places. We had no trouble unmounting the backup drive, but only succeeded in unmounting the main drive with a "lazy" unmount, i.e. "umount -l". Running "df", we could see that the disk was 100% used, with only ~ 1 GB of free space, which may have been the cause of the issue. After swapping these disks by editing /etc/fstab, we rebooted chiara and recovered the shell prompts on all workstations, sitemap, etc., now served from the backup drive. We then started investigating what caused the main drive to fill up so quickly, and noted that, weirdly, the capacity was now at 85%, or about 500 GB less than before (after reboot and remount), so some large file was probably dumped into chiara, freezing the NFS and causing the issue.
At this point we tried opening the PSL shutter to recover the IMC. The shutter would not open and we suspected the vacuum interlock was still tripped... and indeed there was an uncleared error in the VAC screen. So with Koji's guidance we walked to the c1vac near the HV station and did the following at ~ 5:13 PM -->
We made sure that P1a (main vacuum pressure) was dropping and before continuing we decided to look back to see what the nominal vacuum state was that we should try to restore.
We are currently searching the two systems for differences to see if we can narrow down the culprit of the failure.
We found the files that took up the excess space in the chiara filesystem (see Attachment 1). They were error files from the summary pages, ~ 50 GB in size or so, located under /home/cds/caltech/users/public_html/detcharsummary/logs/. We manually removed them, then copied the rest of the summary page contents into the main file system drive (this is to preserve the information backup before it gets deleted by the cron job at the end of today), and checked carefully to identify why these files were so large in the first place.
We then copied the /detcharsummary directory from /media/40mBackup into /home/cds to match the two disks.
Came in at ~ 9 PT this morning to find the IFO "down". The IMC had lost its lock ~ 6 hours before, at about 03:00 AM. Nothing seemed like an obvious cause: there was no record of increased seismic activity, all suspensions were damped and no watchdog had tripped, and the pressure trends (checked as in recent pressure incidents) show nominal behavior (Attachment #1). What happened?
Anyway, I simply tried reopening the PSL shutter, and the IMC caught its lock almost immediately. I then locked the arms, and everything seems fine for now.
For the past couple of days, the 0.1 to 0.3 Hz RMS seismic noise along BS-X has increased. Attachment 1 shows the hour trend over the last ~ 10 days. We'll keep monitoring it, but one thing to note is how uncorrelated it seems to be with the other frequency bands. The vertical axis in the plot is in um/s.
[yehonathan, paco, anchal]
We attempted to find any symptoms of actuation problems in the PRMI configuration when actuated through the BS and PRM.
Our logic was to check the angular (PIT and YAW) actuation transfer functions in the 30 to 200 Hz range by injecting appropriately (f^2) enveloped excitations at the SUS-ASC EXC points and reading them back using the SUS_OL (oplev) channels.
From the controls, we first restored the PRMI carrier to bring the PRM and BS to their nominal alignment, then disabled the LSC output (we don't need the PRMI to be locked), and then turned off the damping from the oplev control loops to avoid suppressing the excitations.
We used diaggui to measure the magnitudes of the 4 transfer functions PRM_PIT, PRM_YAW, BS_PIT, and BS_YAW, as shown below in Attachments #1 through #4. We used the oplev calibrations to plot the magnitudes of the TFs in units of urad/count, and verified the nominal 1/f^2 scaling for all of them. The coherence was made as close to 1 as possible by adjusting the amplitude to 1000 counts, and is also shown below. A dip at 120 Hz is probably due to line noise. We are also assuming that the oplev QPDs have a relatively flat response over the frequency range below.
Here are some plots from analyzing the C1:LSC-XARM calibration. The experiment is done with the XARM (POX) locked; a single line is injected at C1:LSC-XARM_EXC at f0 with an amplitude determined empirically using the diaggui and awggui tools. For the analysis detailed in this post, f0 = 19 Hz, amp = 1 count, and gain = 300 (anything larger in amplitude would break the lock, and anything lower in frequency would not show up because of loop suppression). Clearly, from Attachment #3 below, the calibration line can be detected with SNR > 1.
We read the test point right after the excitation, C1:LSC-XARM_IN2, which, in a simplified loop, will carry the excitation suppressed by 1 - OLTF, where OLTF is the open-loop transfer function. The line is on for 5 minutes, and then we read for another 5 minutes with the excitation off to have a reference. Both the calibration and reference signal time series are shown in Attachment #1 (decimated by 8). The corresponding ASDs are shown in Attachment #2. Then, we demodulate at 19 Hz, apply a 30 Hz 4th-order Butterworth LPF, and get I and Q timeseries (shown in Attachment #3). Even though they look similar, the Q is centered about 0.2 counts, while the I is centered about 0.0. From this time series, we can of course show the noise ASDs in Attachment #3.
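A hedged reconstruction of that demodulation step (the sampling rate is an assumption; the channel data would come from nds):

import numpy as np
import scipy.signal as sig

def demod_iq(x, fs, f0=19.0, f_lp=30.0, order=4):
    # Mix down at f0 and low-pass with a 4th-order Butterworth at 30 Hz.
    t = np.arange(len(x)) / fs
    b, a = sig.butter(order, f_lp, fs=fs)
    I = 2 * sig.lfilter(b, a, x * np.cos(2 * np.pi * f0 * t))
    Q = 2 * sig.lfilter(b, a, x * np.sin(2 * np.pi * f0 * t))
    return I, Q

# e.g. I, Q = demod_iq(xarm_in2, fs=16384.0)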
The ASD uncertainty bands in the last plot are statistical estimates and depend on the number of segments used in estimating the PSD. A thing to note is that the noise features surrounding the signal ASD around f0 are translated into the ASDs of the demodulated signals, but now around DC. I guess from Attachment #3 there is no difference in the noise spectra around the calibration line with and without the excitation. This is what I would have expected from a linear system; if there were a systematic contribution, I would expect it to show up at very low frequencies.
We had a second go at this with an increased number of averages (from 10 to 100) and higher excitation amplitudes (from 1000 to 10000). We did this to try to reduce the relative uncertainty a la Bendat and Piersol,

    sigma_|G| / |G| = sqrt(1 - gamma^2) / ( |gamma| * sqrt(2 N) )

where gamma^2 and N are the coherence and number of averages respectively. Before, this estimate had given us a ~30% relative uncertainty, and now it has been improved to ~10%. The re-measured TFs are in Attachment #1. We did 4 sweeps for each optic (BS, PRM) and removed the 1/f^2 slope for clarity. We note a factor of ~4 difference in the magnitude of the coil-to-angle TFs between BS and PRM (the actuation strength of the BS is smaller).
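As a quick consistency check of those numbers against the formula above (the coherence value used here is an assumption): going from N = 10 to N = 100 alone buys a factor of sqrt(10) ~ 3.2, which is about the reported 30% -> 10% improvement.

import numpy as np

def rel_err(coh, N):
    # Bendat & Piersol: sigma_|G|/|G| = sqrt(1 - coh) / (|gamma| sqrt(2 N)),
    # with coh = gamma**2 the coherence and N the number of averages.
    return np.sqrt(1 - coh) / np.sqrt(2 * N * coh)

print(rel_err(0.36, 10))    # ~0.30, the earlier ~30%
print(rel_err(0.36, 100))   # ~0.095, i.e. ~10% after 10x more averages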
For future reference:
With complex G, we get a complex error in G using the formula above. To get the uncertainties in magnitude and phase from the real/imaginary uncertainties, we do the following (assuming the noise in the real and imaginary parts of the measured transfer function is incoherent):
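The formulas themselves did not survive into this copy; the standard first-order propagation, which is presumably what was meant, reads:

% first-order propagation for G = a + ib, |G| = sqrt(a^2 + b^2), phi = arctan(b/a)
\sigma_{|G|}^{2} = \frac{a^{2}\sigma_{a}^{2} + b^{2}\sigma_{b}^{2}}{a^{2}+b^{2}},
\qquad
\sigma_{\phi}^{2} = \frac{b^{2}\sigma_{a}^{2} + a^{2}\sigma_{b}^{2}}{\left(a^{2}+b^{2}\right)^{2}}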
Here is a demonstration of the methods leading to the single-(X)arm calibration with its uncertainty budget. The steps towards this measurement are the following:
** Note: We ran the same procedure using dtt (diaggui) to validate our estimates at every point, as well as check our SNR in b and d before taking the ~3.5 hours of data.
We repeated the same procedure as before, but with 3 different lines at 55.511, 154.11, and 1071.11 Hz. We overlay the OLTF magnitudes and phases with our latest model (which we have updated with Koji's help) and include the rms uncertainties as error bars in Attachment #1.
We also plot the noise ASDs of the calibrated OLTF magnitudes at the line frequencies in Attachment #2. These curves are created by calculating the power spectral density of the timeseries of OLTF values at the line frequencies, generated from the demodulated XARM_IN and ETMX_LSC_OUT signals. We have overlaid the TRX noise spectrum here as an attempt to see if we can budget the noise measured in the values of G to the fluctuation in optical gain due to changing power in the arms. We multiplied the transmission ASD by the value of the OLTF at those frequencies, as the transfer function from normalized optical gain to the total transfer function value.
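Schematically, the projection was done like this (a sketch; variable names and segment length are illustrative):

import numpy as np
import scipy.signal as sig

def oltf_noise_projection(trx, fs, G_at_line):
    # ASD of fractional TRX power fluctuations (normalized optical gain),
    # scaled by |G| at the line frequency into OLTF units.
    f, Pxx = sig.welch(trx / np.mean(trx), fs=fs,
                       nperseg=int(1024 * fs))   # ~1 mHz resolution
    return f, np.sqrt(Pxx) * np.abs(G_at_line)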
It is weird that the fluctuation in transmission power at 1 mHz always crosses the total noise in the OLTF value for all calibration lines. This could be an artifact of our data analysis, though.
Even if the contribution of the fluctuating power is correct, there is remaining excess noise in the OLTF to be budgeted.
I have finished assembling the 1U adapters from 8 to 5 DB9 connectors for the satellite amp boxes. One thing I had to "hack" was the corners of the front-panel end of the PCB. Because the PCB was a bit too wide, it wasn't really flush against the front panel (see Attachment #1), so I just filed the corners by ~ 3 mm and covered them with kapton tape to prevent contact between the ground planes and the chassis. After this, I made DB9 cables, connected everything in place, and attached it to the rear panel (Attachment #2). Four units are resting near the CAD machine (next to the bench area); see Attachment #3.
We had a look at the BS actuation. Along the way we created a couple of issues that we fixed. A summary is below.
After Rana left, I did a second pass at the BS actuation. I took TF measurements at the oscillator frequencies noted above using diaggui, and summarize the results below:
This procedure should be done with the PRM as well, using PRCL instead of MICH.
We opened the laser head shutter. Then we scanned around the PMC resonance and locked it. We then opened the PSL shutter, touched up the MC1, MC2, and MC3 alignment (mostly yaw), and managed to lock the IMC. The transmission peaked at ~ 1070 counts (typical is 14000 counts, so at 10% of PSL power we would expect a peak transmission of 1400 counts, meaning there might still be some room for improvement). The lock was engaged at ~ 16:53; we'll see how long it lasts.
There should be IR light entering the BSC!!! Be alert and wear laser safety goggles when working there.
We should be ready to move forward into the TT2 + PR3 alignment.
[Ian, Paco, Anchal]
We turned off the BSC oplev laser by turning the key counterclockwise. Ian then removed the following optics from the east end in the BSC:
We placed them in the center-front area of the XEND flow bench.
After the new 1Y0 rack was placed near the 1Y1 rack by Chub and Anchal, today we worked on the 1Y1 rack. We removed some rails from spaces ~ 25-30. We then drilled a pair of ~ 10-32 thru-holes in some L-shaped bars to help support the weight of the c1sus2 machine. The hole spacing was set to 60 cm; this number is not constant across all racks. Then we mounted c1sus2. While doing this, Paco's knee clicked some of the video MUX box buttons (29 and 8 at least). We then opened the rack's side door to investigate the DC power strips on it before removing stuff. We did power off the DC33 supplies there. No connections were made, to allow us to keep building this rack.
When we came back to the control room, we noticed 3 of the 4 analog video feeds for the test masses had gone down... why?
Update Tue Nov 2 18:52:39 2021
Removed all Sorensen power supplies from this rack except for the 12 VDC one; that one got pushed to the top of the rack and is still powering the cameras.
In reference to Koji's concern (see previous elog), we have completely removed Sorensen power supplies from 1Y1. We added a 12 V / 2 A AC-to-DC power supply for the cameras and verified it works. We stripped off all unused hardware from shutters and other power lines in the strips, and saved the relays and fuses.
We then mounted the SR2, PR3, and PR2 Sat Amps, the 1Y1 Sat Amp adapter, and the C1SUS2 AA (2) and AI (3) boards. We made all the connections we could with the cables from the test stand, as well as power connections to an 18 VDC power strip.
[Paco, Ian, Tega]
We moved the white rack (formerly unused, along the YARM) to a position between 1X3 and 1X4. For this task we temporarily removed the HEPAs near the enclosures, but have since restored them.
We have completed modifications and testing of the HAM Coil driver D1100687 units with serial numbers listed below. The DCC tree reflects these changes and tests (Run/Acq modes transfer functions).
** A fix had to be done on the DC power supply for these. The units' regulated power boards were not connected to the raw DC power, so the cabling had to be modified accordingly (see Attachment #1)
Two coil drivers have been installed on 1Y0 (slots 6 and 7, for the LO1 SOS). All connections have been made from the DAC, AI board, and DAC adapter to the coil driver and Sat Amp box. Then, with no SOS load installed, all return connections have been made from the Sat Amp box through the ADC adapter and AA board to the ADC. We will continue this work tomorrow and try to test everything before closing the loop for the LO1 suspension.
Upgraded zita's ubuntu and restarted the striptool script.
Continued working on 1Y0. Added coil drivers for LO2, AS1, and AS4. Anchal made additional labels for cables and boxes. We lined up all the cables, connected the different units, and powered them without major events.
Continued working on the 1Y1 rack. Populated the 6 coil drivers and made all connections between the sat amps, AA chassis, DAC, and ADC adapters for the SR2, PR2, and PR3 suspensions. Powered all boxes and labeled them and the cables where needed. Near the end, we had to increase the current limit on the positive-rail Sorensen (+18 V) from ~ 7 to > 8.0 A to feed all the instruments. We also increased the negative-rail (-18 V) current limit proportionally.
We think we are ready, electronics-wise, for all the new SOS on this side.
We continued suspending PR3 today. Yehonathan and Paco suspended the thick optic in its adapter. After fixing the nominal height and undoing any residual roll angle (see Attachments 1 and 2 for pictures), we noticed a problem with the pitch angle, so we inserted the counterweights all the way in. Nevertheless, we soon found out that we needed to shift one of the two counterweights to the back of the adapter (so one on each side) in order to tare the pitch angle. This is a newly experienced maneuver that may apply to further thick optics.
After roughly taring the pitch angle, we noted another issue. The wedge (~ 1 deg) on the optic made it such that the protruding socket heads on the thick side bumped against the lower clamp (not the earthquake stop tip itself). Attachments #4 and 5 show the before/after situation, which was solved provisionally by replacing the socket-head screws with lower-profile (flat-head) screws in situ. Again, this operation was highly delicate and specific to wedged thick optics, so we should keep it in mind for future SOS.
Another issue that we had with the new thick-optic adapters is that, for some reason, there is a recess in the upper backside of the adapter (attachment coming soon). This makes the upper back EQ stop too short to touch the adapter. We replaced it with a longer screw. When inserted, it doesn't really hit the back of the adapter; rather, it touches the corner of the recess, stopping the optic with friction.
While all this was happening, Anchal started mounting AS4 on its adapter. After one of the magnets broke off, he switched to another one and succeeded. This is the next target for suspension. We still need to check the orientation of the wedge. Furthermore, we started a gluing session in the afternoon to prepare as much as possible for further SOS work during the week: 3 side magnets were glued to side blocks, and 3 magnets were glued to 3 adapters that were missing 1 magnet each.
In the afternoon, Yehonathan and Paco set up the QPD and did all the usual balancing, and then Anchal took the data, the result of which is shown in Attachment #3. The major peaks are located at 723 mHz, 953 mHz, and 1.05 Hz, very similar to the case of the thin-optic adapters.
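For reference, the peak frequencies can be read off the free-swing spectra with something like this (a sketch; the sampling rate, resolution, and prominence threshold are assumptions):

import numpy as np
import scipy.signal as sig

def suspension_peaks(x, fs, fmin=0.3, fmax=2.0):
    # Welch PSD with ~4 mHz resolution, then pick the prominent peaks.
    f, Pxx = sig.welch(x, fs=fs, nperseg=int(256 * fs))
    band = (f > fmin) & (f < fmax)
    idx, _ = sig.find_peaks(Pxx[band], prominence=10 * np.median(Pxx[band]))
    return f[band][idx]   # e.g. ~[0.723, 0.953, 1.05] Hz here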
Anchal progressed with OSEM installation and engraving, and Yehonathan glued the counterweight setscrew in place. After securing the EQ stops and wrapping the wires in foil, we declare PR3 ready to be installed.
Added input filters, input matrix, damping filters, output matrix, and coil filters, and copied the state over from ITMX into the LO2 screen in anticipation of damping.
The ITMY 10" flange with 10 DSUB-25 feedthroughs has been installed with the cables connected at the in-vac side. This is the first of two flanges, and includes 5 cables ordered vertically in stacks of 3 & 2 for [[OMC-DCPDs, OMC-QPDs, OMC-PZTs/Pico]] and [[SRM1, SRM2]] respectively from right to left. During installation, two 12-point silver plated bolts were stripped, so Chub had to replace them.
The ITMY 10" flange with 4 DSUB-25 feedthroughs has been installed with the cables connected at the in-vac side. This is the second of two flanges, and includes 4 cables ordered vertically in stacks of 2 & 2 for [[AS1-1, AS1-2, AS4-1, AS4-2]] respectively. No major incidents during this one, except maybe a note that all the bolts were extremely dirty and covered with gunk, so we gave a quick swipe with wet cloths before reinstalling them.
[Paco, Yehonathan, Chub]
The BS chamber 10" flange with 4 DSUB-25 feedthroughs has been installed with the cables connected at the in-vac side. This is the second of two flanges, and includes 4 cables ordered vertically in stacks of 2 & 2 for [[LO2-1, LO2-2, PR3-1, PR3-2]] respectively.
The Xilinx RFSoC 2x2 board arrived right before the winter break, so this is kind of an overdue elog. I unboxed it; it came with two ~15 cm SMA M-M cables, an SD card preloaded with the ARM processor image and a few overlay jupyter notebooks, a two-piece AC/DC adapter (kind of like a laptop charger), and a USB 3.0 cable. I got a 1U box and lid and assembled a prototype box to hold the board, though this need not be a permanent solution (see Attachment #1). I drilled 4 thru-holes in the bottom of the box to hold the board in place. A large component exceeds the 1U height, but is thin enough to clear one of the thin slits at the top (I believe this is a fuse of some sort). Then I found a brand-new front panel and drilled 4x 13/32 thru-holes in the front for SMA F-F connectors.
I powered the board and quickly accessed its tutorial notebooks, including a spectrum analyzer and signal generators, just to quickly check that it works normally. The board has 2 fast RF ADCs and 2 RF DACs exposed, 12-bit and 14-bit respectively, running at up to 4 GSps.
Added input filters, input matrix, damping filters, output matrix, and coil filters, and copied the state over from LO1 into the SR2, PR2, and PR3 screens in anticipation of damping.
[Tega, Anchal, Paco]
We started working on SR2 installation. Preliminary work involved
That was pretty much it. After identifying the cabling situation, we proceeded to bring SR2 from the cleanroom. The magnets and wires remained well through their travel.
Connected the OSEMs one by one, starting from the top, right to left (Pin 1):
1st connector: LL -> UR -> UL
2nd connector: LR -> SD** (we had some trouble here: the first time we made the connection we didn't see any signal; after a brief review of the cables, the sat amp unit, the cables again with Koji, and the sat amp again, we found that a connection was not made on the front of the SR2 SatAmp box, after which we saw the sensor signals).
Loosened all the OSEMs, took them out, and noted the full-bright readings:
After finishing the initial SD OSEM tuning, we moved on to UL, and then to UR, but we noticed that UR was not able to drop to its target value of ~13000 counts, even when the OSEM face was < 1 mm from the adapter (see Attachments #1-2). Apart from becoming harder to push in, it became apparent that the dark level (full shadow) is not consistent with ~ 0 counts; is there an offset coming from the SatAmp? We quickly checked the OSEM by replacing it in situ with a known-working one from the cleanroom batch, but the issue persisted. We decided to stop here, as we suspect the SatAmp box might have an issue.
[Paco, Tega, Anchal]
Today, we started work on the AS4 SOS by checking the OSEM and cable. Swapping the connection preserved the failure (no counts), so we swapped the long OSEM for a short one that we knew was working, and this solved the issue. We proceeded to put a "yellow label" long OSEM in place, and then noticed the top plate had issues with the OSEM threads. We took out the bolt and inspected its thread, and even borrowed the screw from the PR2 plate, but saw the same effect. Even using a silver-plated setscrew such as the SD OSEM one resulted in trouble... Then we decided to stop trying weird things, and took our sweet time to remove the UL and UR OSEMs, the top earthquake stops, and the top plate, carefully, in situ. Then we continued the surgery by installing a new top plate borrowed from the clean room (the only difference being that the OSEM aperture barrels are teflon (?) rather than stainless). The operation was a success, and we moved on to OSEM installation.
After reaching a good place with the OSEM installation, where most sensors were at the 50% brightness level and we were happy with the damping action (!!), we fixed all the EQ stops and proceeded to push the SOS to its nominal placement. Then, upon releasing the EQ stops, we found that the sensor readings had shifted.
Turns out, the shifting was likely due to the table level. Because I didn't take care the first time to "zero" the level of the table as I tuned the OSEMs, the installation was b o g u s. So today I took time to,
a) Shift AS4 close to the center of the table.
b) Use the clean level tool to pick a plane of reference. To do this, I iteratively placed two counterweights (from the ETMX flow bench) at two locations on the breadboard such that I nominally balanced the table in this configuration to some reference plane z0. The counterweight placement is of course temporary; as soon as we make further changes, such as the final placement of the AS4 SOS or the installation of AS1, their positions will need to change to recover z = z0.
c) Install OSEMs until I was happy with the damping. ** Here, I noticed the new suspension screens had been misconfigured (probably c1sus2 rebooted and we don't have any BURT snapshot), so I quickly restored the input and output matrices.
SUSPENSION STATUS UPDATED HERE
AS1 was installed in the ITMY chamber today. For this I moved AS4 to its nominal final placement and clamped it down with a single dog clamp. Then, I placed AS1 near the center of the table, and quickly checked AS4 could still be damped. After this, I leveled the table using a heavier/lighter counterweight pair.
Once things were leveled, I proceeded to install the AS1 OSEMs. The LL, UL, and UR OSEMs had a bright level of 27000 counts, while SD and LR were at 29500 and 29900 respectively. After a while, I managed to damp all degrees of freedom at around the 50% levels listed below (a quick check of the percentages follows the list), and decided to stop.
UL 27000 -> 16000
UR 27000 -> 13800
LL 27000 -> 14600
LR 29900 -> 14900
SD 29500 -> 12900
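The tuning targets are just half the full-bright values; a quick check of the readings above:

bright = {"UL": 27000, "UR": 27000, "LL": 27000, "LR": 29900, "SD": 29500}
final  = {"UL": 16000, "UR": 13800, "LL": 14600, "LR": 14900, "SD": 12900}
for osem, b in bright.items():
    print(f"{osem}: target {b / 2:.0f}, got {final[osem]}"
          f" ({100 * final[osem] / b:.0f}% of bright)")
# UL 59%, UR 51%, LL 54%, LR 50%, SD 44% -- all reasonably near half-light.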
Free swinging test set to trigger
AS1 is set to go through a free-swinging test at 3 am tonight. We have used this script (Git/40m/scripts/SUS/InMatCalc/freeSwing.py) reliably in the past, so we expect no issues; it has an error-catching block (sketched schematically below) to restore all changes at the end of the test or if something goes wrong.
To access the test, on allegra, type:
Then you can kill the script if required with Ctrl-C; it will restore all changes while exiting.
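Schematically, the restore-on-exit behavior is just a try/finally around the test (NOT the actual freeSwing.py; the callables here are stand-ins):

import time

def free_swing_test(snapshot, kick, restore, duration=600):
    saved = snapshot()        # save damping gains, switch states, etc.
    try:
        kick()                # excite the optic with damping off
        time.sleep(duration)  # ring-down while the DAQ records
    finally:
        restore(saved)        # runs on normal exit, error, or Ctrl-C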
This morning, I went into the ITMY chamber to inspect AS1 after the free-swinging test failed. Indeed, as forecast by Anchal, the top front EQ stop was slightly touching, which means AS1 was not properly installed before. I proceeded to back it off well beyond any chance of touching the optic, and did the same for all the other stops, most of which were already recessed significantly. Finally, the OSEMs changed accordingly to produce a PITCHed optic (the top front EQ stop was slightly biasing the pitch angle), so I redid the installation until the levels were around the 14000-count region. After damping AS1 relatively quickly, I closed the ITMY chamber.
For some reason the free-swing test showed only one resonance peak (see Attachment 1). This probably happened because one of the earthquake stops was touching the optic. Maybe after the table balancing, the table moved a little over its long relaxation time, and by the time the free-swing test was performed at 3 am, one of the earthquake stops was touching the optic. We need to check this when we open the chamber next.
[Paco, Anchal, Tega]
After installing the short OSEMs into PR2, we moved it into the ITMX chamber. While Tega loaded some of the damping filters and other settings, we took time to balance the heavily tilted ITMX chamber table. After running out of counterweights, Anchal had to go into the cleanroom and bring the SOS stands, two of which had to be stacked near the edge of the breadboard. Finally, we connected the OSEMs following the canonical order
LL -> UR-> UL
LR -> SD
But we found that UR was reading -14000 counts. So we did a quick swap of the UR and UL sensors and verified that the OSEM itself is working, just in a different channel... so it's time to debug the electronics (probably the PR2 Sat Amp?)...
PR2 Sat Amp preliminary investigation:
Thanks to Koji's hotfix on the PR2 SatAmp box last evening, this morning I was able to finish the OSEM installation for PR2. PR2 is now fully damped. Then, I realized that with the extreme rebalancing done in ITMX chamber, LO1 needed to be reinstalled, so I proceeded to do that. I verified all the degrees of freedom remained damped.
I think all SOS are nominally damped, so we are 90% done with suspension installation!
Started recovering from scheduled (Feb 05) power outage. Basically, time-reversing through this list.
== Office area ==
== Main network stations ==
== Control workstations ==
== PSL + Vertex instruments ==
== YEND and XEND instruments ==
== YARM Electronic racks ==
== XARM Electronic racks ==
* Top priority, this needs to be fixed.
** Non-priority, but to be debugged
We proceeded to align the MC optics because all the offsets in the MC_ALIGN screen had been zeroed. After opening the PSL shutter, we used values from last year as a reference and tried to steadily recover the alignment. The IMC lock remains at large.
We have been realigning the IMC since last Friday (02/11). Today we made some significant progress (still at high input power), but the IMC autolocker is unable to engage a stable mode lock. We have made some changes to reach this point, including re-centering the MC1 REFL beam on the CCD, centering the MC2 trans QPD (using flashes), and centering the beam on the MC REFL RFPD. The IMC is flashing at a peak transmission > 50% of its max (near the 14,000-count average of 2021), and all PDs seem to be working OK... We will keep the PSL shutter closed (especially with high input power) for now.