ID | Date | Author | Type | Category | Subject
  4201 | Tue Jan 25 20:42:46 2011 | Osamu | Update | Green Locking | Slow servo for green laser

I implemented a slow servo for the green laser thermal control in c1scx.mdl. Ch6 and Ch7 of the ADC and Ch6 of the DAC are assigned to this servo as below:

 

Ch6 of ADC: PDH error signal

Ch7 of ADC: PZT feedback signal

Ch6 of DAC: feedback signal to the thermal control of the green laser

 

Note that the old EPICS thermal control cable is no longer hooked up.

I made a simple MEDM screen (...medm/c1scx/master/C1SCX_BCX_SLOW.adl), linked from the GREEN MEDM screen (C1GCV.adl) on the sitemap.

During this work, I noticed that some of the EPICS switches are not recovered by autoburt: the filter switches of SUSPOS, SUSPIT, SUSYAW, SDSEN, and all the coil outputs for ETMX.

I have no idea how to fix them; probably Joe knows. I guess the other suspensions have the same problem.

  4214 | Thu Jan 27 21:10:47 2011 | Osamu | Update | 40m Upgrading | Calibrated noise of green

I calibrated the noise spectrum of the green lock.

1. Measurement of conversion factor of ADC input from V to ct:

As preparation, I first measured the conversion factor at the ADC input of C1:GCX1SLOW_SERVO1.

It was measured with the AI ch6 output (the output of C1:GCX1SLOW_SERVO2, driven at 1 Hz with 1000 ct amplitude, i.e. 2000 ct_pp) connected directly to the AA ch7 input (the input of C1:GCX1SLOW_SERVO1). The amplitude at the AI ch6 output was 616 mVpp measured on an oscilloscope, and C1:GCX1SLOW_SERVO1_IN1 read 971.9 ct_pp, so the conversion factor is 0.616/971.9 = 6.338e-4 [V/ct].

2. Injection of a calibration signal:

With the green laser locked to the cavity using the fast PZT and slow thermal paths, I injected a 100 Hz, 1000 ct excitation at ETMX ALS. The signal measured at C1:GCX1SLOW_SERVO1_IN1 was 5.314 ct_rms. Using the factor above this converts to 3.368e-3 V_rms, and then to 3368 Hz_rms using a PZT efficiency of 1 MHz/V. This efficiency came from Koji's knowledge, but he says it might have a 30% or larger error. If somebody gets a more accurate value, put it into the V-to-Hz conversion here.

3. Conversion:

The green laser frequency f = c/532 nm = 5.635e14 Hz fluctuates by the above 3368 Hz_rms, so the fractional frequency fluctuation is 3368/5.635e14 = 5.977e-12. With an arm length of 37.5 m, this corresponds to a cavity length fluctuation of 5.977e-12 * 37.5 m = 2.241e-10 m_rms for the 100 Hz, 1000 ct excitation at ETMX ALS.

4. Results:

Finally, since 5.314 ct corresponds to 3368 Hz and 2.241e-10 m, the conversion factors from ct to Hz and from ct to m are:

633.8[Hz/ct] @ C1:GCX1SLOW_SERVO1

4.217e-11[m/ct] @ C1:GCX1SLOW_SERVO1
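For reference, the arithmetic chain above fits in a few lines. This is just a sketch in Python using the numbers quoted in this entry; the 1 MHz/V PZT efficiency is the assumed value with its ~30% (or larger) uncertainty.

    # Conversion-factor arithmetic for the green calibration (numbers from this entry)
    c = 299792458.0            # speed of light [m/s]
    lam = 532e-9               # green wavelength [m]
    L_arm = 37.5               # arm length [m]

    # Step 1: ADC V/ct factor from the loop-back measurement
    v_pp = 616e-3              # AI ch6 output [Vpp], from the oscilloscope
    ct_pp = 971.9              # counts read at C1:GCX1SLOW_SERVO1_IN1 [ct_pp]
    v_per_ct = v_pp / ct_pp    # ~6.338e-4 V/ct

    # Step 2: 100 Hz, 1000 ct line at ETMX ALS
    ct_rms = 5.314                         # measured [ct_rms]
    pzt_eff = 1e6                          # assumed PZT efficiency [Hz/V]
    f_rms = ct_rms * v_per_ct * pzt_eff    # ~3368 Hz_rms

    # Step 3: frequency -> cavity length
    f_green = c / lam                      # ~5.635e14 Hz
    dL_rms = (f_rms / f_green) * L_arm     # ~2.24e-10 m_rms

    print(f_rms / ct_rms, dL_rms / ct_rms)  # ~633.8 Hz/ct, ~4.2e-11 m/ct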

 

5. Calibration:

You can measure the green noise spectrum at C1:GCX1SLOW_SERVO1_IN1 during lock and multiply by the factors above to convert to Hz or m.

This calibration is valid above the crossover frequency of the slow and fast servos (around 0.5 Hz) and below the UGF of the fast servo (around 4 kHz).

I show an example of calibrated green noise.

20110127_Calibrated_grrennoise.jpg

20110127_Calibrated_grrennoise.pdf

Each color shows a different bandwidth; of course, the calibration factor does not depend on the bandwidth. The noise around 1.2 Hz is 6e-8 Hz/rtHz, which sounds a bit too good by a factor of ~2. The VCO efficiency might be too small.

 

Note that there are several assumptions in this calibration;

1. The TF from the actual PZT voltage to the PZT monitor is assumed to be 1 at all frequencies. This is probably not a bad assumption, because the circuit diagram shows the monitor point is taken directly from the PZT voltage.

2. However, the above assumption is not correct if the input impedance of the AI is low.

3. As mentioned above, the PZT efficiency of 1 MHz/V might be wrong.

 

I also measured a TF from C1:SUS-ETMX_ALS_EXC to C1:GCX1SLOW_SERVO1_IN1. It is the same as the calibration injection above, but swept over a wide frequency range. It shows the clear f^-2 slope of the suspension.

20110127_TF_ETMXSUSEXC_to_PZTOUT.pdf

 

Files are located in /users/osamu/20110127_Green_calibration.

  12469 | Mon Sep 5 19:57:24 2016 | Osamu | Update | SUS | OSEM adjustments

Hi 40m people,

As Rana says, the bounce mode does not matter, or rather, there is nothing we can do about it. Generally speaking, the bounce mode cannot be damped with the 40m SUS setup. Some tweaks may damp the bounce mode with a resonant-gain filter or similar, but I don't think that is the proper way.

As Rana has also already said, the important thing is to find a good OSEM orientation so that the LED beam hits the magnet. Even if the magnet is not located at the center of the OSEM hole, you can still find the optimal orientation that makes the LED beam hit the center of the magnet by rotating the OSEM.

The only document I know of is the old T040054, in which Shihori summarized how to adjust the matrix at the 40m. A very bad input/output matrix may introduce some trouble, but even a roughly adjusted matrix should still be fine.

I will be at Caltech on September 12-14. If I can help with anything, I am willing to work with you!

  4985 | Mon Jul 18 21:06:32 2011 | PSL Table Guardian | Omnistructure | PSL | Don't leave the PSL table open, unattended!!!!!!!!!!!!11111

I found the PSL table left open, and unattended again. 

As far as I know, Jamie and Jenne (working on the LSC rack, so no lasers / optics work involved) have been the only ones in the IFO room for several hours now. 

I'm going to start taking laser keys, or finding other suitable punishments.  Like a day of lab cleanup chores or something.  Seriously, don't leave the PSL table open if you're not actively working on it.

  15693 | Wed Dec 2 12:35:31 2020 | Paco | Summary | Computer Scripts / Programs | TC200 python driver

Given the similarities between the MDT694B (single channel piezo controller) and TC200 (temperature controller) serial interfaces, I added the pyserial driver here

*Warning* this first version of the driver remains untested
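For context, a minimal sketch of what such a pyserial driver might look like is below. Like the driver itself it is untested; the port, baud rate, and command strings (e.g. "tact?") are assumptions and should be checked against the instrument manual.

    import serial

    class TC200:
        """Bare-bones serial wrapper in the spirit of the MDT694B/TC200 drivers."""
        def __init__(self, port="/dev/ttyUSB0", baud=115200, timeout=1.0):
            self.ser = serial.Serial(port, baudrate=baud, timeout=timeout)

        def ask(self, cmd):
            # send a CR-terminated command and return the raw reply
            self.ser.write(cmd.encode("ascii") + b"\r")
            return self.ser.readline().decode("ascii", errors="replace").strip()

        def get_actual_temp(self):
            return self.ask("tact?")   # hypothetical temperature query

        def close(self):
            self.ser.close()

    # usage: tc = TC200("/dev/ttyUSB0"); print(tc.get_actual_temp()); tc.close()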

  15957 | Wed Mar 24 09:23:49 2021 | Paco | Update | SUS | MC3 new Input Matrix

[Paco]

  • Found IMC locked upon arrival
  • Loaded newest MC3 Input Matrix coefficients using /scripts/SUS/InMatCalc/writeMatrix.py after unlocking the MC, and disabling the watchdog. 
  • Again, the sensor signals started increasing after the WD was re-enabled with the new input matrix, so I manually tripped it, restored the old matrix, and recovered the MC lock. 
  • Something is still off with this input matrix that makes the MC3 loop unstable.
  16152 | Fri May 21 12:12:11 2021 | Paco | Update | NoiseBudget | AUX PDH loop identification

[Anchal, Paco]

We went into the 40m to identify where the XARM PDH loop control elements are. We didn't touch anything, but this is to note that we went in twice, at 10 AM and 11:10 AM.

  16156 | Mon May 24 10:19:54 2021 | Paco | Update | General | Zita IOO strip

Updated IOO.strip on Zita to show WFS2 pitch and yaw trends (C1:IOO-WFS2_PIY_OUT16 and C1:IOO-WFS2_YAW_OUT16) and changed the colors slightly to have all pitch trends in the yellow/brown band and all yaw trends in the pink/purple band.

No one says, "Here I am attaching a cool screenshot, becuz else where's the proof? Am I right or am I right?"

Mon May 24 18:10:07 2021 [Update]

After waiting for some traces to fill the screen, here is a cool screenshot (Attachment 1). At around 2:30 PM the MC unlocked, and the BS_Z (vertical) seismometer readout jumped. It has stayed like this for the whole afternoon... The MC eventually caught its lock and we even locked XARM without any issue, but something happened in the 10-30 Hz band. We will keep an eye on it during the evening...

Tue May 25 08:45:33 2021 [Update]

At approximately 02:30 UTC (so 07:30 PM yesterday) the 10-30 Hz seismic step dropped back... It lasted 5 hours, mostly causing BS motion along Z (vertical) as seen by the minute trend data in Attachment 2. Could the MM library have been shaking? Was the IFO snoring during its afternoon nap?

Attachment 1: Screenshot_from_2021-05-24_18-09-37.png
Attachment 2: 24and25_05_2021_PEM_BS_10_30.png
  16176 | Wed Jun 2 17:50:50 2021 | Paco | Update | Equipment loan | Borrow red cart

I borrowed the little red cart 🛒 to help clear the path for new optical tables in B252 West Bridge. Will return once I am done with it.  

Attachment 1: IMG_20210602_172858.jpg
  16180 | Thu Jun 3 17:49:46 2021 | Paco | Update | Equipment loan | Borrow red cart

Returned today.

Quote:

I borrowed the little red cart 🛒 to help clear the path for new optical tables in B252 West Bridge. Will return once I am done with it.  

 

  16219 | Tue Jun 22 16:52:28 2021 | Paco | Update | SUS | ADC/Slow channels issues

After sliding the alignment bias around and browsing through the elog searching for "stuck", we concluded that the ITMX OSEMs needed to be freed. To do this, the procedure is to slide the alignment bias back and forth ("shaking") and then, as the OSEM readings start to vary, enable the damping. We did just this, and then slowly restored the alignment bias sliders to their original positions. Attachment 1 shows the ITMX OSEM sensor input monitors throughout this procedure.


In the end, since the MC had trouble catching lock after opening the PSL shutter, I tried burt-restoring c1ioo to 2021/Jun/17/06:19/c1iooepics.snap, but the problem persists.

Attachment 1: shake_and_damp.png
  16234 | Thu Jul 1 11:37:50 2021 | Paco | Update | General | restarted c0rga

Physically rebooted the c0rga workstation after failing to ssh into it (even though it responded to ping...). The RGA itself seems to be off, though. The last log with data appears to date back to 2020 Nov 10, and reasonable spectra only appear in logs before 11-05. Gautam verified that the RGA was intentionally turned off then.

  16248 | Thu Jul 15 14:25:48 2021 | Paco | Update | LSC | CM board

[gautam, paco]

We tested the CM board by implementing the high-bandwidth IR lock (single arm). In preparation for this test we temporarily connected the POY11_Q_MON output to the CM board IN1 input and checked the YARM POY transfer function by running the AA_YARM_TEMPLATE under users/Templates/LSC/LSC_loops/YARM_POY/. We ran the YARM dither to optimize TRY and thereby maximize the optical gain. Then we proceeded as follows:

  • From the LSC --> CM Servo screen, we controlled the REFL 1 Gain (dB) slider (nominal +25) and MC Servo IN2 Gain (dB) slider (nominal -32 dB) to transfer the low bandwidth (digital) control to the high bandwidth (analog) control of the YARM.
  • During this game, we monitored the C1:LSC-POY11_I_ERR_DQ & C1:LSC-CM_SLOW_OUT_DQ error signal channels for saturation, oscillations, or stability.
  • Once a set of gains was successful in maintaining a stable lock, we measured the OLTF using SR 785 to track the UGF as we mix the two paths.
  • Once the gains had increased, the boost and super-boost stages could be enabled as well.

Ultimately, our ability to progressively increase the control bandwidth of the YARM is a proxy for the CM board working properly. Attachment 1 shows the OLTF progression as we increased the loop's UGF. Note how, as we approached the maximum measured UGF of ~22 kHz, the phase margin decreased, signifying reduced stability.


At the end of this measurement, at about 15:45, I restored the CM board IN1 input and disconnected the POY11_Q_MON.

gautam: the conclusion here is that the CM board seems to work as advertised, and it's not solely responsible for not being able to achieve the IR handoff. 

Attachment 1: high_BW_TFs.pdf
  16254 | Thu Jul 22 16:06:10 2021 | Paco | Update | Loss Measurement | Loss measurement

[yehonathan, anchal, paco, gautam]

We finished estimating the XARM and YARM losses. The hardware configuration from yesterday remains, but we repeated the measurements because we realized our REFL55_I_ERR and REFL55_Q_ERR signals, representing the PD520 and MC_TRANS, were scaled, offset, and rotated in a way that wasn't trivially undone by our postprocessing scripts... Another caveat we encountered today was the need to add a "macroscopic" misalignment to the ITMs during the measurement to avoid any accidental resonances.

The final measurements were done with 16 repetitions, 30 second duration, and the logfiles are under scripts/lossmap_scripts/armLoss/logs/20210722_1423.txt and scripts/lossmap_scripts/armLoss/logs/20210722_1513.txt

Finally, the estimated YARM loss is 39 ± 7 ppm, while the estimated XARM loss is 38 ± 8 ppm. This is consistent with the inferred PRC gain from Monday and a PRM loss of ~ 2%.


Future measurements may want to look into slow drift of the locked vs misaligned traces (systematic errors?) and a better way of estimating the statistical uncertainty (e.g. by splitting the raw time traces into short segments)

  16257 | Mon Jul 26 17:34:23 2021 | Paco | Update | Loss Measurement | Loss measurement

[gautam, yehonathan, paco]

We went back to the loss data from last week and more carefully estimated the ARM loss uncertainties.

Previously, we simply stitched all N = 16 repetitions into a single time series and computed the loss; see Attachment 1 for such YARM loss data. The mean and stdev of this long time series give the loss quoted last time. We knew that the uncertainty was almost certainly overestimated, as different realizations need not sample similar alignment conditions and are sensitive to different imperfections (e.g. beam angular motion, unnormalizable power fluctuations, etc.).


Today we analyzed the locked/misaligned cycles individually. From each cycle, it is possible to obtain a mean value of the loss as well as a std dev *across the duration of the trace*, but because we have a measurement ensemble, it is also possible to obtain an ensemble-averaged mean and a statistical uncertainty estimate *across the independent cycle realizations*. While the mean values don't change much, the latter estimate gives a much smaller statistical uncertainty. We obtain an XARM loss of 37.6 ± 2.6 ppm and a YARM loss of 38.9 ± 0.6 ppm. To make the distinction clearer, Attachment 2 and Attachment 3 show the YARM and XARM loss measurement ensembles, respectively, with single-realization (time-series) standard deviations as vertical error bars and the 1-sigma statistical uncertainty estimate as a filled color band. Note that the XARM loss drifts across different realizations (which happen to be ordered in time), which we think arises from inconsistent ASS dither alignment convergence. This is yet to be tested.


For budgeting the excessive uncertainties from a single locked/misaligned cycle, we could look at beam pointing, angular drift, power, and systematic differences in the paths from both reflection signals. We should be able to estimate the power fluctuations by looking at the recorded arm transmissions, the recorded MC transmission, PD technical noise, etc... and we might be able to correlate recorded oplev signals with the reflection data to identify angular drift. We have not done this yet.
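To make the distinction between the "stitched" and ensemble uncertainty estimates above concrete, here is a short Python sketch; the per-cycle arrays are placeholders standing in for the real locked/misaligned loss data.

    import numpy as np

    # loss_cycles: one loss time series [ppm] per locked/misaligned cycle (placeholder data)
    loss_cycles = [np.random.normal(38, 5, 900) for _ in range(16)]

    # (a) "stitched" estimate: one long time series (what we quoted last time)
    stitched = np.concatenate(loss_cycles)
    print("stitched :", stitched.mean(), "+/-", stitched.std())

    # (b) ensemble estimate: mean of each cycle, then mean and standard error across cycles
    cycle_means = np.array([c.mean() for c in loss_cycles])
    n = len(cycle_means)
    print("ensemble :", cycle_means.mean(), "+/-", cycle_means.std(ddof=1) / np.sqrt(n))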

Attachment 1: LossMeasurement_RawData.pdf
Attachment 2: YARM_loss_stats.pdf
Attachment 3: XARM_loss_stats.pdf
  16266 | Thu Jul 29 14:51:39 2021 | Paco | Update | Optical Levers | Recenter OpLevs

[yehonathan, anchal, paco]

Yesterday around 9:30 pm, we centered the BS, ITMY, ETMY, ITMX, and ETMX oplevs (in that order) on their respective QPDs by turning the last mirror before each QPD. We did this after running the ASS dither for the XARM/YARM configurations, to use as the alignment reference, and in preparation for PRFPMI lock acquisition, which we had to stop due to an earthquake around midnight.

  16267 | Mon Aug 2 16:18:23 2021 | Paco | Update | ASC | AS WFS MICH commissioning

[anchal, paco]

We picked up AS WFS commissioning for daytime work, as suggested by gautam. In the end we want to commission this for the PRFPMI, but also for PRMI and MICH for completeness. MICH is the simplest, so we are starting here.

We started by restoring the MICH configuration and aligning the AS DC QPD (on the AS table) by zeroing C1:ASC-AS_DC_YAW_OUT and C1:ASC-AS_DC_PIT_OUT. Since the AS WFS gets the AS beam in transmission through a beamsplitter, we had to correct that beamsplitter's alignment to recenter the AS beam onto the AS110 PD (for this we looked at the signal on a scope).

We then adjusted the rotation (R) angles C1:ASC-AS_RF55_SEGX_PHASE_R and delay (D) angles C1:ASC-AS_RF55_SEGX_PHASE_D (where X = 1, 2, 3, 4 for each segment) to rotate all of the signal into the I quadrature. We found that this optimized the PIT content on C1:ASC-AS_RF55_I_PIT_OUT and the YAW content on C1:ASC-AS_RF55_I_YAW_OUTMON, which is what we want anyway.

Finally, we set up some simple integrators for these WFS on the C1ASC-DHARD_PIT and C1ASC-DHARD_YAW filter banks, with a pole at 0 Hz, a zero at 0.8 Hz, and a gain of -60 dB (similar to the MC WFS). Nevertheless, when we closed the loop by actuating on the BS ASC PIT and ASC YAW inputs, it seemed like the ASC model outputs are not connected to the BS SUS model ASC inputs, so we might need to edit the model accordingly and restart it.

  16272 | Fri Aug 6 17:10:19 2021 | Paco | Update | IMC | MC rollercoaster

[anchal, yehonatan, paco]

For whatever reason (i.e. we don't really know) the MC unlocked into a weird state at ~ 10:40 AM today. We first tried to find a likely cause as we saw it couldn't recover itself after ~ 40 min... so we decided to try a few things. First we verified that no suspensions were acting weird by looking at the OSEMs on MC1, MC2, and MC3. After validating that the sensors were acting normally, we moved on to the WFS. The WFS loops were disabled the moment the IMC unlocked, as they should. We then proceeded to the last resort of tweaking the MC alignment a bit, first with MC2 and then MC1 and MC3 in that order to see if we could help the MC catch its lock. This didn't help much initially and we paused at about noon.

At about 5 pm, we resumed since the IMC had remained locked to some higher order mode (TEM-01 by the looks of it). While looking at C1:IOO-MC_TRANS_SUMFILT_OUT on ndscope, we kept on shifting the MC2 Yaw alignment slider (steps = +-0.01 counts) slowly to help the right mode "hop". Once the right mode caught on, the WFS loops triggered and the IMC was restored. The transmission during this last stage is shown in Attachment #1.

Attachment 1: MC2_trans_sum_2021-08-06_17-18-54.png
  16275 | Wed Aug 11 11:35:36 2021 | Paco | Update | LSC | PRMI MICH orthogonality plan

[yehonathan, paco]

Yesterday we discussed a bit about working on the PRMI sensing matrix.

In particular, we will start with the "issue" of non-orthogonality in MICH when actuated by BS + PRM. Yesterday afternoon we played a little with the oscillators and ran sensing lines in MICH and PRCL (gains of 50 and 5 respectively) in the times spanning [1312671582 -> 1312672300], [1312673242 -> 1312677350] for PRMI carrier and [1312673832 -> 1312674104] for PRMI sideband. Today we realized that we could have enabled the notchSensMat filter in FM10, which is a notch filter exactly at the oscillator frequency, and run a lower gain to get a similar SNR. We want to investigate this in more depth anyway, so here is our tentative plan of action, which implies redoing these measurements:

Task: investigate orthogonality (or lack thereof) in the MICH when actuated by BS & PRM
    1) Run sensing MICH and PRCL oscillators with PRMI Carrier locked (remember to turn NotchSensMat filter on).
    2) Analyze data and establish the reference sensing matrix.
    3) Write a script that performs steps 1 and 2 in a robust and safe way.
    4) Scan the C1:LSC-LOCKIN_OUTMTRX, MICH to BS and PRM elements around their nominal values.
    5) Scan the MICH and PRCL RFPD rotation angles around their nominal values.

We also talked about the possibility that the sensing matrix is strongly frequency dependent, such that measuring it at 311 Hz doesn't give us an accurate estimate of it. Is it worthwhile to try and measure it at lower frequencies using an appropriate notch filter?


Wed Aug 11 15:28:32 2021 Updated plan after group meeting

- The problem may be in the actuators, since the orthogonality seems fine when actuating on ITMX/ITMY, so we should instead focus on measuring the actuator transfer functions, using the OpLevs for example (with the same high-frequency excitation; the OSEMs won't work above 10 Hz).

  16277 | Thu Aug 12 11:04:27 2021 | Paco | Update | General | PSL shutter was closed this morning

Thu Aug 12 11:04:42 2021 Arrived to find the PSL shutter closed. Why? Who? When? How? No elog, no fun. I opened it, IMC is now locked, and the arms were restored and aligned.

  16280 | Mon Aug 16 23:30:34 2021 | Paco | Update | CDS | AS WFS commissioning; restarting models

[koji, ian, tega, paco]

With the remote/local assistance of Tega/Ian last Friday, I made changes to the c1sus model, connecting the C1:ASC model outputs (found within a block in c1ioo) to the BS and PRM suspension inputs (pitch and yaw). Koji then reviewed these changes today and made me notice that no changes were actually needed, since the blocks were already in place and connected to the right ports; the model probably just hadn't been rebuilt...

So, today we ran "rtcds make" and "rtcds install" for the c1ioo and c1sus models (in that order), but the whole system crashed. We spent a great deal of time restarting the machines and their processes, and struggled quite a lot with setting the dates to match the GPS times. What seemed to work in the end was to follow the format of the date on the fb1 machine and try to match the timing to the sub-second level. This is especially tricky when done by hand, so the whole task is tedious. We nevertheless completed the reboot for almost all the models except c1oaf (which tends to make things crashy), since we won't need it right away for the tasks ahead. One potentially annoying issue we found while manually rebooting c1iscey is that one of its network ports is loose (the ethernet cable won't click in place) and it appears to use this link to boot (!!), so for a while this machine just wasn't coming back up.

Finally, as we restored the suspension controls and reopened the shutters, we noticed a great deal of misalignment to the point no reflected beam was coming back to the RFPD table. So we spent some time verifying the PRM alignment and TT1 and TT2 (tip tilts) and it turned out to be mostly the latter pair that were responsible for it. We used the green beams to help optimize the XARM and YARM transmissions and were able to relock the arms. We ran ASS on them, and then aligned the PRM OpLevs which also seemed off. This was done by giving a pitch offset to the input PRM oplev beam path and then correcting for it downstream (before the qpd). We also adjusted the BS OpLev in the end.


Summary; the ASC BS and PRM outputs are now built into the SUS models. Let the AS WFS loops be closed soon!


Addenda by KA
- Upon the RTS restarting,

  • Date/Time adjustment
    sudo date --set='xxxxxx'
  • If the time on the CDS status MEDM screen for each IOP matches the FB local time, we ran
    rtcds start c1x01
    (or c1x02, etc)
  • Every time we restart the IOPs, fb was restarted by
    telnet fb1 8083
    > shutdown

    and restarted mx_stream from the CDS screen because these actions change the "DC" status.

- Today we once succeeded in restarting the vertex machines. However, the RFM signal transmission failed, so the two end machines were power cycled as well as c1rfm, but this made all the machines go RED again. Hell...

- We checked the PRM oplev. The spot was around the center but was clipped. This made us so confused. Our conclusion was that the oplev was like that before the RTS reboot.

  16287 | Mon Aug 23 10:17:21 2021 | Paco | Summary | Computers | system reboot glitch

[paco]

At 09:34 PST I noted a glitch in the control room as the machines went down, except for c1ioo. Briefly, the video feeds disappeared from the screens, though the screens themselves didn't lose power. At first I thought this was some kind of power glitch, but upon checking with Jordan, it was most likely related to some system crash. Coming back to the control room, I could see the MC reflection beam swinging, but unfortunately all the FE models had come down. I noticed that the DAQ status channels were blank.

I ssh'd into c1ioo without a problem and ran "rtcds stop c1ioo c1als c1omc", then "rtcds restart c1x03" to do a soft restart. This worked, but the DAQ status was still blank. I then tried to ssh into c1sus and c1lsc without success; similarly, c1iscex and c1iscey were unreachable. I went and did a hard restart on c1iscex by switching it off, then its extension chassis, then unplugging the power cords, then inverting these steps, and could then ssh into it from rossa. I ran "rtcds start c1x01" and saw the same blank DAQ status. I noticed the elog was also down... so nodus was also affected?

[paco, anchal]

Anchal got on zoom to offer some assistance. We discovered that the fb1 and nodus were subject to some kind of system reboot at precisely 09:34. The "systemctl --failed" command on fb1 displayed both the daqd_dc.service and rc-local.service as loaded but failed (inactive). Is it a good idea to try and reboot the fb1 machine? ... Anchal was able to bring elog back up from nodus (ergo, this post).

[paco]

Although it probably needs the DAQ service on the fb1 machine to be up and running, I tried running the scripts/cds/rebootC1LSC.sh script. This didn't work. I tried running sudo systemctl restart daqd_dc from the fb1 machine without success. Running systemctl reset-failed "worked" for the daqd_dc and rc-local services on fb1, in the sense that they no longer showed up in systemctl --failed, but they remained inactive (dead) when running systemctl status on them. Following elog 15303, I succeeded in restarting the daqd services; it turned out I needed to manually start the open-mx and mx services on fb1. I re-ran the rebootC1LSC script without success. The script fails because some machines need to be rebooted by hand.
 

  16293 | Tue Aug 24 18:11:27 2021 | Paco | Update | General | Time synchronization not really working

tl;dr: NTP servers and clients were never synchronized, are not synchronizing even with ntp... nodus is synchronized but uses chronyd; should we use chronyd everywhere?


Spent some time investigating the ntp synchronization. In the morning, after Anchal set up all the ntp servers / FE clients, I tried restarting the RTS IOPs with no success. Later, with Tega, we tried the usual manual matching of the date between the c1iscex and fb1 machines, iterating over different n-second offsets from -10 to +10, also without success.

This afternoon, I tried debugging the FE and fb1 timing differences. For this I inspected the ntp configuration file under /etc/ntp.conf, both on fb1 and in /diskless/root.jessie/etc/ntp.conf (for the FE machines), and tried different combinations with and without nodus, with and without restrict lines, all while looking at the output of sudo journalctl -f on c1iscey. Every time I changed the ntp config file, I restarted the service using sudo systemctl restart ntp.service. Looking through some online forums, people suggested basic pinging to see if the ntp servers were up (and broadcasting their times over the local network), but this failed to run (read-only filesystem), so I went into fb1 and ran sudo chroot /diskless/root.jessie/ /bin/bash to allow me to change file permissions.

The test was first done with /bin/ping, which couldn't even open a socket (root access needed), so I ran chmod 4755 /bin/ping, then ssh'd into c1iscey and pinged the fb1 machine successfully. After this, I ran chmod 4755 /usr/sbin/ntpd so that the ntp daemon would have no problem reaching the server, in case that was blocking the synchronization. I exited the chroot shell and restarted the ntp daemon on c1iscey, but ntpstat still showed an unsynchronised status. I also learned that, when running an ntp query with ntpq -p, an asterisk is appended to the line of any server the client has succeeded in synchronizing to. This was not the case on any FE machine... and looking at fb1, this was also not true. Although the fb1 peers are correctly listed as nodus, the Caltech ntp server, and a broadcast (.BCST.) server from local time (meant to serve the FE machines), none appears to have synchronized... Going one level further, on nodus I checked the time synchronization servers by running chronyc sources; the output shows

controls@nodus|~> chronyc sources
210 Number of sources = 4
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^* testntp1.superonline.net      1  10   377   280  +1511us[+1403us] +/-   92ms
^+ 38.229.59.9                   2  10   377   206  +8219us[+8219us] +/-  117ms
^+ tms04.deltatelesystems.ru     2  10   377   23m    -17ms[  -17ms] +/-  183ms
^+ ntp.gnc.am                    3  10   377   914  -8294us[-8401us] +/-  168ms

I then ran chronyc clients to see if fb1 was listed (as I would have expected), but the output shows this --

Hostname                   Client    Peer CmdAuth CmdNorm  CmdBad  LstN  LstC
=========================  ======  ======  ======  ======  ======  ====  ====
501 Not authorised

So clearly chronyd succeeded in synchronizing nodus' time to whatever server it was pointed at, but downstream from there neither fb1 nor any FE machine seems to be synchronizing properly. It may be as simple as figuring out the correct ntp configuration file, or switching to chronyd on all machines (for the sake of homogeneity?).
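As a possible debugging aid, the offsets of the various machines can also be queried directly from a workstation with something like the sketch below; it assumes the ntplib package is available and that each machine actually answers NTP queries (hostnames are the ones mentioned in this entry).

    import ntplib

    client = ntplib.NTPClient()
    for host in ["fb1", "nodus", "c1iscex", "c1iscey", "c1sus", "c1lsc"]:
        try:
            resp = client.request(host, version=3, timeout=2)
            print("%-10s offset = %+.3f s" % (host, resp.offset))
        except Exception as exc:
            print("%-10s no NTP reply (%s)" % (host, exc))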

  16298 | Wed Aug 25 17:31:30 2021 | Paco | Update | CDS | FB is writing the frames with a year old date

[paco, tega, koji]

After invaluable assistance from Jamie in fixing this yearly offset in the gps time reported by cat /proc/gps, we managed to restart the real time system correctly (while still manually synchronizing the front end machine times). After this, we recovered the mode cleaner and were able to lock the arms with not much fuss.

Nevertheless, Tega and I noticed some weird noise in C1:LSC-TRX_OUT which was not present in the YARM transmission, and which is present even in the absence of light (we unlocked the arms and still saw it on the ndscope, as shown in Attachment #1). It seems to affect the XARM and, in general, lock acquisition...

We took a quick spectrum with diaggui (Attachment #2), but it doesn't look normal; there seems to be broadband excess noise with a prominent 1 kHz component. We will probably look into it in more detail.

Attachment 1: TRX_noise_2021-08-25_17-40-55.png
Attachment 2: TRX_TRY_power_spectra.pdf
  16300 | Thu Aug 26 10:10:44 2021 | Paco | Update | CDS | FB is writing the frames with a year old date

[paco, ]

We went over to the X end to check what was going on with the TRX signal. We spotted that the ground terminal coming from the QPD was loosely touching the handle of one of the computers on the rack. When we detached it completely from the rack, the noise was gone (Attachment 1).

We taped this terminal so it doesn't touch anything accidentally. We don't know if this is the best solution, since it probably needs a stable voltage reference. In the Y end those ground terminals are connected to the same point on the rack. The other ground terminals in the X end are just cut.

We also took the PSD of these channels (Attachment 2). The noise seems to be gone, but TRX is still a bit noisier than TRY. Maybe we should set up a proper ground for the X arm QPD?


We saw that the X end station ALS laser was off. We turned it on along with the crystal oven and re-enabled the temperature controller; green light immediately appeared. We are now working to restore the ALS lock. After running XARM ASS we were unable to lock the green laser, so we went to the XEND and moved the piezo X ALS alignment mirrors until we maximized the transmission in the right mode. We then locked the ALS beams on both arms successfully. It could very well be that the PZT offsets were reset by the power glitch. The XARM ALS still needs some tweaking; its level is ~ 25% of what it was before the power glitch.

Attachment 1: Screenshot_from_2021-08-26_10-09-50.png
Attachment 2: TRXTRY_Spectra.pdf
  16303 | Mon Aug 30 17:49:43 2021 | Paco | Summary | LSC | XARM POX OLTF

Used diaggui to get the OLTF in preparation for optimal system identification / calibration. The excitation was injected at the control point of the XARM loop, C1:LSC-XARM_EXC. Attachment 1 shows the TF (red scatter) taken from 35 Hz to 2.3 kHz with 201 points. The swept-sine excitation had an envelope amplitude of 50 counts at 35 Hz, 0.2 counts at 100 Hz, and 0.2 counts at 200 Hz. The purple continuous line overlays the model for the OLTF using all the digital control filters as well as a simple 1-degree-of-freedom plant (single pole at 0.99 Hz). Note the disagreement of the OLTF "model" at higher frequencies, which we may be able to improve upon using vector fitting.

Attachment 2 shows the coherence (part of this initial measurement was to identify an appropriately large frequency range where the coherence is good before we script it).
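For the record, a minimal sketch of how this kind of OLTF model can be assembled is below; the 0.99 Hz pole is the one quoted above, while the overall gain and the digital-filter response are placeholders that would come from the actual Foton filter coefficients.

    import numpy as np

    f = np.logspace(np.log10(35), np.log10(2300), 201)   # measurement band used above
    s = 2j * np.pi * f

    # simple 1-degree-of-freedom "plant": a single real pole at 0.99 Hz
    f_pole = 0.99
    plant = 1.0 / (1.0 + s / (2 * np.pi * f_pole))

    # K(f): digital control filters; placeholder here, in practice built from the
    # Foton SOS coefficients of the XARM filter bank
    K = np.ones_like(f)

    g0 = 200.0                   # placeholder overall loop gain
    oltf_model = g0 * K * plant

    mag_db = 20 * np.log10(np.abs(oltf_model))
    phase_deg = np.angle(oltf_model, deg=True)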

Attachment 1: XARM_POX_OLTF.pdf
Attachment 2: XARM_POX_Coh.pdf
  16307 | Thu Sep 2 17:53:15 2021 | Paco | Summary | Computers | chiara down, vac interlock tripped

[paco, koji, tega, ian]

Today in the morning the name server / network file system running on chiara failed. This caused the donatella/pianosa/rossa shell prompts to hang forever. It also made sitemap crash, and even dropping into a bash shell and just listing files from some directory on the file system froze the computer. Remote ssh sessions on nodus had the same symptoms.

A little after 1 pm, we started debugging this issue with help from Koji. He suggested we hook up a monitor, keyboard, and mouse to chiara, since it should still work locally even if something with the NFS (network file system) failed. We did this, and then tried for a while to unmount /dev/sdc1/ from /home/cds/ (the main file system) and mount /dev/sdb1/ from /media/40mBackup (the backup copy) so that they swap places. We had no trouble unmounting the backup drive, but only succeeded in unmounting the main drive with a "lazy" unmount ("umount -l"). Running "df", we could see that the disk space was 100% used, with only ~ 1 GB of free space, which may have been the cause of the issue. After swapping these disks by editing /etc/fstab, we rebooted chiara and recovered the shell prompts on all workstations, sitemap, etc., thanks to the backup drive being mounted. We then started investigating what caused the main drive to fill up so quickly, and noted that, weirdly, its usage was now at 85%, about 500 GB less than before (after reboot and remount), so some large file was probably dumped onto chiara, freezing the NFS and causing the issue.

At this point we tried opening the PSL shutter to recover the IMC. The shutter would not open and we suspected the vacuum interlock was still tripped... and indeed there was an uncleared error in the VAC screen. So with Koji's guidance we walked to the c1vac near the HV station and did the following at ~ 5:13 PM -->

  1. Open V4; apart from a brief pressure spike in PTP2, everything looked ok so we proceeded to
  2. Open V1; P2 spiked briefly and then started to drop. Then, Koji suggested that we could
  3. Close V4; but we saw P2 increasing by a factor of~ 10 in a few seconds, so we
  4. Reopened V4;

We made sure that P1a (main vacuum pressure) was dropping and before continuing we decided to look back to see what the nominal vacuum state was that we should try to restore.

We are currently searching the two systems for differences to see if we can narrow down the culprit of the failure.

 

  16313 | Thu Sep 2 21:49:03 2021 | Paco | Summary | Computers | chiara down, vac interlock tripped

[tega, paco]

We found the files that took up the excess space in the chiara filesystem (see Attachment 1). They were error files from the summary pages, ~ 50 GB or so in size, located under /home/cds/caltech/users/public_html/detcharsummary/logs/. We manually removed them, then copied the rest of the summary page contents into the main file system drive (to preserve the backup before it gets deleted by the cron job at the end of today), and checked carefully to identify why these files were so large in the first place.

We then copied the /detcharsummary directory from /media/40mBackup into /home/cds to match the two disks.
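For future reference, a few lines of Python are enough to flag this kind of runaway file; the path is the one from this entry and the 1 GB threshold is arbitrary.

    import os

    root = "/home/cds/caltech/users/public_html/detcharsummary/logs"
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                size_gb = os.path.getsize(path) / 1e9
            except OSError:
                continue
            if size_gb > 1.0:
                print("%6.1f GB  %s" % (size_gb, path))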

Attachment 1: 2021-09-02_21-51-15.png
  16320 | Mon Sep 13 09:15:15 2021 | Paco | Update | LSC | MC unlocked?

Came in at ~ 9 PT this morning to find the IFO "down". The IMC had lost its lock ~ 6 hours before, so at about 03:00 AM. Nothing seemed like the obvious cause; there was no record of increased seismic activity, all suspensions were damped and no watchdog had tripped, and the pressure trends similar to those in recent pressure incidents show nominal behavior (Attachment #1). What happened?

Anyways, I simply tried reopening the PSL shutter, and the IMC caught its lock almost immediately. I then locked the arms and everything seems fine for now.

Attachment 1: VAC_2021-09-13_09-32-45.png
  16329 | Tue Sep 14 17:19:38 2021 | Paco | Summary | PEM | Excess seismic noise in 0.1 - 0.3 Hz band

For the past couple of days the 0.1 to 0.3 Hz RMS seismic noise along BS-X has increased. Attachment 1 shows the hour trend in the last ~ 10 days. We'll keep monitoring it, but one thing to note is how uncorrelated it seems to be from other frequency bands. The vertical axis in the plot is in um / s

Attachment 1: SEIS_2021-09-14_17-33-12.png
  16343 | Mon Sep 20 12:20:31 2021 | Paco | Summary | SUS | PRM and BS Angular Actuation transfer function magnitude measurements

[yehonathan, paco, anchal]

We attempted to find any symptoms for actuation problems in the PRMI configuration when actuated through BS and PRM.

Our logic was to check the angular (PIT and YAW) actuation transfer functions in the 30 to 200 Hz range by injecting appropriately enveloped (f^2) excitations at the SUS-ASC EXC points and reading back using the SUS_OL (oplev) channels.

From the controls, we first restored the PRMI carrier to bring the PRM and BS to their nominal alignment, then disabled the LSC output (we don't need PRMI to be locked), and then turned off the damping from the oplev control loops to avoid suppressing the excitations.

We used diaggui to measure the 4 transfer functions magnitudes PRM_PIT, PRM_YAW, BS_PIT, BS_YAW, as shown below in Attachments #1 through #4. We used the Oplev calibrations to plot the magnitude of the TFs in units of urad / counts, and verified the nominal 1/f^2 scaling for all of them. The coherence was made as close to 1 as possible by adjusting the amplitude to 1000 counts, and is also shown below. A dip at 120 Hz is probably due to line noise. We are also assuming that the oplev QPDs have a relatively flat response over the frequency range below.
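As a simple cross-check of the quoted 1/f^2 scaling, one can fit the log-log slope of the calibrated magnitude; the arrays below are placeholders for the data exported from diaggui.

    import numpy as np

    # f: excitation frequencies [Hz]; mag: TF magnitude [urad/count] exported from diaggui
    f = np.array([30.0, 50.0, 80.0, 120.0, 200.0])      # placeholder
    mag = 1e3 / f**2                                     # placeholder obeying 1/f^2

    slope, intercept = np.polyfit(np.log10(f), np.log10(mag), 1)
    print("log-log slope = %.2f (expect ~ -2 above the pendulum resonance)" % slope)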

Attachment 1: PRM_PIT_ACT_TF.pdf
Attachment 2: PRM_YAW_ACT_TF.pdf
Attachment 3: BS_PIT_ACT_TF.pdf
Attachment 4: BS_YAW_ACT_TF.pdf
  16352 | Tue Sep 21 11:13:01 2021 | Paco | Summary | Calibration | XARM calibration noise

Here are some plots from analyzing the C1:LSC-XARM calibration. The experiment is done with the XARM (POX) locked; a single line is injected at C1:LSC-XARM_EXC at f0, with an amplitude determined empirically using the diaggui and awggui tools. For the analysis detailed in this post, f0 = 19 Hz, amp = 1 count, and gain = 300 (anything larger in amplitude would break the lock, and anything lower in frequency would not show up because of loop suppression). Clearly, from Attachment #3 below, the calibration line can be detected with SNR > 1.

We read the test point right after the excitation, C1:LSC-XARM_IN2, which in a simplified loop carries the excitation suppressed by a factor of (1 - OLTF), where OLTF is the open loop transfer function. The line is on for 5 minutes, and then we read for another 5 minutes with the excitation off to have a reference. Both the calibration and reference signal time series are shown in Attachment #1 (decimated by 8). The corresponding ASDs are shown in Attachment #2. Then, we demodulate at 19 Hz, apply a 30 Hz, 4th-order Butterworth LPF, and get I and Q time series (shown in Attachment #3). Even though they look similar, Q is centered around 0.2 counts while I is centered around 0.0. From these time series we can of course show the noise ASDs in Attachment #3.


The ASD uncertainty bands in the last plot are statistical estimates and depend on the number of segments used in estimating the PSD. A thing to note is that the noise features surrounding the signal ASD around f0 are translated into the ASD in the demodulated signals, but now around dc. I guess from Attachment #3 there is no difference in the noise spectra around the calibration line with and without the excitation. This is what I would have expected from a linear system. If there was a systematic contribution, I would expect it to show at very low frequencies.
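For completeness, the demodulation step described above is sketched below in Python; the 19 Hz line and 30 Hz 4th-order Butterworth LPF are the parameters quoted in this entry, and the sample rate is whatever the decimated data were stored at.

    import numpy as np
    from scipy.signal import butter, filtfilt

    def demod_iq(x, fs, f0=19.0, f_lp=30.0, order=4):
        """Demodulate time series x (sample rate fs) at f0 and low-pass the I/Q outputs."""
        t = np.arange(len(x)) / fs
        i_raw = 2 * x * np.cos(2 * np.pi * f0 * t)
        q_raw = 2 * x * np.sin(2 * np.pi * f0 * t)
        b, a = butter(order, f_lp / (fs / 2))   # low-pass at f_lp
        return filtfilt(b, a, i_raw), filtfilt(b, a, q_raw)

    # e.g. I, Q = demod_iq(xarm_in2, fs=2048.0)  # xarm_in2: decimated C1:LSC-XARM_IN2 data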

Attachment 1: XARM_signal_asd.pdf
Attachment 2: XARM_demod_timeseries.pdf
Attachment 3: XARM_demod_asds.pdf
Attachment 4: XARM_cal_0921_timeseries.pdf
  16358 | Thu Sep 23 15:29:11 2021 | Paco | Summary | SUS | PRM and BS Angular Actuation transfer function magnitude measurements

[Anchal, Paco]

We had a second go at this with an increased number of averages (from 10 to 100) and higher excitation amplitudes (from 1000 to 10000). We did this to try to reduce the relative uncertainty, à la Bendat and Piersol,

\delta G / G = \frac{1}{\gamma \sqrt{n_{\rm avg}}}

where \gamma, n_{\rm avg} are the coherence and number of averages respectively. Before, this estimate had given us a ~30% relative uncertainty and now it has been improved to ~ 10%. The re-measured TFs are in Attachment #1. We did 4 sweeps for each optic (BS, PRM) and removed the 1/f^2 slope for clarity. We note a factor of ~ 4 difference in the magnitude of the coil to angle TFs from BS to PRM (the actuation strength in BS is smaller).


For future reference:

With complex G, we get a complex error in G using the formula above. To get the uncertainty in magnitude and phase from the real-imaginary uncertainties, we do the following (assuming the noise in the real and imaginary parts of the measured transfer function is incoherent):
G = \alpha + i\beta

\delta G = \delta\alpha + i\delta \beta

\delta |G| = \frac{1}{|G|}\sqrt{\alpha^2 \delta\alpha^2 + \beta^2 \delta \beta^2}

\delta(\angle G) = \frac{1}{|G|^2}\sqrt{\alpha^2 \delta\alpha^2 + \beta^2 \delta\beta^2} = \frac{\delta |G|}{|G|}
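In code, the propagation above is only a couple of lines (using the expressions exactly as written here):

    import numpy as np

    def tf_mag_phase_err(G, dG):
        """Magnitude and phase uncertainties from a complex TF G and complex error dG."""
        a, b = G.real, G.imag
        da, db = dG.real, dG.imag
        absG = np.abs(G)
        d_mag = np.sqrt((a * da)**2 + (b * db)**2) / absG
        d_phase = d_mag / absG          # radians, per the last relation above
        return d_mag, d_phase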

Attachment 1: BS_PRM_ANG_ACT_TF.pdf
  16363 | Tue Sep 28 16:31:52 2021 | Paco | Summary | Calibration | XARM OLTF (calibration) at 55.511 Hz

[anchal, paco]

Here is a demonstration of the methods leading to the single (X)arm calibration with its budget uncertainty. The steps towards this measurement are the following:

  1. We put a single line excitation through the C1:SUS-ETMX_LSC_EXC at 55.511 Hz, amp = 1 counts, gain = 300 (ramptime=10 s).
  2. With the arm locked, we grab a long timeseries of the C1:LSC-XARM_IN1_DQ (error point) and C1:SUS-ETMX_LSC_OUT_DQ (control point) channels.
  3. We assume the single arm loop to have the four blocks shown in Attachment #1, A (actuator + sus), plant (mainly the cavity pole), D (detection + electronics), and K (digital control).
    1. At this point, Anchal made a model of the single arm loop including the appropriate filter coefficients and other parameters. See Attachments #2-3 for the split and total model TFs.
    2. Our line actually probes a TF from point b (error point) to point d (control point). We multiplied our measurement by the model open-loop TF from b to d to get the complete OLTF.
    3. Our initial estimate from documents and the elog got the overall loop shape correct, but it was off by an overall gain factor. This could be due to a wrong assumption about the RFPD transimpedance or the analog gains of the AA or whitening filters. We have absorbed this factor into the RFPD transimpedance, but this needs to be checked (if we really care).
  4. We demodulate the decimated time series (final sampling rate ~ 2.048 kHz) to get I & Q for both the b and d signals. From these and our model for K, we estimate the OLTF. Attachment #4 shows the time series of its magnitude and phase.
  5. Finally, we compute the ASD for the OLTF magnitude. We plot it in Attachment #5 together with the ASD of the XARM transmission (C1:LSC-TRX_OUT_DQ) times the OLTF to estimate the optical gain noise ASD (this last step was a quick attempt at budgeting the calibration noise).
    1. For each ASD we used N = 24 averages, from which we estimate rms (statistical) uncertainties which are depicted by error bands (\pm \sigma) around the lines.

** Note: We ran the same procedure using dtt (diaggui) to validate our estimates at every point, as well as check our SNR in b and d before taking the ~3.5 hours of data.
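A condensed sketch of steps 4-5 is below, assuming the complex (I + iQ) demodulated error-point (b) and control-point (d) data are already in hand; K_f0 stands for the modeled digital-control TF value used to complete the loop as described above, and N = 24 is the number of averages quoted.

    import numpy as np
    from scipy.signal import welch

    def oltf_timeseries(b_iq, d_iq, K_f0):
        """Per-sample estimate of the OLTF at the line frequency from demodulated data."""
        G = (d_iq / b_iq) * K_f0        # measured b->d ratio completed with the modeled K
        return np.abs(G), np.angle(G)

    def asd_with_err(x, fs, n_avg=24):
        """ASD of x with a rough 1-sigma statistical band from n_avg averages."""
        f, pxx = welch(x, fs=fs, nperseg=len(x) // n_avg)
        asd = np.sqrt(pxx)
        return f, asd, asd / np.sqrt(n_avg)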

Attachment 1: OLTF_Calibration_Scheme.jpg
Attachment 2: XARM_POX_Lock_Model_TF.pdf
Attachment 3: XARM_OLTF_Total_Model.pdf
Attachment 4: XARM_OLTF_55p511_Hz_timeseries.pdf
Attachment 5: Gmag_55p511_Hz_ASD.pdf
  16369 | Thu Sep 30 18:04:31 2021 | Paco | Summary | Calibration | XARM OLTF (calibration) with three lines

[anchal, paco]

We repeated the same procedure as before, but with 3 different lines at 55.511, 154.11, and 1071.11 Hz. We overlay the OLTF magnitudes and phases with our latest model (which we have updated with Koji's help) and include the rms uncertainties as errorbars in Attachment #1.

We also plot the noise ASDs of the calibrated OLTF magnitudes at the line frequencies in Attachment #2. These curves are created by computing the power spectral density of the time series of OLTF values at the line frequencies, generated from the demodulated XARM_IN and ETMX_LSC_OUT signals. We have overlaid the TRX noise spectrum here in an attempt to see whether the measured noise in G can be budgeted to the fluctuation in optical gain due to changing power in the arms. We multiplied the transmission ASD by the value of the OLTF at those frequencies, as the transfer function from normalized optical gain to the total transfer function value.

It is weird that the fluctuation in transmission power at 1 mHz always crosses the total noise in the OLTF value for all calibration lines. This could be an artifact of our data analysis, though.

Even if the contribution of the fluctuating power is correct, there is remaining excess noise in the OLTF to be budgeted.

Attachment 1: XARM_OLTF_Model_and_Meas.pdf
Attachment 2: Gmag_ASD_nb_withTRX.pdf
  16377 | Mon Oct 4 18:35:12 2021 | Paco | Update | Electronics | Satellite amp box adapters

[Paco]

I have finished assembling the 1U adapters from 8 to 5 DB9 conn. for the satellite amp boxes. One thing I had to "hack" was the corners of the front panel end of the PCB. Because the PCB was a bit too wide, it wasn't really flush against the front panel (see Attachment #1), so I just filed the corners by ~ 3 mm and covered with kapton tape to prevent contact between ground planes and the chassis. After this, I made DB9 cables, connected everything in place and attached to the rear panel (Attachment #2). Four units are resting near the CAD machine (next to the bench area), see Attachment #3.

Attachment 1: pcb_no_flush.jpg
Attachment 2: 1U_assembly.jpg
Attachment 3: fourunits.jpg
  16383 | Tue Oct 5 20:04:22 2021 | Paco | Summary | SUS | PRM and BS Angular Actuation transfer function magnitude measurements

[Paco, Rana]

We had a look at the BS actuation. Along the way we created a couple of issues that we fixed. A summary is below.

  1. First, we locked MICH. While doing this, we used the /users/Templates/ndscope/LSC/MICH.yml ndscope template to monitor some channels. I edited the yaml file to look at C1:LSC-ASDC_OUT_DQ instead of the REFL_DC. Rana pointed out that the C1:LSC-MICH_OUT_DQ (MICH control point) had a big range (~ 5000 counts rms) and this should not be like that.
  • We tried to investigate the aforementioned issue by looking at the whitening / unwhitening filters, but all the slow EPICS channels were "white" on the MEDM screen. Looking under CDS/slow channel monitors, we realized that both c1iscaux and c1auxey were behaving oddly, so we tried to telnet to c1iscaux without success. Therefore, we followed the recommended wiki procedure of hard-rebooting this machine. While inside the lab looking for this machine, we touched things around the 'rfpd' rack, and once we were back in the control room, we couldn't see any light on the AS port camera. But the whitening filter MEDM screens were back up.
  • While Rana ssh'd into c1auxey to investigate its status and burt-restored the c1iscaux channels, we looked at trends to figure out whether anything had changed (for example TT1 or TT2), but this wasn't the case. We decided to go back inside to check the actual REFL beams and noticed the beam was grossly misaligned (clipping)... so we blamed it on the TTs and, again, went around and moved some stuff around the 'rfpd' rack. We didn't really connect or disconnect anything, but once we were back in the control room, light was coming from the AS port again. This is a weird mystery and we should systematically try to repeat this and fix the actual issue.
  • We restored the MICH and returned to the BS actuation problems. Here, we essentially devised a scheme to inject noise at 310.97 Hz and 313.74 Hz. The choice is twofold: first, these frequencies lie above the MICH loop UGF (~150 Hz); second, they match the sensing matrix OSC frequencies, so they are more appropriate for a comparison.
  • We injected two lines using the BS SUS LOCKIN1 and LOCKIN2 oscillators so we could probe two coils at once, with the LSC loop closed, and read back using the C1:LSC-MICH_IN1_DQ channel. We excited with amplitudes of 1234.0 counts and 1254 counts respectively (to match the ~ 2 % difference in frequency) and noted that the magnitude response in UR was 10% larger than in UL, LL, and LR, which were close to each other at the 2% level.

[Paco]

After Rana left, I did a second pass at the BS actuation. I took TF measurements at the oscillator frequencies noted above using diaggui, and summarize the results below:

TF              UL (310.97 Hz)  UR (313.74 Hz)  LL (310.97 Hz)  LR (313.74 Hz)
Magnitude (dB)  93.20           92.20           94.27           93.85
Phase (deg)     -128.3          -127.9          -128.4          -127.5

This procedure should be done with PRM as well and using the PRCL instead of MICH.

  16429 | Tue Oct 26 16:56:22 2021 | Paco | Summary | BHD | Part I of BHR upgrade - Locked PMC and IMC

[Paco, Ian]

We opened the laser head shutter. Then, we scanned around the PMC resonance and locked it. We then opened the PSL shutter, touched the MC1, MC2 and MC3 alignment (mostly yaw) and managed to lock the IMC. The transmission peaked at ~ 1070 counts (typical is 14000 counts, so at 10% of PSL power we would expect a peak transmission of 1400 counts, so there might still be some room for improvement). The lock was engaged at ~ 16:53, we'll see for how long it lasts.

There should be IR light entering the BSC!!! Be alert and wear laser safety goggles when working there.

We should be ready to move forward into the TT2 + PR3 alignment.

  16437 | Thu Oct 28 16:32:32 2021 | Paco | Summary | BHD | Part IV of BHR upgrade - Removal of BSC eastern optics

[Ian, Paco, Anchal]

We turned off the BSC oplev laser by turning the key counterclockwise. Ian then removed the following optics from the east end in the BSC:

  • OM4-PJ (wires were disconnected before removal)
  • GRX_SM1
  • OM3
  • BSOL1

We placed them in the center-front area of the XEND flow bench.

Photos: https://photos.app.goo.gl/rjZJD2zitDgxBfAdA

  16444 | Tue Nov 2 16:42:00 2021 | Paco | Summary | BHD | 1Y1 rack work

[paco, ian]

After the new 1Y0 rack was placed near the 1Y1 rack by Chub and Anchal, today we worked on the 1Y1 rack. We removed some rails from spaces ~25-30. We then drilled a pair of 10-32 thru-holes in some L-shaped bars to help support the weight of the c1sus2 machine. The hole spacing was set to 60 cm; this number is not constant across all racks. Then we mounted c1sus2. While doing this, Paco's knee clicked some of the video MUX box buttons (29 and 8 at least). We then opened the rack's side door to investigate the DC power strips on it before removing anything, and we powered off the DC33 supplies there. No new connections were made yet, so we can keep building this rack.

When coming back to the control room, we noticed that 3 of the 4 analog video feeds for the test masses had gone down... why?


Next steps:

  • Remove sorensen (x5) power supplies from top of 1Y1 .. what are they actually powering???
  • Make more bars to support heavy IO exp and acromag chassis.
  • Make all connections (neat).

Update Tue Nov 2 18:52:39 2021

  • After turning the Sorensens back on, the ETM/ITM video feeds were restored. I will need to hunt down the power lines carefully before removing these.
  16453 | Mon Nov 8 10:13:52 2021 | Paco | Summary | BHD | 1Y1 rack work; Sorensens removed

[Paco, Chub]

Removed all Sorensen power supplies from this rack except for the 12 VDC one; that one got pushed to the top of the rack and is still powering the cameras.

  16455 | Mon Nov 8 15:29:05 2021 | Paco | Summary | BHD | 1Y1 rack work; New power for cameras

[Paco, Anchal]

In reference to Koji's concern (see previous elog), we have completely removed sorensen power supplies from 1Y1. We added a 12 Volts / 2 Amps AC-to-DC power supply for the cameras and verified it works. We stripped off all unused hardware from shutters and other power lines in the strips, and saved the relays and fuses.

We then mounted SR2, PR3, PR2 Sat Amps, 1Y1 Sat amp adapter, and C1SUS2 AA (2) and AI (3) boards. We made all connections we could make with the cables from the test stand, as well as power connections to an 18 VDC power strip.

  16488 | Tue Nov 30 17:11:06 2021 | Paco | Update | General | Moved white rack to 1X3.5

[Paco, Ian, Tega]

We moved the white rack (formerly unused, along the YARM) to a position between 1X3 and 1X4. For this task we temporarily removed the HEPAs near the enclosures, but have since restored them.

Attachment 1: IMG_8749.JPG
Attachment 2: IMG_8750.JPG
  16499 | Fri Dec 10 15:59:23 2021 | Paco | Update | BHD | Finished Coil driver (even serial number) units tests

[Paco, Anchal]

We have completed modifications and testing of the HAM Coil driver D1100687 units with serial numbers listed below. The DCC tree reflects these changes and tests (Run/Acq modes transfer functions).

SERIAL # TEST result
S2100608 PASS
S2100610 PASS
S2100612 PASS
S2100614 PASS
S2100616 PASS
S2100618 PASS
S2100620 PASS
S2100622 PASS
S2100624 PASS
S2100626 PASS
S2100628 PASS
S2100630 PASS
S2100632 PASS
S2101648** FAIL (Ch1, Ch3 run mode)
S2101650** FAIL (Ch3 run mode)
S2101652** PASS
S2101654** PASS

** A fix had to be done on the DC power supply for these units: their regulated power boards were not connected to the raw DC power, so the cabling had to be modified accordingly (see Attachment #1).

Attachment 1: dc_fail.jpg
  16506 | Tue Dec 14 19:29:42 2021 | Paco | Update | BHD | 1Y0 rack work for LO1

[Paco]

Two coil drivers have been installed in 1Y0 (slots 6 and 7, for the LO1 SOS). All connections have been made from the DAC through the AI board, DAC adapter, and coil driver to the Sat Amp box. With no SOS load installed yet, all return connections have been made from the Sat Amp box through the ADC adapter and AA board to the ADC. We will continue this work tomorrow, and try to test everything before closing the loop for the LO1 suspension.

  16507 | Wed Dec 15 13:57:59 2021 | Paco | Update | Computers | upgraded ubuntu on zita

[Paco]

Upgraded zita's ubuntu and restarted the striptool script.

  16539 | Mon Jan 3 12:05:08 2022 | Paco | Update | BHD | 1Y0 rack work for LO2 AS1 AS4

[Paco, Anchal]

Continue working on 1Y0. Added coil drivers for LO2, AS1, AS4. Anchal made additional labels for cables and boxes. We lined up all cables, connected the different units and powered them without major events.

  16540 | Mon Jan 3 16:46:41 2022 | Paco | Update | BHD | 1Y1 rack work for SR2, PR2, PR3

[Paco, Anchal]

Continued working on 1Y1 rack. Populated the 6 coil drivers, made all connections between sat amp, AA chassis, DAC, and ADC adapters for SR2, PR2, and PR3 suspensions. Powered all boxes and labeled them and cables where needed. Near the end, we had to increase the current limit on the positive rail sorensen (+18 V) from ~ 7 to > 8.0 Amps to feed all the instruments. We also increased the negative (-18 V) current limit proportionally.

We think we are ready for all the new SOS on this side electronics-wise.


Photos: https://photos.app.goo.gl/GviuqLQviSPo1M3G6

  16542 | Tue Jan 4 18:27:23 2022 | Paco | Update | BHD | SOS assembly -- PR3

[yehonathan, paco, anchal]

We continued suspending PR3 today. Yehonathan and Paco suspended the thick optic in its adapter. After setting the nominal height and removing any residual roll angle (see Attachments 1, 2 for pictures), we noticed a problem with the pitch angle, so we inserted the counterweights all the way in. Nevertheless, we soon found that we needed to shift one of the two counterweights to the back of the adapter (so one on each side) in order to balance the pitch angle. This is a new maneuver that may apply to further thick optics.

After roughly balancing the pitch angle, we noted another issue. The wedge (~ 1 deg) on the optic meant that the protruding socket heads on the thick side bumped against the lower clamp (not the earthquake stop tip itself). Attachments #4, 5 show the before/after situation, which was solved provisionally by replacing the socket head screws with lower-profile (flat) head screws in situ. Again, this operation was highly delicate and specific to wedged thick optics, so we should keep it in mind for future SOS.

Another issue we had with the new thick optic adapters is that, for some reason, there is a recess in the upper back side of the adapter (attachment coming soon). This makes the upper back EQ stop too short to touch the adapter. We replaced it with a longer screw; when inserted it doesn't really hit the back of the adapter, but rather touches the corner of the recess, stopping the optic with friction.

While all this was happening, Anchal started mounting AS4 on its adapter. After one of the magnets broke off, he switched to another one and succeeded. This is the next target for suspension. We still need to check the orientation of the wedge. Furthermore, we started a gluing session in the afternoon to prepare as much as possible for further SOS during the week. 3 side magnets were glued to side blocks. 3 magnets were glued to 3 adapters that were missing 1 magnet each.

In the afternoon, Yehonathan and Paco set up the QPD and did all the usual balancing, and then Anchal took the free-swinging data, the result of which is shown in Attachment #3. The major peaks are located at 723 mHz, 953 mHz, and 1.05 Hz, very similar to the thin optic adapters.

Anchal progressed with OSEM installation and engraving, and Yehonathan glued the counterweight setscrew in place. After securing the EQ stops and wrapping the wires in foil, we declare PR3 ready to be installed.

Attachment 1: PR3_roll_balance.png
PR3_roll_balance.png
Attachment 2: PR3_magnet_height.png
PR3_magnet_height.png
Attachment 3: FreeSwingingSpectra.pdf
FreeSwingingSpectra.pdf
Attachment 4: PXL_20220104_231742123.jpg
PXL_20220104_231742123.jpg
Attachment 5: PXL_20220104_232809203.jpg
PXL_20220104_232809203.jpg
  16559 | Sat Jan 8 16:01:42 2022 | Paco | Summary | BHD | Part IX of BHR upgrade - Placed LO2 filters

Added input filters, input matrix, damping filters, output matrix, and coil filters, and copied the state over from ITMX into the LO2 screen in anticipation of damping.
