This morning there was a conflict between tpman running on fb40m and kami1. Alex fixed it temporarily, but Rana suggested it was better to move both PCs outside martian. We moved both PCs physically to the control room and connected them to the general network with a local router. I believe they won't conflict anymore, but if you suspect these PCs might cause trouble, please feel free to shut them down.
Today's work summary:
*connected expansion chassis to bscteststand
*obtained signals on dataviewer, dtt for both realtime and past data on bscteststand with 64kHz timing signal
Excitation channels are not shown; only "other" is shown.
qts.mdl should run at 16kHz, but 16kHz timing causes slow updates on dataviewer and failing data acquisition on dtt. We are using 64kHz timing, but is it really correct?
I borrowed an SR785 to measure the AA and AI noise and transfer functions.
We measured the open loop TF for the oplev pitch on ITMX.
All oplev feedback filters were on, the same as before. The original notch filters, which notch the resonance above 10Hz, should be modified after measuring the present resonant frequencies. Up to 10Hz a simple f^2 filter is used, so the notches should not affect this measurement.
The measured upper UGF is about 2Hz with the gain slider at 1, and the lower UGF is 1.3Hz. The phase margin is 40 degrees, so it is not a good idea to increase the gain drastically.
I measured the coherence as well, but I could not find a way to put it on this picture. Anyway, the coherence below 0.6Hz was not so good, around 0.95. This can be improved by using a larger excitation next time.
During this measurement, around the 0.2-0.3Hz points, a small earthquake happened, but the control seemed OK.
We will measure the other TFs (yaw, ETMX, etc.) maybe tomorrow, since ITMX and ETMX will be free swinging tonight.
After Kiwamu had set the free-swinging mode for ITMX and ETMX, I found a big jump in ITMX pitch and yaw. This jump shows up on both the oplev and OSEM plots.
I talked with Kiwamu on the phone; shutting down the suspensions does not add a big offset, so it should not cause a big jump.
We were not sure whether this jump was due to the shutdown, drift, or something else. Anyway, I re-centered the ITMY oplev at 0:57am.
This morning the same thing happened, but in the opposite direction, when Kiwamu reactivated ITMX and ETMX. It turned out that a 1000ct offset existed on the pitch of ITMX. Erasing the offset restored ITMX to its normal position.
However, a big drift exists in the 11-hour plot of ITMX: 0.1->-0.25 at OLPIT, -580->-605 at SUSPIT, and 0.08->0.15 at OLYAW, with no significant drift at SUSYAW. On the other hand, ETMX has no big drift, but has fluctuations on 10-30 minute timescales.
After 6am, both the drift and the fluctuation became, roughly speaking, 10 times larger, probably due to human activity.
This plot shows the ETM oplev and OSEM trend for 10 hours on the day before yesterday, almost the same as the plot shown in this entry. I reported that 10-30 minute fluctuations were seen, but I noticed they come not from the suspension but from the oplev power fluctuation.
After Kiwamu fixed the ETM OSEM touch yesterday afternoon, the same trend was still seen, so we had thought what we fixed was not enough. This morning I looked at the trends from yesterday and the day before yesterday and noticed a similar trend in both pitch and yaw of the ETM oplev, but not in the OSEM trend. Kiwamu suggested I put the oplev sum on the same plot. There it was!
So ETMX is not bad, but in fact an alignment fluctuation still exists on the cavity. ITM?
This graph shows 5 hours of minute-trend data for ITMX and ETMX from 5am to 10am today. The ITM pitch drift is 3 times larger than the ETM pitch if the OSEM sensitivities are assumed to be the same.
This graph shows the last 1 hour of the above data in second trend.
It is clearly seen that ITM yaw is jumping between two states. I guess something is wrong with ITM; touching magnets or earthquake stops?
According to c1scy.mdl, the OL signals should be connected to adc_0_24 through adc_0_27, but they were connected to adc_0_16 through adc_0_19, which are assigned to the QPD signals.
Actually, the cable connections were messed up. One ribbon cable ran from the QPD driver to the ADC ports assigned for OL, and another ribbon cable ran from the board combining the oplev and QPD signals to the ADC ports assigned for QPD.
Now ETMY oplev is working well and aligned to center.
I got a new adapter board for the expansion chassis from CDS and swapped the existing adapter board, which was lying on the floor near ETMX, for the new one.
Then I connected the chassis to c1lsc; c1lsc seems to be running now. I will return the old board to CDS, since Rolf says he wants to return it to the manufacturer.
I found an interface box from the ADC to 37-pin D-SUB, and a cable to connect them.
I needed to make cables to connect the interface box to the existing LSC whitening filters, with a 37-pin female D-SUB connector on one end and a 40-pin female flat connector on the other. We should use shielded cables for them, but unfortunately CDS did not have the right one. Temporarily, I made one cable for ch1-8 using a twisted ribbon cable like Joe did.
I found a saturation at ch5 of ADC0 on c1lsc. I did not check carefully, but it seemed to come from the LSC whitening board. The input of ch5 of the whitening board was not terminated and had a huge output voltage, but ch6 was also not terminated and had no big output. I suspect something is wrong on the LSC whitening board; it needs to be checked. Anyway, I unplugged the small ribbon cable between the whitening board and the adjacent LSC AA filter board.
Finally, I realized that the RFM fiber connection does not exist; what I saw was the Dolphin fiber cable. We need an RFM PCIe interface board and a long fiber cable between c1lsc and the RFM hub.
I'll be at the Hanford site next week and will come back to Caltech the week of the 24th.
I set up a standalone RT system at the desk near the circuit stock in the 40m.
Please leave this setup until I come back. I'll keep working on it when I return.
I implemented a slow servo for green laser thermal control in c1scx.mdl. Ch6 and 7 of the ADC and ch6 of the DAC are assigned to this servo as below:
Ch6 of ADC: PDH error signal
Ch7 of ADC: PZT feedback signal
Ch6 of DAC: feedback signal to the thermal input of the green laser
Note that the old EPICS thermal control cable is not hooked up anymore.
I made a simple MEDM screen (...medm/c1scx/master/C1SCX_BCX_SLOW.adl) linked from the GREEN MEDM screen (C1GCV.adl) on the sitemap.
During this work, I noticed that some of the EPICS switches are not restored by autoburt. What I noticed were the filter switches of SUSPOS, SUSPIT, SUSYAW, SDSEN, and all the coil outputs for ETMX.
I have no idea how to fix them; probably Joe knows. I guess the other suspensions have the same problem.
I calibrated the noise spectrum of the green lock.
1. Measurement of conversion factor of ADC input from V to ct:
As a preparation, I first measured a conversion factor at the ADC input of C1:GCX1SLOW_SERVO1.
It was measured by directly connecting the output of AI ch6 (the output of C1:GCX1SLOW_SERVO2, driven at 1Hz with 1000ct amplitude, i.e., 2000ct_pp) into AA ch7 (the input of C1:GCX1SLOW_SERVO1). The amplitude at the AI ch6 output was 616mVpp measured with an oscilloscope, and C1:GCX1SLOW_SERVO1_IN1 read 971.9ct_pp, so the conversion factor is 616mV/971.9ct = 6.338e-4 [V/ct].
2. Injection of a calibration signal:
With the green laser locked to the cavity using the fast PZT and slow thermal paths, I injected a 100Hz, 1000ct excitation at ETMX ASL. The signal was measured at C1:GCX1SLOW_SERVO1_IN1 as 5.314ct_rms. It can be converted into 3.368e-3V_rms using the above result, and then into 3368Hz_rms using a PZT efficiency of 1MHz/V. This efficiency was obtained from Koji's knowledge, but he says it might have a 30% or larger error. If somebody gets a more accurate value, put it into the V-to-Hz conversion here.
The green frequency f=c/532nm=5.635e14[Hz] is fluctuating by the above 3368Hz_rms, so the fractional fluctuation is 3368/5.635e14=5.977e-12. For the 37.5m cavity, this corresponds to a length fluctuation of 5.977e-12*37.5=2.241e-10m_rms for the 100Hz, 1000ct excitation at ETMX ASL.
Finally, since 5.314ct corresponds to 3368Hz and 2.241e-10m, the conversion factors from ct to Hz and from ct to m are:
633.8[Hz/ct] @ C1:GCX1SLOW_SERVO1
4.217e-11[m/ct] @ C1:GCX1SLOW_SERVO1
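As a cross-check, here is a minimal Python sketch (not part of the original measurement) that reproduces the conversion arithmetic above, using only the numbers quoted in this entry:

v_out_pp = 0.616                      # V_pp at AI ch6, from the oscilloscope
ct_in_pp = 971.9                      # ct_pp read at C1:GCX1SLOW_SERVO1_IN1
v_per_ct = v_out_pp / ct_in_pp        # 6.338e-4 V/ct

line_ct = 5.314                       # ct_rms of the 100Hz calibration line
pzt_eff_hz_per_v = 1e6                # Hz/V, Koji's number (~30% error)
line_hz = line_ct * v_per_ct * pzt_eff_hz_per_v   # ~3368 Hz_rms

f_green = 299792458.0 / 532e-9        # green frequency, ~5.635e14 Hz
cavity_length = 37.5                  # m
line_m = line_hz / f_green * cavity_length        # ~2.241e-10 m_rms

print(line_hz / line_ct)              # 633.8 Hz/ct
print(line_m / line_ct)               # 4.217e-11 m/ct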
You can measure the green noise spectrum at C1:GCX1SLOW_SERVO1_IN1 during lock and multiply by the above factors to convert to Hz or m.
This calibration is valid above the corner frequency of the slow and fast servos (around 0.5Hz) and below the UGF of the fast servo (around 4kHz).
I show an example of calibrated green noise.
Each color shows a different bandwidth. Of course, the calibration factor does not depend on bandwidth. The noise around 1.2Hz is 6e-8 Hz/rtHz. It sounds a bit too good, by a factor of ~2; the VCO efficiency might be too small.
Note that there are several assumptions in this calibration:
1. The TF from the actual PZT voltage to the PZT mon is assumed to be 1 at all frequencies. This is probably not a bad assumption, because the circuit diagram shows the monitor point taps the PZT voltage directly.
2. However, the above assumption is not correct if the input impedance of the AI is low.
3. As I said, the PZT efficiency of 1MHz/V might be wrong.
I also measured a TF from C1:SUS-ETMX_ALS_EXC to C1:GCX1SLOW_SERVO1_IN1. It is similar to the calibration injection above, but over a wide frequency range. It shows a clear f^-2 slope from the suspension.
Files are located in /users/osamu/20110127_Green_calibration.
Hi 40m people,
As Rana says, the bounce mode does not matter, or rather, we cannot do anything about it. Generally speaking, the bounce mode cannot be damped with the 40m SUS configuration. Some tweaks may damp a bounce mode with a res-gain filter or something, but that is not a proper way, I think.
As Rana has also already said, the important thing is to find a good orientation of the OSEM so that the LED beam hits the magnet. Even if the magnet is not located at the center of the OSEM hole, you can still find the optimal orientation by rotating the OSEM so that the LED beam hits the center of the magnet.
The only document I know of is the old T040054, in which Shihori summarized how to adjust the matrix at the 40m. A badly adjusted input/output matrix may introduce some troubles, but even a roughly adjusted matrix should still be fine.
I will be at Caltech on September 12-14. If I can help with something, I am willing to work with you!
I found the PSL table left open and unattended, again.
As far as I know, Jamie and Jenne (working on the LSC rack, so no lasers / optics work involved) have been the only ones in the IFO room for several hours now.
I'm going to start taking laser keys, or finding other suitable punishments. Like a day of lab cleanup chores or something. Seriously, don't leave the PSL table open if you're not actively working on it.
Given the similarities between the MDT694B (single-channel piezo controller) and TC200 (temperature controller) serial interfaces, I added the pyserial driver here.
*Warning* this first version of the driver remains untested
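For reference, here is a minimal sketch of what such a shared pyserial wrapper could look like; the port name, baud rate, and command strings below are illustrative assumptions, not verified values for either instrument:

import serial

class ThorlabsSerial:
    """Untested sketch of a serial wrapper shared by the MDT694B and TC200."""
    def __init__(self, port, baudrate=115200, timeout=1.0):
        self.ser = serial.Serial(port, baudrate=baudrate, timeout=timeout)

    def query(self, cmd):
        # send a command terminated with CR and return the raw reply line
        self.ser.write((cmd + "\r").encode())
        return self.ser.readline().decode(errors="replace").strip()

# usage (hypothetical port and command string):
# dev = ThorlabsSerial("/dev/ttyUSB0")
# print(dev.query("XV?"))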
We went into the 40m to identify where the XARM PDH loop control elements are. We didn't touch anything, but this is to note that we went in twice, at 10 AM and 11:10 AM.
Updated IOO.strip on Zita to show the WFS2 pitch and yaw trends (C1:IOO-WFS2_PIY_OUT16 and C1:IOO-WFS2_YAW_OUT16), and changed the colors slightly so that all pitch trends are in the yellow/brown band and all yaw trends in the pink/purple band.
No one says, "Here I am attaching a cool screenshot, becuz else where's the proof? Am I right or am I right?"
Mon May 24 18:10:07 2021 [Update]
After waiting for some traces to fill the screen, here is a cool screenshot (Attachment 1). At around 2:30 PM the MC unlocked, and the BS_Z (vertical) seismometer readout jumped. It has stayed like this for the whole afternoon... The MC eventually caught its lock, and we even locked the XARM without any issue, but something happened in the 10-30 Hz band. We will keep an eye on it during the evening...
Tue May 25 08:45:33 2021 [Update]
At approximately 02:30 UTC (so 07:30 PM yesterday) the 10-30 Hz seismic step dropped back down... It lasted 5 hours, mostly causing BS motion along Z (vertical), as seen in the minute trend data in Attachment 2. Could the MM library have been shaking? Was the IFO snoring during its afternoon nap?
I borrowed the little red cart 🛒 to help clear the path for new optical tables in B252 West Bridge. Will return once I am done with it.
After sliding the alignment bias around and browsing through the elog searching for "stuck", we concluded the ITMX OSEMs needed to be freed. The procedure is to slide the alignment bias back and forth ("shaking") and then, as the OSEM readings start to vary, enable the damping. We did just this, and then slowly restored the alignment bias sliders to their original positions. Attachment 1 shows the ITMX OSEM sensor input monitors throughout this procedure.
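A hedged sketch of what this "shaking" could look like if scripted with pyepics; the channel name, step size, and timing here are hypothetical placeholders (in practice we did it by hand from the MEDM sliders):

import time
from epics import caget, caput

BIAS = "C1:SUS-ITMX_PIT_COMM"   # hypothetical alignment bias channel name

nominal = caget(BIAS)
for k in range(10):
    # slide the bias back and forth around its nominal value
    caput(BIAS, nominal + (-1) ** k * 50)
    time.sleep(2.0)
caput(BIAS, nominal)            # restore the slider slowly in practice
# once the OSEM sensor inputs start to vary, enable the damping loops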
At the end, since the MC had trouble catching lock after opening the PSL shutter, I tried burt-restoring c1ioo to 2021/Jun/17/06:19/c1iooepics.snap, but the problem persists.
Physically rebooted the c0rga workstation after failing to ssh into it (even though it responded to ping...). The RGA seems to be off, though. The last log with data appears to date back to 2020 Nov 10, but reasonable spectra only appear in logs before 11-05. Gautam verified that the RGA was intentionally turned off then.
We tested the CM board by implementing the high-bandwidth IR lock (single arm). In preparation for this test, we temporarily connected the POY11_Q_MON output to the CM board IN1 input and checked the YARM POY transfer function by running the AA_YARM_TEMPLATE under users/Templates/LSC/LSC_loops/YARM_POY/. We made sure the YARM dither optimized TRY so as to maximize the optical gain. Then we proceeded as follows:
Ultimately, our ability to progressively increase the control bandwidth of the YARM is a proxy for the CM board working properly. Attachment 1 shows the OLTF progression as we increased the loop's UGF. Note how, as we approached the maximum measured UGF of ~22 kHz, our phase margin decreased, signifying poorer stability.
At the end of this measurement, at about ~15:45, I restored the CM board IN1 input and disconnected the POY11_Q_MON.
gautam: the conclusion here is that the CM board seems to work as advertised, and it's not solely responsible for not being able to achieve the IR handoff.
[yehonathan, anchal, paco, gautam]
We finished estimating the XARM and YARM losses. The hardware configuration from yesterday remains, but we repeated the measurements because we realized our REFL55_I_ERR and REFL55_Q_ERR signals, representing the PD520 and MC_TRANS, were scaled, offset, and rotated in a way that wasn't trivially undone by our postprocessing scripts... Another caveat we encountered today was the need to add a "macroscopic" misalignment to the ITMs during the measurement to avoid accidental resonances.
The final measurements were done with 16 repetitions of 30 seconds each; the logfiles are under scripts/lossmap_scripts/armLoss/logs/20210722_1423.txt and scripts/lossmap_scripts/armLoss/logs/20210722_1513.txt.
Finally, the estimated YARM loss is 397 ppm, while the estimated XARM loss is 388 ppm. This is consistent with the PRC gain inferred on Monday and a PRM loss of ~2%.
Future measurements may want to look into the slow drift of the locked vs misaligned traces (systematic errors?) and a better way of estimating the statistical uncertainty (e.g., by splitting the raw time traces into short segments).
[gautam, yehonathan, paco]
We went back to the loss data from last week and more carefully estimated the ARM loss uncertainties.
Before, we simply stitched all N=16 repetitions into a single time series and computed the loss; see Attachment 1 for such YARM loss data. The mean and stdev of this long time series give the loss quoted last time. We knew the uncertainty was almost certainly overestimated, as different realizations need not sample similar alignment conditions and are sensitive to different imperfections (e.g., beam angular motion, unnormalizable power fluctuations, etc...).
Today we analyzed the individual locked/misaligned cycles. From each cycle it is possible to obtain a mean value of the loss as well as a standard deviation *across the duration of the trace*, but because we have a measurement ensemble, it is also possible to obtain an ensemble-averaged mean and a statistical uncertainty estimate *across the independent cycle realizations*. While the mean values don't change much, the latter estimate gives a much smaller statistical uncertainty. We obtain an XARM loss of 37.6 ± 2.6 ppm and a YARM loss of 38.9 ± 0.6 ppm. To make the distinction clearer, Attachment 2 and Attachment 3 show the YARM and XARM loss measurement ensembles respectively, with single-realization (time-series) standard deviations as vertical error bars and the 1-sigma statistical uncertainty estimate as a filled color band. Note that the XARM loss drifts across different realizations (which happen to be ordered in time), which we think arises from inconsistent ASS dither alignment convergence. This is yet to be tested.
To budget the excess uncertainty from a single locked/misaligned cycle, we could look at beam pointing, angular drift, power, and systematic differences in the paths of the two reflection signals. We should be able to estimate the power fluctuations by looking at the recorded arm transmissions, the recorded MC transmission, PD technical noise, etc., and we might be able to correlate recorded oplev signals with the reflection data to identify angular drift. We have not done this yet.
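A minimal numpy sketch of the per-cycle ensemble statistics described above, with a synthetic stand-in for the per-cycle loss time series (the real analysis reads the logged reflection data):

import numpy as np

rng = np.random.default_rng(0)
# synthetic stand-in: 16 cycles x 300 samples of loss values in ppm
loss_cycles = 38.0 + rng.normal(0.0, 2.0, size=(16, 300))

cycle_means = loss_cycles.mean(axis=1)          # one mean per realization
ensemble_mean = cycle_means.mean()
stat_unc = cycle_means.std(ddof=1) / np.sqrt(cycle_means.size)

print(f"loss = {ensemble_mean:.1f} +/- {stat_unc:.1f} ppm")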
[yehonathan, anchal, paco]
Yesterday around 9:30 pm, we centered the BS, ITMY, ETMY, ITMX, and ETMX oplevs (in that order) on their respective QPDs by turning the last mirror before each QPD. We did this after running the ASS dither for the XARM/YARM configurations to use as the alignment reference, in preparation for PRFPMI lock acquisition, which we had to stop due to an earthquake around midnight.
We picked up AS WFS commissioning as daytime work, as suggested by gautam. In the end we want to commission this for the PRFPMI, but also for PRMI and MICH for completeness. MICH is the simplest, so we are starting there.
We started by restoring the MICH configuration and aligning the AS DC QPD (on the AS table) by zeroing C1:ASC-AS_DC_YAW_OUT and C1:ASC-AS_DC_PIT_OUT. Since the AS WFS gets the AS beam in transmission through a beamsplitter, we had to correct that beamsplitter's alignment to recenter the AS beam onto the AS110 PD (for this we looked at the signal on a scope).
We then checked the rotation (R) angles C1:ASC-AS_RF55_SEGX_PHASE_R and delay (D) angles C1:ASC-AS_RF55_SEGX_PHASE_D (where X = 1, 2, 3, 4 for the segments) to rotate all the signal into the I quadrature. We found that this optimized the PIT content on C1:ASC-AS_RF55_I_PIT_OUT and the YAW content on C1:ASC-AS_RF55_I_YAW_OUTMON, which is what we want anyways.
Finally, we set up some simple integrators for these WFS on the C1ASC-DHARD_PIT and C1ASC-DHARD_YAW filter banks, with a pole at 0 Hz, a zero at 0.8 Hz, and a gain of -60 dB (similar to the MC WFS); a sketch of this filter shape is below. Nevertheless, when we closed the loop by actuating on the BS ASC PIT and ASC YAW inputs, it seemed like the ASC model outputs are not connected to the BS SUS model ASC inputs, so we might need to edit the models accordingly and restart them.
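A small Python sketch of the integrator shape described above (pole at 0 Hz, zero at 0.8 Hz); whether the -60 dB gain applies at high frequency, as assumed here, depends on the foton normalization convention:

import numpy as np
from scipy.signal import freqs

g = 10 ** (-60 / 20)                        # -60 dB, assumed high-frequency gain
num = g * np.array([1.0, 2 * np.pi * 0.8])  # zero at 0.8 Hz
den = np.array([1.0, 0.0])                  # pole at 0 Hz (pure integrator)

f = np.logspace(-2, 2, 400)                 # 0.01 Hz to 100 Hz
w, h = freqs(num, den, worN=2 * np.pi * f)
mag_db = 20 * np.log10(np.abs(h))           # flattens to -60 dB above 0.8 Hz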
[anchal, yehonatan, paco]
For whatever reason (i.e., we don't really know), the MC unlocked into a weird state at ~10:40 AM today. We first tried to find a likely cause, as we saw it couldn't recover by itself after ~40 min... so we decided to try a few things. First, we verified that no suspensions were acting weird by looking at the OSEMs on MC1, MC2, and MC3. After validating that the sensors were behaving normally, we moved on to the WFS. The WFS loops were disabled the moment the IMC unlocked, as they should be. We then proceeded to the last resort of tweaking the MC alignment a bit, first with MC2 and then MC1 and MC3, in that order, to see if we could help the MC catch its lock. This didn't help much initially, and we paused at about noon.
At about 5 pm, we resumed, since the IMC had remained locked on some higher-order mode (TEM-01 by the looks of it). While looking at C1:IOO-MC_TRANS_SUMFILT_OUT on ndscope, we kept slowly shifting the MC2 yaw alignment slider (steps = ±0.01 counts) to help the right mode "hop". Once the right mode caught on, the WFS loops triggered and the IMC was restored. The transmission during this last stage is shown in Attachment #1.
Yesterday we discussed working on the PRMI sensing matrix a bit.
In particular, we will start with the "issue" of non-orthogonality in MICH actuated by BS + PRM. Yesterday afternoon we played a little with the oscillators and ran sensing lines in MICH and PRCL (gains of 50 and 5 respectively) in the times spanning [1312671582 -> 1312672300] and [1312673242 -> 1312677350] for PRMI carrier, and [1312673832 -> 1312674104] for PRMI sideband. Today we realized that we could have enabled the notchSensMat filter in FM10, which is a notch filter exactly at the oscillator's frequency, and run a lower gain to get a similar SNR. We anyways want to investigate this in more depth, so here is our tentative plan of action, which implies redoing these measurements:
Task: investigate orthogonality (or lack thereof) in the MICH when actuated by BS & PRM.
1) Run the MICH and PRCL sensing oscillators with the PRMI carrier locked (remember to turn the notchSensMat filter on).
2) Analyze data and establish the reference sensing matrix.
3) Write a script that performs steps 2 and 3 in a robust and safe way.
4) Scan the C1:LSC-LOCKIN_OUTMTRX, MICH to BS and PRM elements around their nominal values.
5) Scan the MICH and PRCL RFPD rotation angles around their nominal values.
We also talked about the possibility that the sensing matrix is strongly frequency dependent, such that measuring it at 311Hz doesn't give an accurate estimate of it. Is it worthwhile to try to measure it at lower frequencies using an appropriate notch filter?
Wed Aug 11 15:28:32 2021 Updated plan after group meeting
- The problem may be in the actuators, since the orthogonality seems fine when actuating on ITMX/ITMY, so we should instead focus on measuring the actuator transfer functions, for example using the oplevs (with the same high-frequency excitation, since the OSEMs won't work above 10 Hz).
Thu Aug 12 11:04:42 2021 Arrived to find the PSL shutter closed. Why? Who? When? How? No elog, no fun. I opened it; the IMC is now locked, and the arms were restored and aligned.
[koji, ian, tega, paco]
With the remote/local assistance of Tega/Ian last Friday, I made changes to the c1sus model, connecting the C1:ASC model outputs (found within a block in c1ioo) to the BS and PRM suspension inputs (pitch and yaw). Then, Koji reviewed these changes today and made me notice that no changes were actually needed, since the blocks were already in place and connected to the right ports; the model probably just hadn't been rebuilt...
So, today we ran "rtcds make" and "rtcds install" on the c1ioo and c1sus models (in that order), but the whole system crashed. We spent a great deal of time restarting the machines and their processes, and struggled quite a lot with setting the right dates to match the GPS times. What seemed to work in the end was to follow the format of the date on the fb1 machine and try to match the timing at the sub-second level. This is especially tricky when performed by hand, so the whole task is tedious. We anyways completed the reboot for almost all the models except c1oaf (which tends to make things crashy), since we won't need it right away for the tasks ahead. One potentially annoying issue we found was in manually rebooting c1iscey: one of its network ports is loose (the ethernet cable won't click into place) and it appears to use this link to boot (!!), so for a while this machine just wasn't coming back up.
Finally, as we restored the suspension controls and reopened the shutters, we noticed a great deal of misalignment, to the point that no reflected beam was coming back to the RFPD table. We spent some time checking the PRM alignment and TT1 and TT2 (tip-tilts), and it turned out to be mostly the latter pair that was responsible. We used the green beams to help optimize the XARM and YARM transmissions and were able to relock the arms. We ran ASS on them, and then aligned the PRM oplev, which also seemed off. This was done by giving a pitch offset to the input PRM oplev beam path and then correcting for it downstream (before the QPD). We also adjusted the BS oplev at the end.
Summary: the ASC BS and PRM outputs are now built into the SUS models. Let the AS WFS loops be closed soon!
Addenda by KA
- Upon restarting the RTS:
sudo date --set='xxxxxx'
rtcds start c1x01
telnet fb1 8083
- Today we succeeded once in restarting the vertex machines. However, the RFM signal transmission failed, so the two end machines were power cycled, as well as c1rfm, but this made all the machines RED again. Hell...
- We checked the PRM oplev. The spot was around the center but was clipped. This confused us quite a bit. Our conclusion was that the oplev was like that before the RTS reboot.
At 09:34 PST I noted a glitch in the control room as the machines went down, except for c1ioo. Briefly, the video feeds disappeared from the screens, though the screens themselves didn't lose power. At first I thought this was some kind of power glitch, but upon checking with Jordan, it was most likely related to some system crash. Coming back to the control room, I could see the MC reflection beam swinging, but unfortunately all the FE models came down. I noticed that the DAQ status channels were blank.
I ssh'd into c1ioo with no problem and ran "rtcds stop c1ioo c1als c1omc", then "rtcds restart c1x03" to do a soft restart. This worked, but the DAQ status was still blank. I then tried to ssh into c1sus and c1lsc without success; similarly, c1iscex and c1iscey were unreachable. I went and did a hard restart on c1iscex by switching it off, then its expansion chassis, then unplugging the power cords, then inverting these steps, and could then ssh into it from rossa. I ran "rtcds start c1x01" and saw the same blank DAQ status. I noticed the elog was also down... so nodus was also affected?
Anchal got on zoom to offer some assistance. We discovered that fb1 and nodus were subject to some kind of system reboot at precisely 09:34. The "systemctl --failed" command on fb1 listed both the daqd_dc.service and rc-local.service as loaded but failed (inactive). Is it a good idea to try rebooting the fb1 machine? ... Anchal was able to bring the elog back up from nodus (ergo, this post).
Although it probably needs the DAQ service on the fb1 machine to be up and running, I tried running the scripts/cds/rebootC1LSC.sh script. This didn't work. I tried running sudo systemctl restart daqd_dc from the fb1 machine without success. Running systemctl reset-failed "worked" for the daqd_dc and rc-local services on fb1, in the sense that they were no longer listed by systemctl --failed, but they remained inactive (dead) when running systemctl status on them. Following 15303, I succeeded in restarting the daqd services; it turned out I needed to manually start the open-mx and mx services on fb1. I re-ran the rebootC1LSC script without success; the script fails because some machines need to be rebooted by hand.
tl;dr: The NTP servers and clients were never synchronized and are not synchronizing even with ntp... nodus is synchronized, but it uses chronyd; should we use chronyd everywhere?
Spent some time investigating the NTP synchronization. In the morning, after Anchal set up all the NTP servers / FE clients, I tried restarting the RTS IOPs with no success. Later, with Tega, we tried the usual manual matching of the date between the c1iscex and fb1 machines, iterating over different n-second offsets from -10 to +10, also without success.
This afternoon, I tried debugging the FE and fb1 timing differences. For this, I inspected the NTP configuration file under /etc/ntp.conf on fb1 and /diskless/root.jessie/etc/ntp.conf (for the FE machines) and tried different combinations with and without nodus, with and without restrict lines, all while looking at the output of sudo journalctl -f on c1iscey. Every time I changed the ntp config file, I restarted the service using sudo systemctl restart ntp.service. Looking through some online forums, people suggested basic pinging to see if the NTP servers were up (and broadcasting their times over the local network), but this failed to run (read-only filesystem), so I went into fb1 and ran sudo chroot /diskless/root.jessie/ /bin/bash to allow me to change file permissions. The test was first done with /bin/ping, which couldn't even open a socket (root access needed); after running chmod 4755 /bin/ping, I ssh'd into c1iscey and pinged the fb1 machine successfully. After this, I ran chmod 4755 /usr/sbin/ntpd so that the NTP daemon would have no problem reaching the server, in case this was blocking the synchronization. I exited the chroot shell and restarted the ntp daemon on c1iscey, but ntpstat still showed unsynchronised status.

I also learned that when running an NTP query with ntpq -p, if a client has succeeded in synchronizing its time to the server, an asterisk should be appended at the end. This was not the case on any FE machine... and looking at fb1, this was also not true. Although the fb1 peers are correctly listed as nodus, the Caltech NTP server, and a broadcast (.BCST.) server from local time (meant to serve the FE machines), none appears to have synchronized... Going one level further, on nodus I checked the time synchronization servers by running chronyc sources; the output shows
controls@nodus|~> chronyc sources
210 Number of sources = 4
MS Name/IP address Stratum Poll Reach LastRx Last sample
^* testntp1.superonline.net 1 10 377 280 +1511us[+1403us] +/- 92ms
^+ 220.127.116.11 2 10 377 206 +8219us[+8219us] +/- 117ms
^+ tms04.deltatelesystems.ru 2 10 377 23m -17ms[ -17ms] +/- 183ms
^+ ntp.gnc.am 3 10 377 914 -8294us[-8401us] +/- 168ms
I then ran chronyc clients to find out if fb1 was listed (as I would have expected), but the output shows this --
Hostname Client Peer CmdAuth CmdNorm CmdBad LstN LstC
========================= ====== ====== ====== ====== ====== ==== ====
501 Not authorised
So clearly chronyd succeeded in synchronizing nodus' time to whatever server it points at, but downstream from there neither fb1 nor any FE machine seems to be synchronizing properly. It may be as simple as figuring out the correct NTP configuration file, or switching to chronyd for all machines (for the sake of homogeneity?).
[paco, tega, koji]
After invaluable assistance from Jamie in fixing the yearly offset in the GPS time reported by cat /proc/gps, we managed to restart the real-time system correctly (while still manually synchronizing the front-end machine times). After this, we recovered the mode cleaner and were able to lock the arms without much fuss.
Nevertheless, tega and I noticed some weird noise in C1:LSC-TRX_OUT which was not present in the YARM transmission, and which is present even in the absence of light (we unlocked the arms and still saw it on the ndscope, as shown in Attachment #1). It seems to affect the XARM and, in general, lock acquisition...
We took a quick spectrum with diaggui (Attachment #2), and it doesn't look normal; there seems to be broadband excess noise with a remarkable 1 kHz component. We will probably look into it in more detail.
We went over to the X end to check what was going on with the TRX signal. We spotted the ground terminal coming from the QPD loosely touching the handle of one of the computers on the rack. When we detached it completely from the rack, the noise was gone (Attachment 1).
We taped this terminal so it doesn't touch anything accidentally. We don't know if this is the best solution, since it probably needs a stable voltage reference. In the Y end, those ground terminals are connected to the same point on the rack. The other ground terminals in the X end are just cut.
We also took the PSD of these channels (Attachment 2). The noise seems to be gone, but TRX is still a bit noisier than TRY. Maybe we should set up a proper ground for the X arm QPD?
We saw that the X end station ALS laser was off. We turned it on, along with the crystal oven, and re-enabled the temperature controller. Green light immediately appeared. We are now working to restore the ALS lock. After running XARM ASS, we were unable to lock the green laser, so we went to the XEND and moved the X ALS piezo alignment mirrors until we maximized the transmission in the right mode. We then locked the ALS beams on both arms successfully. It very well could be that the PZT offsets were reset by the power glitch. The XARM ALS still needs some tweaking; its level is ~25% of what it was before the power glitch.
Used diaggui to get the OLTF in preparation for optimal system identification / calibration. The excitation was injected at the control point of the XARM loop, C1:LSC-XARM_EXC. Attachment 1 shows the TF (red scatter) taken from 35 Hz to 2.3 kHz with 201 points. The swept-sine excitation had an envelope amplitude of 50 counts at 35 Hz, 0.2 counts at 100 Hz, and 0.2 counts at 200 Hz. The purple continuous line overlays the model of the OLTF, built from all the digital control filters and a simple 1-degree-of-freedom plant (single pole at 0.99 Hz). Note the disagreement of the OLTF "model" at higher frequencies, which we may be able to improve upon using vector fitting.
Attachment 2 shows the coherence (part of this initial measurement was to identify a suitably large frequency range where the coherence is good before we script it).
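A minimal sketch of this kind of OLTF model, with the single-pole plant from the text; the controller piece is a flat placeholder here, whereas the real model multiplies in the digital control filters (e.g., read from the foton file):

import numpy as np

f = np.logspace(np.log10(35), np.log10(2300), 201)   # measured band, Hz
s = 2j * np.pi * f

plant = 1.0 / (1.0 + s / (2 * np.pi * 0.99))         # single pole at 0.99 Hz
controller = np.ones_like(s)                         # placeholder for the filters
gain = 1.0                                           # overall loop gain (assumed)

oltf = gain * plant * controller
mag_db = 20 * np.log10(np.abs(oltf))
phase_deg = np.angle(oltf, deg=True)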
[paco, koji, tega, ian]
This morning, the name server / network file system running on chiara failed. This caused the donatella/pianosa/rossa shell prompts to hang forever. It also made sitemap crash, and even dropping into a bash shell and just listing files from some directory in the file system froze the computer. Remote ssh sessions on nodus had the same symptoms.
A little after 1 pm, we started debugging this issue with help from Koji. He suggested we hook a monitor, keyboard, and mouse up to chiara, as it should still work locally even if something with the NFS (network file system) failed. We did this, and then tried for a while to unmount /dev/sdc1/ from /home/cds/ (main file system) and mount /dev/sdb1/ from /media/40mBackup (backup copy) so that they swap places. We had no trouble unmounting the backup drive, but only succeeded in unmounting the main drive with the "lazy" unmount, "umount -l". Running "df", we could see that the disk space was 100% used, with only ~1 GB free, which may have been the cause of the issue.

After swapping these disks by editing the /etc/fstab file, we rebooted chiara and recovered the shell prompts on all workstations, sitemap, etc., thanks to the backup drive mounting. We then started investigating what caused the main drive to fill up so quickly, and noted that, weirdly, the capacity was now at 85% (after reboot and remount), about 500 GB less than before, so some large file was probably dumped onto chiara, freezing the NFS and causing the issue.
At this point we tried opening the PSL shutter to recover the IMC. The shutter would not open, and we suspected the vacuum interlock was still tripped... and indeed there was an uncleared error on the VAC screen. So, with Koji's guidance, we walked to c1vac near the HV station and did the following at ~5:13 PM -->
We made sure that P1a (main vacuum pressure) was dropping, and before continuing we looked back to see what nominal vacuum state we should try to restore.
We are currently searching the two systems for differences to see if we can narrow down the culprit of the failure.
We found the files that took up the excess space in the chiara filesystem (see Attachment 1). They were error files from the summary pages, ~50 GB in size or so, located under /home/cds/caltech/users/public_html/detcharsummary/logs/. We manually removed them, then copied the rest of the summary page contents onto the main file system drive (to preserve the backup before it gets deleted by the cron job at the end of today), and checked carefully to identify why these files were so large in the first place.
We then copied the /detcharsummary directory from /media/40mBackup into /home/cds to match the two disks.
Came in at ~9 PT this morning to find the IFO "down". The IMC had lost its lock ~6 hours before, at about 03:00 AM. Nothing seemed like an obvious cause: there was no record of increased seismic activity, all suspensions were damped and no watchdog had tripped, and the pressure trends, compared with recent pressure incidents, show nominal behavior (Attachment #1). What happened?
Anyways, I simply tried reopening the PSL shutter, and the IMC caught its lock almost immediately. I then locked the arms, and everything seems fine for now.
For the past couple of days, the 0.1-0.3 Hz RMS seismic noise along BS-X has increased. Attachment 1 shows the hour trend over the last ~10 days. We'll keep monitoring it, but one thing to note is how uncorrelated it seems to be from other frequency bands. The vertical axis in the plot is in um/s.
[yehonathan, paco, anchal]
We attempted to find any symptoms of actuation problems in the PRMI configuration when actuated through the BS and PRM.
Our logic was to check the angular (PIT and YAW) actuation transfer functions in the 30 to 200 Hz range by injecting appropriately (f^2) enveloped excitations at the SUS-ASC EXC points and reading back using the SUS_OL (oplev) channels.
From the controls, we first restored the PRMI carrier to bring the PRM and BS to their nominal alignment, then disabled the LSC output (we don't need the PRMI to be locked), and then turned off the oplev damping loops to avoid suppressing the excitations.
We used diaggui to measure the magnitudes of the 4 transfer functions PRM_PIT, PRM_YAW, BS_PIT, and BS_YAW, shown below in Attachments #1 through #4. We used the oplev calibrations to plot the TF magnitudes in units of urad/count, and verified the nominal 1/f^2 scaling for all of them. The coherence was made as close to 1 as possible by adjusting the amplitude to 1000 counts, and is also shown below. A dip at 120 Hz is probably due to line noise. We are also assuming that the oplev QPDs have a relatively flat response over this frequency range.
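A short sketch (with synthetic data) of how the nominal 1/f^2 scaling can be checked by fitting the log-log slope of the measured magnitude:

import numpy as np

rng = np.random.default_rng(1)
f = np.logspace(np.log10(30), np.log10(200), 50)             # measurement band, Hz
mag = 1e3 / f**2 * (1 + 0.05 * rng.standard_normal(f.size))  # synthetic |TF|

slope, _ = np.polyfit(np.log10(f), np.log10(mag), 1)
print(f"log-log slope = {slope:.2f} (expect ~ -2)")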
Here are some plots from analyzing the C1:LSC-XARM calibration. The experiment is done with the XARM (POX) locked; a single line is injected at C1:LSC-XARM_EXC at f0, with an amplitude determined empirically using the diaggui and awggui tools. For the analysis detailed in this post, f0 = 19 Hz, amp = 1 count, and gain = 300 (anything larger in amplitude would break the lock, and anything lower in frequency would not show up because of loop suppression). Clearly, from Attachment #3 below, the calibration line can be detected with SNR > 1.
We read the test point right after the excitation, C1:LSC-XARM_IN2, which in a simplified loop will carry the excitation suppressed by 1 - OLTF, where OLTF is the open-loop transfer function. The line is on for 5 minutes, and then we read for another 5 minutes with the excitation off to have a reference. Both the calibration and reference time series are shown in Attachment #1 (decimated by 8). The corresponding ASDs are shown in Attachment #2. Then we demodulate at 19 Hz, apply a 30 Hz 4th-order Butterworth LPF, and get I and Q time series (shown in Attachment #3). Even though they look similar, Q is centered about 0.2 counts, while I is centered about 0.0. From these time series, we can of course show the noise ASDs in Attachment #3.
The ASD uncertainty bands in the last plot are statistical estimates and depend on the number of segments used in estimating the PSD. One thing to note is that the noise features surrounding the signal ASD around f0 are translated into the ASD of the demodulated signals, but now around DC. I guess from Attachment #3 there is no difference in the noise spectra around the calibration line with and without the excitation; this is what I would expect from a linear system. If there were a systematic contribution, I would expect it to show up at very low frequencies.
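A minimal sketch of the digital demodulation described above; the sample rate and the pre-fetched data array are assumptions for illustration (the real data comes from the C1:LSC-XARM_IN2 test point):

import numpy as np
from scipy.signal import butter, filtfilt

fs = 16384.0                        # assumed sample rate of the test point
f0 = 19.0                           # calibration line frequency
x = np.load("xarm_in2.npy")         # hypothetical pre-fetched time series

t = np.arange(x.size) / fs
b, a = butter(4, 30.0 / (fs / 2))   # 30 Hz, 4th-order Butterworth LPF

# mix down at f0 and low-pass; the factor 2 restores the line amplitude
I = 2 * filtfilt(b, a, x * np.cos(2 * np.pi * f0 * t))
Q = 2 * filtfilt(b, a, x * np.sin(2 * np.pi * f0 * t))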
We had a second go at this with an increased number of averages (from 10 to 100) and higher excitation amplitudes (from 1000 to 10000). We did this to try to reduce the relative uncertainty à la Bendat and Piersol:

    sigma_|G| / |G| ~ sqrt(1 - gamma^2) / (|gamma| * sqrt(2N))

where gamma and N are the coherence and number of averages respectively. Before, this estimate had given us a ~30% relative uncertainty, and now it has improved to ~10%. The re-measured TFs are in Attachment #1. We did 4 sweeps for each optic (BS, PRM) and removed the 1/f^2 slope for clarity. We note a factor of ~4 difference in the magnitude of the coil-to-angle TFs from BS to PRM (the actuation strength of the BS is smaller).
For future reference:
With complex G, we get a complex error in G using the formula above. To get the uncertainty in magnitude and phase from the real/imaginary uncertainties, we do the following (assuming the noise in the real and imaginary parts of the measured transfer function is incoherent):

    d|G| = sqrt( (Re G)^2 * dRe^2 + (Im G)^2 * dIm^2 ) / |G|
    d(arg G) = sqrt( (Im G)^2 * dRe^2 + (Re G)^2 * dIm^2 ) / |G|^2
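The same propagation, as a small numpy helper (variable names are illustrative):

import numpy as np

def mag_phase_uncertainty(G, d_re, d_im):
    """Propagate incoherent real/imag 1-sigma errors to |G| and arg(G) [rad]."""
    mag = np.abs(G)
    d_mag = np.sqrt((G.real * d_re) ** 2 + (G.imag * d_im) ** 2) / mag
    d_phase = np.sqrt((G.imag * d_re) ** 2 + (G.real * d_im) ** 2) / mag ** 2
    return d_mag, d_phase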
Here is a demonstration of the methods leading to the single (X)arm calibration with its budgeted uncertainty. The steps towards this measurement are the following:
** Note: We ran the same procedure using dtt (diaggui) to validate our estimates at every point, as well as to check our SNR in b and d before taking the ~3.5 hours of data.
We repeated the same procedure as before, but with 3 different lines at 55.511, 154.11, and 1071.11 Hz. We overlay the OLTF magnitudes and phases with our latest model (which we have updated with Koji's help) and include the rms uncertainties as error bars in Attachment #1.
We also plot the noise ASDs of the calibrated OLTF magnitudes at the line frequencies in Attachment #2. These curves are created by computing the power spectral density of time series of OLTF values at the line frequencies, generated from the demodulated XARM_IN and ETMX_LSC_OUT signals. We have overlaid the TRX noise spectrum here to see if we can attribute the noise measured in the values of G to the fluctuation in optical gain due to changing power in the arms. We multiplied the transmission ASD by the value of the OLTF at those frequencies, as the transfer function from normalized optical gain to the total transfer function value.
It is weird that the fluctuation in transmission power at 1 mHz always crosses the total noise in the OLTF value for all calibration lines. This could be an artifact of our data analysis, though.
Even if the contribution of the fluctuating power is correct, there is remaining excess noise in the OLTF to be budgeted.
I have finished assembling the 1U adapters from 8 to 5 DB9 connectors for the satellite amp boxes. One thing I had to "hack" was the corners of the front-panel end of the PCB. Because the PCB was a bit too wide, it wasn't really flush against the front panel (see Attachment #1), so I filed the corners back by ~3 mm and covered them with kapton tape to prevent contact between the ground planes and the chassis. After this, I made DB9 cables, connected everything in place, and attached it to the rear panel (Attachment #2). Four units are resting near the CAD machine (next to the bench area); see Attachment #3.
We had a look at the BS actuation. Along the way we created a couple of issues that we fixed. A summary is below.
After rana left, I did a second pass at the BS actuation. I took TF measurements at the oscillator frequencies noted above using diaggui, and summarize the results below:
This procedure should be done with the PRM as well, using PRCL instead of MICH.
We opened the laser head shutter. Then we scanned around the PMC resonance and locked it. We then opened the PSL shutter, touched up the MC1, MC2, and MC3 alignment (mostly yaw), and managed to lock the IMC. The transmission peaked at ~1070 counts (typical is 14000 counts, so at 10% of PSL power we would expect a peak transmission of 1400 counts; there might still be some room for improvement). The lock was engaged at ~16:53; we'll see how long it lasts.
There should be IR light entering the BSC!!! Be alert and wear laser safety goggles when working there.
We should be ready to move forward into the TT2 + PR3 alignment.
[Ian, Paco, Anchal]
We turned off the BSC oplev laser by turning the key counterclockwise. Ian then removed the following optics from the east end of the BSC:
We placed them in the center-front area of the XEND flow bench.
After the new 1Y0 rack was placed near the 1Y1 rack by Chub and Anchal, today we worked on the 1Y1 rack. We removed some rails from rack spaces ~25-30. We then drilled a pair of ~10-32 through-holes in some L-shaped bars to help support the weight of the c1sus2 machine. The hole spacing was set to 60 cm; this number is not constant across all racks. Then we mounted c1sus2. While doing this, Paco's knee clicked some of the video MUX box buttons (29 and 8 at least). We then opened the rack's side door to investigate the DC power strips on it before removing anything, and we did power off the DC33 supplies there. No connections were made yet, so we can keep building this rack.
When coming back to the control room, we noticed that 3 of the 4 analog video feeds for the test masses had gone down... why?
Update Tue Nov 2 18:52:39 2021