No real progress tonight - I repeatedly got to the point where CARM was RF-only, but I never managed to run a measurement to determine what the DARM_B loop gain should be to make the control fully RF.
There was a jump in the main volume pressure at ~6pm PDT yesterday. The cause is unknown, but the pressure doesn't seem to be coming back down (but also isn't increasing alarmingly).
I wanted to look at the RGA scans to see if there were any clues as to what changed, but it looks like the daily RGA scans stopped updating on Dec 24 2019. The c0rga machine responsible for running these scans doesn't respond to ssh. Not much to be done until the lockdown is over, I guess...
I was in the lab at the time but did not notice anything unusual (like a turbo sound, etc.). I was around the ETMX/ETMY racks (1X9, 1Y4) and the SUS rack (1X4/5), but did not go into the vacuum region.
Some short notes, more details tomorrow.
Attachment #1 shows time series of some signals, from the time I ramped off the ALS CARM control to the lockloss. With this limited set of signals, I don't see any clear indication of the cause of the lockloss, and I was never able to keep the lock going for more than a couple of minutes.
Attachment #2 shows the CARM OLTF. Compared to last week, I was able to get the UGF a little higher. This particular measurement doesn't show it, but I was also able to engage the regular boost. I did a zeroth-order test, looking at the CM_SLOW input, to make sure that I wasn't increasing the gain so much that the ADC was getting saturated. However, I did notice that the peak-to-peak error signal in this locked, 5 kHz UGF state was still ~1000 cts, which seems large?
Attachment #3 shows the DTT measurement of the relative gains of the DARM A and B paths. This measurement was taken with the DARM_A gain at 1 and the DARM_B gain at 0.015. On the basis of this measurement, DARM_B (=AS55) sees the injected excitation 16 dB above the ALS signal, so the gain of the DARM_B path should be ~0.16 for the same UGF. But I was never able to get the DARM_B gain above 0.02 without breaking the lock (admittedly, the lockloss may have been due to something else).
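For the record, the arithmetic behind that ~0.16 (a trivial sanity-check sketch, nothing 40m-specific assumed):

    rel_dB = 16.0                   # AS55 sees the excitation 16 dB above ALS
    gain_B = 10 ** (-rel_dB / 20)   # DARM_B gain for the same UGF, given DARM_A = 1
    print(f"matched DARM_B gain ~ {gain_B:.3f}")   # ~0.158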
Attachment #4 shows a zoomed in version of Attachment #1 around the time when the lock was lost. Maybe POP_YAW experienced too large an excursion?
Some other misc points:
I think the feedforward filters used for stabilizing MCL with vertex seismometers would benefit from a retraining (last trained in Sep 2015).
I wanted to re-familiarize myself with the seismic feedforward methodology. Getting good stabilization of the PRC angular motion as we have been able to in the past will be a big help for lock acquisition. But remotely, it is easier to work with the IMC length feedforward (IMC is locked more often than the PRC). So I collected 2 hours of data from early Sunday morning and went through the set of steps (partially).
Attachment #1 shows the performance of a first attempt.
Attachment #2 shows a comparison between the filter used in Attachment #1 and the filters currently loaded into the OAF system.
Attachment #3 is the ASD after implementing a time-domain Wiener filter, while Attachment #4 is an actual measurement from earlier today - it's not quite as good as Attachment #3 would have me expect, but that might also be due to the time of day.
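For reference, the core of the time-domain Wiener filter step is just solving the Wiener-Hopf equations. A minimal single-witness sketch (the variable names and tap count are illustrative, not what I actually used):

    import numpy as np
    from scipy.linalg import solve_toeplitz

    def wiener_fir(witness, target, ntaps):
        # FIR coefficients w minimizing ||target - (w convolved with witness)||^2
        N = len(witness)
        r = np.array([np.dot(witness[:N - k], witness[k:]) for k in range(ntaps)])  # witness autocorrelation
        p = np.array([np.dot(witness[:N - k], target[k:]) for k in range(ntaps)])   # cross-correlation with target
        return solve_toeplitz(r, p)   # Levinson solve of the Toeplitz system

    # e.g. w = wiener_fir(seis, mcl, 2000); pred = np.convolve(seis, w)[:len(mcl)]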
Conclusions and next steps:
On the basis of Attachments #3 and #4, I'd say it's worth completing the remaining steps for online implementation: FIR to IIR fitting and conversion to SOS coefficients that Foton likes (preferably all in python). Once I've verified that this works, I'll see if I can get some data for the motion on the POP QPD with the PRMI locked on carrier. That'll be the target signal for the PRC angular FF training. Probably can't hurt to have this implemented for the arms as well.
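For the SOS conversion step, scipy already does most of the work. A sketch with a toy stand-in for the fitted filter (the Butterworth here is just a placeholder for whatever the FIR-to-IIR fit returns; the sample rate is assumed):

    import scipy.signal as sig

    fs = 2048.0   # OAF model rate (assumed)
    # placeholder for the fitted z-domain filter; the real input is the IIR fit to the FIR taps
    z, p, k = sig.butter(4, 5.0, fs=fs, output='zpk')
    sos = sig.zpk2sos(z, p, k)   # rows of [b0 b1 b2, a0 a1 a2], with a0 = 1
    for row in sos:
        print(row)   # to be checked against the equivalent Foton design string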
While this set of steps follows the traditional approach, it'd be interesting if someone wants to try Gabriele's code which I think directly gives a z-domain representation and has been very successful at the sites.
* The y-axes on the spectra are labelled in um/rtHz but I don't actually know if the calibration has been updated anytime recently. As I type this, I'm also reminded that I have to check what the whitening situation is on the Pentek board that digitizes MCL.
The email address in the N2 checking script wasn't right - I have now updated it to email the 40m list if the sum of the reserve tank pressures falls below 800 PSI. The checker itself only runs every 3 hours (via cron on c1vac).
I reset the remote of this git repo to the 40m version instead of Jon's personal one, to ensure consistency between what's on the vacuum machine and in the git repo. There is now an N2 checker python mailer that will email the 40m list if all the tank pressures fall below 600 PSI (that leaves >12 hours for someone to react before the main N2 line pressure drops and the interlocks kick in). For now, the script just runs as a cron job every 3 hours, but perhaps we should integrate it with the interlock process.
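The core logic of the checker is just a threshold test plus a mailer - something along these lines (the channel names and addresses below are placeholders, not the actual ones in the script):

    import smtplib
    from email.message import EmailMessage
    from epics import caget   # pyepics, running on c1vac

    THRESH_PSI = 600
    tanks = ['C1:Vac-N2T1_pressure', 'C1:Vac-N2T2_pressure']   # hypothetical channel names
    pressures = [caget(ch) for ch in tanks]

    if all(p is not None and p < THRESH_PSI for p in pressures):
        msg = EmailMessage()
        msg['Subject'] = 'N2 reserve tanks low'
        msg['From'] = 'c1vac@example.org'   # placeholder
        msg['To'] = '40m@example.org'       # placeholder
        msg.set_content(f'Tank pressures {pressures} are all below {THRESH_PSI} PSI.')
        with smtplib.SMTP('localhost') as s:
            s.send_message(msg)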
Since there has been a proliferation of BHD Google docs recently, I've linked them all from the BHD wiki page. Let's continue adding any new docs to this central list.
I have made a wiring + channel list that needs to be included in the new C1AUXEY Acromag.
It was mostly copied from C1AUXEX.
I ignored the IPANG channels since it is going to be removed from the table.
Yesterday evening I took nearly all of the masks, gloves, gowns, alcohol wipes, hats, and shoe covers. These were the ones in the cleanroom cabinets at the east end of the Y-arm, as well as the many boxes under the Y-arm near those cabinets.
This photo album shows the stuff, plus some other random photos I took around the same time (6-7 PM) of the state of parts of the lab.
I'd like to re-measure the transfer function from driving MC2 position to the MC_L_DQ channel (for feedforward purposes). Swept sine would be one option, but I can't get the "Envelope" feature of DTT to work - the excitation amplitude isn't scaled as specified in the envelope, so I'm unable to make the measurement near 1 Hz (which is where the FF is effective). I see some scattered mentions of this issue in past elogs but no mention of a fix (I also feel like I have gotten the envelope feature to work for other loop measurement templates). So I thought I'd try a broadband noise injection instead, since that seems to have been the approach followed in the past. Again, the noise injection needs to be shaped around ~1 Hz to avoid knocking the IMC out of lock, but I can't get Foton to do shaped noise injections because it doesn't inherit the sample rate when launched from inside DTT/awggui. This is not a new issue - does anyone know the fix?
Note that we are using the gds-2.15 install of foton; the pre-packaged foton that comes with the SL7 installation doesn't work either.
The envelope feature for swept sine wasn't working because I had apparently specified the frequency grid in the wrong order. Eric von Reis has been notified, so that future DTT versions sort the grid and accept it in arbitrary order. Fixing that allows me to run a swept sine with an enveloped excitation amplitude and hence get the TF I want - but still no shaped noise injections via foton 😢
Do you really mean awggui cannot make shaped noise injections via its foton text box? That has always worked for me in the past.
If this is broken, I suspect someone has installed packages into the shared dirs.
The problem is that foton does not inherit the model sample rate when launched from DTT/awggui. This is likely some shared/linked/dynamic library issue - the binaries we are running are precompiled, presumably for some other OS. I've never gotten this to work since we changed to SL7 (though I did use it successfully in 2017 with the Ubuntu 12 install).
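Until that's fixed, one possible workaround sketch: shape the noise offline and inject the pre-computed waveform (e.g., with awgstream or similar). Here, bandpassed white noise around 1 Hz; the sample rate and corner frequencies are illustrative:

    import numpy as np
    import scipy.signal as sig

    fs = 256   # injection sample rate [Hz] (illustrative)
    sos = sig.butter(4, [0.3, 3.0], btype='bandpass', fs=fs, output='sos')
    shaped = sig.sosfilt(sos, np.random.randn(fs * 600))   # 10 minutes of shaped noise
    np.savetxt('mc2_exc.txt', shaped)   # to be streamed to the MC2 excitation point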
Retraining the MCL filters resulted in a slight improvement in the performance. Compared to no FF, the RMS in the 0.5-5 Hz range is reduced by approximately a factor of 3.
Attachment #1 shows my re-measurement of the MC2 position drive to MCL transfer function.
Attachment #2 shows the IIR fits to the FIR filters calculated here.
Attachment #3 shows several MCL spectra.
Conclusions + next steps
This afternoon, I kept the PRM locked for ~1hour and then measured transfer functions from the PRM angular actuators to the POP QPD spot motion for pitch and yaw between ~1pm and 4pm. After this work, the PRM was misaligned again. I will now work on the feedforward filter design.
I used Yehonathan's wiring assignments to lay the rest of groundwork for the final slow controls machine upgrade, c1auxey. Actions completed:
The "1" will be dropped after the new system is permanently installed.
Hardware-wise, this system will require:
I know that we do have these quantities left on hand. The next steps are to set up the Supermicro host and begin assembling the Acromag chassis. Both of these activities require an in-person presence, so I think this is as far as we can advance this project for now.
We want to migrate the end shutter controls from c1aux to the end Acromags. Could you add them to the list, if they're not already on it?
This will let us remove c1aux from the rack, I believe.
Yehonathan's list does include C1:AUX-GREEN_Y_Shutter and I copied its definition from /cvs/cds/caltech/target/c1aux/ShutterInterlock.db into the new ETMYaux.db file.
I noticed ShutterInterlock.db still contains about a dozen channels. Some of them appear to be ghosts (like the C1:AUX-PSL_Shutter[...] set, which has since become C1:PSL-PSL_Shutter[...] hosted on c1psl) but others like C1:AUX-GREEN_X_Shutter appear to still be in active use.
I wanted to pass along a complication pointed out by K. Thorne re: our plan to use Gen1 (old) Dolphin IPC cards in the new real-time machines: c1bhd, c1sus2. The implication is that we may be forced to install a very old OS (e.g., Debian 8) for compatibility with the IPC card driver, which could lead to other complications like an incompatibility with the modern network interface.
I have a query out to Dolphin asking:
I'll add more info if I hear back from them.
Using the data I collected yesterday, the POP angular FF filters have been trained. The offline time-domain performance looks (unbelievably) good; online performance will be verified at the next available opportunity (see update).
The sequence of steps followed is the same as that done for the MCL FF filters. The trace that is missing from Attachment #1 is the measured online subtraction. Some rough notes:
Update Apr 5, 11:45 pm:
That's pretty great performance. Maybe you can also upload some code so that we can do it later too - or maybe put it in the 40m git.
I wonder how much noise is getting injected into PRC length at 10-100 Hz due to this. Any change in the PRC ERR?
I don't have a recent measurement of the optical gain of this configuration, so I can't undo the loop, but the in-loop performance doesn't suggest any excess in the 10-100 Hz band. Interestingly, there is considerable improvement below 10 Hz. Maybe some of this is reduced A2L noise because of the better angular stability, but there is also improvement at frequencies where the FF isn't doing anything, so there could be some bilinear coupling. The two datasets were collected at approximately the same time in the evening, ~5pm, but on two different days.
Answers from Dolphin:
Since upgrading every front end is out of the question, our only option is to install an old OS (Linux kernel < 3.x) on the two new machines. Based on Keith's advice, I think we should go with Debian 8. (Link to Keith's Debian 8 instructions.)
In the past year, pygwinc has expanded to support not just fundamental noise calculations (e.g., quantum, thermal) but also any number of user-defined noises. These custom noise definitions can do anything, from evaluating an empirical model (e.g., electronics, suspension) to loading real noise measurements (e.g., laser AM/PM noise). Here is an example of the framework applied to H1.
Starting with the BHD review-era noises, I have set up the 40m pygwinc fork with a working noise budget which we can easily expand. Specific actions:
I set up our fork in this way to keep the 40m code separate from the main pygwinc code (i.e., not added as a built-in IFO type). With the 40m code all contained within one root-level directory (with a 40m-specific name), we should always be able to upgrade to the latest pygwinc without creating intractable merge conflicts.
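For the record, a user-defined noise in this framework is just a small class - a hedged sketch (API as I understand the current pygwinc docs; the data file name is a placeholder):

    import numpy as np
    from gwinc import nb

    class LaserAMNoise(nb.Noise):
        """Measured laser AM noise, loaded from disk."""
        style = dict(label='Laser AM noise')

        def calc(self):
            # interpolate a measured ASD onto the budget frequency grid; return the PSD
            f, asd = np.loadtxt('laser_am_asd.txt', unpack=True)   # placeholder file
            return np.interp(self.freq, f, asd) ** 2

    class C1Budget(nb.Budget):
        noises = [LaserAMNoise]   # extend with the fundamental + other custom noises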
[Larry (on site), Koji & Gautam (remote)]
Network recovery (Larry/KA)
Asked Larry to get into the lab.
14:30 Larry went to the lab office area. He restarted (power cycled) the edge-switch (on the rack next to the printer). This recovered the ssh-access to nodus.
Also Larry turned on the CAD WS. Koji confirmed the remote access to the CAD WS.
Nodus recovery (KA)
Apr 12, 22:43 nodus was restarted.
Apache (dokuwiki, svn, etc.) was recovered via the systemctl command on the wiki.
ELOG recovered by running the script
Control Machines / RT FE / Acromag server Status
Judging by uptime, basically only the machines that are on UPS (all control room workstations + chiara) survived the power outage. All RT FEs are down. Apart from c1susaux, the acromag servers are back up (but the modbus processes have NOT been restarted yet). Vacuum machine is not visible on the network (could just be a networking issue and the local subnet to valves/pumps is connected, but no way to tell remotely).
KA imagines that FB took some finite time to come up, but the RT machines need FB to be up in order to netboot their OS - that would explain why the RT FEs are down. If so, all we need to do is power cycle them.
Acromag: unknown state
The power was lost at Apr 12 22:39:42, according to the vacuum pressure log; the outage lasted only a few minutes.
I just now modified the /etc/rsyncd.conf file as per Dan Kozak's instructions. The old conf file is still there with the file name appended with today's date.
I then enabled the rsync daemon to run on boot using 'enable'. I'll ask Dan to start the file transfers again and see if this works.
controls@nodus|etc> sudo systemctl start rsyncd.service
controls@nodus|etc> sudo systemctl enable rsyncd.service
Created symlink from /etc/systemd/system/multi-user.target.wants/rsyncd.service to /usr/lib/systemd/system/rsyncd.service.
controls@nodus|etc> sudo systemctl status rsyncd.service
● rsyncd.service - fast remote file copy program daemon
Loaded: loaded (/usr/lib/systemd/system/rsyncd.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2020-04-13 16:49:12 PDT; 1min 28s ago
Main PID: 4950 (rsync)
└─4950 /usr/bin/rsync --daemon --no-detach
Apr 13 16:49:12 nodus.martian.113.168.192.in-addr.arpa systemd: Started fast remote file copy program daemon.
Apr 13 16:49:12 nodus.martian.113.168.192.in-addr.arpa systemd: Starting fast remote file copy program daemon...
[Koji / Gautam (Remote)]
sudo /sbin/ifdown eth0
sudo /sbin/ifup eth0
End RTS recovery
rtcds start --all
Vertex RTS recovery
sudo /sbin/ifup eth1
sudo systemctl start modbusIOC.service
sudo /sbin/ifdown eth1
sudo systemctl start modbusIOC.service
RTS recovery ~ part 2
sudo systemctl start open-mx.service
sudo systemctl start mx.service
sudo systemctl start daqd_*
sudo systemctl start MCautolocker.service
sudo systemctl start FSSSlow.service
Four nitrogen cylinders replaced the empties in the rack at the west entrance. Additionally, Airgas will now deliver only once a week. Let me know via email or text when there are four empties in the rack and I'll order the next round.
I've generated specifications for the new BHD optics. This includes the suspended relay mirrors as well as the breadboard optics (but not the OMCs).
To design the mode-matching telescopes, I updated the BHD mode-matching scripts to reflect Koji's draft layout (Dec. 2019) and used A La Mode to optimize the ROCs and positions. Of the relay optics, only a few have an AOI small enough to tolerate curvature (astigmatism), and most of those do not have much room to move. This constrained the optimization considerably.
These ROCs should be viewed as a first approximation. Many of the distances I had to eyeball from Koji's drawings. I also used the Gaussian PRC/SRC modes from the current IFO, even though the recycling cavities will both slightly change. I set up a running list of items like these that we still need to resolve in the BHD README.
At a glance, all the specifications can be seen in the optics summary spreadsheet.
The LO beam originates from the PR2 transmission (POP), near ITMX. It is relayed to the BHD beamsplitter (and mode-matched to the OMCs) via the following optical sequence:
The resulting beam profile is shown in Attachment 1.
The AS beam is relayed from the SRM to the BHD beamsplitter (and mode-matched to the OMCs) via the following sequence:
A lens is used because there is not enough room on the BHD breadboard for a pair of (low-AOI) telescope mirrors, like there is in the LO path. The resulting beam profile is shown in Attachment 2.
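For quick sanity checks of relays like these, a few lines of q-parameter propagation go a long way. A toy sketch (the distances, focal length, and waist are illustrative, not the layout values):

    import numpy as np

    lam = 1064e-9
    def prop(q, d): return q + d             # free space
    def lens(q, f): return q / (1 - q / f)   # thin lens (or curved mirror, f = R/2)

    w0 = 3e-3                                # assumed waist at the source optic
    q = 1j * np.pi * w0**2 / lam
    q = prop(lens(prop(q, 2.0), 0.6), 1.0)   # e.g. 2 m, then f = 0.6 m optic, then 1 m
    w = np.sqrt(-lam / (np.pi * np.imag(1 / q)))
    print(f"beam radius here: {w*1e3:.2f} mm")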
There's this elog from Stephen about better 1064 sensitivity from Basler. We should consider getting one if he finds that its actual SNR is as good as we would expect from the QE improvement.
Might allow for better scatter measurements - not that we need more signal, but it could allow us to use shorter exposure times and reduce blurring due to the wobbly beams.
Ok, now I understand my foolishness. It should definitely be 1/sqrt(f^2+fp^2) .
COVID-19 motivated me to revive the summary pages. With Alex Urban's help, the infrastructure was modernized, and the wiki is now up to date. I ran a test job for 2020 March 17, just for the IOO tab, and it works - see here. The LDAS rsync of our frames is still catching up; once that is done, we can restart the old condor jobs and have the pages updated on a regular basis.
On Monday, I hooked up an AG4395 to the PMC error point (using the active probe). The idea was to take a spectrum of the PMC error point every time the FSS PC drive RMS channel indicated an excursion from the nominal value. An initial look at the results doesn't suggest that this technique is particularly informative. I'll have to think more about a workaround - please share your ideas/thoughts if you have some.
Also, the feature in the spectrum at ~110 kHz makes me suspect some kind of loop instability. I'll measure the IMC loop OLG at the next opportunity.
Nice, and we should also permanently install the camera server (c1cam) which is still sitting on the electronics bench. It is running an adapted version of the Python 2/Debian 8 site code. Maybe if COVID continues long enough I'll get around to making the Python 3 version we've long discussed.
I had set up the 4395 to do this automatically a few years ago, but it looked at the FSS/IMC instead. When the PCDRIVE goes high, there is an excess in a broad hump around ~500 kHz.
But the IMC loop gain changes sometimes with alignment, so I don't know if it's a loop instability or if it's laser noise. However, I think we have observed PCDRIVE go up without the IMC power dropping, so my guess is that it was true laser noise.
This works because the IMC is much more sensitive than the PMC. Perhaps one way to diagnose it would be to lock the IMC at a low UGF without any boosts; the UGF would then be far away from the noisy frequency. However, the PCDRIVE also wouldn't show much activity in that state.
The new nitrogen cylinders were delivered to the rack at the west entrance. We only get one Airgas delivery per week during the stay-at-home order, but so far they've not let us down.
It appears that the EY green steering PZTs have somehow lost their bipolar actuation range. I will check on them the next time I go to the lab for an N2 switch.
Could be that the power outage busted something in the drive electronics.
I went to EY and saw that the HV power supply was only putting out 50 V and had hit the current limit of 10 mA (nominally, it should be at 100 V, drawing ~7 mA). This problem definitely came up after the power shutdown event: when I re-energized the HV power supply at EY, I confirmed that it was putting out the nominal values (the supply was not labelled with these nominal numbers, so I have now labelled it). Or maybe I broke it while running the dither alignment tests yesterday, even though I never drove the PZTs above 50 Hz with more than 1000 cts (= 300 mV x gain of 5 in the HV amplifier = 1.5 V) amplitude.
The problem was confirmed to be with the M2 PZT (YAW channel) and not the electronics, by driving the M2 PZT with the M1 channels; separately, the M1 PZT could be driven by the M2 channels. I also measured the capacitance of the YAW channel and found it to be nearly twice (~7 uF) the expected 3 uF. This particular PZT is different from the three others in use by the ASX and ASY systems - it is an older vintage, so maybe it just failed? 😔
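For what it's worth, a back-of-the-envelope check that the dither drive alone shouldn't have stressed the supply (using the measured 7 uF and the worst-case drive quoted above):

    import numpy as np
    C = 7e-6    # measured capacitance of the suspect YAW channel [F]
    f = 50.0    # highest dither frequency used [Hz]
    V = 1.5     # peak voltage after the x5 HV amplifier gain [V]
    I_pk = 2 * np.pi * f * C * V
    print(f"peak PZT current ~ {I_pk*1e3:.1f} mA")   # ~3.3 mA, well under the 10 mA limit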
I don't want to leave 100 V on in this state, so the HV supply at EY was turned off. Good GTRY was recovered by manual alignment of the mirror mounts. If someone has a spare PZT, we can replace it, but for now, we just have to live with manually aligning the green beam often.
Yes, we are supposed to have a few spare PI PZTs.
I've been thinking about the IMC WFS. I want to repeat the sort of analysis done at LLO, where a Finesse model was built and some inferences could be made about, for example, the Gouy phase separation between the sensors by comparing the Finesse sensing matrix to a measured one. Taking the currently implemented output matrix as a "measurement" (since the IMC WFS do stabilize the IMC transmission), I don't get any agreement between it and my Finesse model. It could be that the model needs tweaking, but there are also several known issues with the WFS themselves (e.g., imbalanced segment gains).
Building the finesse model:
Some notes about the WFS heads:
Update 2:15 pm 5/6: adding some comments Rana raised during the meeting:
The apparent increase in the ALS noise (witnessed in-loop, e.g. Attachment #2 here) during the CARM offset reduction may have an optomechanical origin.
Update 4:15 pm 5/6: Per the discussion at the meeting, I have now uploaded as Attachment #2 the force-to-displacement (i.e., m/N) transfer functions. I now think these are the appropriate units. For the ALS case, we could convert the m/N to Hz/N of extra frequency noise imprinted on the AUX laser due to the increased cavity motion. Is W/N really better here, given that the mechanism is extra frequency noise on a beatnote, and there isn't really a PDH or DC error signal?
This is the doc from Keita Kawabe on why the WFS heads should be rotated.
OK, so the QPD segments are in the "+" orientation when the 40m IMC WFS heads are mounted at 45 deg. I thought "+" was the natural PIT/YAW basis, but I guess in the LIGO parlance the "X" orientation is considered more natural.
After updating the 40m Finesse file to incorporate the new SRC length (and the removal of SR2), we find that the current SRM radius of curvature is fine. Thus, a replacement of the SRM is NOT required.
Basically, the new one-way SRC Gouy phase is 11.1 deg according to Finesse, which is very close to the previous value of 10.8 deg. Thus the transverse-mode spacing should be essentially the same.
The first attached plot shows the mode content calculated with Finesse. Here we first offset DARM by 1 mdeg and misaligned the SRM by 10 urad. From top to bottom we show the amplitude of the carrier, f1, and f2 sideband fields, respectively. The red vertical line is the nominal operating point (thanks to Koji for pointing out that we now do signal recycling instead of extraction). There is no direct co-resonance for the low-order TEM modes. (Note that the HOMs also appear to have peaks at \phi_SRM = 0; this is just because the 00 mode is resonant there, and thus the seed for the HOMs is greater.)
We can also consider a clean case without mode interactions, shown in the second plot. Indeed, we don't see co-resonances of the higher-order modes.
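For anyone wanting to reproduce this kind of scan, here is a stripped-down pykat sketch: a toy flat-ITM / curved-SRM cavity with placeholder lengths, reflectivities, and ROC - NOT the actual 40m kat file:

    from pykat import finesse

    kat = finesse.kat()
    kat.parse("""
    l laser 1 0 n0
    s s0 0.1 n0 n1
    m ITM 0.986 0.014 0 n1 n2
    s sSRC 5.4 n2 n3
    m SRM 0.9 0.1 0 n3 n4
    attr SRM Rc 9
    attr SRM xbeta 10u
    cav SRC ITM n2 SRM n3
    maxtem 4
    ad a00 0 0 0 n4
    ad a10 1 0 0 n4
    xaxis SRM phi lin -90 90 400
    yaxis abs
    """)
    out = kat.run()
    # out.x vs out['a00'], out['a10'] gives the co-resonance picture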
Finally - Attachment #1. This plot uses 16 Hz EPICS data. All y-axes are uncalibrated for now, but TRX/TRY are normalized such that the POX/POY lock yields a transmission of 1. CARM UGF is only ~3 kHz, no boosts were turned on yet.
Attachment #2 and Attachment #3 are phone photos of the camera images of the various ports. After some alignment work, the transmitted arm powers were ~200, i.e., PRG ~10. FWIW, this is the darkest I've ever seen the 40m dark port, c.f. 2016. Of course, the exposure time / ND filter / light levels could all have changed.
This work was possible during the daytime (~6pm PDT), but probably only because it was Sunday. The other rate-limiting factor here is the frankly terrible IMC duty cycle. I honestly didn't expect to get this far and ran out of time, but I think the next steps are:
As usual, I would like to request that we don't change the IFO any more than necessary until the BHD vent - I found it pretty difficult to get here.
Attachment #4 now shows the measured DARM OLTF with DARM entirely under AS55_Q control. The UGF is ~120 Hz and the phase margin ~30 deg - seems okay for a first attempt. I'll now need to infer the OLTF over a wider range of frequencies by lining this measurement up with a model, so that I can undo the loop when plotting the DARM ASD.
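The loop-undoing itself is one line once the OLTF model is lined up - a minimal sketch with a toy 1/f OLTF standing in for the model-matched one:

    import numpy as np

    f = np.logspace(0, 3, 500)                         # Hz
    G = (120.0 / f) * np.exp(-1j * np.deg2rad(150))    # toy OLTF: UGF 120 Hz, PM 30 deg
    asd_inloop = np.ones_like(f)                       # placeholder in-loop DARM ASD
    asd_free = asd_inloop * np.abs(1 + G)              # free-running estimate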
apt install source-highlight
then modified bashrc to point to /usr/share instead of /usr/bin
Attachment #1 is meant to show that having a T=500ppm PR2 optic will not be the dominant contributor to the achievable recycling gain. Nevertheless, I think we should change this optic to start with. Here, I assume:
In reality, I don't know how good the MM is between the PRC and the arms. All the scans of the arm cavity under ALS control, looking at the IR resonances, suggest that the mode matching into the arm is ~92%, which I think is pretty lousy. Kiwamu and co. claimed 99.3% matching into the interferometer, but in all the locks the REFL mode looks completely crazy, so I don't know.
Is \eta_A the roundtrip loss for an arm?
Thinking about the PRG=10 you saw:
- What's the current PR2/3 AR? 100ppm? 300ppm? The beam double-passes them. So (AR loss)x4 is added.
- Average arm loss is ~150ppm?
Does this explain PRG=10?
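A sketch of the accounting suggested above (all numbers are assumptions to be checked against the 40m optics wiki, not measurements; mode mismatch is crudely treated as a loss):

    import numpy as np

    T_prm = 0.056        # PRM power transmission (assumed)
    T_itm = 0.014        # ITM power transmission (assumed)
    L_arm = 150e-6       # average arm round-trip loss (per the guess above)
    L_ar  = 4 * 300e-6   # PR2+PR3 AR loss, double-passed (x4)
    mm    = 0.92         # PRC-to-arm mode matching (from the ALS scans above)

    r_arm = 1 - 2 * L_arm / T_itm                 # over-coupled arm, on resonance
    r_eff = r_arm * np.sqrt((1 - L_ar) * mm)      # lump AR loss and mismatch in
    r_prm = np.sqrt(1 - T_prm)
    PRG = T_prm / (1 - r_prm * r_eff) ** 2
    print(f"PRG ~ {PRG:.1f}")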