Installation: The following equipment was installed in 1Y3, see Attachment #1:
Removal: The following equipment was removed from 1Y3:
I judge that we are good to go ahead with an install tomorrow.
Work done today:
Testing of functionality:
Much testing remains to be done, but I will defer further testing till Monday - the main functionality to be verified in the short run is the whitening gain stepping. The strain-relief of cables and general cleanup will be undertaken by Chub. The current state of affairs is shown in Attachment #3; it leaves much to be desired in terms of cleanliness.
I will also set up the autoburt for the new machine on Monday. We will also need to add some channels to C0EDCU.ini if we want to trend them over some years (e.g. RF signal powers for monitoring ERA-5 health).
* c1lsc FE was rebooted using the usual script, and everything seems to be healthy in CDS-land again, see Attachment #4.
Here is what is left to do:
Today I set up the autoburt.req file for the c1iscaux channels, and confirmed that the snapshots are getting recorded. There were a lot of channels in the old autoburt.req file which I thought were unnecessary (and several which no longer exist), so now the only channels that are burt-ed are the whitening gains and the states of the AA filters. If someone feels we need more channels to be snapshot-recorded, feel free to add them to the file (the .req file is just a list of channel names, one per line).
In the old target directory, there were also various versions of a "saverestore.req" file - why do we need this in addition to an autoburt? I guess it is possible they are used by the IFOconfigure scripts to set up some whitening gains etc...
I did some more investigation of what the appropriate cavity geometry would be for the OMC. Unsurprisingly, depending on the incident mode content, the preferred operating point changes. So how do we choose what the "correct" model is? Is it accurate to model the output beam HOM content from NPROs (is this purely determined by the geometry of the lasing cavity?), which we can then propagate through the PMC, IMC, and CARM cavities? This modeling will be written up in the design document shortly.
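For context on the figure of merit (my framing, not from the design document): assuming an impedance-matched OMC of finesse F, the power transmission of a TEM_mn field whose round-trip phase is offset from the carrier resonance is the usual Airy function,

T_{mn} = \left[ 1 + \left( \frac{2\mathcal{F}}{\pi} \right)^2 \sin^2\!\left( \frac{\phi_{mn}}{2} \right) \right]^{-1}, \qquad \phi_{mn} = \phi_0 + (m+n)\,\psi_{\mathrm{RT}},

where psi_RT is the round-trip Gouy phase set by the cavity geometry and phi_0 is any macroscopic detuning (e.g. for an RF sideband). Choosing the "correct" geometry then amounts to choosing psi_RT (and the length) such that T_mn stays small for every mode with significant incident power - which is exactly why the preferred operating point depends on the assumed mode content.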
*Colorbar label erratum - instead of 1 W on BS, it should read 1 W on PRM. The heatmaps take a while to generate, so I'll fix that in a bit.
Update 230pm PDT: I realize there are some problems with these plots. The critically coupled f2 sideband getting transmitted through the T=10% SRM should have significantly more power than the transmission through a T=100ppm optic. For similar modulation depths (which we have), I think it is indeed true that there will be x1000 more f2 power than f1 power for both the IFO AS beam and the LO pickoff through the PRC. But if the LO is picked off elsewhere, we have to do the numbers again.
Attachment #1: Two candidate models. The first follows the power law assumption of G1201111, while in the second, I preserved the same scaling, but for the f1 sideband, I set the DC level by assuming a PRG of 45, modulation depth of 0.18, and 100 ppm pickoff from the PRC such that we get 50 mW of carrier light (to act as a local oscillator) for 10 W incident on the back of PRM. Is this a reasonable assumption?
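The arithmetic behind that figure, using only the numbers quoted above, checks out to ~45 mW:

# Back-of-the-envelope check of the LO power assumption (numbers from the text).
P_in    = 10      # W, incident on the back of PRM
PRG     = 45      # power recycling gain
pickoff = 100e-6  # 100 ppm pickoff from the PRC

P_LO = P_in * PRG * pickoff                      # circulating power times pickoff fraction
print(f"LO carrier power: {P_LO * 1e3:.0f} mW")  # -> 45 mW, i.e. ~50 mW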
Attachment #2: Heatmaps of the OMC transmission, assuming (i) 0 contrast defect light in the carrier TEM00 mode, (ii) PRG=45 and (iii) 1 W incident on the back of PRM. The color bar limits are preserved for both plots, so the "dark" areas of the plot, which indicate candidate operating points, are darker in the left-hand plot. Obviously, when there is more f1 power incident on the OMC, more of it is transmitted. But my point is that the "best operating point(s)" in both plots are different.
Why is this model refinement necessary? In the aLIGO OMC design, an assumption was made that the light level of the f1 sideband is 1/1000th that of the f2 sideband in the interferometer AS beam. This is justified as the RC lengths are chosen such that the f2 sideband is critically coupled to the AS port, but the f1 is not (it is not quite anti-resonant either). For the BHD application, this assumption is no longer true, as long as the LO beam is picked off after the RF sidebands are applied. There will be significant f1 content as well, and so the mode content of the f1 field is critical in determining the OMC filtering performance.
There were a bunch of useless / degenerate channels added - e.g. whitening gains which are already burt-snapshotted. Maybe there are many more useless channels being trended, but there is no need to add more.
Copy-pasting wasn't done correctly - the first 4 added channels were duplicates. There are in fact 5 LO power mons, one for each of the frequencies 11, 33, 55, 110 and 165 MHz.
I cleaned up. Basically only the detect-mon channels, and the ALS channels, are new in the setup now. I will review if any extra channels are required later. While checking that the daqd is happy, I noticed c1lsc FEs are in their stuck state, see Attachment #1. I guess a cable was bumped when the strain relief operation was underway. I'm not attempting a remote resuscitation.
I added the list of new c1iscaux channels to /opt/rtcds/caltech/c1/chans/daq/C0EDCU.ini and restarted the framebuilder. Koji had thought some of these channels might have previously existed under slightly different names. However, after looking through C0EDCU.ini and the other _SLOW.ini files, I did not find any candidates for removal. As far as I can tell, all of these channels are being recorded for the first time.
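For reference, slow channels are added to C0EDCU.ini as bracketed stanzas, one per channel; as far as I know, the acquisition parameters (rate, data type) are inherited from the [default] block at the top of the file. The channel names below are placeholders, not the actual ones added:

# appended to /opt/rtcds/caltech/c1/chans/daq/C0EDCU.ini
# (placeholder names; acquisition defaults come from the [default] block)
[C1:ISC-LO_PWR_MON_11MHZ]
[C1:ISC-LO_PWR_MON_33MHZ]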
Chub wanted to get the correct part number for the replacement UPS batteries, which necessitated opening up the UPS. To be cautious, all the workstations were shut down at ~9:30am while the unit was pulled out and inspected. While looking at the UPS, we found that the insulation on the main power cord is damaged at both ends. Chub will post photos.
However, despite these precautions, rossa reports some error on boot-up (not the same xdisp junk that happened before). pianosa and donatella came back up just fine. rossa is remotely accessible (ssh-able) though, so maybe we can recover it...
Please, no one touch the UPS: last time it destroyed ROSSA. Please ask Chub to order the replacement batteries so we can do this in a controlled way (fully shutting down ALL workstations first). Last time we wasted 8 hours rebuilding ROSSA.
I installed 6 of these in 1Y2. Three were for PD INTF #1-3, and I used three more for the AS110, REFL11, and REFL33 Demod board FEs, where the strain-relief of the DC power cables to the Eurocrate was becoming a problem. So now there are only 4 units available as spares.
Once the strain-relieving of the Dsub cabling to 1Y3 is done, we can move ahead with testing. I'd like to put this to bed this week if possible.
Let's not worry about C1LSC until the c1iscaux upgrade is done.
But it still doesn't lock. We noticed that the c1lsc machine wasn't working, so we ran rebootC1LSC.sh.
For some reason, the daqd_fw service was dead on FB. This meant that no frames were being written since Aug 23, which probably coincides with when the c1lsc frontend crashed. Sad 😢 😭 🙁 . Simply restarting the fw service does not work, it crashes again after ~20 seconds. The problem may have to do with the indeterminate state of the c1lsc expansion chassis. However, this is not something that can immediately be fixed, as Chub is still working on the wiring there. So in summary, no frame data will be available until we fix this problem (it is still unclear what exactly the problem is). Team WFS can still work by getting online data.
Why were the CDS overview DC indicators not red???
Unrelated to this work: I had to key the c1psl crate to get the IMC autolocker functioning again. However, I found that the key 🔑 turns continuously - as opposed to having two well-defined states, ON and OFF. Be careful while handling this.
Came across this while looking up the BIO situation at 1Y2. For reference, the fix Koji mentions can be seen in the attached screenshot (one example, the other BIO cards also have a similar fix). The 16th bit of the BIO is grounded, and some bit-shifting magic is used to implement the desired output.
Yutaro talked about the BIO bug in the KAGRA elog: http://klog.icrr.u-tokyo.ac.jp/osl/?r=9536
I think I made a similar change for the 40m model somewhere (I don't remember where), but be aware of the presence of this bug.
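I don't remember the exact implementation either, but the gist, as I understand it: with the 16th bit (bit 15) hardwired to ground, only 15 bits of the word are usable, so the software side must never rely on that bit. A hypothetical illustration of the constraint (not the actual model fix):

# Hypothetical illustration of the grounded-MSB constraint (not the actual fix).
def pack_bio_word(desired: int) -> int:
    """Return a word that is safe to send to a BIO card whose bit 15 is grounded."""
    if desired & (1 << 15):
        raise ValueError("bit 15 is grounded on this BIO card and cannot be set")
    return desired & 0x7FFF  # only the lower 15 bits are physically meaningful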
controls@fb1:~ 127$ sudo systemctl status daqd_fw.service
● daqd_fw.service - Advanced LIGO RTS daqd frame writer
Loaded: loaded (/etc/systemd/system/daqd_fw.service; enabled)
Active: active (running) since Tue 2019-09-17 21:32:25 PDT; 17min ago
Main PID: 22040 (daqd_fw)
└─22040 /usr/bin/daqd_fw -c /opt/rtcds/caltech/c1/target/daqd/daqdrc.fw
Sep 17 21:32:31 fb1 daqd_fw: [Tue Sep 17 21:32:31 2019] Producer crc thread - label dqprodcrc pid=22108
Sep 17 21:32:31 fb1 daqd_fw: [Tue Sep 17 21:32:31 2019] [Tue Sep 17 21:32:31 2019] Producer thread - label dqproddbg pid=22109Producer crc... permitted
Sep 17 21:32:31 fb1 daqd_fw: [Tue Sep 17 21:32:31 2019] Producer crc thread put on CPU 0
Sep 17 21:32:31 fb1 daqd_fw: [Tue Sep 17 21:32:31 2019] Producer thread priority error Operation not permitted
Sep 17 21:32:31 fb1 daqd_fw: [Tue Sep 17 21:32:31 2019] Producer thread put on CPU 0
Sep 17 21:32:31 fb1 daqd_fw: [Tue Sep 17 21:32:31 2019] Producer thread - label dqprod pid=22103
Sep 17 21:32:31 fb1 daqd_fw: [Tue Sep 17 21:32:31 2019] Producer thread priority error Operation not permitted
Sep 17 21:32:31 fb1 daqd_fw: [Tue Sep 17 21:32:31 2019] Producer thread put on CPU 0
Sep 17 21:32:35 fb1 daqd_fw: [Tue Sep 17 21:32:35 2019] Minute trender made GPS time correction; gps=1252816371; gps%60=51
Sep 17 21:33:31 fb1 daqd_fw: [Tue Sep 17 21:33:31 2019] ->3: clear crc
drwxr-xr-x 2 controls controls 569344 Aug 23 05:17 12465
drwxr-xr-x 2 controls controls 565248 Aug 23 05:41 12466
drwxr-xr-x 2 controls controls 557056 Aug 23 05:53 12505
drwxr-xr-x 2 controls controls 262144 Aug 23 18:40 12506
drwxr-xr-x 2 controls controls 12288 Sep 17 21:54 12528
Unrelated to this work: c1auxey was keyed.
INCORRECT INFO IN THIS ELOG HAS BEEN REMOVED. SEE THIS ELOG FOR THE UPDATED INFO.
With the help of a tester board, I verified the mapping between fast BIO DB37 pins, and pins on the IDC50 connectors that are to be broken out to the whitening boards. I will enlist Chub to implement this mapping in hardware later today.
Update 2019 Sep 19 1730: The pin numbers of the IDC50 connector are all off by 1, i.e. 3-->4 and so on. I will fix this shortly. The problem arose because I was looking at the pinout for the wrong gender of IDC50 connector.
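Since the error was a uniform off-by-one, the correction to the mapping table is mechanical; a sketch with made-up pin numbers (the real map came from the tester-board measurement):

# Off-by-one correction to the draft DB37 -> IDC50 pin map (pin numbers made up).
draft_map = {1: 3, 2: 5, 3: 7}                              # DB37 pin -> IDC50 pin
fixed_map = {db: idc + 1 for db, idc in draft_map.items()}  # 3 -> 4, 5 -> 6, ...
print(fixed_map)  # {1: 4, 2: 6, 3: 8}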
The custom ribbon cables piping the coil driver board outputs to the eLIGO (?) TTs (a.k.a. TT1 and TT2) are damaged. They need to be re-made. I can't find any pin-mapping for them.
While waiting for the LSC photodiode whitening switching cross-connect work to be done, I thought I'd re-align the IFO a bit. However, I was unable to find any beam making it to the REFL/AS ports despite some TT steering. I remembered that Chub had undone the TT connections at 1Y2 as well, and thought I'd check the cabling to make sure all was in order. On going to the rack, however, I found that these connections were damaged at the coil-driver end (see Attachment #1), presumably during the cable extraction. These need to be re-made...😔
While debugging this problem, the c1lsc models crashed. I ran the reboot script this morning to bring the models back. There was a 0x4000 error on the DC indicators for the c1lsc models (an mx_stream error which couldn't be fixed by restarting the mx service) the first time I ran the script, so I ran it again; now the indicator lights are in their nominal state.
False alarm - the mistake was mine. Looking at the schematic diagram, the AI/Dewhite board, D000316, accepts the inputs from the DAC on the P2 connector. While restoring the connections at 1Y2, I had plugged the outputs of the DAC interface board into the P1 connectors of the AI boards. Having rectified this problem, I am now able to move the beam on the AS camera in both PIT and YAW using TT1 or TT2. So to zero-th order, this subsystem appears to work. A more in-depth analysis of the angular stability of the TTs can only be done once we re-align the arms and lock some cavities.
While working on recovering interferometer alignment, I noticed that the ETMX Oplev SUM channel reported 0 counts. Attachment #1 shows the 200 day trend - despite the missing data, the accelerating downward decay is evident. I confirmed that there is no light coming out of the HeNe by walking down to EX. The label on the HeNe says it was installed in March 2017, so the lifetime was ~30 months. Seems a little short? I may replace this later today.
I was hoping that the dark / electronics noise level on the LSC photodiodes would be sufficient for me to test the whitening gain switching on the iLIGO Pentek whitening boards. However, this does not seem to be the case. I guess to be thorough, we have to do this kind of test. It's a bit annoying to have to undo and redo the SMA connections, but I can't think of any obvious easier way to test this functionality. More annoyingly, the sensing matrix infrastructure necessary to do the kind of test described in the linked elog is only available for some PDs. I don't really want to modify the c1cal model and go through another mass reboot cycle.
While I was at it, I was also thinking about the tests we want to do. Here is a quick first pass - if you can think of other tests we ought to do, please add them to the list!
I tried to lock the Y arm cavity length to the PSL frequency using POY11_I as an error signal. Even though I think the cavity alignment is good (I see TRY flashes ~0.8), I am unable to achieve a lock. I checked the signal conditioning, and as far as I can tell, all the settings are correct, but there may be some settings that have not been re-assigned correct values. The other possibility is that something is not quite right with the new c1iscaux. The PDH error signal and arm cavity flashes all seem good though (see Attachment #1), so I'm not sure what obvious thing I'm missing.
To be continued...
I reset the normalization for both arms on Jul 9 2019.
The transmission reached just 1.00 at the end. Was the transmission recently normalized? (See Attachment #5)
There is no visible PDH error signal on the POX11 channels. As a result, I am unable to lock the XARM length to the laser frequency. See Attachment #1 - the Y arm length is locked to the PSL frequency, and control is disabled for the XARM servo.
Now that several of the c1iscaux functionality tests have been completed, I wanted to push ahead with some locking. However, I was foiled at this early stage, for reasons as yet unknown. One possibility is that the problem lies somewhere in the demod / whitening electronics chain - investigations below.
To facilitate POX locking investigations, I replaced this HeNe today with one of the spares Chub/Steve had acquired some time ago. Details:
The RIN of the sum channel with the Oplev servo engaged, along with that for the other core FPMI optics, is shown in Attachment #1. The ETMX HeNe RIN is comparable to that of the other HeNes in the lab (the high-frequency behaviour of the BS Oplev is different from the other four because the QPD whitening electronics are different).
Not sure what to make of the ETMY RIN profile being so different from the others - it seems like some kind of glitchy behaviour; I could see the mean level of the ASD moving up and down as I was taking the averages in DTT. Needs further investigation.
The old / broken HeNe is placed (inside the packaging of the abovementioned replacement HeNe) on Steve's old desk for disposal in the proper way.
*It looked like Steve had hooked up a thermocouple to be able to monitor the temperature of the HeNe head. I removed this feature, as I figured that if it isn't hooked up to the DAQ, it isn't a really useful diagnostic. If we want, we can restore this in a more useful way.
DATED - SEE ELOG 14941 for the most up-to-date info on latch.py.
I modified /cvs/cds/caltech/target/c1iscaux/latch.py and /cvs/cds/caltech/target/c1iscaux/C1_ISC-AUX_CM.db to set up the mbbo logic for the other three channels on the CM board, namely REFL2 Gain, AO Gain, and the Super boosts. The systemctl processes were restarted on c1iscaux. We are now ready to perform systematic checks on the CM board functionality.
The addressing of the Acromag BIO registers is done in a way that makes it inconvenient to use the EPICS mbboDirect protocol.
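For the record, the workaround amounts to a slow software latch: instead of trusting mbboDirect to write the whole word to the hardware atomically, a script watches the soft "*_SET" channel and writes the individual bits to their Acromag registers itself. A minimal sketch of the idea, with hypothetical channel names and assuming pyepics (the real logic lives in latch.py):

# Minimal sketch of the software latch idea (hypothetical channels, pyepics).
import time
import epics

SET_CH  = "C1:LSC-REFL1_GAIN_SET"                          # requested value (soft channel)
BIT_CHS = [f"C1:LSC-REFL1_GAIN_BIT{n}" for n in range(5)]  # one Acromag BIO bit each

last = None
while True:
    val = int(epics.caget(SET_CH))
    if val != last:                           # latch only when the request changes
        for n, ch in enumerate(BIT_CHS):
            epics.caput(ch, (val >> n) & 1)   # write each bit to its own register
        last = val
    time.sleep(0.1)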
I tested the new latch.py script by toggling the various sliders (one at a time) between two values and monitoring the states of the various soft and "*_BITS" channels, see Attachment #1. The behavior seems consistent to me, but to be sure, we have to use Koji's LED tester board and confirm that the physical bits are being toggled correctly. The StripTool templates live in /cvs/cds/caltech/target/c1iscaux/CMdiag.
I have not yet implemented the fix for the mbbo gain channels for all the gains - only REFL1_GAIN is set up correctly now. I need to look at the hardware for the correct addressing of the bits.
I confirmed that there is light incident on the POX photodiode. So the problem must lie downstream in the demod / whitening / AA electronics. With the PRM aligned (i.e. PRFPMI config with all DoFs uncontrolled), I could see the flashing beam on an IR card. I could also see the spikes in DC power incident on the photodiode using the "DC Monitor" port on the photodiode head and an oscilloscope.
Update 245 pm: I confirmed that I could see an 11 MHz sine wave by connecting the POX11 RFPD output cable at the 1Y2 end to an oscilloscope. The amplitude of this signal was also changing, corresponding to the cavity fringing in and out of resonance. I couldn't, however, see any signal on the RFPDmon port, or the I/Q demodulated output ports. So as of now, the culprit seems to be something on the Demod board. Further investigations underway...
Update 315pm: I did the following checks:
* Look for the POX beam with an IR viewer.
I did the following:
I made some model changes to c1lsc. To propagate the changes, I tried the usual rtcds make sequence, but I got an error about the model file not being in the path. This is down to my re-organization of the paths to cleanly get everything under git version control, so I had to run the following path modification. Where is this variable set, and how can I add the new paths to it? The model compilation, installation, and restart all went smoothly after I made this change.
For smooth reboot of the models, I used the reboot script. I had to restart the daqd processes on FB, but now all the CDS indicator lights are green.
I commenced the migration procedure, starting with making a tagged commit of the currently running simulink models. A local backup was also made, plus we have the usual chiara-based backup, so I think we're in good shape.
Attachment #1 shows a first look at the IR ALS noise after my re-coupling of the IR light into the fiber at EY.
CDS model changes:
Once Koji is done with his checkout of the whitening electronics, I will try and lock the PRMI.
The PRMI was locked with the carrier field resonant in the PRC 🙌. The lock is pretty stable (I only let it stay locked for ~10mins and then deliberately unlocked to see if I could readily re-lock, but it has stayed locked for the last ~20mins while I typed this up). See Attachment #1 for the DC power monitor StripTool for a short section of lock.
Next (for LSC activities):
I'm leaving the LSC mode off for tonight, but with the PRMI optics aligned and ETMs misaligned.
This morning, I restarted the c1oaf model on the c1lsc machine, so as to have the option of enabling some feedforward action. Unsurprisingly, the "DC" indicator is red, citing a "0x2bad" error. In the past, I've been able to correct this by simply restarting the model. But given the fragility of the c1lsc machine, I think I'll live with not having the OAF model signals in frames. Medium-term, I'd like to pare down the c1oaf model a bit - I think it has way too many options/matrices right now, and is an unnecessarily bloated and heavy model. Unless there are serious objections, I will do this work when I next feel like it.
The anaconda distribution used by the control room workstations is actually installed on the shared drive (/cvs/cds/ligo/apps/anaconda/) for consistency reasons. The version was 4.5.11. I ran the following commands to update it today. Now it is version 4.7.12.
conda update conda
conda update anaconda
The second command takes a while to resolve conflicts, so I've left it running inside a tmux session for now.
Recall that the bash alias for using the anaconda-managed python is "apython". I recommend everyone set up a virtual environment when trying out new package installs, to avoid destroying the locking scripts.
I measured the OLTF of both the PRM Oplev loops. Nothing sticks out as odd to me in this measurement - there seems to be ~40 degrees of phase margin and >10 dB of gain margin for both loops, see Attachment #1. I didn't measure down to the second UGF at ~0.2 Hz (the Oplev loops are AC coupled), so there could be something funky going on there. The problem still persists - if I misalign and realign the PRM using the ifoalign scripts, the automatic engagement of the Oplev loops causes them to oscillate. It could be that the script doesn't wait long enough for the alignment transient to die out.
Update 1230pm: Indeed, this was due to the integrator transient. It dies away after a couple of seconds.
The PRM Oplev servo needs some tuning; it is currently susceptible to oscillations in pitch.
I was able to lock the FPMI. The lock was quite stable. However, the fluctuations in the ASDC power suggest that it will be difficult to make a DC measurement of the contrast defect in this configuration. This problem can be circumvented in part by some electronics tuning. However, the alignment jitter couples some HOM light which is an independent effect. Can this be a good testbed for the proposed AS WFS system?
I didn't do any serious budgeting yet - need to think about / do some modeling on how this configuration can be made useful.
Today, I found out that this type of "0x2bad" DC error is connected to the 1e+20 cts output. The solution was to bite the bullet and stop/start the c1oaf model (at the risk of crashing the vertex FEs). Today I was lucky, and the model came back online with all CDS indicators green, at which point I was able to engage the length feedforward to MC2 (with some admittedly old filter). Some subtraction is happening, see Attachment #1. This was just meant to test whether the signal routing is happening - the feedforward signal goes to the "ALTPOS" input of the suspension CDS block, which AFAIK does not have a corresponding MEDM EPICS indicator, so I couldn't directly confirm whether the feedforward control signal was in fact making it to the suspension. On the evidence of the suppression of MCL in the 1-3 Hz band, I would conclude that it is. It is useful to be able to engage these FF filters for better lockability.
Attachment #1 - the vertex seismometer input produces 1e+20 cts at the output of the feedforward filter. Attachment #2 shows the shape of the feedforward filters - doesn't explain the saturation. Since this is a feedforward loop, a runaway loop can't be the explanation either.
I propose the following re-organization of the PDFR measurement breadboard. We have all the parts on hand, just needs ~30mins of setup work and some characterization afterwards. The fiber beamsplitter will not be PM, but for this measurement, I don't think that matters (the patch fiber from the diode laser head isn't PM anyways). We have one spare 1 GHz BW NF1611 that is fiber coupled (used to live on the ITMY in-air table, and is (conveniently) labelled "REF DET", but I'm not sure what the function of this was). In any case, we have at least 1 free-space NF1611 photodiode available as well. I suggest confirming that the FC version works as expected by calibrating against the free space PD first.
Update 245pm: Implemented, see Attachment #2. Aaron is testing it now, and will post the characterization results.
There is an imbalance between the POX and POY detector outputs reported in the CDS system. Possibilities are (i) the POX PD has an uncoated glass window whereas POY does not, or (ii) there is some problem in the electronics.
So increasingly, it looks like the electronics are the source of the problem.
I think the metric of interest here is the consistency of the AC transimpedance of the proposed new "Reference PD" (= fiber coupled NF1611) vs the old reference (free space NF1611), since everything will be calibrated against that.
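Said another way, every DUT measurement is a ratio against the reference, so only the reference transimpedance enters the absolute calibration:

Z_{\mathrm{DUT}}(f) = Z_{\mathrm{REF}}(f)\,\frac{V_{\mathrm{DUT}}(f)}{V_{\mathrm{REF}}(f)},

so any inconsistency between the fiber-coupled and free-space NF1611 responses propagates directly into every subsequent PDFR result.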
Something still looks very wrong -- the PD is supposed to be flat out to 1 GHz. Calibration to physical units is pending; need food.
I managed to achieve a few transitions of control of the XARM length using the ALS error signal. The lock is sort of stable, but there are frequent "glitches" in the TRX level. Needs more noise hunting, but if the YARM ALS is also "good enough", I think we'd be well placed to try PRMI/DRMI locking with the arms held off resonance (while variable finesse remains an alternative).
Attachment #1: One example of a lock stretch.
Attachment #2: ASD of the frequency noise witnessed by POX with the arm controlled by ALS. The observed RMS of ~30pm is ~3-4 times higher than the best performance I have seen, which makes me question whether the calibration is off. To be checked...
This elog is meant to be a summary of some of the many subtleties on the CM board. The latest schematic of the version used at the 40m can be found at D1500308.
Acromag BIO testing:
During my bench testing of the Acromag chassis, I had not yet figured out mbboDirect and the latch logic, so I did not fully verify the channel mapping (= wiring inside the Acromag box), or whether the switching behavior was consistent with what we expect. Koji and I verified (using the LED tester breakout board) that all the channels have the expected behavior 👏. Note that this is only a certification at the front-panel DB37 connectors of the Acromag chassis; testing of the integrated electronics chain, including the CM board, is in progress...
I improved the alignment of the green beam into the Y arm cavity.
Other changes made today:
I managed to execute the first few transitions of locking the arm lengths to the laser frequency in the CARM/DARM basis using the IR ALS system 🎉 🎊 . The performance is not quite optimized yet, but at the very least, we are back where we were in the green days.
Over the week, I'll try some noise budgeting, to improve the performance. The next step in the larger scheme of things is to see if we can lock the PRMI/DRMI with CARM detuned off resonance.
See trend. This is NOT symptomatic of some frozen slow machine - if I disable the WFS servo inputs, the lock holds just fine.
Turns out that the beam was almost completely missing the WFS2 QPD. WTF 😤. I re-aligned the beam using the steering mirror immediately before the WFS2 QPD, and re-set the dark offsets for good measure. Now the IMC remains stably locked.
Please - after you work on the interferometer, return it to the state it was in. Locking is hard enough without me having to hunt down randomly misaligned/blocked beams or unplugged cables.
I took this opportunity to do some WFS offset updates.
Yesterday, Koji and I noticed (from the wall StripTool traces) that the vertex seismometer RMS between 0.1-0.3 Hz in the X direction increased abruptly around 6pm PDT. This morning, when I came in, I noticed that the level had settled back to normal. Trending the BLRMS channels over the last 24 hours, I see that the 0.3-1 Hz band in the Z direction shows some anomalous behaviour in almost exactly the same time band. It is hard to believe that any physical noise was so well aligned to the seismometer axes; I'm inclined to think this is indicative of some electronics issue with the Trillium interface unit, which has been known to be flaky in the past.
There is ~ 7% variation in the power seen by the MC2 trans QPD, depending on the WFS offsets applied to the MC2 PIT/YAW loops. Some more interpretation is required however, before attributing this to spot-position-dependent loss variation inside the IMC cavity.
Attachment #1: This shows a scatter plot of the MC2 transmission and IMC REFL average values after the WFS loops have converged to the set offset positions. The sizes of the points are proportional to the normalized variance of the quantity. The purpose of this plot is to show that there is significant variation of the transmission, much more than the variance of an individual datapoint during the course of the averaging (again, the size of the circles is only meant to be indicative; the actual variance in counts is much smaller and wouldn't be visible on this plot scale). For a critically coupled cavity, I would have expected TRANS/REFL to be perfectly anti-correlated, but in fact, they are, if anything, correlated. So maybe the WFS loops aren't exactly converging to optimize the input pointing for a given offset?
Attachment #2: Maps of the transmission/reflection as a function of the (YAW, PIT) offset applied. The radial coordinate does not yet mean anything physical - I have to figure out the calibration from offset counts to spot position motion on the optic in mm, to get an idea for how much we scanned the surface of the optic relative to the beam size. The gray circles indicate the datapoints, while the colormaps are scipy-based interpolation.
Attachment #3: After talking with Koji, I explicitly show the correlation structure between the IMC REFL DCMON and MC2 TRANS. The shaded ellipses indicate the 1, 2 and 3-sigma bounds for the 2D dataset going radially outwards. The correlation coefficient for this dataset is 0.46, which implies moderate positive correlation. 🤔
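For completeness, a sketch of how the correlation coefficient and the n-sigma ellipse parameters in Attachment #3 can be computed; the data here are synthetic stand-ins for the per-offset averages:

# Sketch: correlation coefficient and sigma-ellipse parameters (synthetic data).
import numpy as np

rng = np.random.default_rng(0)
refl, trans = rng.multivariate_normal(
    [0, 0], [[1.0, 0.46], [0.46, 1.0]], size=25).T  # stand-ins for the scan points

r = np.corrcoef(refl, trans)[0, 1]                  # Pearson r (~0.46 in the text)
cov = np.cov(refl, trans)
evals, evecs = np.linalg.eigh(cov)                  # eigenvalues in ascending order
width, height = 2 * np.sqrt(evals)                  # full axes of the 1-sigma ellipse
angle = np.degrees(np.arctan2(evecs[1, 1], evecs[0, 1]))  # major-axis orientation
# The n-sigma bounds are the same ellipse with both axes scaled by n = 1, 2, 3.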
The following procedure - apply offsets to the WFS loops, wait for convergence, and average the transmission and reflection levels - was implemented in a python script:
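The actual script isn't reproduced here; a minimal sketch of the logic, with assumed channel names and using pyepics, might look like:

# Hypothetical sketch of the WFS offset scan (channel names assumed, pyepics).
import time
import epics
import numpy as np

def measure(pit, yaw, settle=60, navg=30, dt=1.0):
    """Apply (PIT, YAW) offsets to the MC2 trans QPD loops, wait, then average."""
    epics.caput("C1:IOO-MC2_TRANS_PIT_OFFSET", pit)  # assumed offset channels
    epics.caput("C1:IOO-MC2_TRANS_YAW_OFFSET", yaw)
    time.sleep(settle)                               # let the WFS loops converge
    trans, refl = [], []
    for _ in range(navg):
        trans.append(epics.caget("C1:IOO-MC_TRANS_SUM"))
        refl.append(epics.caget("C1:IOO-MC_RFPD_DCMON"))
        time.sleep(dt)
    return np.mean(trans), np.std(trans), np.mean(refl), np.std(refl)

# scan a grid of offsets, collecting (pit, yaw, <trans>, std, <refl>, std) tuples
results = [(p, y, *measure(p, y))
           for p in np.linspace(-50, 50, 5)
           for y in np.linspace(-50, 50, 5)]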
I am now setting the offsets of the WFS QPD loop to the place where there was maximum transmission, to see if this is repeatable. In fact, it was. Looking at the QPD segment outputs, I noticed that the MC2 transmission spot was rather off-center on the photodiode. So I went to the MC2 in-air optical table and centered the beam till the outputs on the four segments were more balanced, see Attachment #4. Then I re-set the MC2 QPD offsets and re-enabled the WFS servos. The transmission is now a little lower at ~14,500 counts (but still higher than the ~14,200 counts we had before), presumably because we have more of the brightest part of the beam falling on the gap between quadrants. For a more reliable measurement, we should use a single-element photodiode for the MC2 transmission.
In preparation for some locking work tonight, I did the following at the POP in air table with the PRMI locked on carrier:
Looking at the old latch.st code, it looks like this is just a heartbeat signal to indicate that the code is alive. I'll implement this. Aesthetically, it'd also be nice to have the hex representation of the "*_SET" channels visible on the MEDM screen.
Latch logic works, but the latch-alive signal is missing.
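The heartbeat itself can be as simple as a bit toggled once per pass of the main loop, so a watchdog (or a blinking MEDM field) can tell the script is alive. A sketch, with a hypothetical channel name:

# Sketch of a latch-alive heartbeat (hypothetical channel name, pyepics).
import time
import epics

beat = 0
while True:
    beat ^= 1                                   # toggle once per main-loop pass
    epics.caput("C1:LSC-CM_LATCH_ALIVE", beat)  # MEDM can blink on this channel
    time.sleep(1.0)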
After making sure the beams were hitting the 3f photodiodes on the "AP" table, I was able to lock the PRMI with the sidebands resonant inside the RC using 3f error signals. This would be the config we run in when trying to lock some more complicated configuration, such as the PRFPMI (i.e. start with the arms controlled by ALS, held off resonance). Tonight, I will try this (even though obviously I am not ready for the CARM transition step). The 3f lock is pretty robust, I was able to stay locked for minutes at a time and re-acquisition was also pretty quick. See Attachment #1. Not sure how significant it is, but I set the offsets to the 3f paths by averaging the REFL33_I and REFL33_Q signals when the PRMI was locked with the 1f error signals.
As usual, there's a lot of angular motion of the POP spot on the CCD monitor, but the lock seems to be able to ride it out.
Lock-settings (I modified the .snap file accordingly):
REFL33_I --> PRCL, loop gain = -0.019, trigger on POP22: ON @ 20 cts, OFF @ 0.5 cts.
REFL33_Q --> MICH, loop gain = +1.4, trigger on POP22: ON @ 20 cts, OFF @ 0.5 cts.
This problem has re-surfaced. Is this indicative of some problem with the on-board VGA? Even with 0 dB of whitening gain, I see PDH horns that are 10,000 ADC counts in amplitude, whereas the nominal whitening gain for this channel is +18 dB (at which gain these horns would saturate the 16-bit ADC). I'll look at it in the daytime; I'm not planning to use REFL55 for any locking tonight.
Hardware issues that need addressing: