[koji, rana, gautam]
This morning, we did the following:
The OSEMs remain in the EY vacuum chamber. The next set of steps are:
We will most likely work on this tomorrow. At ~1615, I briefly opened the PSL shutter and tweaked the IMC alignment. We will almost certainly change the pointing into the IMC when we remove the old OMC and rebalance that table, so care should be taken when working on that...
To be a bit more clear about what we are going to do in the OMC chamber, I marked up some photos, see Attachments #1 and #2.
I anticipate that after this work, the only components on the table will be
Are we in agreement with this plan?
See #15656 for the updated photo
Good point - looking back, I also see that I already removed the mirror at the SW corner of the table in 2016. Revised photo in Attachment #1. There is an optic on the east edge of this table whose purpose I'm not sure of, but I'm pretty sure it isn't essential to the main functionality and so can be removed.
I believe the mirror next to IM1 is for the green beams to be delivered to the PSL table. I think we still want to keep it. Otherwise, the plan looks fine.
I got a call from Calum ~830am today saying some facilities people entered the lab, opened the south entrance door, and tripped the alarm in the process. I came to the lab shortly after and was able to reset the alarm by flipping the switch on the alarm box at the south end entrance to "Alarm OFF". Then, I double checked that the door is closed, and re-enabled the alarm. The particle count at the SP table is not unusually high and the lasers (Oplev HeNe and AUX X) were still on, so it doesn't look like any lasting damage was done. The facilities people were apparently wearing laser safety goggles.
The IMC isn't resonant for a TEM00 mode at the time of writing - we are waiting for the stack to relax, at which point, if the IMC still isn't resonant for a TEM00 mode, we will tweak the input pointing into the IMC (we want to use the suspended cavity as the reference, since it is presumably more reliable than the table from which we removed ~50 kg of weight and shifted the balance).
I am working on the setup of a CDS FE, so please do not attempt any remote login to the IPMI interface of c1bhd until I'm done.
So all the primary vent objectives have been achieved 🙌 . The light doors are on the chamber right now. I'm measuring the free-swinging spectra of ETMY overnight. Barring any catastrophic failures and provided all required personnel are available, we will do the final pre-close-up checks, put the heavy doors back on, and pump down starting 10 am Monday, 9 Nov 2020. Some photos here.
Attachment #1 shows the main result - there are 4 peaks. The frequencies are a little different from what I have on file for ETMY, and the Qs are a factor of 3-4 lower (except SIDE) than what they are in vacuum, which I hypothesize is not unreasonable. The fits suggest that the peak shape isn't really Lorentzian - the true shape seems to have narrower tails than a Lorentzian, but around the actual peak, the fit is pretty good. More detailed diagnostic plots (e.g. coil-to-coil TFs) are in the compressed Attachment #2. The condition number of the matrix to diagonalize the sensing matrix (i.e. what we multiply the "naive" OSEM-to-Euler basis matrix by) is ~40, which is large, but I wouldn't read too much into it at this point.
I see no red flags here - the PIT peak is a little less prominent than the others, but looking back through the elog, this kind of variation in peak heights doesn't seem unreasonable to me. If anyone wants to look at the data, the suspension was kicked every ~1100 seconds from 1288673974, 15 times.
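For anyone repeating this analysis, here is a minimal sketch of the kind of peak fitting I'm doing (the data-fetching/PSD step is assumed to happen elsewhere, and the initial-guess values are placeholders; Q is extracted via the standard Q = f0/FWHM relation applied to the power spectrum):

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(f, f0, fwhm, a, c):
    """Lorentzian peak centered at f0 with full-width fwhm, amplitude a, offset c."""
    return a / (1 + ((f - f0) / (fwhm / 2))**2) + c

def fit_peak(ff, asd, f_guess, df=0.05):
    """Fit a Lorentzian to the PSD in a band of +/- df Hz around f_guess.

    ff  : frequency vector [Hz]
    asd : amplitude spectral density of the free-swinging OSEM signal
    Returns the fitted peak frequency and Q = f0 / FWHM.
    """
    band = (ff > f_guess - df) & (ff < f_guess + df)
    psd = asd[band]**2  # the Lorentzian line shape applies to the *power* spectrum
    p0 = [f_guess, f_guess / 200, psd.max(), psd.min()]  # placeholder initial guesses
    popt, _ = curve_fit(lorentzian, ff[band], psd, p0=p0)
    f0, fwhm = popt[0], abs(popt[1])
    return f0, f0 / fwhm
```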
The PMC servo railed and so I re-locked it at ~half range. I've been noticing that the diurnal drift of the PZT control voltage has been larger than usual - not sure if it's entirely correlated with temperature on the PSL table. Anyway, the cavity is locked again so all is good.
I was able to boot one of the 3 new Supermicro machines, which I christened c1bhd, in a diskless way (with the boot image hosted on fb, as is the case for all the other realtime FEs in the lab). This is just a first test, but it is reassuring that we can get this custom linux kernel to boot on the new hardware. Some errors about dolphin drivers are thrown at startup but this is to be expected since the server isn't connected to the dolphin network yet. We have the Dolphin adaptor card in hand, but since we have to get another PCIe card (supposedly from LLO according to the BHD spreadsheet), I defer installing this in the server chassis until we have all the necessary hardware on hand.
I also have to figure out the correct BIOS settings for this to really run effectively as a FE (we have to disable all the "un-necessary" system level services) - these machines have BIOS v3.2 as opposed to the older vintages for which there are instructions from K.T. et al.
There may yet be issues with drivers, but this is all the testing that can be done without getting an expansion chassis. After the vent and recovering the IFO, I may try experimenting with the c1ioo chassis, but I'd much prefer if we can do the testing offline on a subnet that doesn't mess with the regular IFO operation (until we need to test the IPC).
Basic IFO alignment checks were done.
Tomorrow, we should do some visual checks of the chambers / EQ stops on ETMY etc but I don't see any major problems at the moment...
Barring any catastrophic failures and provided all required personnel are available, we will do the final pre-close-up checks, put the heavy doors back on, and pump down starting 10 am Monday, 9 Nov 2020.
1100 - EY chamber inspected, no issues were found --> EY heavy door on
1200 - OMC chamber was inspected. OM6 was marginally tweaked to bring the beam down a little in pitch, and also a little northwards in Yaw. --> Heavy door on.
1230 - Pumpdown started. Initially, the annuli volume was pumped down. The procedure calls for doing this with the small turbopumps. However, V7 was left open, and hence, in the process, the TP1 foreline pressure (=P2) hit ~30 torr. This caused TP1 to shut down. We were able to restart it without issue. This case was not caught by the interlock code, which was running at the time. It should be rectified (a sketch of the missing check is below, after the timeline).
1330 - OMC breadboard clean optics and DCPD hardware were wrapped up and packed into tupperware boxes and stored along the south arm. OMC cavity itself, the OMMT, and the breadboard the OMC was sitting on are wrapped in foil/Ameristat and stored in cabinet S13, lower 2 shelves.
1915 - P1a = 0.5 torr pressure reached. Switched over to pumping the main volume with TP1, backed by TP2 and TP3, which themselves are backed by their respective dry pumps and also the AUX drypump for some extra oomph. All cooling fans available in the area were turned on and directed at the turbo pumps. RV2 was used to throttle the flow suitably.
It was at this point that we hit a snag - RV2 has gotten stuck in a partially open position, see Attachment #1. We can see that the thread doesn't move in response to turning the rotary dial. Fortunately, the valve is partially open, so the main volume continues to be pumped - see Attachment #2 for the full history of today's pumping. We are leaving the main volume pumped in this configuration overnight (TP1 pumping main volume backed by TPs 2 and 3, which are in turn backed by their respective drypumps and also the AUX dry pump). I think there is little to no risk of any damage to the turbo pumps, the interlocks should catch any anomalies. The roughing pumps RP1 and RP3 were turned off and that line was disconnected and capped.
What are our options?
We need some vacuum experts to comment. Why did this happen? Is this an acceptable failure mode of the valve?
2230 - P1a = 0.025 torr. The pressure is coming down log-linearly, roughly a factor of 0.1 every 2.5 hours.
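Regarding the missed interlock case from the 1230 entry: something like the following condition is what I have in mind - a minimal sketch, with hypothetical EPICS channel names and threshold (the real interlock code would fold this into its existing channel list and alert machinery):

```python
import time
from epics import caget, caput  # pyepics, as used by the vacuum interlock service

# Hypothetical channel names / threshold - substitute the real ones
P2_CHAN = 'C1:Vac-P2_pressure'  # TP1 foreline pressure [torr]
V7_CHAN = 'C1:Vac-V7_status'    # 1 = open, 0 = closed
P2_MAX_TORR = 1.0               # trip level, well below the ~30 torr seen today

def check_tp1_foreline():
    """Close V7 if the TP1 foreline pressure is too high while V7 is open.

    This is the case that was missed: pumping the annuli with V7 open let
    P2 climb to ~30 torr, and TP1 shut itself down.
    """
    p2 = caget(P2_CHAN)
    if caget(V7_CHAN) == 1 and p2 > P2_MAX_TORR:
        caput(V7_CHAN, 0)  # isolate TP1's foreline
        print(f'INTERLOCK: P2 = {p2:.2f} torr with V7 open -> closing V7')

while True:
    check_tp1_foreline()
    time.sleep(1)
```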
I've uploaded some more photos here. I believe the problem is a worn out thread where the main rotary handle attaches to the shaft that operates the valve.
This morning, I changed the valve config such that TP2 backs TP1 and that combo continues to pump on the main volume through the partially open RV2. TP3 was reconfigured to pump the annuli - initially, I backed it with the AUX drypump but since the load has decreased now, I am turning the AUX drypump off. At some point, if we want to try it, we can try pumping the main volume via the RGA line using TP2/TP3 and see if that allows us to get to a lower pressure, but for now, I think this is a suitable configuration to continue the IFO work.
There was a suggestion at the meeting that the saturation of the main volume pressure at 1 mtorr could be due to a leak - to test, I closed V1 for ~5 hours and saw the pressure increase by 1.5 mtorr, which is in line with our estimates from the past. So I think we can discount that possibility.
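Spelling out the arithmetic: with the main volume valved off,

\[
\frac{dP}{dt} \approx \frac{1.5\ \mathrm{mtorr}}{5\ \mathrm{hr}} = 0.3\ \mathrm{mtorr/hr},
\]

which is consistent with the historical outgassing + small-leak load. So the ~1 mtorr floor is presumably set by the reduced effective pumping speed through the partially open RV2, not by a new leak.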
Looking back through the elog, 1 mtorr is the pressure at which it is deemed safe to send the full power beam into the IMC. After replacing the HR mirror in the MCREFL path with a 10% reflective BS, I just cranked the power back up. The IMC is locked. With the increased exposure on the MC2T camera, lots of new scattered light has become visible.
While proceeding with the interferometer recovery, I noticed that there appeared to be no light on WFS2. I confirmed on the AP table that the beam was indeed hitting the QPD, but the DC quadrants are all returning 0. Looking back, it appears that the failure happened on Monday 26 October at ~6pm local time. For now, I hand-aligned the IMC and centered the beams on the WFS1 and MC2T QPDs - MCT is ~15000 cts and MC REFL DC is ~0.1, all consistent with the best numbers I've been able to obtain in the past. I don't think the servo will work with one sensor missing, without some retuning of the output matrix.
It would appear that both the DC and RF outputs of WFS2 are affected - I dithered the MC2 optic in pitch (with the WFS loop disabled) at 3.33 Hz; the transmission and WFS1 sensors see the dither but not WFS2. It could be that I'm just not well centered on the PD, but by eye, I am, so it would appear that the problem is present in both the DC and RF signal paths. I am not going into the PD head debugging today.
I want to test out an AS port WFS now that I have all the parts in hand - I guess the Michelson / PRMI will suffice until I make the ALS noise good again, and anyways, there is much assembly work to be done. Overnight, I'm repeating the suspension eigenmode measurement.
The results from the ringdown are attached - in summary:
I had to go through five SR560s in the lab yesterday evening to find one that had the expected 4 nV/rtHz input noise and worked on battery power. To confirm that the batteries were charged, I left 4 of them plugged in overnight. Today, I confirmed that the little indicator light on the back is in "Maintain" and not "Charge". However, when I unplug the power cord, they immediately turn off.
One of the units has a large DC output offset voltage even when the input is terminated (though it is not present with the input itself set to "GND" rather than DC/AC). Do we want to send this in for repair? Can we replace the batteries ourselves?
I now think the excess noise in this circuit could be coming from the KEPCO power supply (the supplies are in fact linear, not switching, and spec'd for a voltage ripple at the level of <0.002% of the output - this is pretty good I think, hard to find much better).
All component references are w.r.t. the schematic. For this test, I decided to stuff a fresh channel on the board, with new components, just to rule out some funky behavior of the channel I had already stuffed. I decoupled the HV amplifier stage and the Acromag DAC noise filtering stages by leaving R3 open. Then, I shorted the non-inverting input of the PA95 (i.e. TP3) to GND, with a jumper cable. Then I measured the noise at TP5, using the AC coupling Pomona box (in principle there is no need for this as the DC voltage should be zero, but I opted to use it just in case). The characteristic bump in the spectra at ~100 Hz-1 kHz was still evident, see the bottom row of Attachment #1. The expected voltage noise in this configuration, according to my SPICE model, is ~10 nV/rtHz, see the analysis note.
As a second test, I decided to measure the voltage noise of the power supply - there isn't a convenient monitor point on the circuit to directly probe the +/- HV supply rails (I didn't want any exposed HV conductors on the PCB) - so I measured the voltage noise at the 3-pin connector supplying power to the 2U chassis (i.e. the circuit itself was disconnected for this measurement, I'm measuring the noise of the supply itself). The output is supposedly differential - so I used the SR785 input "Float" mode, and used the Pomona AC coupling box once again to block the large DC voltage and avoid damage to the SR785. The results are summarized in the top row of Attachment #1.
The shape of the spectra suggests to me that the power supply noise is polluting the output noise - Koji suggested measuring the coherence between the channels; I'll try and do this in a safe way, but I'm hesitant to use hacky clips for the high voltage. The PA95 datasheet says nothing about its PSRR, and it seems like the SPICE model doesn't include it either. It would seem that a PSRR of <60 dB at 100 Hz would explain the excess noise seen in the output. Typically, for other op-amps, the PSRR falls off as 1/f. The CMRR (which is distinct from the PSRR) is spec'd at 98 dB at DC, and for other op-amps, I've seen that the CMRR is typically higher than the PSRR. I'm trying to make a case here that it's not unreasonable if the PA95 has a PSRR <= 60 dB @ 100 Hz.
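To put a number on it: the supply-coupled contribution at the output scales as

\[
v_{\mathrm{out}}(f) = v_{\mathrm{supply}}(f) \times 10^{-\mathrm{PSRR}(f)/20},
\]

so with a hypothetical supply ripple of 100 µV/rtHz at 100 Hz and 60 dB of PSRR, we'd get 100 nV/rtHz at the output - an order of magnitude above the ~10 nV/rtHz the SPICE model predicts for the stage itself (the actual measured supply spectra are in Attachment #1).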
So what are the possible coupling mechanisms and how can we mitigate it?
What do the analog electronics experts think? I may be completely off the rails and imagining things here.
Update 2130: I measured the coherence between the positive supply rail and the output, under the same conditions (i.e. HV stage isolated, input shorted to ground). See Attachment #2 - the coherence does mirror the "bump" seen in the output voltage noise - but the coherence is only 0.1, even with 100 averages, suggesting the coupling is not directly linear - anyways, I think it's worth it to try adding some extra decoupling, I'm sourcing the HV 10uF capacitors now.
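For scale: a magnitude-squared coherence of 0.1 means the linearly coupled part accounts for only 10% of the output power, i.e.

\[
\sqrt{0.1} \approx 0.32
\]

of the amplitude spectral density - so even a perfectly linear path at this coherence level explains only about a third of the bump in ASD terms; the rest would have to come in nonlinearly or from elsewhere.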
Shruti picked it up @4pm.
It is stored along with the cables that arrived a few weeks ago, awaiting the gauges which are now expected next week sometime.
Where do we want to install the interface and readout electronics for the AS port WFS? Options are:
There isn't much difference in terms of cable length that will be required - I believe the AS WFS is going to go on the AP table even in the new optical layout and not on the ITMY in-air oplev table?
The project requires a large number of new electronics modules. Here is a short update and some questions I had:
Approximately half of the assembly of the various electronics is now complete. The basic electrical testing of the interface chassis and demod chassis are also done (i.e. they get power, the LEDs light up, and are stable for a few minutes). Detailed noise and TF characterization will have to be done.
Attachment #1 - Proposed mods for 40m RF freqs.
Attachment #2 - Modelled TFs for the case where all the notches are stuffed, and where only the 2f notch is stuffed.
Attachment #3 - Modelled TFs for the case where all the notches are stuffed, and where only the 2f notch is stuffed.
Any other red flags anyone sees before I finish stuffing the board?
WFS head and housing. Need to finalize the RF transimpedance gain (i.e. the LC resonant part), and also decide which notches we want to stuff.
Optics --> Cabinet at south end (Attachment #1)
Scanned datasheets--> wiki. It would be good if someone can check the specs against what was ordered.
Five Agilent pressure gauges were delivered to the 40m. They are stored with the controller and cables in the office area. This completes the inventory for the gauge replacement - we have all the ordered parts in hand (though not necessarily all the adaptor flanges etc). I'll see if I can find some cabinet space in the VEA to store these, the clutter is getting out of hand again...
In addition, the spare gate valve from LHO was also delivered today to the 40m. It is stored at EX with the other spare valves.
On the call last week, I claimed that there isn't much hope of directly measuring Ponderomotive Squeezing in aLIGO without some significant configurational changes. Here, I attempt to quantify this statement a bit, and explicitly state what I mean by "significant configurational changes".
The I/O relations will generally look something like (à la Buonanno & Chen)

\[
\begin{pmatrix} b_1 \\ b_2 \end{pmatrix} = \frac{e^{2i\beta}}{M} \left[ \begin{pmatrix} C_{11} & C_{12} \\ C_{21} & C_{22} \end{pmatrix} \begin{pmatrix} a_1 \\ a_2 \end{pmatrix} + \begin{pmatrix} D_1 \\ D_2 \end{pmatrix} \frac{h}{h_{\mathrm{SQL}}} \right],
\]

where the a's (b's) are the input (output) quadrature fields, β is an overall phase, and the D vector encodes the signal (and displacement noise) transfer.
The magnitudes of the matrix elements C_12 and C_21 (i.e. the phase-to-amplitude and amplitude-to-phase coupling coefficients) will encode the strength of the Ponderomotive squeezing.
For the initial study, let's assume DC readout (since there isn't a homodyne readout yet even in Advanced LIGO). This amounts to fixing two angles in the I/O relations, where the former is the "homodyne phase" and the latter is the "SRC detuning". For DC readout, the LO quadrature is fixed relative to the signal - so the quadrature we read out will be purely the signal quadrature (or nearly so, for small detunings around RSE operation). The displacement noises will couple in via the corresponding element of the D vector. Attachment #1 and Attachment #2 show the off-diagonal elements of the "C" matrix for detunings of the SRC near RSE and SR operation respectively. You can see that the optomechanical coupling decays pretty rapidly above ~40 Hz.
In this particular case, there is no benefit to detuning the SRC, because we are assuming the homodyne angle is fixed - not an unreasonable assumption, since the quadrature of the LO light is fixed relative to the signal in DC readout (I'm not sure what the residual fluctuation in this quantity is, but presumably it is at the mrad level, so the pollution due to the orthogonal anti-squeezed quadrature can be ignored for a first pass I think). I also assume ~10 degrees of detuning is possible with the finesse ~15 SRC, as the linewidth is ~12 degrees.
To see how this would look in an actual measurement, I took the data from Lee's ponderomotive squeezing paper as an estimate for the classical noises, and plotted the quantum noise models for a few representative SRC detunings near RSE operation - see Attachment #3. The curves labelled with the various φ values are the quantum noise models for those SRC detunings, assuming DC readout. I fudged the power into the IFO to make my modelled quantum noise curve at RSE line up with the high frequency part of the "Measured DARM" curve. To measure Ponderomotive Squeezing unambiguously, we need the quantum noise curve to "dip" as is seen around 40 Hz for an SRC tuning of 80 degrees, and for that to be the dominant noise source. Evidently, this is not the case.
The case for balanced homodyne readout:
I haven't analyzed it in detail yet - but it may be possible that if we can access other quadratures, we might benefit from rotating away from the DARM quadrature - the strength of the optomechanical coupling would decrease, as demonstrated in Attachments #1 and #2, but the coupling of classical noise would be reduced as well, so we may be able to win overall. I'll briefly investigate whether a robust measurement can be made at the site once the BHD is implemented.
I am confused by the discussion during the call today. I revisited Hartmut's paper - the circuit in Fig 6 is essentially what I am calling "only 2f_2 notch stuffed" in my previous elog. Qualitatively, the plot I presented in Attachment #2 of the preceding elog in this thread shows the expected behavior as in Fig 8 of the paper - the impedance seen by the photodiode is indeed lower. In Attachment #1, I show the comparison - the "V(anode)/I(I1)" curve is analogous to the "PD anode" curve in Hartmut's paper, and the "V(vout)/I(I1)" curve is analogous to the "1f-out" curve. I also plot the sensitivity analysis (Attachment #2), varying the photodiode junction capacitance between 100 pF and 200 pF (both values inclusive) in 20 pF steps. There is some variation at 55 MHz, but it is unlikely that the capacitance will change so much during normal operation?
I understand the motivation behind stuffing the other notches, to reduce intermodulation effects. But the impression I got from the call was that somehow, the model I presented was wrong. Can someone help me identify the mistake?
I didn't bother to export the LTspice data and make a matplotlib plot for this quick analysis, so pardon the poor presentation. The colors run from green=100pF to grey=200pF.
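As a sanity check on the spread of those curves: any resonant feature whose frequency is set by the junction capacitance scales as

\[
f_{\mathrm{res}} \propto \frac{1}{\sqrt{C_{\mathrm{pd}}}},
\]

so sweeping C_pd from 100 pF to 200 pF can move such a feature by at most a factor of √2 (~30% downwards), which sets the scale for the variation visible near 55 MHz.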
An 8-channel whitening chassis was prepared and tested. I measured:
Whitening chassis. Waiting for front panels to arrive, PCBs and interface board are in hand, stuffed and ready to go. A question here is how we want to control the whitening - it's going to be rather difficult to have fast switchable whitening. I think we can just fix the whitening state. Another option would be to control the whitening using Acromag BIO channels.
FYI, there is this. Seems pretty well maintained, and so might be more useful in the long run. The available catalog of instruments is quite impressive - the TC200 temp controller and SRS345 func gen are included and are things we use in the lab. Maybe you can make a pull request to add the MDT694B (there is some nice API already built I think). We should also put our netgpibdata stuff and the vacuum gauge control (basically everything that isn't rtcds) on there (unless there are some intellectual property issues that the Caltech lawyers have to sort out).
Given the similarities between the MDT694B (single channel piezo controller) and TC200 (temperature controller) serial interfaces, I added the pyserial driver here.
*Warning* this first version of the driver remains untested
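For anyone curious what the skeleton looks like, it's along these lines - a minimal sketch, where the port settings and command strings are placeholders illustrating the query/set pattern (check them against the MDT694B manual; hence the warning above):

```python
import serial

class MDT694B:
    """Minimal serial wrapper for the Thorlabs MDT694B piezo controller.

    Untested sketch - the baud rate and command strings below are
    placeholders; verify against the manual before use.
    """

    def __init__(self, port='/dev/ttyUSB0', baud=115200, timeout=1.0):
        self.dev = serial.Serial(port, baudrate=baud, timeout=timeout)

    def _ask(self, cmd):
        """Send a query and return the (stripped) reply string."""
        self.dev.write((cmd + '\r').encode())
        return self.dev.readline().decode().strip()

    def get_voltage(self):
        """Query the output voltage (placeholder command string)."""
        return float(self._ask('XV?').strip('[]* '))

    def set_voltage(self, volts):
        """Set the output voltage (placeholder command string)."""
        self.dev.write(f'XV{volts:.2f}\r'.encode())
```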
As discussed at the meeting, I commenced the recovery of the CDS status at 1750 local time.
Single arm POX/POY locking was checked, but not much more. Our IMC WFS are still out of service, so I hand-aligned the IMC a bit; IMC REFL DC went from ~0.3 to ~0.12, which is the usual nominal level.
The summary pages were in a sad state of disrepair - the daily jobs haven't been running for >1 month. I only noticed today because Jordan wanted to look at some vacuum trends and I thought the summary pages are nice for long-term lookback. I rebooted the job just now, and it seems to be running. @Tega, maybe you want to set up some kind of scripted health check that also sends an alert.
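Something as simple as this would do - a minimal sketch, assuming the daily jobs write into a directory tree whose path I've made up here, with placeholder email addresses:

```python
import os
import time
import smtplib
from email.message import EmailMessage

SUMMARY_DIR = '/home/40m/public_html/detcharsummary'  # placeholder path
MAX_AGE_S = 2 * 86400  # alert if nothing has been written for > 2 days

def latest_mtime(path):
    """Most recent modification time of any file under path."""
    return max(os.path.getmtime(os.path.join(root, f))
               for root, _, files in os.walk(path) for f in files)

def send_alert(text):
    msg = EmailMessage()
    msg['Subject'] = '40m summary pages appear stale'
    msg['From'] = 'controls@nodus.example'   # placeholder addresses
    msg['To'] = '40m-list@example.com'
    msg.set_content(text)
    with smtplib.SMTP('localhost') as s:
        s.send_message(msg)

if time.time() - latest_mtime(SUMMARY_DIR) > MAX_AGE_S:
    send_alert('No summary-page output for >2 days; the daily jobs may have died.')
```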
I'm thinking of making some modifications to the RF distribution box in 1X2, so as to have an extra 55 MHz pickoff. Koji already proposed some improvements to the layout in 2015. I've marked up his "Possible Improvement" page of the document in Attachment #1, with my proposed modifications. I believe it will be possible to get 15-16 dBm of signal into a 4-way RF splitter in the quad demod chassis. With the insertion loss of the splitter, we can have 9-10 dBm of LO reaching each demod board, which will then be boosted to +20 dBm by the Teledyne on board. The PE4140 mixer claims to require only -7 dBm of LO signal. So we have quite a bit of headroom here - as long as we limit the RF signal to 0 dBm (= 0.5 Vpp from the LMH6431 opamp at 55 MHz; we shouldn't be having a much larger signal anyways), we should be just fine with 15 dBm of LO power (which is what we will have after the division into the I and Q paths, and nominal insertion losses in the transmission path). These numbers may be slight overestimates given the possible degradation of the RF amps over the last 10 years, but shouldn't be a show-stopper.
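Spelling out the splitter arithmetic (using nominal Mini-Circuits numbers for a 4-way splitter):

\[
15\text{-}16\ \mathrm{dBm} - 6\ \mathrm{dB}\ (\text{4-way power split}) - \sim 1\ \mathrm{dB}\ (\text{insertion loss}) \approx 9\text{-}10\ \mathrm{dBm}
\]

per demod board, which the on-board Teledyne then brings to +20 dBm - comfortably above the -7 dBm the PE4140 claims to need.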
Do the RF electronics experts agree with my assessment? If so, I will start working on these mods tomorrow. Technically, the splitter can be added outside the box, but it may be neater if we package it inside the box.
The latest greatest UPS has been delivered. I will move it to near the vacuum rack in its packaging for storage. It weighs >100 lbs, so care will have to be taken when installing - can the rack even support this?
Since we will have several new 1U/2U aLIGO style electronics chassis installed in the racks, it is desirable to have a more compact power distribution solution than the fusible terminal blocks we use currently.
I did a quick walkaround of the lab and the electronics rack today. I estimate that we will need 5 units of the 24 V and 5 units of the 18 V power strips. Each end will need 1 each of 18 V and 24 V strips. The 1Y1/1Y2/1Y3 (LSC/OMC/BHD sus) area will be served by 1 each 18 V and 24 V. The 1X1/1X2 (IOO) area will be served by 1 each 18 V and 24 V. The 1X5/1X6 (SUS Shadow sensor / Coil driver) area will be served by 1 each of 18 V and 24 V. So I think we should get 7 pcs of each to have 2 spares.
Most of the chassis which will be installed in large numbers (AA, AI, whitening) support 24 V DC input. A few units, like the WFS interface head, OMC driver, and OMC QPD interface, require 18 V. It is not so clear what the input voltage for the Satellite box and Coil Drivers should be. For the former, an unregulated tap-off of the supply voltage is used to power the LT1021 reference and a transistor that is used to generate the LED drive current for the OSEMs. For the latter, the OPA544 high-current opamp used to drive the coil current has its supply rails powered, again, by an unregulated tap-off of the supply voltage. It doesn't seem like a great idea to drive any ICs with the unregulated switching supply voltage from a noise point of view, particularly given the recent experience with the HV coil driver testing and the PSRR, but I think it's a bit late in the game to do anything about this. The datasheet specs ~50 dB of PSRR on the negative rail, but we have a couple of decoupling caps close to the IC, and this IC is itself in a feedback loop with the low noise AD8671, so maybe this won't be much of an issue.
For the purposes of this discussion, I think both Satellite Amp and Coil Driver chassis can be driven with +/- 24 V DC.
On a side note - after the upgrade will the "Satellite Amplifiers" be in the racks, and not close to the flange as they currently are? Or are we gonna have some mini racks next to the chambers? Not sure what the config is at the sites, and if the circuits are designed to drive long cables.
I removed the Frequency Generation box from the 1X2 rack. For the time being, the PSL shutter is closed, since none of the cavities can be locked without the RF modulation source anyways.
Prior to removal, I did the following:
One thing I noticed was that we're using very stiff coax cabling (RG405) inside this box. Do we need to stick with this option? Or can we use the more flexible RG316? I guess RG405 is lower loss, so it's better. I can't actually find any measurement of the shielding performance in my quick google searching, but I think the claim on the call yesterday was that RG405 with its solder-soaked braids offers superior shielding.
Before doing any modification, you should check what the distributed power levels at the ports are.
Also your modification will change the relative phase between 11MHz and 55MHz.
Can you characterize how much phase difference you have between them, maybe using the modulation of the main Marconi? And you might want to adjust it to keep the previous value (or any new value) after the modification, by adding a cable inside?
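For scale, a rough number (assuming a typical coax velocity factor of ~0.7): the phase added per unit length of cable is

\[
\Delta\phi = 360^{\circ} \times \frac{f L}{0.7\,c} \approx 0.94^{\circ}/\mathrm{cm}\ \text{at 55 MHz (and 1/5 of that, ~0.19°/cm, at 11 MHz)},
\]

so trimming the relative 11 MHz / 55 MHz phase by a few degrees corresponds to a few cm of cable in one of the paths.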
I'm open to either approach. If the full replacement requires a lot of machining, maybe I will stick to just the 55 MHz line. But if only a couple of new holes are required, it might be advantageous to do the revamp while we have the box out? What do you think?
BTW, now that I look more closely at the RF chain, I have several questions:
I guess it is feasible to have +17 dBm of 55 MHz signal to plug into the Quad Demod chassis - e.g. drive the 55 MHz input with 20 dBm, pick off 3 dB to the front panel for ASC. Then we can even have several "spare" 55 MHz outputs and still satisfy the 9 dBm input that the ZHL-2 in the 55 MHz chain wants (though again, isn't this dangerously close to the 1 dB compression point?). The design doc claims to have done some Optickle modeling, so I guess there isn't really any issue?
Are you going to do a full replacement of the 55 MHz system? Or just remove the 7 dBm part and then implement the proposed modification for the 55 MHz line?
This turned out to be a much more involved project than I expected. The layout is complete now, but I found several potentially damaged sections of cabling (the stiff cables don't have proper strain relief near the connectors). I will make fresh cables tomorrow before re-installing the unit in the rack. Several changes have been made to the layout so I will post more complete details after characterization and testing.
I was poring over minicircuits datasheets today, and I learned that the minicircuits bandpass filters (SBP10.7 and SBP60) are not bi-directional! The datasheet clearly indicates that the Male SMA connector is the input and the Female SMA connector is the output. Almost all the filters were installed the other way around 😱 . I'll install them the right way around now.
This work is now complete. The box was characterized and re-installed in 1X2. I am able to (briefly) lock the IMC and see PDH fringes in POX and POY so the lowest order checks pass.
Even though I did not deliberately change anything in the 29.5 MHz path, and I confirmed that the level at the output is the expected 13 dBm, I had to lower the IN1 gain of the IMC servo by 2 dB to have a stable lock - I should confirm whether this is indeed due to higher optical gain at the IMC error point, or some electrical funkiness. I'm not delving into a detailed loop characterization today - but since my work involved all elements in the RF modulation chain, some detailed characterization of all the locking loops should be done - I will do this in the coming week.
After tweaking the servo gains for the POX/POY loops, I am able to realize the single arm locks as well (though I haven't done the characterization of those loops yet).
I'm leaving the PSL shutter open, and allowing the IMC autolocker to run. The WFS loops remain disabled for now until I have a chance to check the RF path as well.
Unrelated to this work: Koji's swapping back of the backplane cards seems to have fixed the WFS2 issue - I now see the expected DC readbacks. I didn't check the RF readbacks tonight.
Update 7 Dec 2020 1 pm: A ZHL-2 with heat sink attached and an 11.06 MHz Wenzel source were removed from the box as part of this work (the former was no longer required and the latter wasn't being used at all). They have been stored in the RF electronics cabinet along the east arm.
The MC1 suspension has begun to show evidence of glitches again, from Friday/Saturday. You can look at the suspension Vmon tab from a few days ago and see that the excess fuzz in the Vmon was not there before. The extra motion is also clearly evident on the MCREFL spot. I noticed this on Saturday evening as I was trying to recover the IMC locking, but I thought it might be Millikan so I didn't look into it further. Usually this is symptomatic of some Satellite box issues. I am not going to attempt to debug this anymore.
There seems to be significant phase loss in the TTFSS path, which is limiting the IMC OLTF to <100 kHz.
See Attachment #1 and #2. The former shows the phase loss, while the latter is just to confirm that the optical gain of the error point is roughly the same, since I noticed this after working on and replacing the RF frequency distribution unit. Unfortunately there have been many other changes also (e.g. the work that Rana and Koji did at the IMC rack, swapping of backplane controls etc etc - maybe they have an OLTF measurement from the time they were working?) so I don't know which is to blame. Off the top of my head, I don't see how the RF source can change the phase lag of the IMC servo at 100 kHz. The only part of the IMC RF chain that I touched was the short cable inside the unit that routes the output of the Wenzel source to the front panel SMA feedthrough. I confirmed with a power meter that the power level of the 29.5 MHz signal at that point is the same before and after my work.
The time domain demod monitor point signals appear somewhat noisier in today's measurement compared to some old data I had from 2018, but I think this isn't significant. Once the SR785 becomes available, I will measure the error point spectrum as well to confirm. One thing I noticed was that, like many of our 1U/2U chassis units, the feedthrough returns are shorted to the chassis on the RF source box (and hence presumably also to the rack). The design doc for this box makes many statements about the precautions taken to avoid this, but stops short of saying if the desired behavior was realized, and I can't find anything about it in the elog. Can someone confirm that the shields of all the connectors on the box were ever properly isolated? My suspicion is that the shorting is happening where the all-metal N-feedthroughs touch the drilled surfaces on the front panel - while the front and back surfaces of the panel are insulating, the machined surfaces are not.
This is an unacceptable state but no clear ideas of how to troubleshoot quickly (without going piece by piece into the IMC servo chain) occur to me. I still don't understand how the freq source work could have resulted in this problem but I'm probably overlooking something basic. I'm also wondering why the differential receiving at the TTFSS error point did not require a gain adjustment of the IMC servo? Shouldn't the differential-receiving-single-ended-sending have resulted in an overall x0.5 gain?
Update 8 Dec 1200: To test the hypothesis, I bypassed the SR560 based differential receiving and restored the original config. I am then able to run with the original gain settings, and you see in Attachment #4 that the IMC OLTF UGF is back above 100 kHz. It is still a little lower than it was in June 2019, not sure why. There must be some saturation issues somewhere in the signal chain because I cannot preserve the differential receiving and retain 100 kHz UGF, either by raising the "VCO gain" on the MC servo board, setting the SR560 to G=2, or raising the "Common Gain Adjust" on the FSS box by 6 dB. I don't have a good explanation for why this worked for some weeks and failed now - maybe some issue with the SR560? We don't have many working units so I didn't try switching it.
So either there is a whole mess of lines or the frequency noise suppression is limited. Sigh.
In favor of keeping the same servo gains, I tuned the digital demod phases for the POX and POY photodiode signals to put as much of the PDH error signal in the _I quadrature as possible. The changes are summarized below:
The old locking settings seem to work fine again. This setting isn't set by the ifoconfigure scripts when they do the burt restore - do we want it to be?
Attachments #1 and #2 show some spectra and TFs for the POX/POY loops. In Attachment #2, the reference traces are from the past, while the live traces are from today. In fact, to have the same UGF as the reference traces (from ~1 year ago), I had to also raise the digital servo loop gain by ~20%. Not sure if this can be put down to a lower modulation depth - at least, at the output of the freq ref box, I measured the same output power (at the 0 dB variable attenuator gain setting we nominally run in) before and after the changes. But I haven't done an optical measurement of the modulation depth yet. There is also a hint of less phase margin available at ~100 Hz now compared to a year ago.
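For the record, the demod phase tuning mentioned above is just the usual I/Q rotation - a minimal sketch of how the angle can be found, assuming demodulated I and Q time series recorded while the cavity fringes:

```python
import numpy as np

def optimal_demod_phase(I, Q):
    """Rotation angle (degrees) that puts the PDH signal into the I quadrature.

    I, Q : demodulated time series while the cavity sweeps through resonance.
    The principal axis of the I/Q scatter points along the quadrature
    containing the PDH signal.
    """
    w, v = np.linalg.eigh(np.cov(np.vstack([I, Q])))
    principal = v[:, np.argmax(w)]
    return np.degrees(np.arctan2(principal[1], principal[0]))

def rotate_iq(I, Q, phi_deg):
    """Rotate the I/Q pair by phi_deg so the signal lands in I'."""
    phi = np.radians(phi_deg)
    return I * np.cos(phi) + Q * np.sin(phi), -I * np.sin(phi) + Q * np.cos(phi)
```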
I measured the modulation depth at 11 MHz and 55 MHz using an optical beat + PLL setup. Both numbers are ~0.2 rad, which is consistent with previous numbers. More careful analysis forthcoming, but I think this supports my claim that the optical gain for the PDH locking loops should not have decreased.
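The extraction from the beat is the standard one: the carrier-sideband beat amplitude scales as J1(m) and the carrier-carrier beat as J0(m), so

\[
\frac{A_{\mathrm{sb}}}{A_{\mathrm{c}}} = \frac{J_1(m)}{J_0(m)} \approx \frac{m}{2} \quad \Rightarrow \quad m \approx 2\sqrt{\frac{P_{\mathrm{sb}}}{P_{\mathrm{c}}}},
\]

where the P's are the RF powers of the sideband and carrier beat notes on the spectrum analyzer. At m ~ 0.2, the small-m approximation is good to well under 1%.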
I updated the ndscope on rossa to a bleeding edge version (0.7.9+dev0) which has many of the fixes I've requested in recent times (e.g. direct PDF export, see Attachment #1). As usual if you find issue, report it on the issue tracker. The basic functionality for looking at signals seems to be okay so this shouldn't adversely impact locking efforts.
In hindsight - I decided to roll-back to 0.7.9, and have the bleeding edge as a separate binary. So if you call ndscope from the command line, you should still get 0.7.9 and not the bleeding edge.
The ITMX Oplev (installed in March 2019) was near end of life judging by the SUM channel (see Attachment #1). I replaced it yesterday evening with a new HeNe head. Output power was ~3.25 mW. The head was labelled appropriately and the Oplev spot was recentered on its QPD. The lifetime of ~20 months is short but recall that this HeNe had already been employed as a fiber illuminator at EX and so maybe this is okay.
Loop UGFs and stability margins seem acceptable to me, see Attachment #2-#3.
Continuing the IFO recovery - I am unable to recover similar levels of TRX RIN as I had before. Attachment #1 shows that the TRX RIN is ~4x higher in RMS than the TRY RIN (the latter is commensurate with what we had previously). The excess is dominated by some low frequency (~1 Hz) fluctuations. The coherence structure is confusing - why is TRY RIN coherent with IMC transmission at ~2 Hz but not TRX? But anyways, it doesn't look like it's intensity fluctuations on the incident light (unsurprisingly, since the TRY RIN was okay). I thought it may be because of insufficient low-frequency loop gain - but the loop shape is the same for TRX and TRY. I confirmed that the loop UGF is similar now (red trace in Attachment #2) as it was ~1 month ago (black trace in Attachment #2). Seismometers don't suggest excess motion at 1 Hz. I don't think the modulation depth at 11 MHz is to blame either. As I showed earlier, the spectrum of the error point is comparable now to what it was previously.
What am I missing?
I suspect what happened here is that the IP didn't get updated when we went from the 131.215.113.xxx system to 192.168.113.xxx system. I fixed it now and can access the web interface. This system is now ready for remote debugging (from inside the martian network obviously). The IP is 192.168.113.90.
Managed to pull this operation off without crashing the RFM network, phew.
BTW, a windows laptop that used to be in the VEA (I last remember it being on the table near MC2, which was cleared at some point to hold the spare suspensions) is missing. Anyone know where this is?
As discussed at the meeting, I decided to effect a satellite box swap for the misbehaving MC1 unit. I looked back at the summary pages Vmon for the SRM channels, and found that in the last month or so, there wasn't any significant evidence of glitchiness. So I decided to effect that swap at ~4pm today. The sequence of steps was:
One thing I was reminded of is that the motion of the MC1 optic in response to the bias sliders is highly cross-coupled in pitch and yaw - the matrix is far from diagonal. If this is true for the fast actuation path too, that's not great. I didn't check it just now.
While I was working on this, I took the opportunity to also check the functionality of the RF path of the IMC WFS. Both WFS heads seem to now respond to angular motion of the IMC mirror - I once again dithered MC2 and looked at the demodulated signals, and see variation at the dither frequency, see Attachment #1. However, the signals seem highly polluted with strong 60 Hz and harmonics, see the zoomed-in time domain trace in Attachment #2. This should be fixed. Also, the WFS loop needs some re-tuning. In the current config, it actually makes the MC2T RIN worse, see Attachment #3 (reference traces are with WFS loop enabled, live traces are with the loop disabled - sorry for the confusing notation, I overwrote the patched version of DTT that I got from Erik that allows the user legend feature, working on getting that back).
Around 7pm, the UPS at the vacuum rack seems to have failed. Don't ask me why I decided to check the vacuum screen 10 mins after the failure happened, but the point is, this was a silent failure so the protocols need to be looked into.
Going to the rack, I saw (unsurprisingly) that the 120V UPS was off.
For now, I think this is a safe state to leave the system in. Unless I hear otherwise, I will leave it so - I will be in the lab another hour tonight (~10pm).
Some photos and a screen-cap of the Vac medm screen attached.
I don't buy this story - P2 only briefly burped around GPStime 1291608000 which is around 8pm local time, which is when I was recovering the system.
Today, Jordan talked to Jon Feicht - apparently there is some kind of valve in the TP2 forepump, which only opens ~15-20 seconds after turning the pump on. So the loud sound I was hearing yesterday was just some transient phenomenon. So this morning at ~9am, we turned on TP2. Once again, the PTP2 pressure hovered around 500 torr for about 15-20 seconds. Then it started to drop, although both Jordan and I felt that the time it took for the pressure to drop through the range 5 mtorr - 1 mtorr was unusually long. Jordan suspects some "soft-start" feature of the Turbo Pumps, which maybe spins up the pump in a more controlled way than usual after an event like a power failure. Maybe that explains why the pressure dropped so slowly? One thing is for sure - the TP2 controller displayed "TOO HIGH LOAD" yesterday when I tried the first restart (before migrating everything to the older UPS unit). This is what led me to interpret the loud sound on startup of TP2 as indicating some issue with the forepump - as it turns out, this is just the internal valve not being opened.
Anyway, we left TP2 on for a few hours, pumping only on the little volume between it and V4, and PTP2 remained stable at 20 mtorr. So we judged it's okay to open V4. For today, we will leave the system with both TP2 and TP3 backing TP1. Given the lack of any real evidence of a failure from TP2, I have no reason to believe there is elevated risk.
As for prioritising UPS swap - my opinion is that it's better to just replace the batteries in the UPS that has worked for years. We can run a parallel reliability test of the new UPS and once it has demonstrated stability for some reasonable time (>4 months), we can do the swap.
I was able to clear the FAULT indicator on the new UPS by running a "self-test". Pressing and holding the "mute" button on the front panel initiates this test according to the manual, and if all is well, it will clear the FAULT indicator, which it did. I'm still not trusting this unit and have left all units powered by the old UPS.
Update 1100 Dec 11: The config remained stable overnight so today I reverted to the nominal config of TP3 pumping the annuli and TP2 backing TP1 which pumps the main volume (through the partially open RV2).
According to the Tripp Lite manual, the FAULT icon indicates "the battery-supported outlets are overloaded." The failure of the TP2 dry pump appears to have caused this. After the dry pump failure, the rising pressure in the TP2 foreline caused TP2's current draw to increase way above its normal operating range. Attachment 1 shows anomalously high TP2 current and foreline pressure in the minutes just before the failure. The critical system-wide failure is that this overloaded the UPS before overloading TP2's internal protection circuitry, which would have shut down the pump, triggering interlocks and auto-notifications.
I gave one Noliac PZT from the two spares in the metal PMC kit to Paco. There is one spare left in the kit.