I removed the forepump to TP2 this morning after the vacuum failure, and tested in the C&B lab. I pumped down on a small volume 10 times, with no issue. The ultimate pressure was ~30 mtorr.
I re-installed the forepump in the afternoon, and restarted TP2, leaving V4 closed. This will run overnight to test, while TP3 backs TP1.
In order to open V1, with TP3 backing TP1, the interlock system had to be reset since it is expecting TP2 as a backing pump. TP2 is running normally, and pumping of the main volume has resumed.
It's unclear why the TP2 foreline pump failed in the first place; it has been running fine for several hours now (although TP2 has no load, since V4 isolates it from the main volume). Koji's plots show that the TP2 foreline pressure did not recover even after the interlock tripped and V4 was closed (i.e. the same conditions TP2 sees right now).
Yes, that loop was unstable. I am now using the time-domain response to check loop stability. I have been able to improve the filter slightly, with more suppression below 20 Hz, but the phase margin is still as poor as before. This removes the lower-frequency bump due to seismic noise. The RMS noise improved only slightly, with the bump near the UGF still the main contributor.
For inclusion of real spectra, time delays and the anti-aliasing filters, I still need some more information.
The code used to calculate the transfer functions and plot them is in the repo 40m/ALS/noiseBudget
Related Elog post with more details: 40m/15587
## Cavity Pole
Here is the timeline. This suggests TP2 backing RP failure.
1st line: TP2 foreline pressure went up. Accordingly TP2 P, current, voltage, and temp went up. TP2 rotation went down.
2nd line: TP2 temp triggered the interlock. The TP2 foreline pressure was still high (10 torr), so TP2 struggled and was running at 1 torr.
3rd line: Gautam's operation. TP2 was isolated and stopped.
Between the 1st and 2nd lines, the TP2 pressure (= TP1 foreline pressure) went up to 1 torr. This made the TP1 current increase from 0.55 A to 0.68 A (not shown in the plot), but the TP1 rotation was not affected.
The interlocks tripped at ~630am local time. Jordan reported that TP2 was supposedly running at 52 C (!).
V1 was already closed, but TP2 was still running. With him standing by the rack, I remotely executed the following sequence:
Jordan confirmed (by hand) that TP2 was indeed hot and this is not just some serial readback issue. I'll do the forensics later.
Dimensions / Specs
- HEPA unit dimensions
- HEPA unit manufacturer
Gautam reported that the PSL HEPA stopped running (ELOG 15592). So I came in today and started troubleshooting.
It looks like the AC power reaches the motors; however, neither motor runs. The problem seems to lie in the capacitors, the motors, or both.
Parts specs can be found in the next ELOG.
Attachment 1 is the connection diagram of the HEPA. The AC power is distributed by the breaker panel; the PSL HEPA is assigned to the M22 breaker (Attachment 2). I checked the breaker switch and it was (and is) ON. The power goes to the junction box above the enclosure (Attachment 3). A couple of wires go to the HEPA switch (right above the enclosure light switch) and the output goes to the variac. The inside of the junction box looked like this (Attachment 4).
By the way, the wires were just twisted together and screwed into metal threaded (but insulated) caps (Attachment 5). Is this legit? Shouldn't we use a stronger crimp? Anyway, there was nothing wrong with the caps' connections for now.
I could easily trace the power up to the variac. The variac output was just fine (Attachment 6). The cord going from the variac to the junction box (and then to the HEPAs) looked scorched. The connection from the plug to the HEPAs was still OK, but it should eventually be replaced. For now, the cable was unplugged after the following tests, for safety.
The junction box for each HEPA unit was opened to check the voltage. The supply voltage reached the junction boxes and was just fine. In Attachments 8 & 9 the voltages look low, but this is only because I had turned the variac up just a little.
At the (main) junction box, the resistances of the HEPAs were checked with the Fluke. As the HEPA units are connected to the AC in parallel, the resistances were individually checked as follows.
The coils were not disconnected (... I wonder if the wiring of South HEPA was flipped? But this is not the main issue right now.)
By removing the pre-filters, the motors were inspected (Attachments 10 & 11). At least the north HEPA motor was warm, indicating there had been some current before. One capacitor is connected per motor. When the variac was turned up a bit, voltage appeared on one side of the capacitor. I could not judge whether the issue is with the capacitor or the motor.
I got some feedback from Koji, who pointed out that the phase tracker is not required here. This situation is similar to phase locking two lasers together, which we frequently do, except that in that case we usually offset the absolute frequencies of the two lasers by some RF frequency and demodulate the resulting RF beatnote to use as an error signal. We can usually acquire the lock by simply engaging an integrator (in fact, if we actuate on the laser PZT, which is a frequency actuator, proportional feedback alone is sufficient because of the phase->frequency conversion), the idea being that the error signal frequently goes through a zero crossing (around which the sinusoidal error signal is approximately linear) and we can just "catch" one of these zero crossings, provided we don't run out of actuation range.
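The zero-crossing "catch" described above can be sketched with a toy simulation of the frequency-actuator case (proportional feedback only); all the numbers here are made up for illustration:

```python
import numpy as np

# Toy phase-lock acquisition: the error signal is sin(phase difference), and
# proportional feedback on a frequency actuator (where the phase->frequency
# conversion supplies the integration) pulls the phase toward a zero crossing.
dt = 1e-5        # simulation time step [s] -- made-up value
k_p = 100.0      # proportional gain [Hz per unit error] -- made-up value
f_offset = 30.0  # free-running frequency offset [Hz]; lock needs k_p > f_offset

phi = 2.0        # initial phase difference [rad]
for _ in range(100000):
    err = np.sin(phi)
    # residual frequency offset shrinks as the feedback "catches" the crossing
    phi += 2 * np.pi * (f_offset - k_p * err) * dt

# in lock, sin(phi) settles near f_offset / k_p = 0.3
print(round(np.sin(phi), 3))
```

The residual static error f_offset/k_p is the usual price of proportional-only feedback; adding the integrator drives it to zero.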
So the question here becomes: is the RF44 signal a suitable error signal such that we can close a feedback loop in a similar way? To get more insight, I tried to work out the situation analytically. I've attached my thinking as a PDF note. I get some pretty messy, complicated expressions for the RF44 signal contributions, so it's likely I've made a mistake (though Mathematica did most of the heavy lifting); it'll benefit from a second set of eyes.
Anyway, I definitely think there are additional complications beyond what my simple field cartoon from the preceding elog would imply - the relative phases of the sidebands seem to have an effect, and I still think the lack of the PRC/SRC makes the situation different from what Hang, Teng, et al. outlined for the A+ homodyne phase control analysis. Before the HEPA failed, I had tried closing the feedback loop using one quadrature of the demodulated RF44 signal, but had no success even with a simple integrator as the loop filter (which the PLL locking experience suggests should be sufficient, and pretty easily closed once we see a sinusoidally oscillating demodulated error signal). But maybe I'm overlooking something basic conceptually?
I don't think the proposed scheme for sensing and controlling the homodyne phase will work without some re-thinking of the scheme. I'll try and explain my thinking here and someone can correct me if I've made a fatal flaw in the reasoning somewhere.
I came to the lab. The control room AC was off -> Now it is on.
Here is the setting of the AC meant for continuous running
This ALS loop is not stable. It's one of those traps that come from using only the Bode plot to estimate loop stability. You have to also look at the time-domain response - you can look at my feedback lecture for the SURF students for some functions.
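As a sketch of what the time-domain check looks like (with a made-up open-loop transfer function, not the actual ALS loop), one can close the loop and look at the step response:

```python
import numpy as np
from scipy import signal

# Time-domain stability check: close the loop around a made-up open-loop
# transfer function (NOT the real ALS loop) and look at its step response.
# A loop that "looks fine" on a Bode plot can still ring badly here.
w_p = 2 * np.pi * 1.0                     # two poles at 1 Hz (illustrative)
w_z = 2 * np.pi * 40.0                    # one zero at 40 Hz (illustrative)
G = signal.ZerosPolesGain([-w_z], [-w_p, -w_p], 300.0).to_tf()

# Closed loop H = G / (1 + G)
den = np.polyadd(G.den, G.num)
H = signal.TransferFunction(G.num, den)

t, y = signal.step(H, T=np.linspace(0, 5, 5000))
print(bool(np.all(np.abs(y) < 10)))       # bounded response -> stable
```

A diverging or long-ringing step response flags the marginal loop immediately, even when the Bode magnitude and phase look superficially acceptable.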
This is not a reply to comments given to the last post; Still working on incorporating those suggestions.
Rana suggested looking first at what needs to be suppressed and then creating a filter suited to the noise from scratch. So I discarded all the earlier poles and zeros and kept only the resonant gains in the digital filter. With that, I found that all we need is three poles at 1 Hz; a gain of 8.1e5 gives the lowest RMS noise value I could get.
Now there can be some practical reasons unknown to me because of which this filter is not possible, but I just wanted to put it here as I'll add the actual noise spectra into this model now.
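For reference, a sketch of this filter in scipy (assuming the 8.1e5 is the DC gain — an assumption about how the gain is specified — and leaving out the resonant gains):

```python
import numpy as np
from scipy import signal

# Sketch of the filter described above: three poles at 1 Hz, with the
# overall zpk gain chosen so the DC gain is 8.1e5.
w1 = 2 * np.pi * 1.0
loop = signal.ZerosPolesGain([], [-w1] * 3, 8.1e5 * w1**3)

# In-loop noise suppression at frequency f is ~ 1/|1 + G(i 2 pi f)|.
supp = {}
for f in (0.1, 1.0, 10.0):
    _, G = signal.freqresp(loop, w=[2 * np.pi * f])
    supp[f] = 1.0 / abs(1.0 + G[0])
    print(f, supp[f])
```

The three coincident poles give -60 dB/decade of roll-off above 1 Hz, which is why the suppression drops so quickly between 0.1 Hz and 10 Hz.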
The HEPA filters on the PSL enclosure are no longer running. I tried cycling the power switch on the NW corner of the enclosure, and also turned the dial on the Variac down to zero and back up to maximum, no effect. Judging by the indicator LED on it, the power is making it to the Variac - bypassing the Variac and directly connecting the HEPA to the AC power seems to have no effect either.
I can't be sure, but I'm almost certain I heard them running at max speed an hour ago while I was walking around inside the VEA. Probably any damage that can happen has already been done, but I dialled down the Innolight injection current, and closed its shutter till the issue can be resolved.
I removed the forepump (Varian SH-110) for TP3 today to see why it had failed over the weekend. I tested it in the C&B lab and the ultimate pressure was only ~40 torr. I checked the tip seals and they were destroyed. The scroll housing also easily pulled off of the motor drive shaft, which is indicative of bad bearings. The excess travel in the bearings likely led to a significant increase in tip seal wear. This pump will need to be scrapped or rebuilt.
I tested the spare Varian SH-110 pump located at the X-end and the ultimate pressure was ~98 mtorr. This pump had tip seals replaced on 11/5/18, and is currently at 55163 operating hours. It has been installed as the TP3 forepump.
Once installed, restarting the pump line occurred as follows: V5 closed, VA6 closed, VASE closed, VASV closed, VABSSCI closed, VABS closed, VABSSCO closed, VAEV closed, VAEE closed. TP3 was restarted and, once at normal operation, the valves were opened in the same order.
The pressure differential interlock condition for V5 was temporarily changed to 10 torr (by Gautam), so that the valves could be opened in a controlled manner. Once the vacuum system was back to its normal state, the V5 interlock condition was set back to the nominal 1 torr. The vacuum system is now running normally.
See Attachment #1.
The beam spot on ETMY looks weird (looks almost like a TEM10 mode), but the one on ITMY seems fine, see Attachment #2. Wonder what's going on there, maybe just a focusing problem?
We need a vent to fix the suspension, but until then what we can do is to redistribute the POS/PIT/YAW actuations to the three coils.
I think the digital loop in the ALS budget is too optimistic. You have to include all the digital delays and anti-aliasing filters to get the real response.
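As an illustration of folding in the delays and AA filters (the bare loop shape, the 200 us total delay, and the 8th-order elliptic AA filter at 5 kHz below are all assumed stand-ins, not measured 40m values):

```python
import numpy as np
from scipy import signal

# Multiply a pure time delay and an anti-aliasing filter into the open-loop
# frequency response before reading off the phase margin.
f = np.logspace(0, 4, 2000)              # 1 Hz - 10 kHz
w = 2 * np.pi * f

G = (2 * np.pi * 150.0) / (1j * w)       # bare digital loop: UGF ~ 150 Hz

delay = np.exp(-1j * w * 200e-6)         # unity magnitude, linear phase lag

# AA filter modeled as an 8th-order analog elliptic low-pass at 5 kHz
z, p, k = signal.ellip(8, 1, 80, 2 * np.pi * 5e3, analog=True, output='zpk')
_, aa = signal.freqs_zpk(z, p, k, worN=w)

oltf = G * delay * aa
i_ugf = np.argmin(np.abs(np.abs(oltf) - 1.0))
phase_margin = 180.0 + np.degrees(np.angle(oltf[i_ugf]))
print(round(phase_margin, 1))
```

Even though the delay and AA filter barely touch the magnitude near the UGF, their phase lag eats directly into the margin, which is why the "bare" digital model is too optimistic.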
Also, I recommend grabbing some of the actual spectra from in-lock times with nds and using the calibrated spectra as inputs to this model. Although we don't have good models of the stack, you can sort of infer it by using the calibrated seismometer data and the calibrated MC_F or MC_L channels (for the IMC), or the XARM/YARM signals for those arms.
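A minimal sketch of turning an in-lock stretch into a calibrated ASD; the channel name, GPS times, host, and calibration factor in the commented fetch are placeholders:

```python
import numpy as np
from scipy import signal

def calibrated_asd(data, fs, cal=1.0):
    """Welch ASD of a time series, with cal converting counts to physical
    units (e.g. counts -> Hz for MC_F)."""
    f, psd = signal.welch(np.asarray(data, float) * cal, fs=fs,
                          nperseg=int(4 * fs))
    return f, np.sqrt(psd)

# Fetching the in-lock data itself would use the nds2 client, roughly
# (host, GPS times, channel name, and cts_to_hz are placeholders):
#
#   import nds2
#   conn = nds2.connection('nds40.ligo.caltech.edu', 31200)
#   bufs = conn.fetch(gps_start, gps_stop, ['C1:IOO-MC_F_DQ'])
#   f, asd = calibrated_asd(bufs[0].data, bufs[0].sample_rate, cal=cts_to_hz)
```

The resulting calibrated ASDs can then be injected at the appropriate nodes of the noise-budget model in place of the analytic seed spectra.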
While I stopped by the lab this morning to pick up some things, I took the opportunity to continue the recovery.
At some point, we should run the suspension eigenmode routine (kick optics, let them ringdown, measure peak locations and Qs) to confirm that the remaining suspensions are okay, will also help in actuation re-allocation efforts on ETMY. But I didn't do this today.
Leaving the lab at 1150.
I found an error I had made in copying some control model values from Kiwamu's matlab code. After fixing it, we get a considerably reduced total noise. However, there was still an unstable region around the unity gain frequency because of a very small phase margin. Attachment 3 shows the noise budget, ALS open-loop transfer function, and AUX PDH open-loop transfer function with ALS disengaged. Attachment 4 is the yaml file containing all required zpk values for the control model used. Note that the noise budget shows out-of-loop residual arm length fluctuations with respect to PSL frequency. The RMS curve on this plot is integrated over the shown frequency region.
Adding two more poles at 100 Hz in the ALS digital filter seems to work in making the ALS loop stable everywhere and additionally provides a steeper roll-off after 100 Hz. Attachment 1 shows the noise budget, ALS open-loop transfer function, and AUX PDH open-loop transfer function with ALS disengaged. Attachment 2 is the yaml file containing all required zpk values for the control model used. Note that the noise budget shows out-of-loop residual arm length fluctuations with respect to PSL frequency. The RMS curve on this plot is integrated for the shown frequency region.
But is it really more stable?
For that, we'll have to take present noise source estimates, but Gautam vaguely confirmed that this looked more realistic 'shape-wise' now. If I remember correctly, he mentioned that we currently achieve 8 pm of residual RMS motion in the arm cavity with respect to the PSL frequency. So we might be overestimating our loop's capability or underestimating some noise source. More feedback on this is welcome and required.
Attachment 5 here shows a block diagram of the control loop model used. Output port 'Res_Disp' is used for referring all the noise sources to the residual arm length fluctuation in the noise budget. The open-loop transfer function for ALS is calculated as -(ALS_DAC->ALS_Out1 / ALS_DAC->ALS_Out2) (the explicit negative sign removes the -1 of the negative feedback), while the AUX PDH open-loop transfer function is calculated with the python controls package by simple series cascading of all the loop elements.
Disconcerting, because those tip seals were just replaced. Maybe they were just defective, but if there is a more serious problem with the pump, there is a spare Varian roughing pump (the old TP2 dry pump) sitting at the X-end.
I reset the interlock error to unfreeze the vac controls (leaving V5 closed).
So the conclusion is that RP for TP3 has failed. Presumably, the tip-seal needs to be replaced.
Right now TP3 was turned off and is ready for the tip-seal replacement. V5 was closed since the watchdog tripped.
There were two SUSs which didn't look normal.
- ITMX was easily released using the bias slider -> shake the pitch slider and, while all the OSEM values are moving, turn on the damping control (with a 10x larger watchdog threshold)
- ETMY has a UR OSEM output of 0 V. This means there is no light, and it didn't change at all with the slider moves.
- Went to the Y table and tried to look at the coils. It seems that the UR magnet is detached from the optic and stuck in the OSEM.
I supplied a bottle of hand soap. Don't put water in the bottle to dilute it, as that makes the soap vulnerable to contamination.
the EQ was ~14 km south of Caltech and 17 km deep
the seismometers obviously saturated during the EQ, but the accelerometers captured some of it. It looks like there are different saturation levels on different sensors.
Also, it seems the mounting of the MC2 accelerometers is not so good. There's a ~10-20 Hz resonance of its mount showing up. Either it's the MC2 chamber legs, or the accelerometers are clamped poorly to the MC2 baseplate.
I'm amazed at how much higher the noise is on the MC2 accelerometer. Is that really how much amplification of the ground motion we're getting? If so, it's as if the MC has no vibration isolation from the ground in that band. We should put one set on the ground and make a more direct comparison of the spectra. Also, perhaps do some seismic FF using this sensor - I'm not sure how successful we've been in this band.
Attaching the coherence plot from ldvw.ligo.caltech.edu (apparently it has access to the 40m data, so we can use that as an alternative to dtt or python for remote analysis):
It would be interesting to see if we can use the ML based FF technology from this summer's SURF project by Nadia to increase the coherence by including some slow IMC alignment channels.
I came to the campus and Gautam notified that he just had received the alert from the vac watchdog.
I checked the vac status at c1vac. PTP3 went up to 10 torr-ish, which put the differential pressure for TP3 over 1 torr. Then the watchdog kicked in.
To check the TP3 functionality, AUX RP was turned on and the manual valve (MV in the figure) was opened to pump the foreline of TP3. This easily made PTP3 <0.2 torr and TP3 happy (I didn't try to open V5 though).
Sun Sep 20 00:02:36 2020 edit: fixed indexing error in plots
* also assuming that the sensors are correctly calibrated in the front end to 1 count = 1 um/s^2 (this is what's used in the summ pages)
M4.5 EQ in LA 2020-09-19 06:38:46 (UTC) / -1d 23:38:46 (PDT) https://earthquake.usgs.gov/earthquakes/eventpage/ci38695658/executive
I only checked the watchdogs. All watchdogs were tripped. ITMX and ETMY seemed stuck (or have the OSEM magnet issue). They were left tripped. The watchdogs for the other SUSs were reloaded.
Field spectrum cartoon:
Attachment #1 shows a cartoon of the various field components.
So is there a 90 degree relative shift between the signal quadrature in the simple Michelson vs the DRFPMI? But wait, there are more problems...
Closing a feedback loop using the 44 MHz signal:
We still need to sense the 44 MHz signal with a photodiode, acquire the signal into our CDS system, and close a feedback loop.
I don't have any bright ideas at the moment - anyone has any suggestions?🤔
I wanted to check what kind of signal the photodiode sees when only the LO field is incident on it. So with the IFO field blocked, I connected the PDA10CF to the Agilent analyzer in "Spectrum" mode, through a DC block. The result is shown in Attachment #2. To calculate the PM/AM ratio, I assumed a modulation depth of 0.2. The RIN was calculated by dividing the spectrum by the DC value of the PDA10CF output, which was ~1 V DC. The frequencies are a little off from the true modulation frequencies because (i) I didn't sync the AG4395 to a Rb 10 MHz reference, and (ii) the span/BW ratio was set rather coarsely, at 3 kHz.
I would expect only 44 MHz and 66 MHz peaks, from the interference between the 11 MHz and 55 MHz sideband fields, all other field products are supposed to cancel out (or are in orthogonal quadratures). This is most definitely not what I see - is this level of RIN normal and consistent with past characterization? I've got no history in this particular measurement.
I had to make a CDS change to the c1lsc model in an effort to get a few more signals into the models. Rather than risk requiring hard reboots (typically my experience if I try to restart a model), I opted for the more deterministic scripted reboot, at the expense of spending ~20 mins to get everything back up and running.
Update 2230: this was more complicated than expected - a nuclear reboot was necessary but now everything is back online and functioning as expected. While all the CDS indicators were green when I wrote this up at ~1800, the c1sus model was having frequent CPU overflows (execution time > 60 us). Not sure why this happened, or why a hard power reboot of everything fixed it, but I'm not delving into this.
The point of all this was that I can now simultaneously digitize 4 channels - 2 DCPDs, and 2 demodulated quadratures of an RF signal.
Assembled is the list of dead pressure gauges. Their locations are also circled in Attachment 1.
For replacements, I recommend we consider the Agilent FRG-700 Pirani Inverted Magnetron Gauge. It uses dual sensing techniques to cover a broad pressure range from 3e-9 torr to atmosphere in a single unit. Although these are more expensive, I think we would net save money by not having to purchase two separate gauges (Pirani + hot/cold cathode) for each location. It would also simplify the digital controls and interlocking to have a streamlined set of pressure readbacks.
For controllers, there are two options with either serial RS232/485 or Ethernet outputs. We probably want the Agilent XGS-600, as it can handle all the gauges in our system (up to 12) in a single controller and no new software development is needed to interface it with the slow controls.
This elog suggests that there is uniformly 1 stage engaged across all channels. I didn't look at the board to see what the jumper situation is, but only 1 stage of whitening is compensated digitally for both _F and _L. The Pomona box attached to the NPRO PZT input is also compensated digitally to convert counts to frequency.
I tried the gain re-allocation between VCO gain and FSS COMM (and also compensated for the cts to Hz conversion in MCF), but it doesn't seem to have the desired effect on the MCF SNR in the 5-50Hz band. Since the IMC stays locked, and I had already made the changes to mcup, I'll keep these gains for now. We can revert to the old settings if the IMC locking duty cycle is affected. Explicitly, the changes made were:
VCO gain: +7dB ---> +13 dB
FSS COMM: +6 dB ---> +0 dB
The mcdown script wasn't modified, so the lock acquisition gains are the same as they've been.
the "Pentek" whitening board that carries the MC channels has jumpers to enable either 1 or 2 stages of 15:150 whitening. Looks like MC_F has 2 and MC_L has 1.
After more trials, I think the phase tracker part used to provide the error signal for this scheme needs some modification for this servo to work.
Attachment #1 shows a block diagram of the control scheme.
I was using the "standard" phase tracker part used in our ALS model - but unlike the ALS case, the magnitude of the RF signal is squished to (nearly) zero by the servo. The phase tracker, which is responsible for keeping the error signal in one (demodulated) quadrature (since our servo is a SISO system), has a UGF that depends on the magnitude of the RF signal. So I think what is happening here is that the "plant" we are trying to control is substantially different between the acquisition phase (where the RF signal magnitude is large) and the locked state (where the RF signal magnitude becomes comparatively tiny).
I believe this can be fixed by dynamically normalizing the gain of the digital phase tracking loop by the magnitude of the signal = sqrt(I^2 + Q^2). I have made a modified CDS block that I think will do the job but am opting against a model reboot tonight - I will try this in the daytime tomorrow.
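A toy version of the proposed normalization (illustrative only — the real implementation lives in the modified CDS block):

```python
import numpy as np

# Toy phase tracker: servo the demodulation phase so Q -> 0, with the loop
# gain dynamically normalized by sqrt(I^2 + Q^2) so the tracker UGF stays
# fixed as the RF signal magnitude is squished. All numbers illustrative.
def track(mags, true_phase, k=0.2, eps=1e-9):
    phi = 0.0                        # tracker's phase estimate
    for mag in mags:
        i = mag * np.cos(true_phase - phi)
        q = mag * np.sin(true_phase - phi)
        # normalization makes the update independent of the signal level
        phi += k * q / max(np.hypot(i, q), eps)
    return phi

# Converges at the same rate whether the signal is large or tiny:
big = track(np.full(200, 1000.0), 0.5)
small = track(np.full(200, 1e-3), 0.5)
print(round(big, 3), round(small, 3))
```

Without the normalization, the effective gain of the second call would be six orders of magnitude smaller and the tracker would effectively freeze once the servo squishes the RF magnitude.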
I'm also wondering how to confirm that the loop is doing something good - any ideas for an out-of-loop monitor? I suppose I could use the DCPD - once the homodyne phase loop is successfully engaged, I should be able to drive a line in MICH and check for drift by comparing line heights in the DCPD signal and the RF signal. This will require some modification of the wiring arrangement at 1Y2, but shouldn't be too difficult...
The HEPAs, on the PSL table and near ITMY, were dialled down / turned off respectively, at ~8pm at the start of this work. They will be returned to their previous states before I leave the lab tonight.
that's a very curious disconnection
I guess the MC_F signal is so low because of the high gain on the FSS board. We could lower the FSS common gain and increase the IMC board's VCO gain to make up for it. Maybe 6 dB would be enough. If that is risky, we could instead increase the analog gain on the whitening board.
There was an abrupt change in the MC_F spectrum between August 4 and August 5, judging by the summary pages - the 1 and 3 Hz resonances are no longer visible in the spectrum. Possibly this indicates some electronics failure on the MC servo board / whitening board; the CDS settings don't seem to have changed. There is no record of any activity in the elog around those dates that would explain such a change. I'll poke around at 1X2 to see if anything looks different.
Update 1740: I found that the MCL / MCF cables were disconnected. So since August 5, these channels were NOT recording any physical quantity. Because their inputs weren't terminated, this isn't a clean measurement of the whitening + AA noise, but particularly for MC_F, I think we could use more whitening (see Attachment #1). It probably also means that the wandering ~10-30 Hz line in the spectrogram is an electronics feature. The connections have now been restored and things look nominal again.
These were delivered to the 40m today and are on Rana's desk
I'll order a couple of these (5 ordered for delivery on Wednesday) in case there's a hot demand for the jack / plug combo that this one has.
The unit was repaired and returned to the 40m. Now, with a DMM, I measure a DC offset value that is ~1% of the AC signal amplitude. I measured the TF of a simple 1/20 voltage divider and it looks fine. In FFT mode, the high frequency noise floor levels out around 5-7nV/rtHz when the input is terminated in 50 ohms.
I will upload the repair documents to the wiki.
The "source" output of the SR785 has a DC offset of -6.66 V. I couldn't make this up.
The PMC has been unlocked since sometime on September 11 (the summary pages are flaky again). I re-locked it just now. I didn't mess with the HEPA settings for now, as I'm not using the IFO at the moment, so everything should be running in the configuration reported here. The particulate count numbers (both 0.3 um and 0.5 um) reported are ~5-8x what was reported on Thursday, September 10, after the HEPA filters were turned on. We don't have automated logging of this information, so it's hard to correlate things with the conditions in Pasadena. We also don't have a working gauge of the pressure of the vacuum envelope.
The RGA scanning was NOT enabled for whatever reason after the vacuum work. I re-enabled it, and opened VM1 to expose the RGA to the main volume. The unit may still be warming up but this initial scan doesn't look completely crazy compared to the reference trace which is supposedly from a normal time based on my elog scanning (the timestamp is inherited from the c0rga machine whose clock is a bit off).
Update 1500: I checked the particle count on the PSL table and it barely registers on the unit (between 0-20 over multiple trials), so I don't know if we need a better particle counter or if there is negligible danger of damage to optics from particulate matter.
Turns out what was causing the instability in the aLIGO plots was the lock commands, which I had forgotten to remove before running the simulation. Removing these also made the simulation much faster.
Other than that I improved other stuff in the simulations:
Still need to do:
Feel free to add to the todo list.
- PSL HEPA was running at 33% and is now at 100%
- South End HEPA was not on and is now running
- Yarm Portable HEPA was not running and is now running at max speed. Its power is taken from beneath the ITMY table; it is better to unplug it when using the IFO.
- Yend Portable HEPA was not running and is now running (presumably) at max speed
Particle Levels: (Not sure about the unit; the convention here is to multiply the reading by 10.)
Before running the HEPAs at their maximum
9/10/2020 15:30 / 0.3um 292180 / 0.5um 14420
(cf 9/5/2020 / 0.3um 94990 / 0.5um 6210)
After running the HEPAs at their maximum
The numbers gradually went down and have now leveled off at about half of the initial values.
9/10/2020 19:30 / 0.3um 124400 / 0.5um 7410
As promised some time ago, I've obtained input noise spectra from the sites calibrated to physical units. They are located in a new subdirectory of the BHD repo: A+/input_noises. I've heavily annotated the notebook that generates them (input_noises.ipynb) with aLOG references, to make it transparent what filters, calibrations, etc. were applied and when the data were taken. Each noise term is stored as a separate HDF5 file, which are all tracked via git LFS.
So far there are measurements of the following sources:
These can be used, for example, to make Hang's bilinear noise modeling [ELOG 15503] and Yehonathan's Monte Carlo simulations [ELOG 15539] more realistic. Let me know if there are other specific noises of interest and I will try to acquire them. It's a bit time-consuming to search out individual channel calibrations, so I will have to add them on a case-by-case basis.
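A round-trip sketch of the HDF5 storage (the dataset names 'frequency' and 'asd' here are assumptions — see input_noises.ipynb for the actual layout convention used in the repo):

```python
import numpy as np
import h5py

# Write and read back one noise term as an HDF5 file. The 1/f spectrum and
# the dataset names are stand-ins for illustration.
freq = np.logspace(0, 3, 100)
asd = 1e-9 / freq                       # stand-in amplitude spectral density

with h5py.File('noise_term_demo.hdf5', 'w') as f:
    f.create_dataset('frequency', data=freq)
    f.create_dataset('asd', data=asd)

with h5py.File('noise_term_demo.hdf5', 'r') as f:
    freq2 = f['frequency'][:]
    asd2 = f['asd'][:]
print(np.allclose(asd, asd2))
```

Since the files are tracked via git LFS, a consumer just clones the repo and reads them back this way, without re-running the calibration notebook.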
Since the summary pages are working again, I was clicking through and noticed that there's a wandering peak in the whitened IMC spectrogram that goes from 10-30 Hz over the course of a day.
Anyone know what this is?
Attachment #1 shows the optical setup currently being used to send the LO field with RF sidebands on it to the air BHD setup.
Attachment #2 shows spectra of the relative phase drift between LO and IFO output field (from the Dark Michelson).
Attachment #3 shows the magnitude of the signals used to make the spectra in Attachment #2, during the observation time (10 minutes) over which the spectra were computed. The dashed vertical lines denote the 1%, 50% and 99% quantiles.
Attempts to close a feedback loop to control the homodyne phase:
I edited /diskless/root.jessie/home/controls/.bashrc so that I don't have to keep doing this every time I do a model recompile.
Where is this variable set and how can I add the new paths to it?
- Loose fiber coupler: Sorry about that. I did not notice anything loose there, although some of the locks were not tightened.
- S incident instead of P: Sorry about that too. I completely missed that the IMC takes S-pol.
Over the last couple of days, I've been working towards getting the infrastructure ready to test out the scheme of sensing (and eventually, controlling) the homodyne phase using the so-called RF44 scheme. More details will be populated, just quick notes for now before I forget.
On Friday, I grabbed the Zurich Instruments HF2LI lock-in amplifier and brought it home. As time permits, I will work towards developing a similar readout script as we have for the SR785.
Now that the old APC Smart-UPS 2200 is no longer in use by the vacuum system, I looked into whether it can be repurposed for the framebuilder machine. Yes, it can. The max power consumption of the framebuilder (a SunFire X4600) is 1.137kW. With fresh batteries, I estimate this UPS can power the framebuilder for >10 min. and possibly as long as 30 min., depending on the exact load.
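The runtime estimate is just usable battery energy over load power; a back-of-envelope version with assumed battery numbers (not from the APC datasheet):

```python
# Back-of-envelope UPS runtime check: runtime ~ usable battery energy / load.
# All battery parameters below are assumptions for illustration, not specs.
battery_wh = 4 * 12 * 7.2      # e.g. four 12 V, 7.2 Ah cells [Wh] -- assumed
inverter_eff = 0.85            # typical inverter efficiency -- assumed
usable_frac = 0.8              # usable fraction of rated capacity -- assumed
load_w = 1137.0                # SunFire X4600 max draw, from above

runtime_min = battery_wh * inverter_eff * usable_frac / load_w * 60
print(round(runtime_min, 1))   # minutes at full load
```

With these assumptions the estimate lands comfortably above 10 min at the maximum rated draw, and the real framebuilder load is typically well below the maximum, which is where the "possibly 30 min." upper figure comes from.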
@Chub/Jordan, this UPS is ready to be moved to rack 1X6/1X7. It just has to be disconnected from the wall outlet. All of the equipment it was previously powering has been moved to the new UPS. I have ordered a replacement battery (APC #RBC43) which is scheduled to arrive 9/09-11.
2PM: Arrived at the 40m. Started the work for the coupling of the RF modulated LO beam into a fiber. -> I left the lab at 10:30 PM.
The fiber coupling setup for the phase-modulated beam was made right next to the PSL injection path. (See attachment 1)
Yesterday's UPS switchover was mostly a success. The new Tripp Lite 120V UPS is fully installed and is communicating with the slow controls system. The interlocks are configured to trigger a controlled shutdown upon an extended power outage (> ~30 s), and they have been tested. All of the 120V pumpspool equipment (the full c1vac/LAN/Acromag system, pressure gauges, valves, and the two small turbo pumps) has been moved to the new UPS. The only piece of equipment which is not 120V is TP1, which is intended to be powered by a separate 230V UPS. However that unit is still not working, and after more investigation and a call to Tripp Lite, I suspect it may be defective. A detailed account of the changes to the system follow below.
Unfortunately, I think I damaged the Hornet (the only working cathode ionization gauge in the main volume) by inadvertently unplugging it while switching over equipment to the new UPS. The electronics are run from multiple daisy-chained power strips in the bottom of the rack and it is difficult to trace where everything goes. After the switchover, the Hornet repeatedly failed to activate (either remotely or manually) with the error "HV fail." Its compatriot, the Pirani SuperBee, also failed about a year ago under similar circumstances (or at least its remote interface did, making it useless for digital monitoring and control). I think we should replace them both, ideally with ones with some built-in protection against power failures.
Four new soft channels per UPS have been created, although the interlocks are currently predicated only on C1:Vac-UPS120V_status.
These new readbacks are visible in the MEDM vacuum control/monitor screens, as circled in Attachment 1:
Yesterday I brought with me a custom power cable for the 230V UPS. It adapts from a 208/120V three-phase outlet (L21-20R) to a standard outlet receptacle (5-15R) which can mate with the UPS's C14 power cable. I installed the cable and confirmed that, at the UPS end, 208V AC was present split-phase (i.e., two hot wires separated by 120 deg in phase, each at 120V relative to ground). This failed to power on the unit. Then Jordan showed up and suggested trying to power it instead from a single-phase 240V outlet (L6-20R). However, we found that the voltage present at this outlet was exactly the same as what the adapter cable provides: 208V split-phase.
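As a sanity check on those voltage readings: 208V is just the phasor difference of two 120V legs separated by 120 deg, whereas a true residential split-phase panel has its legs 180 deg apart and so reads 240V. A quick illustrative calculation (not part of the original measurement):

```python
import cmath, math

# Two hot legs of a 208Y/120V three-phase panel: each 120 V to ground,
# separated by 120 degrees in phase.
leg_a = 120 * cmath.exp(1j * 0)
leg_b = 120 * cmath.exp(1j * 2 * math.pi / 3)

line_to_line = abs(leg_a - leg_b)   # voltage between the two hot wires
print(round(line_to_line, 1))       # ~207.8 V -- the "208 V" we measured,
                                    # not the 240 V a 180-deg split-phase panel gives
```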
This UPS nominally requires 230V single-phase. I don't understand well enough how the line-noise-isolation electronics work internally, so I can think of three possible explanations:
I called Tripp Lite technical support. They thought the unit should work as powered in the configuration I described, so this leads me to suspect #3.
@Chub and Jordan: Can you please look into somehow replacing this unit, potentially with a U.S.-specific model? Let's stick with the Tripp Lite brand though, as I already have developed the code to interface those.
Unlike our older equipment, which communicates serially with the host via RS232/485, the new UPS units can be connected with a USB 3.0 cable. I found a great open-source package for communicating directly with the UPS from within Python, Network UPS Tools (NUT), which eliminates the dependency on Tripp Lite's proprietary GUI. The package is well documented, supports hundreds of power-management devices, and is available in the Debian package manager from Jessie (Debian 8) up. It consists of a large set of low-level, device-specific drivers which communicate with a "server" running as a systemd service. The NUT server can then be queried using a uniform set of programming commands across a huge number of devices.
I document the full set-up procedure below, as we may want to use this with more USB devices in the future.
First, install the NUT package and its Python binding:
$ sudo apt install nut python-nut
This automatically creates (and starts) a set of systemd services, which fail as expected since we have not yet set up the config. files defining our USB devices. Stop these services, delete their default definitions, and replace them with the modified definitions from the vacuum git repo:
$ sudo systemctl stop nut-*.service
$ sudo rm /lib/systemd/system/nut-*.service
$ sudo cp /opt/target/services/nut-*.service /etc/systemd/system
$ sudo systemctl daemon-reload
Next copy the NUT config. files from the vacuum git repo to the appropriate system location (this will overwrite the existing default ones). Note that the file ups.conf defines the UPS device(s) connected to the system, so for setups other than c1vac it will need to be edited accordingly.
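For reference, a minimal ups.conf entry has the following shape (illustrative only - the driver name, port, and description here are assumptions, not the actual contents of the c1vac file):

```
[120v]
    driver = usbhid-ups
    port = auto
    desc = "Tripp Lite 120V pumpspool UPS"
```

The section name in brackets is the name the rest of the NUT tools use to refer to the device.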
$ sudo cp /opt/target/services/nut/* /etc/nut
Now we are ready to start the NUT server, and then enable it to automatically start after reboots:
$ sudo systemctl start nut-server.service
$ sudo systemctl enable nut-server.service
If it succeeds, the start command will return without printing any output to the terminal. We can test the server by querying all the available UPS parameters with
$ upsc 120v
which will print to the terminal screen something like
device.mfr: Tripp Lite
device.model: Tripp Lite UPS
driver.version.data: TrippLite HID 0.81
ups.mfr: Tripp Lite
ups.model: Tripp Lite UPS
Here 120v is the name assigned to the 120V UPS device in the ups.conf file, so it will vary for setups on other systems.
If all succeeds to this point, what we have set up so far is a set of command-line tools for querying (and possibly controlling) the UPS units. To access this functionality from within Python scripts, a set of official Python bindings is provided by the python-nut package. However, at the time of writing, these bindings only exist for Python 2.7. For Python 3 applications (like the vacuum system), I have created a Python 3 translation which is included in the vacuum git repo. Refer to the UPS readout script for an illustration of its usage.
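As an aside, for quick scripts that shell out to upsc rather than using the bindings, the "key: value" output shown above is trivial to parse (a minimal sketch; parse_upsc is a hypothetical helper and the sample text is just the output quoted above):

```python
def parse_upsc(output):
    """Parse `upsc <ups>` output ("key: value" per line) into a dict."""
    params = {}
    for line in output.splitlines():
        if ": " in line:
            key, _, value = line.partition(": ")
            params[key.strip()] = value.strip()
    return params

sample = """\
device.mfr: Tripp Lite
device.model: Tripp Lite UPS
driver.version.data: TrippLite HID 0.81
ups.mfr: Tripp Lite
ups.model: Tripp Lite UPS
"""
params = parse_upsc(sample)
print(params["ups.model"])  # -> Tripp Lite UPS
```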
The vac work is completed. All of the vacuum equipment is now running on the new 120V UPS, except for TP1. The 230V TP1 is still running off wall power, as it always has. After talking with Tripp Lite support today, I believe there is a problem with the 230V UPS. I will post a more detailed note in the morning.
The vac controls are going down now to pull and test software changes. Will advise when the work is completed.
After replacement of the fiber delivering the LO beam to the airBHD setup (some photos here), I repeated the measurement outlined here. There may be some improvement, but overall, conclusions don't change much.
The main addition I made was to implement a digital phase tracker servo (a la ALS), to make sure my arctan2 usage wasn't completely bonkers (the CDS block can be deleted later, or maybe it's useful to keep it, we will see). I didn't measure it today, but the UGF of said servo should be >100 Hz, so the attached spectrum should be valid below that (the loop correction has not been applied, so above the UGF the control signal is not a valid representation of the free-running noise). Attachment #1 shows the result. The 1 Hz and 3 Hz suspension resonances are well resolved. Anyway, what this means is that the earlier result was not crazy. I don't know what to make of the high-frequency lines, but my guess is that they are electronic pickup from the Sorensens - I'm using clip-on mini-grabbers to digitize these signals, and other electronics in that rack (e.g. ALS signals) also show these lines.
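For the offline cross-check, the arctan2 + unwrapping step the phase tracker performs can be sketched in a few lines (illustrative only - unwrap_phase is a hypothetical helper, not the actual CDS code; numpy.unwrap(numpy.arctan2(Q, I)) does the same job on real data):

```python
import math

def unwrap_phase(i_sig, q_sig):
    """Reconstruct a continuous phase (radians) from demodulated I/Q
    samples, removing the 2*pi wraps of raw atan2 output."""
    phase = [math.atan2(q_sig[0], i_sig[0])]
    for i, q in zip(i_sig[1:], q_sig[1:]):
        p = math.atan2(q, i)
        # shift by multiples of 2*pi so successive samples differ by < pi
        while p - phase[-1] > math.pi:
            p -= 2 * math.pi
        while p - phase[-1] < -math.pi:
            p += 2 * math.pi
        phase.append(p)
    return phase

# synthetic check: a phase ramp through several fringes is recovered
# without any 2*pi jumps between successive samples
true_phase = [0.1 * k for k in range(200)]   # 0 to ~20 rad
i_sig = [math.cos(p) for p in true_phase]
q_sig = [math.sin(p) for p in true_phase]
recovered = unwrap_phase(i_sig, q_sig)
```

This only works if the phase changes by less than pi between samples, which is the same condition under which the real-time phase tracker stays locked.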
It is pretty easy to keep the simple Michelson locked for several minutes. Attachment #2 shows the phase-tracker servo output over several minutes. The y-axis units are degrees. If this is to be believed, the relative phase between the two fields is drifting by 12um over an hour. This is significantly lower than my previous measurement, while the noise in the ~0.5-10 Hz band is similar, so maybe the shorter fiber patch cable did some good?
I think there is also some correlation with the PSL table temperature, but of course the evidence is weak, and there are certainly other effects at play. At first, I thought the abrupt jumps were artefacts, but they don't actually represent jumps >360 degrees over successive samples, so maybe they are indicative of some real jump in the relative phase? Either fiber slippage or TT suspension jumps? I'll double check with the offline data to make sure it's not some artefact of the phase tracker servo. If you disagree with these conclusions and think there is some measurement/analysis/interpretation error, I'd love to hear about it.
I have left the heterodyne electronics setup at the LSC rack, but it is not powered (because there are some exposed wires). Please leave it as is.
To be continued tomorrow. I think it's a good idea to let the newly installed fiber relax into some sort of stable configuration overnight.
Using a heterodyne measurement setup to track both quadratures, I estimated the relative phase fluctuation between the LO field and the interferometer output field. It may be that a single PZT to control the homodyne phase provides insufficient actuation range. I'll also need to think about a good sensing scheme for controlling the homodyne phase, given that it goes through ~3 fringes/sec - I didn't have any success with the double demodulation scheme in my (admittedly limited) trials.
For everything in this elog, the system under study was a simple Michelson (PRM, SRM and ETMs misaligned) locked on the dark fringe.
This work was mainly motivated by my observation of rapid fringing on the BHD photodiodes with MICH locked on the dark fringe. The seismic-y appearance of these fringes reminded me that there are two tip-tilt suspensions (SR2, SR3), one SOS (SRM) + various steering optics on seismic stacks (6+ steering mirrors) between the dark port of the beamsplitter and the AS table, where the BHD readout resides. These suspensions modulate the phase of the output field of course. So even though the Michelson phase is tightly controlled by our LSC feedback loop, the field seen by the homodyne readout has additional phase noise due to these optics (this will be a problem for in vacuum BHD too, the question is whether we have sufficient actuator range to compensate).
To get a feel for how much relative phase noise there is between the LO field and the interferometer output field (this is the metric of interest), I decided to set up a heterodyne readout so that I can simultaneously monitor two orthogonal quadratures.
Attachment #1 shows the detailed measurement setup. I hijacked the ADC channels normally used by the DCPDs (along with the front-end whitening) to record these time-series.
Attachments #2 and #3 show the results in the time domain. The demodulated signal isn't very strong despite my pre-amplification of the PDA10CF output by a ZFL-500-HLN, but I think for the purposes of this measurement, there is sufficient SNR.
This would suggest that there are pretty huge (~200um) relative phase excursions between the LO and IFO fields. I suppose, over minutes, it is reasonable that the fiber length changes by 100um or so? If true, we'd need some actuator with much more range to control the homodyne phase than the single PZT we have available right now. Maybe some kind of thermal actuator on the fiber length? If there is some pre-packaged product available, that'd be best; making one from scratch may be a whole project in itself. Attachment #3 is just a zoomed-in version of the time series, showing the fringing more clearly.
Attachment #4 has the same information as Attachment #2, except it is in the frequency domain. The FFT length was 30 seconds. The features between ~1-3 Hz support my hypothesis that the SR2/SR3 suspensions are a dominant source of relative phase noise between LO and IFO fields at those frequencies. I guess we could infer something about the acoustic pickup in the fibers from the other peaks.