Yesterday's UPS switchover was mostly a success. The new Tripp Lite 120V UPS is fully installed and is communicating with the slow controls system. The interlocks are configured to trigger a controlled shutdown upon an extended power outage (> ~30 s), and they have been tested. All of the 120V pumpspool equipment (the full c1vac/LAN/Acromag system, pressure gauges, valves, and the two small turbo pumps) has been moved to the new UPS. The only piece of equipment which is not 120V is TP1, which is intended to be powered by a separate 230V UPS. However, that unit is still not working, and after more investigation and a call to Tripp Lite, I suspect it may be defective. A detailed account of the changes to the system follows below.
Unfortunately, I think I damaged the Hornet (the only working cathode ionization gauge in the main volume) by inadvertently unplugging it while switching over equipment to the new UPS. The electronics are run from multiple daisy-chained power strips in the bottom of the rack and it is difficult to trace where everything goes. After the switchover, the Hornet repeatedly failed to activate (either remotely or manually) with the error "HV fail." Its compatriot, the Pirani SuperBee, also failed about a year ago under similar circumstances (or at least its remote interface did, making it useless for digital monitoring and control). I think we should replace them both, ideally with ones with some built-in protection against power failures.
Four new soft channels per UPS have been created, although the interlocks are currently predicated on only C1:Vac-UPS120V_status.
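These channels can be read like any other EPICS channel; e.g., a quick check from Python (a sketch, assuming pyepics is available on the workstations; only this one of the four soft channel names is given here):

# Read the soft channel that the interlocks are predicated on (pyepics assumed).
from epics import caget
print(caget('C1:Vac-UPS120V_status'))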
These new readbacks are visible in the MEDM vacuum control/monitor screens, as circled in Attachment 1:
Yesterday I brought with me a custom power cable for the 230V UPS. It adapts from a 208/120V three-phase outlet (L21-20R) to a standard outlet receptacle (5-15P) which can mate with the UPS's C14 power cable. I installed the cable and confirmed that, at the UPS end, 208V AC was present split-phase (i.e., two hot wires separated by 120 degrees in phase, each at 120V relative to ground). This failed to power on the unit. Then Jordan showed up and suggested powering it instead from a single-phase 240V outlet (L6-20R). However, we found that the voltage present at this outlet was exactly the same as what the adapter cable provides: 208V split-phase.
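For reference, a quick phasor check of why two hot legs 120 degrees apart give 208V line-to-line rather than 240V (a sketch in numpy):

import numpy as np
V = 120.0                               # each hot leg, relative to ground
v1 = V * np.exp(1j * np.deg2rad(0))
v2 = V * np.exp(1j * np.deg2rad(120))
print(abs(v1 - v2))                     # 120*sqrt(3) ~ 207.8 V line-to-line
# a true split-phase pair (180 deg apart) would give 2*120 = 240 V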
This UPS nominally requires 230V single-phase. I don't understand the internal line-noise-isolation electronics well enough, so I can think of three possible explanations:
I called Tripp Lite technical support. They thought the unit should work as powered in the configuration I described, so this leads me to suspect #3.
@Chub and Jordan: Can you please look into somehow replacing this unit, potentially with a U.S.-specific model? Let's stick with the Tripp Lite brand though, as I have already developed the code to interface with those units.
Unlike our older equipment, which communicates serially with the host via RS232/485, the new UPS units can be connected with a USB 3.0 cable. I found a great open-source package for communicating directly with the UPS from within Python, Network UPS Tools (NUT), which eliminates the dependency on Tripp Lite's proprietary GUI. The package is well documented, supports hundreds of power-management devices, and is available in the Debian package manager from Jessie (Debian 8) up. It consists of a large set of low-level, device-specific drivers which communicate with a "server" running as a systemd service. The NUT server can then be queried using a uniform set of programming commands across a huge number of devices.
I document the full set-up procedure below, as we may want to use this with more USB devices in the future.
First, install the NUT package and its Python binding:
$ sudo apt install nut python-nut
This automatically creates (and starts) a set of systemd processes which fail, as expected, since we have not yet set up the config. files defining our USB devices. Stop these services, delete their default definitions, and replace them with the modified definitions from the vacuum git repo:
$ sudo systemctl stop nut-*.service
$ sudo rm /lib/systemd/system/nut-*.service
$ sudo cp /opt/target/services/nut-*.service /etc/systemd/system
$ sudo systemctl daemon-reload
Next copy the NUT config. files from the vacuum git repo to the appropriate system location (this will overwrite the existing default ones). Note that the file ups.conf defines the UPS device(s) connected to the system, so for setups other than c1vac it will need to be edited accordingly.
$ sudo cp /opt/target/services/nut/* /etc/nut
Now we are ready to start the NUT server, and then enable it to automatically start after reboots:
$ sudo systemctl start nut-server.service
$ sudo systemctl enable nut-server.service
If it succeeds, the start command will return without printing any output to the terminal. We can test the server by querying all the available UPS parameters with
$ upsc 120v
which will print to the terminal screen something like
device.mfr: Tripp Lite
device.model: Tripp Lite UPS
driver.version.data: TrippLite HID 0.81
ups.mfr: Tripp Lite
ups.model: Tripp Lite UPS
Here 120v is the name assigned to the 120V UPS device in the ups.conf file, so it will vary for setups on other systems.
If all succeeds to this point, what we have set up so far is a set of command-line tools for querying (and possibly controlling) the UPS units. To access this functionality from within Python scripts, a set of official Python bindings is provided by the python-nut package. However, at the time of writing, these bindings only exist for Python 2.7. For Python 3 applications (like the vacuum system), I have created a Python 3 translation which is included in the vacuum git repo. Refer to the UPS readout script for an illustration of its usage.
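For orientation, here is a minimal sketch of what querying the UPS from Python looks like, assuming the PyNUT-style interface of the official binding (the Python 3 translation in the vacuum repo may differ in the details):

import PyNUT
client = PyNUT.PyNUTClient(host='localhost')  # nut-server listens on port 3493 by default
ups_vars = client.GetUPSVars(ups='120v')      # '120v' as defined in ups.conf
print(ups_vars.get('ups.status'))             # e.g. 'OL' when on line power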
The vac work is completed. All of the vacuum equipment is now running on the new 120V UPS, except for TP1. The 230V TP1 is still running off wall power, as it always has. After talking with Tripp Lite support today, I believe there is a problem with the 230V UPS. I will post a more detailed note in the morning.
The vac controls are going down now to pull and test software changes. Will advise when the work is completed.
After replacement of the fiber delivering the LO beam to the airBHD setup (some photos here), I repeated the measurement outlined here. There may be some improvement, but overall, conclusions don't change much.
The main addition I made was to implement a digital phase tracker servo (a la ALS), to make sure my arctan2 usage wasn't completely bonkers (the CDS block can be deleted later, or maybe it's useful to keep it, we will see). I didn't measure it today, but the UGF of said servo should be >100 Hz, so the attached spectrum should be valid below that (no loop correction has been applied, so above the UGF the control signal is not a valid representation of the free-running noise). Attachment #1 shows the result. The 1 Hz and 3 Hz suspension resonances are well resolved. Anyways, what this means is that the earlier result was not crazy. I don't know what to make of the high frequency lines, but my guess is that they are electronic pickup from the Sorensens - I'm using clip-mini-grabbers to digitize these signals, and other electronics in that rack (e.g. ALS signals) also show these lines.
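For the record, the offline reconstruction amounts to the following (a sketch; the lambda/(2*pi) conversion to an equivalent path length is my convention here, not necessarily what the CDS block does):

import numpy as np

def phase_from_iq(I, Q, lam=1064e-9):
    """Reconstruct the relative phase from the two demodulated quadratures."""
    phi = np.unwrap(np.arctan2(Q, I))   # radians, continuous across fringes
    dL = phi * lam / (2 * np.pi)        # equivalent path-length change (my convention)
    return phi, dL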
It is pretty easy to keep the simple Michelson locked for several minutes. Attachment #2 shows the phase-tracker servo output over several minutes. The y-axis units are degrees. If this is to be believed, the relative phase between the two fields is drifting by 12um over an hour. This is significantly lower than my previous measurement, while the noise in the ~0.5-10 Hz band is similar, so maybe the shorter fiber patch cable did some good?
I think there is also some correlation with the PSL table temperature, but of course, the evidence is weak, and there are certainly other effects at play. At first, I thought the abrupt jumps were artefacts, but they don't actually represent jumps >360 degrees over successive samples, so maybe they are indicative of some real jump in the relative phase? Either fiber slippage or TT suspension jumps? I'll double check with the offline data to make sure it's not some artefact of the phase tracker servo. If you disagree with these conclusions and think there is some measurement/analysis/interpretation error, I'd love to hear about it.
I have left the heterodyne electronics setup at the LSC rack, but it is not powered (because there are some exposed wires). Please leave it as is.
To be continued tomorrow. I think it's a good idea to let the newly installed fiber relax into some sort of stable configuration overnight.
Using a heterodyne measurement setup to track both quadratures, I estimated the relative phase fluctuation between the LO field and the interferometer output field. It may be that a single PZT to control the homodyne phase provides insufficient actuation range. I'll also need to think about a good sensing scheme for controlling the homodyne phase, given that it goes through ~3 fringes/sec - I didn't have any success with the double demodulation scheme in my (admittedly limited) trials.
For everything in this elog, the system under study was a simple Michelson (PRM, SRM and ETMs misaligned) locked on the dark fringe.
This work was mainly motivated by my observation of rapid fringing on the BHD photodiodes with MICH locked on the dark fringe. The seismic-y appearance of these fringes reminded me that there are two tip-tilt suspensions (SR2, SR3), one SOS (SRM) + various steering optics on seismic stacks (6+ steering mirrors) between the dark port of the beamsplitter and the AS table, where the BHD readout resides. These suspensions modulate the phase of the output field of course. So even though the Michelson phase is tightly controlled by our LSC feedback loop, the field seen by the homodyne readout has additional phase noise due to these optics (this will be a problem for in vacuum BHD too, the question is whether we have sufficient actuator range to compensate).
To get a feel for how much relative phase noise there is between the LO field and the interferometer output field (this is the metric of interest), I decided to set up a heterodyne readout so that I can simultaneously monitor two orthogonal quadratures.
Attachment #1 shows the detailed measurement setup. I hijacked the ADC channels normally used by the DCPDs (along with the front-end whitening) to record these time-series.
Attachments #2 and #3 show the results in the time domain. The demodulated signal isn't very strong despite my pre-amplification of the PDA10CF output by a ZFL-500-HLN, but I think for the purposes of this measurement, there is sufficient SNR.
This would suggest that there are pretty huge (~200um) relative phase excursions between the LO and IFO fields. I suppose, over minutes, it is reasonable that the fiber length changes by 100um or so? If true, we'd need an actuator with much more range to control the homodyne phase than the single PZT we have available right now. Maybe some kind of thermal actuator on the fiber length? If there is some pre-packaged product available, that'd be best; making one from scratch may be a whole project in itself. Attachment #3 is just a zoomed-in version of the time series, showing the fringing more clearly.
Attachment #4 has the same information as Attachment #2, except it is in the frequency domain. The FFT length was 30 seconds. The features between ~1-3 Hz support my hypothesis that the SR2/SR3 suspensions are a dominant source of relative phase noise between LO and IFO fields at those frequencies. I guess we could infer something about the acoustic pickup in the fibers from the other peaks.
Increasing the compensation capacitance (470 pF now instead of 33 pF) seems to have fixed the oscillation issues associated with this circuit. However, the measured noise is in excess of the model at almost any frequency of relevance. I believe the problem is due to the way the measurement is done, and that we should re-do the measurement once the unit is packaged in a shielded environment.
Attachment #1 shows (schematically) the measurement setup. Main differences from the way I did the last round of testing are:
Attachment #2 shows the measurement results:
I didn't capture the data, but viewing the high voltage output on an oscilloscope threw up no red flags - the oscillations which were previously so evident were nowhere to be seen, so I think the capacitor switch did the trick as far as stability is concerned.
There is a large excess between measurement and model out to a few kHz; if this is really what ends up going to the suspension, then this circuit is useless. However, I suspect at least part of the problem is due to close proximity to switching power supplies, judging by the comb of ~10 Hz spaced peaks. This is a frequent problem in coil driver noise measurements - previously, the culprit was a switching power supply to the Prologix GPIB box, but now a linear AC-DC converter is used (besides, disconnecting it had no visible effect). The bench supply providing power to the board, however, is a switching supply; maybe that is to blame? I think the KEPCO supplies providing +/-250 V are linear. I tried the usual voodoo of twisting the wires used to receive the signal, moving the SR785 away from the circuit board, etc., but these measures had no visible effect either.
The real requirement of this circuit is that the current noise above 100 Hz be <1pA/rtHz. This measurement suggests a level that is 5x too high. But the problem is likely measurement related. I think we can only make a more informed conclusion after shielding the circuit better and conducting the test in a more electromagnetically quiet environment.
Teledyne AP1053s etc. were transported from Rich's office to the 40m. The box is placed on the shelf at the entrance.
My records say that there are 7 AP1053s in the box. I did not verify the count this time.
My power at home winked out for a second this morning, but it looks like either nothing happened in the 40m lab or else the equipment rode it out.
MC is locked - lost lock around 11:25 AM and then relocked.
The electronics chain used to drive the three elements of the PI PZT (on which a mirror is mounted, with the intention of controlling the LO phase) has been changed to now use the Trek Model 603 power amplifier instead of the OMC high voltage driver. Attachment #1 shows the new configuration.
The text of Attachment #1 contains most of the details. The main requirement was to map the DAC output voltage range to something appropriate for the Trek amplifier. The latter applies a 50V/V gain to the signal received on its input pin, and also provides a voltage monitor output which I hooked up to an ADC channel in c1ioo. The gain of the interfacing electronics was chosen to map the full output range of the DAC (-5 to +5 V for a single-ended receiving config in which one pin is always grounded) to 0-2.5 V at the input of the Trek amplifier, so that the effective high voltage drive range is 0-125 V. I don't know what the damage threshold is for the PI PZT; maybe we can go higher. The only recommendation given in the Trek manual is to not exceed +/-12 V on the input jack, so I have configured D2000396 to have a supply voltage of 11.5 V, so that in the event of electronics failure, we still don't exceed this number.
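As a sanity check of the chosen gains, the end-to-end mapping works out as follows (a sketch; the 1.25 V offset is implied by the stated -5..+5 V to 0-2.5 V mapping and is provided by the interfacing electronics):

def trek_output(v_dac):
    """Map single-ended DAC volts (-5..+5) to Trek HV output volts (sketch)."""
    v_iface = v_dac / 4.0 + 1.25   # gain of 1/4 plus implied 1.25 V offset -> 0-2.5 V
    return 50.0 * v_iface          # Trek amplifier gain of 50 V/V

for v in (-5.0, 0.0, 5.0):
    print(f"DAC {v:+.1f} V -> HV {trek_output(v):6.1f} V")   # 0, 62.5, 125 V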
On the electronics bench, I tested the drive chain, and also measured the transfer function, see Attachment #2. Seems reasonable (the Trek amplifier was driving a 3uF capacitive load used to protect the SR785 measurement device from any high voltage, hence the roll-off). The gain of D2000396 was changed from 1/8 to 1/4 after I realized that the DAC full range is only +/- 5 V when the receiving device is single-ended at both input and output. Maybe the next iteration of this circuit should have differential sending, to preserve the range.
To test the chain, I used the single bounce beam from the ITM, and interfered it with the LO. Clear fringing due to the seismic motion of the ITM (and also LO phase noise) is visible. In this configuration, I drove the PZT mirror in the LO path at a higher frequency, hoping to see the phase modulation in the DCPD output. However, I saw no signal, even when driving the PZT with 50% of the full DAC range. The voltage monitor ADC channel is reporting that the voltage is faithfully being sent to the PZTs, and I measured the capacitance of the PZTs (looked okay), so not sure what is going on here. Needs more investigation.
Update Aug 30 5pm: Turns out the problem here was a flaky elbow connector I used to pipe the high voltage to the PI PZT - it had an intermittent contact, which meant the HV wasn't actually making it to the PZT. I rectified this and was immediately able to see the signal. Played around with the dark fringe Michelson for a while, trying to lock the homodyne phase by generating a dither line, but had no success with a simple loop shape. Probably needs more tuning of the servo shape (some boosts, notches, etc.) and also the dither/demod settings themselves (frequency, amplitude, post-mixer LPF, etc.). At least the setup can now be worked on interferometrically.
Clearly this "riff raff" is referring to me. It won't help today I guess but there is one each on the carts holding the SR785 (currently both in the office/electronics bench area), and the only other unit available in the lab is connected to a Prologix box on the Marconi inside the PSL enclosure.
The Prologix GPIB-ethernet dongle needs +8-13 V to run. Some riff raff has removed the adapter and I was thunderstruck to see that it had not been returned.
I set up to do the WFS head modifications today, but I was shot down in flames due to a missing AC/DC adapter.
I did the usual hunt around the lab looking for something with the right specs and connector. I found one that could do +9V and had the right connector, but it didn't light up the dongle, so I put it back in the black SP table.
I'll order a couple of these (5 ordered for delivery on Wednesday) in case there's a hot demand for the jack / plug combo that this one has. The setup is in the walkway, but I returned the AS table to the usual state and made sure the IMC is locking well.
I lowered the (FAST) PZT gain on the IMC/FSS servo today.
I noticed that the MC locks looked unstable a lot of the day, and during lock the PCDRIVE channel is above 1 Vrms (which means the loop is oscillating, typically at the PZT/EOM crossover frequency).
I changed the default setting from 22 dB to 19 dB in the PSL Settings screen so the mcup script will use this for now. Feel free to revert if this turns out to be a Fluke (which you would think is a terrible name for a company, but...)
Just a quick set of notes detailing changes so that there are no surprises, more details to follow.
I briefly tried some LO PZT mirror dithering tonight, but didn't see the signal. Needs more troubleshooting.
Mr Fred Goodbar of Konacrane was in the lab 830am-1130am today. All three cranes in the VEA were inspected, loaded with 450lb test weights, and declared in good working condition and safe to use.
The interferometer subsystems appear normal after the inspection.
I unboxed the Trek amplifier today, and performed some basic tests of the functionality. It seems to work as advertised. However, we may not have specified the configuration correctly - the model seems to be configured to drive a bipolar output of +/- 125 V DC, whereas for PZT driving applications, we would typically want a unipolar drive signal. From reading the manual, it appears to me that we cannot configure the unit to output 0-250V DC, which is what we'd want for general PZT driving applications. I will contact them to find out more.
The tests were done using the handheld precision voltage source for now. I drove the input between 0 to +5 V and saw an output voltage (at DC) of 0-250 V. This is consistent with the voltage gain being 50V/V as is stated in the manual, but how am I able to get 250 V DC output even though the bipolar configuration is supposed to be +/- 125 V? On the negative side, I am able to see 50V/V gain from 0 to -1 V DC, beyond which making the input voltage more negative does nothing to the output. The unit is supposed to accept a bipolar input of +/- 10 V DC or AC, so I'm pretty sure I'm not doing anything crazy here...
Okay based on the markings on the rear panel, the unit is in fact configured for unipolar output. What this means is we will have to map the +/- 10 V DC output from the DAC to 0-5 V DC. Probably, I will stick to 0-2.5 V DC for a start, to not exceed 125 V DC to the PI PZT. I'm not sure what the damage spec is for that. The Noliac PZT I think can do 250 V DC no problem. Good thing I have the inverting summing amplifier coming in tomorrow...
Attachment #1 is a summary of the current to each coil on the suspensions. The situation is actually a little worse than I remembered - several coils are currently drawing in excess of 10mA. However, most of this is due to a YAW correction, which can be fixed somewhat more easily than a PIT correction. So I think the circuit with a gain of 31 for an input range of +/-10 V, which gives us the ability to drive ~12mA per coil through a 25kohm series resistor, will still provide sufficient actuation range. As far as the HV supplies go, we will want something that can do +/- 350 V. Then the current to the coils will at most be ~50 mA per optic. The feedback path will require roughly the same current. The quiescent draw of each PA95 is ~10mA. So per SOS suspension, we will need ~150mA.
If it turns out that we need to get more current through the 25kohm series resistance, we may have to raise the voltage gain of the circuit. Reducing the series resistance isn't a good option as the whole point of the circuit is to be limited by the Johnson noise of the series resistance. Looking at these numbers, the only suspension on which we would be able to plug in a HV coil driver as is (without a vent to correct for YAW misalignment) is ITMY.
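For reference, the arithmetic behind the ~150 mA figure (a sketch; one PA95 per coil and 4 face coils per optic are my assumptions):

V_in, gain, R_series = 10.0, 31, 25e3   # +/-10 V input, voltage gain, series resistor
I_coil = gain * V_in / R_series          # ~12.4 mA max per coil
n_coils = 4                              # face coils per SOS optic (my assumption)
I_dc = n_coils * I_coil                  # ~50 mA for the slow/alignment path
I_fast = I_dc                            # the feedback path needs roughly the same
I_quiescent = n_coils * 10e-3            # ~10 mA quiescent draw per PA95
print(f"~{(I_dc + I_fast + I_quiescent) * 1e3:.0f} mA per suspension")
# ~140 mA, consistent with the ~150 mA quoted above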
Update 2 Sep 2020 2100: I confirmed today that the number reported in the EPICS channel, and the voltage across the series resistor, do indeed match up. The test was done on the MC3 coil driver as it was exposed and I didn't need to disable any suspensions. I used a Fluke DMM to measure the voltage across the resistor. So there is no sneaky factor of 2 as far as the Acromag DACs are concerned (unlike the General Standards DAC).
I found that the control MEDM screen was left open on the c1vac workstation. This should be closed every time you leave the workstation, to avoid accidental button pressing and such.
The network outage meant that the EPICS data from the pressure gauges wasn't recorded until I reset everything ~noon. So there isn't really a plot of the outgassing/leak rate. But the pressure rose to ~2e-4 torr, over ~4 hours. The pumpdown back to nominal pressure (9e-6 torr) took ~30 minutes.
Listing some talking points from the last week of activity here.
I re-plotted the MCMC results as semi-transparent lines so that the more probable curves stick out.
This also reveals what is behind the extreme sparsity in the aLIGO simulation results (In the previous post).
There seems to be some bi-stability/phase transition/whatever in the aLIGO simulation. The aLIGO transfer functions are very sensitive to one or more of the DOFs. Not sure which yet.
I'm leaving the lab shortly. We're not ready to switch over the vac equipment to the new UPS units yet.
The 120V UPS is now running and interfaced to c1vac via a USB cable. The unofficial tripplite python package is able to detect and connect to the unit, but then read queries fail with "OS Error: No data received." The firmware has a different version number from what the developers say is known to be supported.
The 230V UPS is actually not correctly installed. For input power, it has a standard C14 inlet connector which is currently plugged into a 120V power strip. However, this unit has to be powered from a 230V outlet. We'll have to identify and buy the correct adapter cable.
With the 120V unit now connected, I can continue to work on interfacing it with python remotely. The next implementation I'm going to try is item #2 of this plan [ELOG 15446].
I'm in the lab this morning to interface the two new UPS units with the digital controls system. Will be out by lunchtime. The disruptions to the vac system should be very brief this time.
A more careful analysis has revealed some stability problems. I see ~2 Vpp oscillations at the high-voltage output, at frequencies ranging from ~600 kHz to ~1.5 MHz depending on the requested output voltage, in a variety of different conditions (see details). My best guess for why this is happening is insufficient phase margin in the open-loop gain of the PA95 high voltage amplification stage, which causes oscillations to show up in the closed loop. I think we can fix the problem by using a larger compensation capacitor, but if anyone has a better suggestion, I'm happy to consider it.
The changes I wanted to make to the measurement posted earlier in this thread were: (i) to measure the noise with a load resistor of 20 ohms (~OSEM coil resistance) connected, instead of the unloaded config previously used, and (ii) to measure the voltage noise on the circuit side (= TP5 on the schematic) with some high voltage output being requested. The point was to simulate conditions closer to what this board will eventually be used in, when it has to meet the requirement of <1pA/rtHz current noise at 100 Hz. The voltage divider formed by the 25 kohm series resistor and the 20 ohm OSEM coil simulated resistance makes it hopeless to measure this level of voltage noise using the SR785. On the other hand, the high voltage would destroy the SR785 (rated for 30 V max input). So I made a little Pomona box to allow me to do this measurement, see Attachment #1. Its transfer function was measured, and I confirmed that the DC high voltage was indeed blocked (using a Fluke DMM) and that the output of this box never exceeded ~1V, as dictated by the pair of diodes - all seemed okay.
Next, I wanted to measure the voltage noise with ~10mA current flowing through the output path - I don't expect to require more than this amount of current for our test masses. However, I noticed some strange features in the spectrum (viewed continuously on the SR785 using exponential averaging setting). Closer investigation using an oscilloscope revealed:
Some literature review suggested that the capacitor in the feedback path, C4 on the schematic, could be causing problems. Specifically, I think that having that capacitor in the feedback path necessitates the use of a larger compensation capacitor than the nominal 33pF value (which itself is higher than the 4.7pF recommended on the datasheet, based on experience with the ESD driver circuit this design is based on; oscillations were seen there too, but the topology is a bit different). As a first test of this idea, I removed the feedback capacitor, C4 - this seemed to do the trick: the oscillations vanished and I was able to drive the output between the high voltage supply rails. However, we cannot operate in this configuration because we need to roll off the noise gain for the input voltage noise of the PA95 (~6 nV/rtHz at 100 Hz would become ~200 nV/rtHz, which I confirmed using the SR785). Using a passive RC filter at the output of the PA95 (a.k.a. a "snubber" network) is not an option because we need to sum in the fast actuation path voltage at the output of the 25 kohm resistor.
Some modeling confirms this hypothesis, see Attachment #2. The quantity plotted is the open-loop gain of the PA95 portion of the circuit. If the phase reaches 0 degrees while the magnitude is above unity, the system goes unstable.
So my plan is to get some 470pF capacitors and test this idea out, unless anyone has better suggestions? I guess op-amps are usually compensated to be unconditionally stable, but in this case maybe the power op-amp is more volatile?
Need to think more about how to better characterize this noise. An estimate of the required actuation can be found here.
The mode-matching between the LO and AS beams is now ~50%. This probably isn't my best mode-matching work in the lab, but I think it's sufficient to start doing some other characterization, and we can hopefully squeeze out another 20-30% by putting the lenses on translation stages, tweaking the alignment, etc.
The main change was to increase the optical path length of the IFO AS path, see Attachment #1. This gave me some more room to put a lens and translate it.
Try the PRMI experiments again, now that I have some confidence that the beams are actually interfering.
See Attachment #3 for the updated spectra - the configuration is PRMI locked with the carrier resonant, and the homodyne phase is uncontrolled. There is now much better clearance between the electronics noise and the MICH signal as measured in the DCPDs. The "LO only" trace is measured with the PSL shutter closed, so the laser frequency isn't slaved to the IMC length. I wonder why the RIN (seen in the SUM channel) is different depending on whether the laser is locked to the IMC or not? The LO pickoff is before the IMC.
A single channel of this board was stuffed (and other channels partially populated). The basic tests passed, and nothing exploded! Even though this is a laughably simple circuit, it's nice that it works.
HV power supplies:
A pair of unused KEPCO BHK300-130 switching power supplies that I found in the lab were used for this test. I pulled the programmable cards out at the rear, and shorted the positive output of one unit to the negative of the other (with both shorted to the supply grounds as well), thereby creating a bipolar supply from these unipolar models. For the purposes of this test, I set the voltage and current limits to 100V DC, 10mA respectively. I didn't ramp up the supply voltage to the rated 300 V maximum. The setup is shown in Attachment #1.
No, there should be no unscheduled visits from any inspector, marshal, tech, or vendor. They all have to be escorted or they don't get in. If they have a problem with that, please give them my cell #.
For the ALS, in addition to the beat note spectrum, I think we need to know the loop gain used to feed back to the ETM to determine the true cavity length fluctuation. w/o ALS, the noise would be due only to the seismic noise, OSEM damping noise, and the IR-PDH residual. Those are all suppressed by the ALS loop, but then the ALS loop puts its sensing noise onto the cavity. So, if I'm thinking about this right, the ALS beat noise > 200 Hz doesn't matter so much to the CARM RMS. So the whitening seems to be doing well in the right spot, but we would like to have another boost in the green PDH to up the gain below ~300 Hz?
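To make that reasoning concrete, the decomposition I have in mind is the usual in-loop one (a toy sketch, not a real noise budget; the loop shape and noise levels are made up):

import numpy as np
f = np.logspace(0, 3, 500)               # Hz
G = (300.0 / f) ** 2                     # toy 1/f^2 ALS loop with UGF ~300 Hz (assumed)
x_free = 1e-9 / f ** 2                   # toy free-running cavity noise, m/rtHz
n_als = 1e-13 * np.ones_like(f)          # toy ALS sensing (beat) noise, m/rtHz
x_resid = np.abs(x_free / (1 + G)) + np.abs(n_als * G / (1 + G))
# below the UGF the residual approaches n_als (the loop imposes its sensing
# noise on the cavity); above the UGF it approaches x_free, which is why the
# beat noise out there contributes little to the CARM RMS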
With the chosen transimpedance of 300 ohms, in order to be able to see the shot noise of 10 mW of light in the digitized data streams, we'd need all 3 stages of whitening. If we want to be shot noise limited with 1 mW of LO light, we'd need to increase said transimpedance I think.
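Back-of-envelope version of that statement (a sketch; the 0.8 A/W responsivity and the few-uV/rtHz ADC noise floor are my assumptions):

import numpy as np
e = 1.602e-19                        # C
P, resp, Z = 10e-3, 0.8, 300.0       # W of light, A/W, ohms of transimpedance
i_shot = np.sqrt(2 * e * P * resp)   # ~5e-11 A/rtHz shot noise current
v_shot = i_shot * Z                  # ~15 nV/rtHz at the PD output
print(v_shot * 10 ** 3)              # three 15:150 Hz stages give x1000 above 150 Hz,
                                     # lifting this to ~15 uV/rtHz, above the ADC floor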
The measurements were taken with
Of course, it's unlikely we're going to be shot noise limited for any configuration in the short run. But this was also a test of
All 3 tests passed.
I finally managed to install a differential-receiving whitening board in 1Y2 - 4 channels are available at the moment. As I claimed, one stage of 15:150 Hz z:p whitening does improve the ALS noise a little, see Attachment #1. While the RMS (from 1kHz-0.5 Hz) does go down by ~10 Hz, this isn't really going to make any dramatic improvement to the 40m lock acquisition. Now we're really sitting on the unsuppressed EX laser noise above ~30 Hz. This measurement was taken with the arm cavities locked with POX/POY, and end lasers locked to the arm cavities with uPDH boxes as usual. This was just a test to confirm my suspicion; the whitening board is to be used for the air BHD channels, but when we get a few more stuffed, we can install it for the ALS channels too.
A technician came to the lab today at ~4pm. He entered the VEA (with booties and goggles), and also the clean and bake lab. The whole procedure lasted ~10 minutes. I did not follow him around, but was available in the control room throughout the process. I think the whole episode went without incident.
BTW, this guy didn't ring the doorbell, I just happened to be here when he came by. I don't know if this is usual practice - are we happy with the technicians entering the VEA and/or clean and bake labs without supervision? AFAIK, this wasn't scheduled.
Gabriele left the DataRay beam profiler + peripherals (see Attachment #1) in his office. I picked them up just now and brought them over to the 40m.
Yesterday I completed the switchover of small turbo pump interlocks as proposed in ELOG 15499. This overhaul altogether eliminates the dependency on RS232 readbacks, which had become unreliable (glitchy) in both controllers. In their place, the V4(5) valve-close interlocks are now predicated on an analog controller output whose voltage goes high when the rotation speed is >= 80% of the nominal setpoint. The critical speed is 52.8 krpm for TP2 and 40 krpm for TP3. There already exist hardware interlocks of V4(5) using the same signals, which I have also tested.
Unlike the TP1 controller, which exposes simple relays whose open/closed states are sensed by Acromags, what the TP2(3) controllers output is an energized 24V signal for controlling such a relay (output circuit pictured below). I hadn't appreciated this difference and it cost me time yesterday. The ultimate solution was to route the signals through a set of new 24V Phoenix Contact relays installed inside the Acromag chassis. However, this required removing the chassis from the rack and bringing it to the electronics bench (rather than doing the work in situ, as I had planned). The relays are mounted to the second DIN rail opposite the Acromags. Each TP2(3) signal controls the state of a relay, which in turn is sensed using an Acromag XT1111.
The TP2(3) "normal-speed" signals are already in use by hardware interlocks of V4(5). Each signal is routed into the main AC relay box, where it controls an "interrupter" relay through which the Acromag control signal for the main V4(5) relay is passed. These signals are now shared with the digital controls system using a passive DB15 Y-splitter. The signal routing is shown below.
The new turbo-pump-related interlock conditions and their channel predicates are listed below. The full up-to-date channel list and wiring assignments for c1vac are maintained here.
There are two new channels, both of which provide a binary indication of whether the pump speed is outside its nominal range. I did not have enough 24V relays to also add the C1:Vac-TP2(3)_fail channels listed in ELOG 15499. However, these signals are redundant with the existing interlocks, and the existing serial "Status" readback will already print failure messages to the MEDM screens. All of the TP2(3) serial readback channels remain, which monitor voltage, current, operational status, and temperature. The pump on/off and low-speed mode on/off controls remain implemented with serial signals as well.
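The predicate logic itself amounts to something like this (a sketch only; apart from following the C1:Vac- naming pattern, the channel names below are hypothetical - the real ones are in the c1vac channel list, and the actual service code lives in the vacuum repo):

from epics import caget, caput

def tp2_speed_interlock():
    """Close V4 unless TP2 is at >= 80% of its nominal rotation speed."""
    if not caget('C1:Vac-TP2_norm'):     # hypothetical binary speed-normal channel
        caput('C1:Vac-V4', 'CLOSE')      # hypothetical valve control channel/value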
The new analog readbacks have been added to the MEDM controls screens, circled below:
Vacuum work is completed. The TP2 and TP3 interlocks have been overhauled as proposed in ELOG 15499 and seem to be performing reliably. We're now back in the nominal system state, with TP2 again serving as the backing pump for TP1 and TP3 pumping the annuli. I'll post the full implementation details in the morning.
I did not get to setting up the new UPS units. That will have to be scheduled for another day.
The vac system is going down now for planned repairs [ELOG 15499]. It will likely take most of the day. Will advise when it's back up.
That's great news - we won't have to worry about a new timing fanout for the two new machines, c1bhd and c1sus2. And there's no plan to change Dolphin IPC drivers. The plan is only to install the same (older) version of the driver on the two new machines and plug into free slots in the existing switch.
The new Dolphin would eventually help us. But the installation is an invasive change to the existing system and should be done at the installation stage of the 40m BHD.
I added the EPICS channels for the c1omc model (gains, matrix elements, etc.) to the autoburt so that we have a record of these, since we expect these models to be running somewhat regularly now, and I also expect many CDS crashes.
There was a power outage ~30 mins ago that knocked out CDS, PSL etc. The lights in the office area also flickered briefly. Working on recovery now. The elog was also down (since nodus presumably rebooted), I restarted the service just now. Vacuum status seems okay, even though the status string reads "Unrecognized".
The recovery was complete at 1830 local time. Curiously, the EX NPRO and the doubling oven temp controllers stayed on, usually they are taken out as well. Also, all the slow machines and associated Acromag crates survived. I guess the interruption was so fleeting that some devices survived.
The control room workstation, zita, which is responsible for the IFO status StripTool display on the large TV screen, has some display driver issues I think - it crashed twice when I tried to change the default display arrangement (large TV + small monitor). It also wants to update to Ubuntu 18.04 LTS, but I decided not to for the time being (it is running Ubuntu 16.04 LTS). Anyways, after a couple of power cycles, the wall StripTools are up once again.
That's great. I wonder if we can also get away with not adding new Dolphin infrastructure. I'd really like to avoid changing any IPC drivers.
I believe we will use two new chassis at most. We'll replace c1ioo (a Sun machine) with a Supermicro, but we recycle the existing timing system.
When I tested Q3000 for aLIGO, the failure rate was pretty high. Let's get 10pcs.
Grrr. Let's repair the unit. Let's get help from Chub & Jordan.
Do you have a second unit in the lab to get by with for a while?
The "source" output of the SR785 has a DC offset of -6.66 V. I couldn't make this up.
Upshot is, this SR785 is basically not usable for TF measurements. I was using the unit to characterize the newly stuffed ISC whitening board. The initial set of measurements were sensible, and at some point, I started getting garbage data. Unclear what the cause of this is. AFAIK, we don't have any knob to tune the offset - adjusting the "offset" in the source menu, I can change the level of the offset, but only by ~1 V even if I apply an offset of 10 V. I also tried connecting the ground connection on the rear of the SR785 to the bench power supply ground, no change.
Do we have to send this in for repair?
See Attachments #1 and #2. We don't have any Q3000 QPDs in hand, at least not in the photodiode box stored in the clean optics cabinet at the south end. I also checked a cabinet along the east arm where we store some photodiodes - but didn't find any there either. The only QPDs we have in hand are the YAG-444-4AH, which I believe is what is used in the iLIGO WFS heads.
So how many do we want to get?
See Attachment #1. J8 was connected to a "LASTI timing slave" sitting in the rack that Chiara lives in - we don't use this for anything, and I confirmed that there was no effect on the RTCDS when I pulled that fiber out. The LASTI timing slave also had a blinky that was blinking when the fiber was plugged in - which I take to mean that the slot works.
Can we get away with just using these two available slots, J8 and J13? Do we really need three new expansion chassis?
Some tests done today:
All of these tests were done with the PRMI locked with carrier resonant in the recycling cavity (i.e. sidebands rejected to REFL port). I then actuated the BS length DOF with a sine wave at 311.1 Hz, 40 cts amplitude (corresponding to ~8 pm of peak-to-peak displacement).
While it would seem from these graphs that the RIN of the LO beam at these frequencies is rather high, it is because of the ADC noise. More whitening (to be installed in the coming days) will allow us to get a better estimate, should be ~1e-6 I think.
I was just playing today, still need to setup some more screens, DTT templates etc to do more tests in a convenient way.
Now, I can think about how to commission this setup interferometrically.
All the details are in E2000436, and documents linked from there, I think an elog would be much too verbose. In summary, a workable setup consisting of
Last night, I locked the PRMI with the carrier resonant, and convinced myself that the DCPD null stream was sensing the MICH degree of freedom (while it was locked on AS55_Q) with good SNR below ~60 Hz. Above ~60 Hz, in this configuration, the ADC noise was dominating, but by next week, I'll have a whitening board installed that will solve this particular issue. With the optical gain of MICH in this configuration, the ADC noise level was equivalent to ~500 nrad/rtHz of phase noise above ~60 Hz (plots later).
I fixed some stuff in the MCMC simulation:
1. Results are now plotted as shaded regions spanning the minimum to the maximum (see the sketch after this list). I tried making the shaded region the STD around the mean, but it doesn't look good on a log scale when the STD is bigger than the mean.
2. Added comparison with aLIGO. The OMCL diff and comm motions in A+ are both compared to the single OMCL DOF of aLIGO.
3. I fixed a serious error in the code that produced incorrect results.
4. Imbalances in the IFO such as differential arm loss are generated randomly at the beginning and stay fixed for the rest of the simulation instead of being treated as an offset.
5. The simulation now runs with maxtem=2. That is, TEM modes up to 2nd order are considered.
The results are attached.
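For item 1, the shading is produced along these lines (a self-contained sketch with stand-in data; the real code is in the repo notebook):

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
freqs = np.logspace(0, 3, 200)
poles = rng.uniform(30, 300, (50, 1))            # stand-in for the MCMC samples
tfs = np.abs(1.0 / (1 + 1j * freqs / poles))     # |TF|s, shape (n_samples, n_freq)
plt.fill_between(freqs, tfs.min(axis=0), tfs.max(axis=0), alpha=0.3)
plt.plot(freqs, np.median(tfs, axis=0))
plt.xscale('log'); plt.yscale('log')
plt.show()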
I have been working on analyzing the seismic data obtained from the 3 seismometers present in the lab. I noticed while looking at the combined time series and the gain plots of the 3 seismometers that there is some error in the calibration of the BS seismometer. The EX and the EY seismometers seem to be well-calibrated as opposed to the BS seismometer.
The calibration factors have been determined to be:
The seismometers each have 3 channels, i.e., X, Y, and Z, for measuring the displacements in all 3 directions. The X channels of the three seismometers should be more or less coherent in the absence of any seismic excitation, with the gain between all the similar channels being 1. The same holds for the Y and Z channels. After analyzing multiple datasets, it was observed that the values of all three channels of the BS seismometer differed very significantly from their corresponding channels in the EX and EY seismometers, and they were not well calibrated even in the region where they were found to be coherent.
Note: All the frequency domain plots have been calculated for a sampling rate of 32 Hz. The plots were found to be extremely coherent in a certain frequency range, i.e., ~0.1 Hz to 2 Hz, so this frequency range is used to understand the relative calibration errors. The spread around the curve comes from the error caused by coherence values differing from unity and from the averages performed for the Welch estimate. 9 averages have been performed for the following analysis, keeping in mind the needed frequency resolution (~0.01 Hz) and the accuracy of the power calculated at every frequency.
The gain in the given frequency range is ~3. The phase plot also shows 180 degrees as opposed to 0, so a negative sign is also required in the calibration factor. Thus the calibration factor for the Y channel of the BS seismometer should be around -3.
The mean value of the gain in the given frequency range is the desired calibration factor, and the error is the mean of the errors over the chosen gain dataset, which arise from the factors mentioned above.
Note: The standard error envelope plotted in the attached graphs is calculated as follows (see the sketch after this list):
1. Divide the data into n segments according to the resolution wanted for the Welch averaging to be performed later.
2. Calculate PSD for every segment (no averaging).
3. Calculate the standard error for every frequency bin by looking at the distribution formed by the n values obtained by taking that bin's value from every segment.
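In code, the recipe above is roughly (a sketch; scipy assumed):

import numpy as np
from scipy.signal import periodogram

def psd_with_error(x, fs, n_seg=9):
    """Steps 1-3 above: segment, per-segment PSD, per-bin standard error."""
    seg_len = len(x) // n_seg
    segs = np.asarray(x)[:n_seg * seg_len].reshape(n_seg, seg_len)   # step 1
    f, _ = periodogram(segs[0], fs=fs)
    psds = np.array([periodogram(s, fs=fs)[1] for s in segs])        # step 2
    stderr = psds.std(axis=0, ddof=1) / np.sqrt(n_seg)               # step 3
    return f, psds.mean(axis=0), stderr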
The BS seismometer is a different model than the EX and EY seismometers, which might be a major reason why we need a special calibration for the BS seismometer while EX and EY are fine. The sign flip in the BS-Y channel may cause a lot of errors in future data acquisition. The time series plots in Attachment #4 show an evident DC offset present in the data. All of the information mentioned above indicates that there is some electrical or mechanical defect present in the seismometer, and it may require a reset. Kindly let me know if and when the seismometer is reset so that I can calibrate it again.
That's great. I think we would like to figure out how to present this so that it's clear what the distribution of TFs is. Maybe we can plot the most likely curve as well as a shaded region indicating the 5% and 95% values?
I've pushed an MCMC simulation to the A+ BHD repo (filename MCMC_TFs.ipynb). The idea is to show how random offsets around the ideal IFO change the noise couplings of different DOFs to the readout.
and then we add the loops
Be aware that there is now a KEPCO HV supply that is energized, sitting on the floor immediately adjacent to the OMC rack, east of the AP table. It is currently set to 100 V DC, and a PI PZT installed on the AP table has its 3 elements energized by said supply (via an OMC piezo driver). I will post pictures etc. of the work from the last 10 days over the weekend.