BLACK - raw ground motion measured by the Guralp
MAGENTA - motion after passive STACIS (20 Hz harmonic oscillator with a Q~2)
GREEN - difference between ground and top of STACIS
YELLOW - EUCLID noise in air
BLUE - STACIS top motion with loop on (60 Hz UGF, 1/f^2 below 30 Hz)
CYAN - same as BLUE, w/ 10x lower noise sensor
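The passive curve in the legend can be tied to a simple model. As a hedged sketch (only f0 = 20 Hz and Q ~ 2 come from the legend; the single-resonance transmissibility form itself is an assumption for illustration):

```python
# Sketch of the passive STACIS stage as a damped harmonic oscillator.
# f0 and Q are taken from the legend above; the model form is assumed.
def transmission(f, f0=20.0, q=2.0):
    """|x_top / x_ground| for a damped harmonic oscillator."""
    u = f / f0
    return 1.0 / abs(1.0 - u**2 + 1j * u / q)

print(round(transmission(1.0), 3))    # ~1: ground motion passes below resonance
print(round(transmission(60.0), 3))   # ~0.12: roughly 1/f^2 isolation above f0
```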
The DFD was set up to measure the change in beatnote when excited. A long (128 in) cable runs from the SR785 near the DFD all the way to the X-end AUX, which it excites, and the DFD is monitored by the oscilloscope at the other end. This was completed on Friday. The wires and stand have been moved to the side, but the setup is still a bit chaotic. As of writing this post, there is still at least some minor issue with the setup, as we aren't getting the expected output.
[I will shortly update this elog with more pictures]
Edit: the SR785 was replaced by the AG 4395, and pictures added
We used a Python script to collect data from the SR785 remotely. The SR785 is now connected to the wifi network via Ethernet port 7.
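Remote SR785 readout of this kind is typically done through a GPIB-Ethernet bridge. The sketch below is a hedged stand-in, not the actual 40m script: the bridge command syntax (Prologix-style `++addr`), the port, and any instrument query string are assumptions left to the user.

```python
import socket

def parse_trace(ascii_dump):
    """Parse a comma-separated ASCII trace dump into a list of floats."""
    return [float(x) for x in ascii_dump.strip().split(',') if x]

def fetch_trace(host, gpib_addr, query, port=1234, timeout=10.0):
    """Send one query to the instrument behind a Prologix-style bridge
    and return the parsed, newline-terminated response."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.sendall(f'++addr {gpib_addr}\n'.encode())
        s.sendall((query + '\n').encode())
        buf = b''
        while not buf.endswith(b'\n'):
            buf += s.recv(4096)
    return parse_trace(buf.decode())
```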
Joe and I have taken control of the EPICS channels C1:PEM-Stacis_EEEX_geo and C1:PEM-Stacis_EEEY_geo, since we heard that they are no longer in use. We are currently using them to test the ability of the Snap camera code to read and write EPICS channels. Thus, the information being written to these channels is completely unrelated to their names or previous use. This is only temporary; we'll create our own channels for the camera code shortly (probably within the next couple of days).
I used the MC_L signal from the Mode Cleaner as the target signal, with GUR2_X as the witness. I observed good subtraction where the coherence is high, but noise was added in other frequency bands; I am not sure how to avoid that.
Please find attached the document containing the relevant plots.
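The subtraction described above can be illustrated with a least-squares FIR Wiener filter on synthetic data. This is a hedged sketch, not the actual 40m script; the function name, tap count, and signal model are mine:

```python
import numpy as np

def wiener_fir(witness, target, ntaps):
    """Least-squares FIR filter mapping witness -> target."""
    X = np.column_stack([np.roll(witness, k) for k in range(ntaps)])
    X[:ntaps] = 0.0                      # drop wrap-around samples
    coeffs, *_ = np.linalg.lstsq(X, target, rcond=None)
    return coeffs, X @ coeffs

rng = np.random.default_rng(0)
w = rng.standard_normal(4096)            # witness (e.g. a seismometer channel)
d = 0.8*w + 0.3*np.roll(w, 1) + 0.05*rng.standard_normal(4096)  # target
coeffs, pred = wiener_fir(w, d, ntaps=4)
print(np.var(d - pred) / np.var(d))      # coherent part removed -> << 1
```

Where the witness and target lose coherence, a filter like this can inject witness noise instead of removing it, which is consistent with the extra noise seen in the other bands.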
One of these V beam dumps was installed for the BH44 RF PD.
The rest are now stored in a box on the shelf along the Y arm, together with the RF PD mounts.
Joe, Alberto and Steve
We tested the gate valve V1 interlock by:
1. decelerating the rotation with the brake from the maglev controller unit,
2. turning the maglev controller off from the controller unit,
3. unplugging the 220 VAC plug from the wall socket.
None of the above actions triggered V1 to close. This needs to be corrected in the future.
The MEDM monitor screen of the maglev indicated the correct condition changes.
The Maglev has been running for 10 days with V1 closed. The pressure in the RGA region is 2e-9 torr on the CC4 cold cathode gauge.
Valve VM2 to the RGA-only volume was opened 6 days ago. The foreline pressure is still 2.2e-6 torr on CC2, with the small Varian turbo (~10 l/s).
Daily scans show a small improvement in the large amu 32 (oxygen) peak and the large amu 16, 17 and 18 (water) peaks.
The argon calibration valve on our Ar cylinder is leaking at a constant rate.
The good news is that there are no fragmented hydrocarbons in the spectrum.
The Maglev is soaked with water: it was sitting in the 40m for 4 years with Viton O-ring seals.
However, I cannot explain the large oxygen peak, and neither can Rai Weiss.
The Maglev scans indicate cleanliness and water. I'm ready to open V1 to the IFO.
V1 valve is open to IFO now. V1 interlock will be tested tomorrow.
Valve configuration: VAC NORMAL, with CRYO and the Maglev both pumping on the IFO.
The PR2 candidate V6-704/705 mirrors (qty 2) are now @Downs. Camille picked them up for the measurements.
To identify the mirrors, I labeled them (on the box) as M1 and M2. The HR side was also checked to be the side indicated by the arrow mark on the barrel; e.g., Attachment 1 shows the HR side up.
Camille@Downs measured the surfaces of M1 and M2 using the Zygo.
Joe and Alex are working on the computers. Our vacuum system is temporarily in the "all off" condition, meaning all valves are closed, so there is no pumping. CC1 = 1.6e-6 Torr.
The designated vacuum control laptop is troublesome to use. Joe finally fixed it, and I switched the valve configuration back to vacuum normal. The shutter is open.
Pump spool valves V5, V4 and V3 are sweating a lot; VM3 and VC2 not so much.
They are VAT valves F28-62887-03, -11, -14 and so on, ~15-16 years old.
I'm speculating that some plastic is aging and breaking down at the atmospheric (pneumatic) side of the valves.
The vacuum side is not affected, according to the vacuum pressure readings.
Maybe some condensation from the small turbos? No.
I'm looking for an identical valve to examine, but I cannot find one.
We are using industrial grade 99.96% nitrogen to actuate these valves.
The valves that are not affected are dry: VA6, V6, V7 and all the annuli.
Yes, our engineers are aware of this issue. They say:
The pneumatic actuator needs lubricant as the O-ring (Viton) slides in the cylinder. Without grease the O-ring would be abraded and leaking after relatively few cycles. The lubricant used in our pneumatic actuators is an emulsion of oil and Teflon flakes. Vibration, many cycles and sometimes high temperature lead to the separation of the oil and Teflon. That is apparently the issue you are seeing.
VAT is and has been testing and qualifying new lubricants, and this is one of the factors we are always looking to improve. The formula we used 15 years ago in these valves seems to have performed reasonably well. Our formula today should perform even better.
We realize this explanation does not help you with these existing valves, but 15 years of service is not too bad, is it?
Steve - NOTE: the bonnet seal is metal, so there is no way this oil can get into our vacuum (only if the bellows leaks).
CC1 5e-7 Torr; VC1 closed at 18:25; the IFO is not pumped; the RGA is in background mode.
At about 1am or so Yoichi and I opened VC1. CC1 had fallen to about 5e-5 torr.
I took back the VCO driver that Reetika brought over to the 40m from the PSL lab.
I measured the RF power output of the VCO driver box as a function of slider value. I measured using the Gigatronics handheld power meter, connected to the AOM side of the cable after the white Pasternack DC block.
* at low power levels, I believe the waveform is too crappy to get an accurate reading - that's probably why it looks non-monotonic.
* the meter has a sticker label on it saying 'max +20 dBm'. I went above +20 dBm, but I wonder if maybe the thing isn't linear up there...
I found the VCO driver that Rana asked me to locate inside the 40m. I already have one VCO from the PSL lab. I have now kept both of them inside the 40m lab (one on the cart on the side of the Y arm, and the other near the X-arm electronics table).
We hooked up the VCO Driver output to the MFD. We adjusted the levels with attenuators to match up to the Level 7 mixer that's being used.
The mixer output, which is the input to the SR560, goes into the XARM_COARSE_OUT channel, and the SR560 (AC coupled, low noise, G=1000, LP @ 1 kHz) 600 Ohm output goes into XARM_FINE_OUT.
We calibrated these channels by putting in a 10 mVpp sine wave at 0.22 Hz into the Wideband Input of the VCO Driver box (which has been calibrated to have 1.75 MHz/V for f < 1.6 Hz). This should correspond to 17.5 kHz_pp.
To increase the sensitivity, we also added a 140 ft. BNC cable to the setup. We also added some extra short cable to make the overall phase shift be ~90 deg and zero out the mixer output.
I used the time series data in DTT to then calibrate the channels by changing the GAIN field in their filter modules. So now the DAQ channels are both calibrated as 1 count/Hz.
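As a hedged check of the calibration arithmetic above (the numbers are from this entry; the helper name is mine): 10 mVpp into a 1.75 MHz/V input should swing the beat frequency by 17.5 kHz peak-to-peak, and the filter-module GAIN is rescaled so the injected line appears with that height in counts:

```python
WIDEBAND_CAL_HZ_PER_V = 1.75e6   # VCO wideband input calibration (f < 1.6 Hz)
V_PP = 10e-3                     # injected 0.22 Hz sine, 10 mVpp

# Expected peak-to-peak frequency excursion of the injected line
f_pp = WIDEBAND_CAL_HZ_PER_V * V_PP
print(f_pp)                      # 17500.0 Hz peak-to-peak

def rescaled_gain(old_gain, measured_counts_pp, hz_pp=f_pp):
    """New filter-module GAIN so the DAQ channel reads 1 count/Hz."""
    return old_gain * hz_pp / measured_counts_pp
```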
This is the 140 ft. MFD measurement of the VCO phase noise. It is open loop and so should be a good measurement. The RMS is 30 Hz integrated down to 2 mHz.
I don't know why this doesn't agree with Suresh's measurements of the same thing which uses the PLL feedback method.
In BLUE, I also plot the frequency noise measured by using a Stanford DS345 30 MHz func. generator. I think that this is actually the noise of the FD (i.e. the SR560 preamp) and not the DS345. Mainly, it just tells you that the PINK VCO noise measurement is a real measurement.
I calibrated it by putting in a 5 kHz_pp triangle wave on the sweep of the DS345 and counting the counts in DV.
This measurement pertains to the BL2002 VCO PLL unit.
Our goal is to measure the frequency fluctuations introduced by the VCO.
First, the VCO calibration was checked. It is -1.75 MHz per volt. The calibration data are below:
Next, we measured the transfer function between points A and B in the diagram below using the Stanford Research Systems SR785. This measurement was done with the loop opened just after the 1.9 MHz LPF, and again with the loop closed.
The ratio TF[open] / TF[closed] gives the total loop gain, as shown below:
The green curve is the transfer function with the loop open, and the red curve with the loop closed.
The gain shown below is the quotient TF[open]/TF[closed].
c) As can be seen from the graph above, the loop gain is >>1 from 0.1 to 300 Hz. Hence, over this range, the frequency noise of the VCO is just the product of the voltage noise and the VCO calibration factor.
d) The noise power at point B was measured and multiplied by the VCO calibration factor to yield dF(rms)/rtHz:
The green line with dots is the data.
The blue line is the RMS frequency fluctuation.
This corresponds to an arm length fluctuation of about 20 pm.
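Steps (c)-(d) can be sketched numerically: multiply the voltage ASD by the calibration factor to get the frequency ASD, then integrate its square to get the RMS. This is a hedged illustration; the flat 1e-7 V/rtHz level below is purely made up, not the measured spectrum:

```python
import numpy as np

K_VCO = 1.75e6                      # |VCO calibration|, Hz per volt

def freq_asd(v_asd):
    """Voltage ASD [V/rtHz] at point B -> frequency ASD [Hz/rtHz]."""
    return K_VCO * np.asarray(v_asd)

def rms_from_asd(f, asd):
    """Trapezoidal integral of ASD^2 over f, then square root."""
    asd2 = np.asarray(asd) ** 2
    return np.sqrt(np.sum(0.5 * (asd2[:-1] + asd2[1:]) * np.diff(f)))

f = np.linspace(0.1, 300.0, 1000)           # band where the loop gain >> 1
df = freq_asd(np.full_like(f, 1e-7))        # 0.175 Hz/rtHz, flat
print(rms_from_asd(f, df))                  # a few Hz RMS for this example
```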
Rich dropped by at around 3:00 PM today and picked up the VCO in Attachment #1 and left the note in Attachment #2 on Gautam's desk with the promise of bringing it back soon.
Steve, please begin the vent!!
We have followed the pre-vent checklist, and done everything except check the jam nuts (which Steve can do in the morning).
We are ready to vent, so Steve, please begin bringing us up to atmosphere first thing in the morning.
Here is a copy of the list, from the wiki:
I have turned off the high voltage supplies for PZT1 and PZT2. The OMC PZT high voltage supplies were already off, since we aren't really using them currently.
I have closed the PSL shutter, but have not put in a manual extra beam dump yet.
All systems go for vent!
Steve - EricQ will be here around 8am to help with the vent.
The old CC1 MKS cold cathode gauge randomly turns on and off. This makes the software interlock close VM1 to protect the RGA, so the closed-off RGA region pressure goes up, and the result is a distorted RGA scan.
The CC1 MKS gauge has been disconnected and VM1 opened. This reminds me that we should connect our interlocks to the CC1 Hornet pressure gauge.
Pumpdown 80 at 511 days and pd80b at 218 days.
Valve configuration: special vacuum normal; the annuli are not pumped, at 3 Torr; IFO pressure 7.4e-6 Torr at a vacuum envelope temperature of 22 +- 1 C.
We had to reboot the IOO VME crate right before lunch, as the DAQ wasn't working correctly, showing no real signals anymore, only strange noise. The framebuilder and everything else was working fine at that time.
As the other channels showed the same effect, we decided to reboot the crate, and everything was fine afterwards.
I've added a new tab for VMon under the SUS parent tab. I'm still working out the scale and units, but let me know if you think this is a useful addition. Here's a link to my summary page that has this tab: https://ldas-jobs.ligo.caltech.edu/~praful.vasireddy/1151193617-1151193917/sus/vmon/
I'll have another tab with VMon BLRMS up soon.
Also, the main summary pages should be back online soon after Max fixed a bug. I'll try to add the SUS/VMon tab to the main pages as well.
The dry pump of TP3 was replaced after 9.5 months of operation [45 mTorr d3].
The annuli are pumped.
Valve configuration: vac normal; IFO pressure 4.5e-5 Torr [1.6e-5 Torr d3] on the new ITcc gauge; the RGA is not installed yet.
Note how fast the pressure drops when the vent is short.
IFO pressure 1.7e-4 Torr on the new (not yet logged) cold cathode gauge. P1 < 7e-4 Torr.
Valve configuration: vac normal with the annuli closed off.
TP3 was turned off with a failing dry pump. It will be replaced tomorrow.
All time stamps are blank on the MEDM screens.
The Vacuum Operation Guide has been uploaded to the 40m wiki. This is an old master copy - not exact in terms of actual actions, but still a good guide to the logic.
Rana has promised to watch the N2 supply and change the cylinder when it is empty. I will be at Hanford next week.
As requested by the boss.
It would be nice to read the valve configuration from the EPICS screen C1:Vac-state_mon ... Current State: Vacuum Normal.
The Maglev is the main pump of our vacuum system below 500 mTorr.
Its long-term pressure has to be <500 mTorr.
Each chamber has its own annulus. These small volumes are independent from the main volume. Their pressures are <5 mTorr in the vac normal valve configuration.
CC1 = cold cathode gauge (low emission); pressure range: 1e-4 to 1e-10 Torr.
In the vac normal configuration, CC1 = 2e-6 Torr.
The N2 supply is regulated to 60-80 PSI output at the automatic cylinder changer.
Each cylinder's pressure will be measured before the regulator, and the two will be summed; a warning message is to be sent at 1000 PSI.
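The warning logic described above amounts to this minimal sketch (the 1000 PSI level is from this entry; the function name is mine):

```python
def n2_warning(cyl_a_psi, cyl_b_psi, threshold_psi=1000.0):
    """Sum both cylinder pressures, measured upstream of the regulator,
    and flag when the total drops to the warning level."""
    return (cyl_a_psi + cyl_b_psi) <= threshold_psi
```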
The base Acromag vacuum system is running and performing nicely. Here is a list of remaining questions and to-do items we still need to address.
[Jon, Gautam, Chub]
We continued the pumpdown of the IFO today. The main volume pressure has reached 1.9e-5 torr and is continuing to fall. The system has performed without issue all day, so we'll leave the turbos continuously running from here on in the normal pumping configuration. Both TP2 and TP3 are currently backing for TP1. Once the main volume reaches operating pressure, we can transition TP3 to pump the annuli. They have already been roughed to ~0.1 torr. At that point the speed of all three turbo pumps can also be reduced. I've finished final edits/cleanup of the interlock code and MEDM screens.
All the python code running on c1vac is archived to the git repo:
This includes both the interlock code and the serial device clients for interfacing with gauges and pumps.
We're still using the same base MEDM monitor/control screens, but they have been much improved. Improvements:
Note: The apparent glitches in the pressure and TP diagnostic channels are due to the interlock system being taken down to implement some of these changes.
While glancing at my Vacuum striptool, I noticed that the IFO pressure is 2e-4 torr. There was an "AC power loss" reported by C1Vac about 4 hours (14:07 local time) ago. We are investigating. I closed the PSL shutter.
Jon and I investigated at the vacuum rack. The UPS was reporting a normal status ("On Line"). Everything looked normal so we attempted to bring the system back to the nominal state. But TP2 drypump was making a loud rattling noise, and the TP2 foreline pressure was not coming down at a normal rate. We wonder if the TP2 drypump has somehow been damaged - we leave it for Chub to investigate and give a more professional assessment of the situation and what the appropriate course of action is.
The PSL shutter will remain closed overnight, and the main volume and annuli are valved off. We spun up TP1 and TP3 and decided to leave them on (though they have negligible load).
Overnight pressure trends don't suggest anything went awry after the initial interlock trip. A watchdog script that monitors the vacuum pressure and closes the PSL shutter if it exceeds some threshold needs to be implemented. Another pending task is to verify that the backup disk for c1vac is actually bootable and is a plug-and-play replacement.
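The pending watchdog could look like this minimal sketch. The threshold and the read/close callables are placeholders (in practice they would wrap EPICS reads and the shutter channel), not existing 40m code:

```python
def shutter_should_close(pressure_torr, threshold_torr=3e-3):
    """True when the main volume pressure exceeds the trip level."""
    return pressure_torr > threshold_torr

def watchdog_step(read_pressure, close_shutter, threshold_torr=3e-3):
    """One polling iteration: read the gauge and close the PSL shutter
    if the pressure is above threshold. Returns the reading."""
    p = read_pressure()
    if shutter_should_close(p, threshold_torr):
        close_shutter()
    return p
```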
There appears to have been some sort of vacuum failure.
ldas-pcdev1 was down, so the summary pages weren't being generated. I have now switched over to ldas-pcdev6. I suspect some forepump failure, will check up later today unless someone else wants to take care of this.
There was no interlock action, and I don't check the vacuum status every half hour, so there was a period last night when there was high circulating power in the arm cavities while the main volume pressure was higher than nominal. I have now closed the PSL shutter until the issue is resolved.
It looks like the main vacuum interlock was tripped due to a serial communication error from the TP2 controller. With Rana/Koji's permission, I will open V1 and expose the main volume to TP1 again (#2 in last section).
Recommended course of action:
I've created a purchase list of hardware needed to restore the aging vacuum system. This wasn't planned as part of the BHD upgrade, but I've added it to the BHD procurement list since hardware replacements have become necessary.
The list proposes replacing the aging TP3 Varian turbo pump with the newer Agilent model which has already replaced TP2. It seems I was mistaken in believing we already had a second Agilent pump on hand. A thorough search of the lab has not turned it up, and Steve himself has told me he doesn't remember ordering a second one. Fortunately Steve did leave us a detailed Agilent parts list [ELOG 14322].
It also proposes replacing the glitching TP2 Agilent controller with a new one. The existing one can be sent back for repair and then retained as a spare. Considering that one of these controllers is already malfunctioning after < 2 years, I think it's a very good idea to have a spare on hand.
Below is our current list of vacuum hardware issues. Items that this purchase list will address (limited to only the most urgent) are highlighted in yellow.
This afternoon Jordan is going to carry out a test of the V4 and V5 hardware interlocks. To inform the interlock improvement plan, we need to characterize exactly how these work (they pre-date the 2018 upgrade). I have provided him a sequence of steps for each test and will also be backing him up on Zoom.
We will close V1 as a precaution but there should be no other impact to the IFO. The tests are expected to take <1 hour. We will advise when they are completed.
This test has been completed. The IFO configuration has been reverted to nominal.
For future reference: yes, both the V4 and V5 hardware interlocks were found to still be connected and working. A TTL signal from the analog output port of each pump controller (TP2 and TP3) is connected to an auxiliary relay inside the main valve relay box. These serve to interrupt the (Acromag) control signal to the primary V4/V5 relays. The interrupt is triggered by each pump's R1 setpoint signal, which is programmed to go low when the rotation speed falls below 80% of the low-speed setting.
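The trip logic described above amounts to the following sketch, with illustrative numbers (the real signal is a TTL line from the controller, not a computed value):

```python
def r1_high(rotation_hz, low_speed_hz, fraction=0.80):
    """R1 setpoint TTL: high while speed >= 80% of the low-speed setting."""
    return rotation_hz >= fraction * low_speed_hz

def valve_control_passes(rotation_hz, low_speed_hz):
    """V4 (TP2) / V5 (TP3): the auxiliary relay passes the Acromag
    control signal only while the pump's R1 line is high."""
    return r1_high(rotation_hz, low_speed_hz)
```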
This happened again about 30,000 seconds ago (~2:06 pm local time, according to the logfile). The cited error was the same -
It's hard to believe there was any real power loss, as nothing else in the lab seems to have been affected, so I am inclined to suspect a buggy UPS communication channel. The PSL shutter was not closed; I believe the condition is for P1a to exceed 3 mtorr (it is at 1 mtorr right now), but perhaps this should be modified to close the PSL shutter in the event of any interlock tripping. It's also probably not a bad idea to send an email alert to the lab mailing list in the event of a vac interlock trip.
For tonight I only plan to work with the EX ALS system anyway, so I'm closing the PSL shutter. I'll work with Chub to restore the vacuum tomorrow if he deems it okay.
After getting the go ahead from Chub and Jon, I restored the Vacuum state to "Vacuum normal", see Attachment #1. Steps:
controls@c1vac:/opt/target/python/interlocks$ git diff interlock.py
diff --git a/python/interlocks/interlock.py b/python/interlocks/interlock.py
index 28d3366..46a39fc 100755
@@ -52,8 +52,8 @@ class Interlock(object):
         self.pumps = 
         for pump in interlocks['pumps']:
             pm = PumpManager(pump['name'])
-            for condition in pump['conditions']:
-                pm.register_condition(*condition)
+            #for condition in pump['conditions']:
+            #    pm.register_condition(*condition)
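For context, the `pm.register_condition` pattern being commented out works roughly like this hedged stand-in (the real PumpManager lives in the c1vac repo; the channel name and threshold here are illustrative only):

```python
class PumpManager:
    """Minimal stand-in for the interlock pump manager."""
    def __init__(self, name):
        self.name = name
        self.conditions = []

    def register_condition(self, channel, predicate):
        """Attach a (channel, ok-predicate) pair to this pump."""
        self.conditions.append((channel, predicate))

    def tripped(self, readings):
        """True if any registered condition fails for the given readings."""
        return any(not ok(readings[ch]) for ch, ok in self.conditions)

pm = PumpManager('TP2')
pm.register_condition('C1:Vac-TP2_rot', lambda rpm: rpm > 45000)
print(pm.tripped({'C1:Vac-TP2_rot': 20000}))   # True: interlock would act
```

Commenting out the registration loop, as in the diff, leaves the pump with no conditions, so it can never trip on the UPS channel.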
So far the pressure is coming down smoothly, see Attachment #2. I'll keep an eye on it.
PSL shutter was opened at 645pm local time. IMC locked almost immediately.
Update 11pm: The pressure has reached 8.5e-6 torr without hiccup.
I slightly cleaned up Gautam's disabling of the UPS-predicated vac interlock and restarted the interlock service. This interlock is intended to protect the turbo pumps after a power outage, but it has proven disruptive to normal operations with too many false triggers. It will be reenabled once a new UPS has been installed. For now, as it has been since 2001, the vac pumps are unprotected against an extended power outage.
The vac system is going down at 11 am today for planned maintenance:
We will advise when the work is completed.
This work is finally complete. The dry pump replacement was finished quickly but the controls updates required some substantial debugging.
For one, the mailer code I had been given to install would not run against Python 3.4 on c1vac, the version run by the vac controls since about a year ago. There were some missing dependencies that proved difficult to install (related to Debian Jessie becoming unsupported). I ultimately solved the problem by migrating the whole system to Python 3.5. Getting the Python keyring working within systemd (for email account authentication) also took some time.
Edit: The new interlock flag channel is named C1:Vac-interlock_flag.
Along the way, I discovered why the interlocks had been failing to auto-close the PSL shutter: The interlock was pointed to the channel C1:AUX-PSL_ShutterRqst. During the recent c1psl upgrade, we renamed this channel C1:PSL-PSL_ShutterRqst. This has been fixed.
The main volume is being pumped down, for now still in a TP3-backed configuration. As of 8:30 pm the pressure had fallen back to the upper 1E-6 range. The interlock protection is fully restored. Any time an interlock is triggered in the future, the system will send an immediate notification to 40m mailing list. 👍
I propose we go with ALL CAPS for all channel names. The lower case names are just a holdover from Steve/Alan from the '90s. All other systems are all caps.
It avoids us having to force them all to upper case in the scripts and channel lists.
The replacement was done this afternoon. The red "Replace Battery" indicator is no longer on.
Larry W came by the 40m, and reported that there was a campus-wide power glitch (he was here to check if our networking infrastructure was affected). I thought I'd check the status of the vacuum.
I decided to check the systemctl process status on c1vac:
controls@c1vac:~$ sudo systemctl status modbusIOC.service
● modbusIOC.service - ModbusIOC Service via procServ
Loaded: loaded (/etc/systemd/system/modbusIOC.service; enabled)
Active: active (running) since Thu 2019-01-03 14:53:49 PST; 11min ago
Main PID: 16533 (procServ)
├─16533 /usr/bin/procServ -f -L /opt/target/modbusIOC.log -p /run/...
├─16534 /opt/epics/modules/modbus/bin/linux-x86_64/modbusApp /opt/...
Jan 03 14:53:49 c1vac systemd: Started ModbusIOC Service via procServ.
Warning: Unit file changed on disk, 'systemctl daemon-reload' recommended.
So something did happen today that required a restart of the modbus processes. But clearly not everything came back up gracefully. A few lines of dmesg (there are many more segfaults):
[1706033.718061] python: segfault at 8 ip 000000000049b37d sp 00007fbae2b5fa10 error 4 in python2.7[400000+31d000]
[1706252.225984] python: segfault at 8 ip 000000000049b37d sp 00007fd3fa365a10 error 4 in python2.7[400000+31d000]
[1720961.451787] systemd-udevd: starting version 215
[1782064.269844] audit: type=1702 audit(1546540443.159:38): op=linkat ppid=21820 pid=22823 auid=4294967295 uid=1000 gid=1000 euid=1000 suid=1000 fsuid=1000 egid=1000 sgid=1000 fsgid=1000 tty=pts0 ses=4294967295 comm="git" exe="/usr/bin/git" res=0
[1782064.269866] audit: type=1302 audit(1546540443.159:39): item=0 name="/cvs/cds/caltech/target/c1vac/.git/objects/85/tmp_obj_uAXhPg" inode=173019272 dev=00:21 mode=0100444 ouid=1001 ogid=1001 rdev=00:00 nametype=NORMAL
[1782064.365240] audit: type=1702 audit(1546540443.255:40): op=linkat ppid=21820 pid=22823 auid=4294967295 uid=1000 gid=1000 euid=1000 suid=1000 fsuid=1000 egid=1000 sgid=1000 fsgid=1000 tty=pts0 ses=4294967295 comm="git" exe="/usr/bin/git" res=0
[1782064.365271] audit: type=1302 audit(1546540443.255:41): item=0 name="/cvs/cds/caltech/target/c1vac/.git/objects/58/tmp_obj_KekHsn" inode=173019274 dev=00:21 mode=0100444 ouid=1001 ogid=1001 rdev=00:00 nametype=NORMAL
[1782064.460620] audit: type=1702 audit(1546540443.347:42): op=linkat ppid=21820 pid=22823 auid=4294967295 uid=1000 gid=1000 euid=1000 suid=1000 fsuid=1000 egid=1000 sgid=1000 fsgid=1000 tty=pts0 ses=4294967295 comm="git" exe="/usr/bin/git" res=0
[1782064.460652] audit: type=1302 audit(1546540443.347:43): item=0 name="/cvs/cds/caltech/target/c1vac/.git/objects/cb/tmp_obj_q62Pdr" inode=173019276 dev=00:21 mode=0100444 ouid=1001 ogid=1001 rdev=00:00 nametype=NORMAL
[1782064.545449] audit: type=1702 audit(1546540443.435:44): op=linkat ppid=21820 pid=22823 auid=4294967295 uid=1000 gid=1000 euid=1000 suid=1000 fsuid=1000 egid=1000 sgid=1000 fsgid=1000 tty=pts0 ses=4294967295 comm="git" exe="/usr/bin/git" res=0
[1782064.545480] audit: type=1302 audit(1546540443.435:45): item=0 name="/cvs/cds/caltech/target/c1vac/.git/objects/e3/tmp_obj_gPI4qy" inode=173019277 dev=00:21 mode=0100444 ouid=1001 ogid=1001 rdev=00:00 nametype=NORMAL
[1782064.640756] audit: type=1702 audit(1546540443.527:46): op=linkat ppid=21820 pid=22823 auid=4294967295 uid=1000 gid=1000 euid=1000 suid=1000 fsuid=1000 egid=1000 sgid=1000 fsgid=1000 tty=pts0 ses=4294967295 comm="git" exe="/usr/bin/git" res=0
[1783440.878997] systemd: Unit serial_TP3.service entered failed state.
[1784682.147280] systemd: Unit serial_TP2.service entered failed state.
[1786407.752386] systemd: Unit serial_MKS937b.service entered failed state.
[1792371.508317] systemd: serial_GP316a.service failed to run 'start' task: No such file or directory
[1795550.281623] systemd: Unit serial_GP316b.service entered failed state.
[1796216.213269] systemd: Unit serial_TP3.service entered failed state.
[1796518.976841] systemd: Unit serial_GP307.service entered failed state.
[1796670.328649] systemd: serial_Hornet.service failed to run 'start' task: No such file or directory
[1797723.446084] systemd: Unit serial_MKS937b.service entered failed state.
I don't know enough about the new system so I'm leaving this for Jon to debug. Attachment #3 shows that the analog readout of the P1 pressure gauge suggests that the IFO is still under vacuum, so no random valve openings were effected (as expected, since we valved off the N2 line for this very purpose).
The vac controls are going down now to pull and test software changes. Will advise when the work is completed.