ID   Date   Author   Type   Category   Subject
  14310   Tue Nov 20 13:13:01 2018   gautam   Update   VAC   IMC alignment is okay

I checked the IMC alignment following the vent, for which the manual beam block placed on the PSL table was removed. The alignment is okay: after a minor touch-up, the MC trans was ~1200 cts, which is roughly what it was pre-vent. I've closed the PSL shutter again.

  14318   Mon Nov 26 15:58:48 2018   Steve   Update   VAC   Vent 81

Gautam, Aaron, Chub & Steve,

ETMY heavy door replaced by light one.

We did the following: measured 950 particles/cf·min of 0.5 micron at the SP table; wiped the crane and its cable; wiped the chamber; placed the heavy door on a clean merostat-covered stand; dry-wiped the o-rings; and isopropanol-wiped the aluminum light cover.

Quote:

Gautam, Aaron, Chub and Steve,

Quote:

Vent 80 is nearly complete; the instrument is almost at atmosphere. All four ion pump gate valves have been disconnected, though the position sensors are still connected, and all annulus valves are open. The controllers of TP1 and TP3 have been disconnected from AC power. VC1 and VC2 have been disconnected and must remain closed. Currently, the RGA is being vented through the needle valve; the RGA itself had been shut off at the beginning of the vent preparations. VM1 and VM3 could not be actuated. The condition status is still listed as Unidentified because of the disconnected valves.

Vent 81 is complete.

The 4 ion pumps and the cryo pump are at ~1-4 torr (estimated, as we have no gauges there); all other parts of the vacuum envelope are at atmosphere. The P2 & P3 gauges are out of order.

V1 and VM1 are in a locked state. We suspect this is because of some interlock logic.

TP1 and TP3 controllers are turned off.

Valve conditions are as shown: ready to be opened, closed, moved, or rewired. To re-iterate: VC1, VC2, and the ion pump valves shouldn't be re-connected during the vac upgrade.

Thanks for all of your help.

 

  14322   Tue Nov 27 17:06:51 2018   Steve   Configuration   VAC   Agilent 84FS turbo installed as TP2

Chub & Steve,

We swapped out the Varian V70D "bear-can" turbo for its factory-clean replacement.

The new Agilent TwisTorr 84 FS turbo pump [model X3502-64002, s/n IT17346059] was installed with its intake screen, fan, and vent valve, together with the controller [model 3508-64001, s/n IT1737C383] and a larger IDP-7 drypump [model X3807-64010, s/n MY17170019].

Next things to do:

  1. implement a hardware interlock to close V4 when the rotation speed drops to 80% of the "standby" rotation speed, estimated to be ~40,000 RPM (standby = 50k RPM)
  2. set up an isolation valve in the foreline of TP2 with a delayed start of the IDP-7, and/or use a relay to power the drypump - this turbo controller cannot switch the drypump on or off [Agilent isolation valve #X3202-60055, with position indicator, pneumatic actuation, 115V solenoid]. On second thought, we do not need the isolation valve if we go with the relay option; the IDP-7 has a built-in delay of 10-15 sec
  3. test performance of new turbo
  14367   Wed Dec 19 14:19:15 2018   Koji   Summary   VAC   Plan for pumpdown test

We still need a detailed test procedure to be posted.

12/19 Wed

  • Jon continues to work on valve actuator tests.
  • Chub continues to work on wiring / fixing wiring.
  • At the end of the day Jon is going to send out a notification email of "GO"/"NO GO" for pumping.

 

12/20 Thu

  • 9AM: Start closing the two doors unless Jon gives us a NO GO sign.
  • 10AM: Start pumping down
    • Test roughing pump capability via new control system
    • (Independently) Test the turbo spin-up procedure. This time we will not open the gate valve between TP1 and the main volume, because we want to manage the load on the backing turbos while we gradually open the gate valve. That will take several more hours, so we will not be able to finish this test by the end of Thursday.
    • At the end of the procedure, we isolate the main volume, stop all the pumps, and vent the roughing pumps to protect them from oil backstreaming.

gautam: Koji and I were just staring at the vacuum screen, and realized that the drypumps, which are the backing pumps for TP2 and TP3, are not reflected on the MEDM screen. This should be rectified.

Steve also mentioned that the new small turbo controller does not directly interface with the drypump. So we need some system to delay the starting of the turbo itself, once the drypump has been engaged. Does this system exist?

Attachment 1: Screenshot_from_2018-12-19_14-49-34.png
Screenshot_from_2018-12-19_14-49-34.png
  14370   Wed Dec 19 21:14:50 2018   gautam   Update   VAC   Pumpdown tomorrow

I just spoke to Jon, who asked me to make this elog - we will be ready to test one or more parts of the pumpdown procedure tomorrow (12/20), so we should proceed as planned to put the heavy doors back on the EY and OMC chambers at 9am tomorrow morning. Jon will circulate a more detailed procedure about the pumpdown steps later this evening.

  14372   Thu Dec 20 08:38:27 2018   Jon   Update   VAC   Pumpdown tomorrow

Linked is the pumpdown procedure, contained in the old 40m documentation. The relevant procedure is "All Off --> Vacuum Normal" on page 11.

Quote:

I just spoke to Jon, who asked me to make this elog - we will be ready to test one or more parts of the pumpdown procedure tomorrow (12/20), so we should proceed as planned to put the heavy doors back on the EY and OMC chambers at 9am tomorrow morning. Jon will circulate a more detailed procedure about the pumpdown steps later this evening.

 

  14373   Thu Dec 20 10:28:43 2019   gautam   Update   VAC   Heavy doors back on for pumpdown 82

[Chub, Koji, Gautam]

We replaced the EY and IOO chamber heavy doors by 10:10 am PST. Torquing was done in two rounds: first at 25 ft-lb, then at 45 ft-lb (we trust the calibration on the torque wrench, but how reliable is this? And how important are these numbers in ensuring a smooth pumpdown?). All went smoothly. The interior of the IOO chamber was found to be dirty when Koji ran a wipe along some surfaces.

For this pumpdown, we aren't so concerned with having the IFO in an operating state as we will certainly vent it again early next year. So we didn't follow the full close-up checklist.

Jon, Chub, and Koji are working on starting the pumpdown now... So that we would not have to wear laser safety goggles while closing doors and pumping down, I turned off all the 1064 nm lasers in the lab.

  14377   Fri Dec 21 11:13:13 2018   gautam   Omnistructure   VAC   N2 line valved off

Per the discussion yesterday, I valved off the N2 line in the drill press room at 11 am PST this morning, so as to avoid any accidental software-induced gate-valve actuation during the holidays. The line pressure is steadily dropping...

Attachment #1 shows that while the main volume pressure was stable overnight, the pumpspool pressure has been steadily rising. I think this is to be expected, as the turbo pumps aren't running and the valves can't preserve the <1 mtorr pressure over long timescales.

Attachment #2 shows the current VacOverview MEDM screen status.

Attachment 1: VacGauges.png
VacGauges.png
Attachment 2: Screenshot_from_2018-12-21_13-02-06.png
Screenshot_from_2018-12-21_13-02-06.png
  14379   Fri Dec 21 12:57:10 2018   Koji   Omnistructure   VAC   N2 line valved off

Independent question: Are all the turbo forelines vented automatically? We manually did it for the main roughing line.

 

  14380   Thu Jan 3 15:08:37 2019   gautam   Omnistructure   VAC   Vac status unknown

Larry W came by the 40m, and reported that there was a campus-wide power glitch (he was here to check if our networking infrastructure was affected). I thought I'd check the status of the vacuum.

  • Attachment #1 is a screenshot of the Vac overview MEDM screen. Clearly something has gone wrong with the modbus process(es). Only the PTP2 and PTP3 gauges seem to be communicative.
  • Attachment #2 shows the minute trend of the pressure gauges for a 12-day period - it looks like there is some issue with the frame builder clock; perhaps this issue resurfaced? But checking the system time on FB doesn't suggest anything is wrong. I double-checked with dataviewer as well that the trends don't exist. Checking the status of the individual daqd processes, however, showed that the dates were off by 1 year, so I just restarted all of them and now the time seems correct. How can we fix this problem more permanently? Also, the P1b readout looks suspicious - why are there periods where it seems like we are reading values better than the LSB of the device?

I decided to check the systemctl process status on c1vac:

controls@c1vac:~$ sudo systemctl status modbusIOC.service
● modbusIOC.service - ModbusIOC Service via procServ
   Loaded: loaded (/etc/systemd/system/modbusIOC.service; enabled)
   Active: active (running) since Thu 2019-01-03 14:53:49 PST; 11min ago
 Main PID: 16533 (procServ)
   CGroup: /system.slice/modbusIOC.service
           ├─16533 /usr/bin/procServ -f -L /opt/target/modbusIOC.log -p /run/...
           ├─16534 /opt/epics/modules/modbus/bin/linux-x86_64/modbusApp /opt/...
           └─16582 caRepeater

Jan 03 14:53:49 c1vac systemd[1]: Started ModbusIOC Service via procServ.

Warning: Unit file changed on disk, 'systemctl daemon-reload' recommended.

So something did happen today that required restart of the modbus processes. But clearly not everything has come back up gracefully. A few lines of dmesg (there are many more segfaults):

[1706033.718061] python[23971]: segfault at 8 ip 000000000049b37d sp 00007fbae2b5fa10 error 4 in python2.7[400000+31d000]
[1706252.225984] python[24183]: segfault at 8 ip 000000000049b37d sp 00007fd3fa365a10 error 4 in python2.7[400000+31d000]
[1720961.451787] systemd-udevd[4076]: starting version 215
[1782064.269844] audit: type=1702 audit(1546540443.159:38): op=linkat ppid=21820 pid=22823 auid=4294967295 uid=1000 gid=1000 euid=1000 suid=1000 fsuid=1000 egid=1000 sgid=1000 fsgid=1000 tty=pts0 ses=4294967295 comm="git" exe="/usr/bin/git" res=0
[1782064.269866] audit: type=1302 audit(1546540443.159:39): item=0 name="/cvs/cds/caltech/target/c1vac/.git/objects/85/tmp_obj_uAXhPg" inode=173019272 dev=00:21 mode=0100444 ouid=1001 ogid=1001 rdev=00:00 nametype=NORMAL
[1782064.365240] audit: type=1702 audit(1546540443.255:40): op=linkat ppid=21820 pid=22823 auid=4294967295 uid=1000 gid=1000 euid=1000 suid=1000 fsuid=1000 egid=1000 sgid=1000 fsgid=1000 tty=pts0 ses=4294967295 comm="git" exe="/usr/bin/git" res=0
[1782064.365271] audit: type=1302 audit(1546540443.255:41): item=0 name="/cvs/cds/caltech/target/c1vac/.git/objects/58/tmp_obj_KekHsn" inode=173019274 dev=00:21 mode=0100444 ouid=1001 ogid=1001 rdev=00:00 nametype=NORMAL
[1782064.460620] audit: type=1702 audit(1546540443.347:42): op=linkat ppid=21820 pid=22823 auid=4294967295 uid=1000 gid=1000 euid=1000 suid=1000 fsuid=1000 egid=1000 sgid=1000 fsgid=1000 tty=pts0 ses=4294967295 comm="git" exe="/usr/bin/git" res=0
[1782064.460652] audit: type=1302 audit(1546540443.347:43): item=0 name="/cvs/cds/caltech/target/c1vac/.git/objects/cb/tmp_obj_q62Pdr" inode=173019276 dev=00:21 mode=0100444 ouid=1001 ogid=1001 rdev=00:00 nametype=NORMAL
[1782064.545449] audit: type=1702 audit(1546540443.435:44): op=linkat ppid=21820 pid=22823 auid=4294967295 uid=1000 gid=1000 euid=1000 suid=1000 fsuid=1000 egid=1000 sgid=1000 fsgid=1000 tty=pts0 ses=4294967295 comm="git" exe="/usr/bin/git" res=0
[1782064.545480] audit: type=1302 audit(1546540443.435:45): item=0 name="/cvs/cds/caltech/target/c1vac/.git/objects/e3/tmp_obj_gPI4qy" inode=173019277 dev=00:21 mode=0100444 ouid=1001 ogid=1001 rdev=00:00 nametype=NORMAL
[1782064.640756] audit: type=1702 audit(1546540443.527:46): op=linkat ppid=21820 pid=22823 auid=4294967295 uid=1000 gid=1000 euid=1000 suid=1000 fsuid=1000 egid=1000 sgid=1000 fsgid=1000 tty=pts0 ses=4294967295 comm="git" exe="/usr/bin/git" res=0
[1783440.878997] systemd[1]: Unit serial_TP3.service entered failed state.
[1784682.147280] systemd[1]: Unit serial_TP2.service entered failed state.
[1786407.752386] systemd[1]: Unit serial_MKS937b.service entered failed state.
[1792371.508317] systemd[1]: serial_GP316a.service failed to run 'start' task: No such file or directory
[1795550.281623] systemd[1]: Unit serial_GP316b.service entered failed state.
[1796216.213269] systemd[1]: Unit serial_TP3.service entered failed state.
[1796518.976841] systemd[1]: Unit serial_GP307.service entered failed state.
[1796670.328649] systemd[1]: serial_Hornet.service failed to run 'start' task: No such file or directory
[1797723.446084] systemd[1]: Unit serial_MKS937b.service entered failed state.

 

I don't know enough about the new system so I'm leaving this for Jon to debug. Attachment #3 shows that the analog readout of the P1 pressure gauge suggests that the IFO is still under vacuum, so no random valve openings were effected (as expected, since we valved off the N2 line for this very purpose).

Attachment 1: Screenshot_from_2019-01-03_15-19-51.png
Screenshot_from_2019-01-03_15-19-51.png
Attachment 2: Screenshot_from_2019-01-03_15-14-14.png
Screenshot_from_2019-01-03_15-14-14.png
Attachment 3: 997B13A9-CAAF-409C-A6C2-00414D30A141.jpeg
997B13A9-CAAF-409C-A6C2-00414D30A141.jpeg
  14383   Fri Jan 4 10:25:19 2019   Jon   Omnistructure   VAC   N2 line valved off

Yes, for TP2 and TP3. They both have a small vent valve that opens automatically on shutdown.

Quote:

Independent question: Are all the turbo forelines vented automatically? We manually did it for the main roughing line.

 

 

  14391   Wed Jan 9 11:07:09 2019   gautam   Update   VAC   New Vac channel logging

Looks like I didn't restart all the daqd processes last night, so the data was not in fact being recorded to frames. I just restarted everything, and it looks like the data for the last 3 minutes are being recorded. Is it reasonable that the TP1 current channel is reporting 0.75 A of current draw now, when the pump is off? Also, the temperature readback of TP3 seems a lot jumpier than that of TP2 - probably because the old controller has fewer ADC bits or something, but perhaps the SMOO needs to be adjusted.

Quote:
 

Gautam and I updated the framebuilder config file, adding the newly-added channels to the list of those to be logged.

Attachment 1: Screenshot_from_2019-01-09_11-08-28.png
Screenshot_from_2019-01-09_11-08-28.png
  14393   Wed Jan 9 20:01:25 2019   Jon   Update   VAC   Second pumpdown completed

[Jon, Koji, Chub, Gautam]

Summary

The second pumpdown with the new vacuum system was completed successfully today. A time history is attached below.

We started with the main volume still at 12 torr from the Dec. pumpdown. Roughing from 12 to 0.5 torr took approximately two hours, at which point we valved out RP1 and RP3 and valved in TP1 backed by TP2 and TP3. We additionally used the AUX dry pump connected to the backing lines of TP2 and TP3, which we found to boost the overall pump rate by a factor of ~3. The manual hand-crank valve directly in front of TP1 was used to throttle the pump rate, to avoid tripping software interlocks. If the crank valve is opened too quickly, the pressure differential between the main volume (TP1 intake) and TP1 exhaust becomes >1 torr, tripping the V1 valve-close interlock. Once the main volume pressure reached 1e-2 torr, the crank valve could be opened fully.
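
The differential-pressure condition described above could be implemented as a simple polling check in the interlock code. Here is a minimal sketch, assuming hypothetical channel names for the two pressure readbacks and the V1 control (the real names live in the c1vac EPICS database):

import epics  # pyepics

P_DIFF_MAX = 1.0  # torr; max allowed differential across TP1 before V1 must close

def tp1_differential_ok():
    """Return False (and close V1) if the TP1 intake/exhaust differential is too large."""
    p_intake = epics.caget('C1:Vac-P1a_pressure')    # main volume / TP1 intake (hypothetical name)
    p_exhaust = epics.caget('C1:Vac-PTP1_pressure')  # TP1 exhaust (hypothetical name)
    if p_intake is None or p_exhaust is None:
        return True  # no reading; don't actuate on missing data
    if abs(p_exhaust - p_intake) > P_DIFF_MAX:
        epics.caput('C1:Vac-V1_state', 0)  # command V1 closed (hypothetical channel)
        return False
    return True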

We allowed the pumpdown to continue until reaching 9e-4 torr in the main volume. At this point we valved off the main volume, valved off TP2 and TP3, and then shut down all turbo pumps/dry pumps. We will continue pumping tomorrow under the supervision of an operator. If the system continues to perform problem-free, we will likely leave the turbos pumping on the main volume and annuli after tomorrow.

New Vac Control Station

We installed a local controls terminal for the vacuum system on the desk in front of the vacuum rack (pictured below). This console is connected directly to c1vac and can be used to monitor/control the system even during a network outage or power failure. The entire pumpdown was run from this station today.

To open a controls MEDM screen, open a terminal and execute the alias

$control

Similarly, to open a monitor-only MEDM screen, execute the alias

$monitor
Attachment 1: Screenshot_from_2019-01-09_20-00-39.png
Screenshot_from_2019-01-09_20-00-39.png
Attachment 2: IMG_3088.jpg
IMG_3088.jpg
  14394   Thu Jan 10 10:23:46 2019   gautam   Update   VAC   overnight leak rate

Overnight, the pressure increased from 247 uTorr to 264 uTorr over a period of 30000 seconds. Assuming an IFO volume of 33,000 liters, this corresponds to an average leak rate of ~20 uTorr L / s. It'd be interesting to see how this compares with the spec'd leak rates of the Viton o-ring seals and valves, and with expected outgassing rates. The two channels in the screenshot monitor the same pressure from the same sensor: the top pane is a digital readout, while the bottom is a calibrated analog readout that is subsequently digitized into the CDS system.
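
As a cross-check of the arithmetic, here is the same estimate as a few lines of Python (all numbers taken from the text above):

# Leak-rate estimate from the overnight pressure rise
dP = (264 - 247) * 1e-6  # pressure rise [torr]
V = 33000                # assumed IFO volume [liters]
dt = 30000               # elapsed time [s]
rate = dP * V / dt       # [torr*L/s]
print('%.1f uTorr*L/s' % (rate * 1e6))  # prints 18.7, i.e. ~20 uTorr*L/s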

Quote:
 

We allowed the pumpdown to continue until reaching 9e-4 torr in the main volume. At this point we valved off the main volume, valved off TP2 and TP3, and then shut down all turbo pumps/dry pumps. We will continue pumping tomorrow under the supervision of an operator. If the system continues to perform problem-free, we will likely leave the turbos pumping on the main volume and annuli after tomorrow.

Attachment 1: OvernightLeak.png
OvernightLeak.png
  14395   Thu Jan 10 11:32:40 2019   Chub   Update   VAC   Manual valve interfaced with CDS

Connected the manual gate valve status indicator to the Acromag box this morning. Labeled the temporary cable (a 50-ft 9-pin D-sub; will order a properly sized cable shortly) and the panel RV2.

  14396   Thu Jan 10 19:59:08 2019   Jon   Update   VAC   Vac System Running Normally on Turbo Pumps

[Jon, Gautam, Chub]

Summary

We continued the pumpdown of the IFO today. The main volume pressure has reached 1.9e-5 torr and is continuing to fall. The system has performed without issue all day, so we'll leave the turbos continuously running from here on in the normal pumping configuration. Both TP2 and TP3 are currently backing for TP1. Once the main volume reaches operating pressure, we can transition TP3 to pump the annuli. They have already been roughed to ~0.1 torr. At that point the speed of all three turbo pumps can also be reduced. I've finished final edits/cleanup of the interlock code and MEDM screens.

Python Code

All the python code running on c1vac is archived to the git repo: 

https://git.ligo.org/40m/vacpython

This includes both the interlock code and the serial device clients for interfacing with gauges and pumps.

MEDM Monitor/Control

We're still using the same base MEDM monitor/control screens, but they have been much improved. Improvements:

  • Valves now light up in red when they are open. This makes it much easier to see at a glance what is valved in/out.
  • Every pump in the system (except CP1) is now digitally controlled from the MEDM control screen. No more need to physically push any buttons in the vacuum rack. 👍
  • The turbo pumps now show additional diagnostic readouts: speed (TP1/2/3), temperature (TP2/3), current draw (TP1/2/3), and voltage (TP2/3).
  • The foreline pressure gauge readouts for TP2/3 have been added to the digital system.
  • The two new main volume gauges, Hornet and SuperBee, have been added to the digital system as well.
  • New transducers have been added to read back the two N2 tank pressures.
  • The interlock code generates a log file of all its actions. A field in the MEDM screens specifies the location of the log file.
  • A tripped interlock (appearing as a message in the "Error message" field) must be manually cleared via the "Clear error message" button on the control screen before the system will accept any more manual valve input.

Note: The apparent glitches in the pressure and TP diagnostic channels are due to the interlock system being taken down to implement some of these changes.

Attachment 1: Screen_Shot_2019-01-10_at_7.58.24_PM.png
Screen_Shot_2019-01-10_at_7.58.24_PM.png
Attachment 2: CCs.png
CCs.png
Attachment 3: TPs.png
TPs.png
  14398   Mon Jan 14 10:06:53 2019   gautam   Update   VAC   Vent 82 complete

[chub, gautam]

  • IFO pressure was ~2e-4 torr when we started, on account of the interlock code closing all valves because the N2 line pressure dropped below threshold (<65 psi)
  • Chub fixed the problem on the regulator in the drill-press area where the N2 tanks are, the N2 line is now at ~75 psi so that we have the ability to actuate valves if we so desire
  • We decided that there is no need to vent the pumpspool this time - avoiding an unnecessary turbo landing, so the pumpspool is completely valved off from the main volume and TPs 1-3 are left running
  • Went through the pre-vent checklist:
    • Chub measured the particle count and deemed it to be okay (I think we should re-locate the particle counter to near 1X8, because that is where the air enters the IFO anyway; that way, we can hook it up to the serial device server and have a computerized record of this number as we had in the past, instead of writing it down in a notebook)
    • Checked that the PSL was manually blocked from entering the IFO
    • Walked through the lab, visually inspected Jam Nuts and window covers, all was deemed okay
  • Moved 2 tanks of N2 into the lab on account of the rain
  • Started the vent at ~930am PST
    • There were a couple of short bursty increases in the pressure as we figured out the right valve settings but on average, things are rising at approx the same rate as we had in vent 81...
    • There was a rattling noise coming from the drypump that serves as the forepump for TP2 (Agilent) - it turned out to be the plastic shell/casing on the drypump; moreover, the TP2 diagnostics (temperature, current, etc.) are all normal.
    • The CC1 gauge (Hornet) is supposed to auto-shutoff its high voltage when the pressure exceeds 10 mtorr, but it was reporting pressures in the 1 mtorr range even when the adjacent Pirani was at 25 torr. To avoid risk of damage, we manually turned the HV off. There needs to be a python script that can be executed to transition the Hornet between its remote and local control modes (a sketch follows after this list); we had to power cycle the gauge because it wouldn't give us local control over the HV.
    • Transitioned from N2 to dry air at P1a = 25 torr. We had some trouble finding the correct regulator (left-handed thread) for the dry air cylinders; it was stored in a cabinet labelled "green optics".
    • Disconnected dry air from VV1 intake once P1b reached 700 torr, to let lab air flow into the IFO and avoid overpressuring.
    • VA* and VAV* valves were opened so as to vent the annuli as we anticipate multiple chamber openings for this vent.
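
A minimal sketch of what such a remote/local script for the Hornet could look like, using the same TCP-socket model as the other serial devices on c1vac. The IOLAN host/port and the command mnemonics below are placeholders - the actual strings must be taken from the Hornet manual:

import socket

HORNET_ADDR = ('192.168.114.22', 4002)  # hypothetical IOLAN host/port for the Hornet

def hornet_cmd(cmd):
    """Send one command string to the Hornet through the IOLAN serial server."""
    with socket.create_connection(HORNET_ADDR, timeout=2) as s:
        s.sendall((cmd + '\r').encode())
        return s.recv(64).decode().strip()

# Placeholder mnemonics, to be checked against the Hornet manual:
print(hornet_cmd('#01RS'))   # e.g. release the gauge to local control
print(hornet_cmd('#01HV0'))  # e.g. turn the cold-cathode HV off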

As of 8pm local time, the IFO seems to have equilibriated to atmospheric pressure (I don't hear the hiss of in-rushing air near 1X8 and P1a reports 760 torr). The pumpspool looks healthy and there are no signs in the TP diagnostics channels that anything bad happened to the pumps. Chub is working on getting the N2 setup more robust, we plan to take the EY door off at 9am tomorrow morning with Bob's help.

* I took this opportunity to follow the instructions on pg 29 of the manual and set the calibration of the SuperBee Pirani gauge to 760 torr, so that it is in better agreement with our existing P1a Pirani gauge. The correction was ~8% (820 --> 760).

Attachment 1: Vent82Summary.png
Vent82Summary.png
  14406   Fri Jan 18 17:44:14 2019   gautam   Update   VAC   Pumping on RGA volume

Steve came by the lab today and looked at the status of the upgraded vacuum system. He recommended pumping on the RGA volume, since it has not been pumped on for ~3 months on account of the vacuum upgrade. The procedure (recorded so we may script this operation in the future; a sketch follows the list) was:

  1. Start with the pumpspool completely isolated from the main IFO volume.
  2. Open V5, pump down the section between V5 and VM3. Keep an eye on PTP3.
  3. Open VM3, keep an eye on P4. It was reporting ~10 mtorr, went to "LO".
  4. Close VM3 and V5, transition pumping of the RGA volume to TP1 which is backed by TP2 (we had to open V4 as all valves were closed due to an N2 pressure drop event).
  5. Open VM2.
  6. Watch CC4.
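
Since we'd like to script this eventually, here is a minimal sketch of the sequence. The valve channel naming is hypothetical, and the gauge-watching steps are left as comments since the thresholds would need to be agreed on:

import epics  # pyepics

def caput_valve(valve, state):
    epics.caput('C1:Vac-%s_state' % valve, state)  # hypothetical channel naming

def pump_rga_volume():
    # Step 1 is a precondition: pumpspool already isolated from the main IFO volume.
    caput_valve('V5', 1)    # step 2: pump the V5-VM3 section; keep an eye on PTP3
    caput_valve('VM3', 1)   # step 3: watch P4 until it reads "LO"
    caput_valve('VM3', 0)   # step 4: transition RGA pumping to TP1 backed by TP2
    caput_valve('V5', 0)
    caput_valve('V4', 1)    #         (V4 needed if all valves closed after an N2 event)
    caput_valve('VM2', 1)   # step 5: expose the RGA volume to TP1
    # step 6: watch CC4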

CC4 pressure has been steadily falling. Steve recommends leaving things in this state over the weekend. He recommends also turning the RGA unit on so that the temperature rises and there is a bakeout of the RGA. The temperature may be read off manually using a probe attached to it.

Attachment 1: CC4.png
CC4.png
  14410   Sun Jan 20 23:41:00 2019   Jon   Omnistructure   VAC   Notes on vac serial comm, adapter wiring

I've attached my handwritten notes covering all the serial communications in the vac system and the relevant wiring for all the adapters, etc. I'll work with Chub to produce final documentation, but in the meantime this may be a useful reference.

Attachment 1: Jon_wiring_notes.tar.gz
  14412   Tue Jan 22 20:45:21 2019   gautam   Update   VAC   New N2 setup

The N2 ran out this weekend (again with no reminder email, but I haven't found the time to set up the Python mailer yet). So all the valves Steve and I had opened closed (rightly so - that's what the interlocks are supposed to do). Chub will post an elog about the new N2 valve setup in the drill-press room, but we now have sufficient pressure in the N2 line again, so Chub and I re-opened the valves to keep pumping on the RGA.

  14419   Fri Jan 25 16:14:51 2019   gautam   Update   VAC   Vacuum interlock code, N2 warning

I reset the remote of this git repo to the 40m version instead of Jon's personal one, to ensure consistency between what's on the vacuum machine and in the git repo. There is now an N2 checker python mailer that will email the 40m list if all the tank pressures are below 600 PSI (>12 hours for someone to react before the main N2 line pressure drops and the interlocks kick in). For now, the script just runs as a cron job every 3 hours, but perhaps we should integrate it with the interlock process? A sketch of the checker is below.
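
A minimal sketch of such a checker, runnable from cron (e.g. every 3 hours). The tank-pressure channel names and mail addresses are placeholders; the real ones live on c1vac:

import smtplib
from email.mime.text import MIMEText

import epics  # pyepics

TANKS = ['C1:Vac-N2T1_pressure', 'C1:Vac-N2T2_pressure']  # hypothetical channel names
THRESHOLD = 600.0  # PSI

def check_n2():
    pressures = [epics.caget(ch) for ch in TANKS]
    if all(p is not None and p < THRESHOLD for p in pressures):
        msg = MIMEText('N2 tank pressures: %s PSI. ~12 hours to react.' % pressures)
        msg['Subject'] = 'N2 tanks below 600 PSI'
        msg['From'] = 'c1vac@example.org'   # placeholder address
        msg['To'] = '40m-list@example.org'  # placeholder address
        smtp = smtplib.SMTP('localhost')    # assumes a local mail relay on c1vac
        smtp.send_message(msg)
        smtp.quit()

if __name__ == '__main__':
    check_n2()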

Quote:

All the python code running on c1vac is archived to the git repo: 

https://git.ligo.org/40m/vacpython

  14429   Sat Feb 2 21:53:24 2019   Koji   Update   VAC   overnight leak rate

The pressure of the main volume increased from ~1 mtorr to 50 mtorr over the past 24 hours (86 ksec). This rate is about 1000x the number reported on Jan 10. Do we suspect a vacuum leak?

Quote:

Overnight, the pressure increased from 247 uTorr to 264 uTorr over a period of 30000 seconds. Assuming an IFO volume of 33,000 liters, this corresponds to an average leak rate of ~20 uTorr L / s.

 

Attachment 1: Screen_Shot_2019-02-02_at_21.49.33.png
Screen_Shot_2019-02-02_at_21.49.33.png
  14430   Sun Feb 3 15:15:21 2019   gautam   Update   VAC   overnight leak rate

I looked into this a bit today. Did a walkthrough of the lab, didn't hear any obvious hissing (makes sense, that presumably would signal a much larger leak rate).

Attachment #1: Data from the 30 ksec we had the main vol valved off on Jan 10, but from the gauges we have running right now (the CC gauges have not had their HV enabled yet so we don't have that readback).

Attachment #2: Data from ~150 ksec from Friday night till now.

Interpretation: The number quoted from Jan 10 is from the cold-cathode gauge (~20 utorr increase). In the same period, the Pirani gauge reports an increase of ~5 mtorr (=250x the number reported by the cold-cathode gauge). So which gauge do we trust more in this regime? Additionally, the rates at which the annuli pressures are increasing seem consistent between Jan 10 and now, at ~100 mtorr every 30 ksec.

I don't think this is conclusive, but at least the leak rates between Jan 10 and now don't seem that different for the annuli pressures. Moreover, for the Jan 10 pumpdown, we had the IFO at low pressure for several days over the Christmas break, which presumably gave time for some outgassing that was cleaned up by the TPs on Jan 10, whereas for this current pumpdown, we don't have that luxury.

Do we want to do a systematic leak check before resuming the pumpdown on Monday? The main differences in vacuum I can think of are

  1. Two pieces of Kapton tape are now in the EY chamber.
  2. Possible residue from cleaning solvents in the IY and EY chambers is still outgassing.

This entry by Steve says that the "expected" outgassing rate is 3-5 mtorr per day, which doesn't match either the current observation or that from Jan 10.

Attachment 1: Jan10_data.png
Jan10_data.png
Attachment 2: Feb1_data.png
Feb1_data.png
  14431   Sun Feb 3 20:52:34 2019   Koji   Update   VAC   overnight leak rate

We can pump down (or vent) annuli. If this is the leak between the main volume and the annuli, we will be able to see the effect on the leak rate. If this is the leak of an  outer o-ring, again pumping down (or venting) of the annuli should temporarily decrease (or increase) the leak rate..., I guess. If the leak rate is not dependent on the pressure of the annuli, we can conclude that it is internal outgassing.

  14432   Mon Feb 4 12:23:24 2019   gautam   Update   VAC   pumpdown 83 - leak tests

[koji, gautam]

As planned, we valved off the main volume and the annuli from the turbo pumps at ~7:30 PM PST. At this time, the main volume pressure was 30 uTorr. It started rising at a rate of ~200 uTorr/hr, which translates to ~5 mtorr/day - in the ballpark of what Steve said is "normal". However, the calibration of the Hornet gauge seems to be piecewise-linear (see Attachment #1), so we will have to observe overnight to get a better handle on this number.

We decided to vent the IY and EY chamber annular volumes, and to check whether this made any dramatic change in the rate of main-volume pressure increase, which would presumably signal a leak from the outside. We saw no such increase - so right now, the working hypothesis is still that the main volume pressure increase is driven by outgassing of something inside the vacuum.

Let's leave things in this state overnight - V1 and V5 closed so that neither the main volume nor the annuli are being pumped, and get some baseline numbers for what the outgassing rate is.

Attachment 1: PD83.png
PD83.png
  14434   Tue Feb 5 10:11:30 2019   gautam   Update   VAC   leak tests complete, pumpdown 83 resumed

I guess we forgot to close V5, so we were indeed pumping on the ITMY and ETMY annuli. The other three annuli, which were isolated, suggest a leak rate of ~200-300 mtorr/day, see Attachment #1 (consistent with my earlier post).

As for the main volume - according to CC1, the pressure saturates at ~250 uTorr and is stable, while the Pirani P1a reports ~100x that pressure. I guess the cold-cathode gauge is supposed to be more accurate at low pressures, but how well do we believe the calibration on either gauge? Either way, based on last night's test (see Attachment #2), we can set an upper limit of 12 mtorr/day. This is 2-3x the number Steve said is normal, but perhaps this is down to the fact that outgassing from the main volume is higher immediately after a vent and in-chamber work. It is also a 5x lower rate of pressure increase than what was observed on Feb 2.

I am resuming the pumpdown with the turbo pumps; let's see how long we take to get down to the nominal operating pressure of 8e-6 torr - it usually takes ~1 week. V1, VASV, VASE and VABS were opened at 10:30 am PST. Per Chub's request (see #14435), I ran RP1 and RP3 for ~30 seconds; he will check if the oil level has changed.

Quote:
 

Let's leave things in this state overnight - V1 and V5 closed so that neither the main volume nor the annuli are being pumped, and get some baseline numbers for what the outgassing rate is.

Attachment 1: Annuli.png
Annuli.png
Attachment 2: MainVol.png
MainVol.png
  14436   Tue Feb 5 19:30:14 2019   gautam   Update   VAC   Main volume at 20 uTorr

Pumpdown looks healthy, so I'm leaving the TPs on overnight. At some point, we should probably get the RGA going again. I don't know that we have a "reference" RGA trace to which we can compare the scans - should check with Steve. The high power (1 W) beam has not yet been sent into the vacuum; before that, we should probably add the interlock condition that closes the PSL shutter.

Attachment 1: PD83.png
PD83.png
  14438   Thu Feb 7 13:55:25 2019   gautam   Update   VAC   RGA turned on

[chub, steve, gautam]

Steve came by the lab today. He advised us to turn the RGA on again, now that the main volume pressure is < 20 uTorr. I did this by running the RGAset.py script on c0rga. The temperature of the unit was 22 C in the morning; after ~3 hours of the filament being on, it has already risen to 34 C. Steve says this is normal. We also opened VM1 (I had to edit interlocks.yaml to allow VM1 to open when CC1 < 20 uTorr instead of 10 uTorr), so that the RGA volume is exposed to the main volume. The nightly scans should run now; Steve suggests ignoring the first few while the pumpdown is still reaching nominal pressure. Note that we probably want to migrate all the RGA stuff to the new c1vac machine.

Other notes from Steve:

  • RP1 and RP3 should have their oil fully changed (as opposed to just topped up)
  • VABSSCI and VABSSCO are NOT vent valves - they isolate the annuli of the IOO and OMC chambers from the BS chamber annulus. So next time we vent, we should fix this!
  • Leak rate of 3-5 mTorr/day is "normal" once the system has been pumped for a few days. Steve agrees that our observations of the main volume pressure increase is expected, given that we were at atmosphere.
  • Regarding the upcoming CES construction
    • Steve recommends keeping the door along the east arm, as it is useful for bringing equipment into the lab (end door access is limited because of end optical tables)
    • Particle counter data logging should be resumed before the construction starts, so that we can monitor if the lab is getting dirtier
  • OSEM filters (new ones, i.e. made according to the specs in D000209) are in the clean cabinet (EX). They are individually packaged in little capsules, see Attachment #1. So the ones I installed were actually of 2002 vintage. We have 50 pcs, enough to install new ones on all the core optics + spares.
  14440   Thu Feb 7 19:28:46 2019   gautam   Update   VAC   IFO recovery

[rana, gautam]

The full 1 W is again being sent into the IMC. We have left the PBS+HWP combo installed, as Rana pointed out that it is good to have polarization control after the PMC but before the EOM. The G&H mirror setup used to route a pickoff of the post-EOM beam along the east edge of the PSL table to the AUX laser beat setup was deemed too flaky and has been bypassed. Centering on the steering mirror and subsequently on the IMC REFL photodiode was done using an IR viewer - this technique allows one to geometrically center the beam on the steering mirror and PD to the resolution of the eye, whereas the voltage-maximization technique using the monitor port and an o'scope doesn't. The nominal IMC transmission of ~15,000 counts has been recovered, and the IMC REFL level is around 0.12, consistent with the pre-vent levels.

  14452   Thu Feb 14 15:37:35 2019   gautam   Update   VAC   Vacromag failure

[chub, gautam]

Summary:

One of the XT1111 units (XT1111a) in the new vacuum system has malfunctioned. So all valves are closed, PSL shutter is also closed, until this is resolved.

Details:

  1. Chub alerted me he had changed the main N2 line pressure, but this did not show up in the trend data. In fact, the trend data suggested that all 3 N2 gauges had stopped logging data (they just held the previous value) since sometime on Monday, see Attachment #1.
  2. We verified that the gauges were being powered, and that the analog voltage outputs of the gauges made sense, in the drill press room ---> so this suggested something was wrong with the electronics at the vacuum rack.
  3. Went to the vacuum rack, saw no obvious indicator lights signalling a fault.
  4. So I restarted the modbus process on c1vac using sudo systemctl restart modbusIOC.service. The way Jon has this set up, this service controls all the sub-processes talking to gauges and TPs, so restarting this master process should have brought everything back.
  5. This tripped the interlock, and all valves got closed.
  6. Once the modbus service restarted, most things came back normally. However, V1, V3, V4 and V5 readbacks were listed as "UNDEF".
  7. The way the interlock code works, it checks a valve state-change request against the monitor channel, so none of these valves could be opened (a sketch of this check appears after this list).
  8. We confirmed that the valves themselves were operational, by bypassing the interlock logic and directly actuating the valves - but this is not a safe way of running overnight, so we decided to shut everything down.
  9. We also confirmed that the problem is with one particular Acromag unit - switching the readback Dsub connector to another channel (e.g. V1 --> VM2) showed the expected readback.
  10. As a further check, I connected a Windows laptop with the Acromag software installed to the suspected XT1111 - it reported an error message saying "USB device may be damaged". Plugging into another XT1111 in the crate, I was able to access the unit in the normal way.
  11. The phoenix connector architecture of the Acromags makes it possible to replace this single unit (we have spare XT1111 units) without disturbing the whole system - so barring objections, we plan to do this at 9am tomorrow. The replacement plan is summarized in Attachment #2.
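
For reference, the request-vs-readback check described in item 7 might look like the following sketch (the channel naming is hypothetical; the real interlock code on c1vac is the authority):

import time

import epics  # pyepics

def set_valve(name, state, timeout=5.0):
    """Request a valve state change and confirm it on the monitor channel."""
    epics.caput('C1:Vac-%s_state' % name, state)  # request channel (hypothetical name)
    deadline = time.time() + timeout
    while time.time() < deadline:
        readback = epics.caget('C1:Vac-%s_status' % name)  # monitor channel (hypothetical name)
        if readback == state:
            return True
        time.sleep(0.1)
    raise RuntimeError('%s readback UNDEF/inconsistent - refusing to proceed' % name)

This is why an UNDEF readback channel blocks all actuation of the affected valves.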

Pressure of the main volume seems to have stabilized - see Attachment #3, so it should be fine to leave the IFO in this state overnight.

Questions:

  1. What caused the original failure of the writing to the ADC channels hooked up to the N2 gauges? There isn't any logging set up for the modbus processes afaik.
  2. What caused the failure of the XT1111? What is the failure mode even? Because some other channels on the same XT1111 are working...
  3. Was it user error? The only operation I carried out was restarting the modbus services - how did this damage the readback channels for just four valves? I think Chub also re-arranged some wires at the end, but unplugging/re-connecting some cables shouldn't produce this kind of response...

The whole point of the upgrade was to move to a more reliable system - but seems quite flaky already.

Attachment 1: Screenshot_from_2019-02-14_15-40-36.png
Screenshot_from_2019-02-14_15-40-36.png
Attachment 2: IMG_7320.JPG
IMG_7320.JPG
Attachment 3: Screenshot_from_2019-02-14_20-43-15.png
Screenshot_from_2019-02-14_20-43-15.png
  14453   Thu Feb 14 18:16:24 2019   Jon   Update   VAC   Vacromag failure

I sent Gautam instructions to first try stopping the modbus service, power cycling the Acromag chassis, then restarting the service. I've seen the Acromags go into an unresponsive state after a strong electrical transient or shorted signal wires, and the unit has to be power cycled to be reset.

If this doesn't resolve it, I'll come in tomorrow to help with the Acromag replacement. We have plenty of spares.

Quote:

[chub, gautam]

Summary:

One of the XT1111 units (XT1111a) in the new vacuum system has malfunctioned. So all valves are closed, PSL shutter is also closed, until this is resolved.

 

  14456   Fri Feb 15 11:58:45 2019   Jon   Update   VAC   Vac system is back up

The problem encountered with the vac controls was indeed resolved via the recommendation I posted yesterday. The Acromags had gone into a protective state (likely caused by an electrical transient in one of the signals) that could only be cleared by power cycling the units. After resetting the system, the main volume pressure dropped quickly and is now < 2e-5 torr, so normal operations can resume. For future reference, below is the procedure to safely reset these units from a trouble state.

Vacromag Reset Procedure

  • TP2 and TP3 can be left running, but isolate them by closing valves V4 and V5.
  • TP1 can also be left running, but manually flip the operation mode on the front of the controller from REMOTE to LOCAL. This prevents the pump from receiving a "stop" command when its control Acromag shuts down.
  • Close all the pneumatic valves in the system (they'll otherwise close automatically when their control Acromags shut down).
  • On c1vac, stop the modbusIOC service. Sometimes this takes ~1 min to actually terminate.
  • Turn off the Acromags by flipping the "24 V" switch on the back of the chassis.
  • Wait ~10 sec, then turn them back on.
  • Start the modbusIOC service. It may take up to ~1 min for all the readings on the MEDM screen to initialize.
  • Ensure that the rotation speeds of TP1, TP2, and TP3 are all still nominal.
  • If pumps are OK, open V4, V5, and V7, then open V1. This restores the system to the "Maximum pumping speed" state.
  • Flip the TP1 controller operation state back to REMOTE.
  14458   Fri Feb 15 18:41:18 2019   rana   Update   VAC   Vac system is back up

If the acromags lock up whenever there is an electrical spike, shouldn't we have them on UPS to smooth out these ripples? And wasn't the idea to have some handshake/watchdog system to avoid silently dying computers?

Quote:

The problem encountered with the vac controls was indeed resolved via the recommendation I posted yesterday. The Acromags had gone into a protective state (likely caused by an electrical transient in one of the signals)

  14460   Fri Feb 15 19:50:09 2019   rana   Update   VAC   Vac system is back up

The acromags are on the UPS. I suspect the transient came in on one of the signal lines. Chub tells me he unplugged one of the signal cables from the chassis around the time things died on Monday, although we couldn't reproduce the problem doing that again today.

In this situation it wasn't the software that died, but the Acromag units themselves. I have an idea to detect future occurrences using a "blinker" signal: one Acromag outputs a periodic signal which is directly sensed by another Acromag. This can be implemented as another polling condition enforced by the interlock code.
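
A rough sketch of the blinker idea, with hypothetical channel names for the two Acromag registers (the output would be hard-wired to the input):

import time

import epics  # pyepics

BLINK_OUT = 'C1:Vac-blinker_out'  # driven by one Acromag (hypothetical channel)
BLINK_IN = 'C1:Vac-blinker_in'    # sensed by a second Acromag (hypothetical channel)

state = 0
last_input = epics.caget(BLINK_IN)
last_edge = time.time()

while True:
    state ^= 1
    epics.caput(BLINK_OUT, state)  # toggle the heartbeat output
    time.sleep(1.0)
    current = epics.caget(BLINK_IN)
    if current != last_input:
        last_edge = time.time()
        last_input = current
    if time.time() - last_edge > 10.0:  # no edges for 10 s => Acromags unresponsive
        print('Blinker stalled - trip interlock / alert operator')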

Quote:

If the acromags lock up whenever there is an electrical spike, shouldn't we have them on UPS to smooth out these ripples? And wasn't the idea to have some handshake/watchdog system to avoid silently dying computers?

Quote:

The problem encountered with the vac controls was indeed resolved via the recommendation I posted yesterday. The Acromags had gone into a protective state (likely caused by an electrical transient in one of the signals)

 

  14461   Fri Feb 15 20:07:02 2019   Jon   Update   VAC   Updated vacuum punch list

While working on the vac controls today, I also took care of some of the remaining to-do items. Below is a summary of what was done, and what still remains.

Completed today

  • TP2/3 overcurrent interlock raised from 1 to 1.2 A. This was tripping during normal operation as the pump accelerates from low-speed (standby) to normal-speed mode.
  • Interlock conditions on VABSSCO/VABSSCI removed. Per discussion with Steve, these are not vent valves, but rather isolation valves between the BS/IOO/OMC annuli. The interlocks were preventing the valves from opening, and hence the IOO and OMC annuli from being pumped.
  • Channel exposed for interlocking in-vacuum high-voltage drivers. The channel name is C1:Vac-interlock_high_voltage. The vac interlock service sets this channel's value to 0 when the main volume pressure is in the range 3 mtorr - 500 torr, and to 1 otherwise. (A sketch of this condition follows at the end of this list.)
  • Annuli pumping integrated into the set of recognized states. "Vacuum normal" now refers to TP1 and TP2 pumping on the main volume AND TP3 pumping on all the annuli. The system is currently running in this state.
  • TP1 lowered to the nominal speed setting recommended by Steve: 33.6 krpm (560 Hz).
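
The high-voltage interlock condition (third item above) might reduce to something like the following in the polling loop. The pressure channel name is hypothetical; C1:Vac-interlock_high_voltage is the real exposed channel:

import epics  # pyepics

def update_hv_interlock():
    p = epics.caget('C1:Vac-P1a_pressure')  # main volume pressure [torr] (hypothetical name)
    if p is None:
        return  # no reading; leave the interlock state unchanged
    unsafe = 3e-3 <= p <= 500.0  # discharge-prone pressure range for in-vacuum HV
    epics.caput('C1:Vac-interlock_high_voltage', 0 if unsafe else 1)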

Still remaining

  • Implement a "blinker" input-output signal loop between two Acromags to detect hardware failures like the one today.
  • Add an AC power monitor to sense extended power losses and automatically put the system into safe shutdown.
  • Migrate the RGA to c1vac. Still some issues getting the serial comm working.
  • Troubleshoot the SuperBee (backup) main volume Pirani gauge. It has not communicated with c1vac since a serial adapter was replaced two weeks ago. Chub thinks the gauge was possibly damaged by arcing during the replacement.
  • Scripting for more automated pumpdowns.
  • Generate a bootable backup hard drive for c1vac, which could be swapped in on a short time scale after a failure.
  14462   Fri Feb 15 21:15:42 2019   gautam   Update   VAC   dd backup of c1vac made
  1. Connected one of the solid-state drives to c1vac. It was /dev/sdb.
  2. Formatted the drive using sudo mkfs -t ext4 /dev/sdb
  3.  Mounted it as /mnt/backup using sudo mount /dev/sdb /mnt/backup
  4. Started a tmux session for the dd, called DDbackup
  5. Started the dd backup using  sudo dd if=/dev/sda of=/dev/sdb bs=64K conv=noerror,sync
  6. Backup completed in 719 seconds: need to test if it works...
controls@c1vac:~$ sudo dd if=/dev/sda of=/dev/sdb bs=64K conv=noerror,sync
[sudo] password for controls: 
^C283422+0 records in
283422+0 records out
18574344192 bytes (19 GB) copied, 719.699 s, 25.8 MB/s
Quote:
 
  • Generate a bootable backup hard drive for c1vac, which could be swapped in on a short time scale after a failure.
  14487   Wed Mar 20 12:31:30 2019   Jon   Update   VAC   Doing vac controls work

I'm rebooting the IOLAN server to load new serial ports. The interlocks might trip when the pressure gauge readbacks cut out.

  14488   Wed Mar 20 19:26:25 2019   Jon   Update   VAC   Protection against AC power loss

Today I implemented protection of the vac system against extended power losses. Previously, the vac controls system (both old and new) could not communicate with the APC Smart-UPS 2200 providing backup power. This was not an issue for short glitches, but for extended outages the system had no way of knowing it was running on dwindling reserve power. An intelligent system should sense the outage and put the IFO into a controlled shutdown, before the batteries are fully drained.

What enabled this was a workaround Gautam and I found for communicating with the UPS serially. Although the UPS has a serial port, neither the connector pinout nor the low-level command protocol are released by APC. The only official way to communicate with the UPS is through their high-level PowerChute software. However, we did find "unofficial" documentation of APC's protocol. Using this information, I was able to interface the UPS to the IOLAN serial device server. This allowed the UPS status to be queried using the same Python/TCP sockets model as all the other serial devices (gauges, pumps, etc.). I created a new service called "serial_UPS.service" to persistently run this Python process like the others. I added a new EPICS channel "C1:Vac-UPS_status" which is updated by this process.

With all this in place, I added new logic to the interlock.py code which closes all valves and stops all pumps in the event of a power failure. To be conservative, this interlock is also tripped when the communications link with the UPS is disconnected (i.e., when the power state becomes unknown). I tested the new conditions against both communication failure (by disconnecting the serial cable) and power failure (by pressing the "Test" button on the UPS front panel). This protects TP2 and TP3. However, I discovered that TP1---the pump that might be most damaged by a sudden power failure---is not on the UPS. It's plugged directly into a 240V outlet along the wall. This is because the current UPS doesn't have any 240V sockets. I'd recommend we get one that can handle all the turbo pumps.

For future reference, the UPS serial pinout and settings:

  • Pin 1: RxD
  • Pin 2: TxD
  • Pin 5: GND
  • Standard: RS-232
  • Baud rate: 2400
  • Data bits: 8
  • Parity: none
  • Stop bits: 1
  • Handshaking: none
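
With these settings, a status query through the IOLAN might look like the sketch below. The host/port are placeholders, and the 'Y' (enter smart mode, expect 'SM') and 'Q' (status register) commands come from the unofficial protocol documentation, so treat them as assumptions:

import socket

UPS_ADDR = ('192.168.114.22', 4008)  # hypothetical IOLAN host/port for the UPS

def query_ups_status():
    with socket.create_connection(UPS_ADDR, timeout=2) as s:
        s.sendall(b'Y')    # enter "smart" mode; UPS should answer b'SM' (per unofficial docs)
        mode = s.recv(16)
        s.sendall(b'Q')    # request the status register (per unofficial docs)
        status = s.recv(16)
    return mode.strip(), status.strip()

print(query_ups_status())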

 

 

Attachment 1: IMG_3146.jpg
IMG_3146.jpg
  14489   Wed Mar 20 20:07:22 2019   Jon   Update   VAC   Doing vac controls work

Work is completed and the vac system is back in its nominal state.

Quote:

I'm rebooting the IOLAN server to load new serial ports. The interlocks might trip when the pressure gauge readbacks cut out.

 

  14490   Thu Mar 21 12:46:22 2019   Jon   Update   VAC   More vac controls upgrades

The vac controls system is going down for migration from Python 2.7 to 3.4. Will advise when it is back up.

  14491   Thu Mar 21 17:22:52 2019   Jon   Update   VAC   More vac controls upgrades

I've converted all the vac control system code to run on Python 3.4, the latest version available through the Debian package manager. Note that these codes now REQUIRE Python 3.x. We decided there was no need to preserve Python 2.x compatibility. I'm leaving the vac system returned to its nominal state ("vacuum normal + RGA").

Quote:

The vac controls system is going down for migration from Python 2.7 to 3.4. Will advise when it is back up.

 

  14494   Thu Mar 21 21:50:31 2019   rana   Update   VAC   Protection against AC power loss

agreed - we need all pumps on UPS for their safety and also so that we can spin them down safely. Can you and Chub please find a suitable UPS?

Quote:

However, I discovered that TP1---the pump that might be most damaged by a sudden power failure---is not on the UPS. It's plugged directly into a 240V outlet along the wall. This is because the current UPS doesn't have any 240V sockets. I'd recommend we get one that can handle all the turbo pumps.

  14509   Tue Apr 2 18:40:01 2019   gautam   Update   VAC   Vac failure

While glancing at my vacuum striptool, I noticed that the IFO pressure is 2e-4 torr. There was an "AC power loss" reported by c1vac about 4 hours ago (14:07 local time). We are investigating. I closed the PSL shutter.


Jon and I investigated at the vacuum rack. The UPS was reporting a normal status ("On Line"). Everything looked normal so we attempted to bring the system back to the nominal state. But TP2 drypump was making a loud rattling noise, and the TP2 foreline pressure was not coming down at a normal rate. We wonder if the TP2 drypump has somehow been damaged - we leave it for Chub to investigate and give a more professional assessment of the situation and what the appropriate course of action is.

The PSL shutter will remain closed overning, and the main volume and annuli are valved off. We spun up TP1 and TP3 and decided to leave them on (but they have negligible load).

Attachment 1: vacFail.png
vacFail.png
  14511   Wed Apr 3 09:07:46 2019   gautam   Update   VAC   Vac failure

Overnight pressure trends don't suggest anything went awry after the initial interlock trip. Some watchdog script that monitors the vacuum pressure and closes the PSL shutter if the pressure exceeds some threshold needs to be implemented. Another pending task is to verify that the backup disk for c1vac actually is bootable and is a plug-and-play replacement.

Attachment 1: vacFailOvernight.png
vacFailOvernight.png
  14512   Wed Apr 3 10:42:36 2019   gautam   Update   VAC   TP2 forepump replaced

Bob and Chub concluded that the drypump that serves as TP2's forepump had failed. Steve had told me the whereabouts of a spare Agilent IDP-7. This was meant to be a replacement for the TP3 foreline pump when it fails, but we decided to swap it in while diagnosing the failed drypump (which had 2182 hours of continuous running according to the hour counter). Sure enough, the spare pump spun up, and the TP2 foreline pressure dropped at a rate consistent with what is expected. I was then able to spin up TP1, TP2 and TP3.

However, when I opened V4 (which connects the TP1 foreline to TP2), I heard a loud repeated clicking (~5 Hz) from the electronics rack. Shortly after, the interlocks shut down all the TPs again, citing "AC power loss". Something is not right; I leave it to Jon and Chub to investigate.

  14514   Wed Apr 3 16:17:17 2019   Jon   Update   VAC   TP2 forepump replaced

I can't explain the mechanical switching sound Gautam reported. The relay controlling power to the TP2 forepump is housed in the main AC relay box under the arm tube, not in the Acromag chassis, so it can't be from that. I've cycled through the pumpdown sequence several times and can't reproduce the effect. The Acromag switches for TP2 still work fine.

In any case, I've made modifications to the vacuum interlocks that will help with two of the issues:

  1. For the "AC power loss" over-triggering: New logic added requiring the UPS to be out of the "on line power, battery OK" state for ~5 seconds before tripping the interlock. This will prevent electrical transients from triggering an emergency shutdown, as seems to be the case here (the UPS briefly isolates the load to battery during such events).
  2. PSL interlocking: New logic added which directly sets C1:AUX-PSL_ShutterRqst --> 0 (closes the PSL shutter) when the main volume pressure is between 3 mtorr and 500 torr. Previously there was a channel exposed for this interlock (C1:Vac-interlock_high_voltage), but c1aux was not actually monitoring it. Following the convention of every vac interlock, after the PSL shutter has been closed, it has to be manually reopened. Once the pressure is out of this range, the vac system will stop blocking the shutter from reopening, but it will not perform the reopen action itself. gautam: a separate interlock logic needs to be implemented on c1aux (the shutter machine) that only permits the shutter to be opened if the vac pressure range is okay. The SUS-watchdog-style AND logic in the EPICS database file should work just fine.
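
A sketch of the debounce logic in item 1 (the times and the polling interface are assumptions, not the deployed code):

import time

DEBOUNCE = 5.0  # seconds the UPS must be out of nominal before the interlock trips

class UPSWatch:
    def __init__(self):
        self.bad_since = None

    def update(self, status_ok):
        """Call on every poll; status_ok is True iff the UPS reports
        'on line power, battery OK'. Returns True when the interlock should trip."""
        if status_ok:
            self.bad_since = None  # transient cleared; reset the timer
            return False
        if self.bad_since is None:
            self.bad_since = time.time()
        return (time.time() - self.bad_since) > DEBOUNCE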

After finishing this vac work, I began a new pumpdown at ~4:30pm. The pressure fell quickly and has already reached ~1e-5 torr. TP2 current and temp look fine.

Quote:

However, when I opened V4 (which connects the TP1 foreline to TP2), I heard a loud repeated clicking (~5 Hz) from the electronics rack. Shortly after, the interlocks shut down all the TPs again, citing "AC power loss". Something is not right; I leave it to Jon and Chub to investigate.

Attachment 1: IMG_3180.jpg
IMG_3180.jpg
  14515   Wed Apr 3 18:35:54 2019   gautam   Update   VAC   PSL shutter re-opened

PSL shutter was re-opened at 6pm local time. IMC was locked. As of 10pm, the main volume pressure is already back down to the 8e-6 level.

  14517   Fri Apr 5 01:10:18 2019   gautam   Update   VAC   TP3 forepump is also noisy

Is this one close to failure as well?

  14546   Tue Apr 16 22:06:51 2019   gautam   Update   VAC   Vac interlock tripped again

This happened again, about 30,000 seconds ago (~2:06 pm local time according to the logfile). The cited error was the same -

2019-04-16 14:06:05,538 - C1:Vac-error_status => VA6 closed. AC power loss.

Hard to believe there was any real power loss - nothing else in the lab seems to have been affected, so I am inclined to suspect a buggy UPS communication channel. The PSL shutter was not closed; I believe the condition is for P1a to exceed 3 mtorr (it is at 1 mtorr right now), but perhaps this should be modified to close the PSL shutter in the event of any interlock tripping. Also, it's probably not a bad idea to send an email alert to the lab mailing list in the event of a vac interlock failure.

For tonight, I only plan to work with the EX ALS system anyways so I'm closing the PSL shutter, I'll work with Chub to restore the vacuum if he deems it okay tomorrow.

Attachment 1: Screenshot_from_2019-04-16_22-05-47.png
Screenshot_from_2019-04-16_22-05-47.png
Attachment 2: Screenshot_from_2019-04-16_22-06-02.png
Screenshot_from_2019-04-16_22-06-02.png
  14550   Wed Apr 17 18:12:06 2019   gautam   Update   VAC   Vac interlock tripped again

After getting the go ahead from Chub and Jon, I restored the Vacuum state to "Vacuum normal", see Attachment #1. Steps:

  1. Interlock code modifications
    • Backed up /opt/target/python/interlocks/interlock_conditions.yaml to /opt/target/python/interlocks/interlock_conditions_UPS.yaml
    • The "power_loss" condition was removed for every valve and pump inside /opt/target/python/interlocks/interlock_conditions.yaml
    • The interlock service was restarted using sudo systemctl restart interlock.service
    • Looking at the status of the service, I saw that it was dying ~ every 1 second.
    • Traced this down to a problem in /opt/target/python/interlocks/interlock_conditions.yaml when the "pump_managers" are initialized - the way this is coded, it doesn't play nice if there are no conditions specified in the yaml file. For now, I just commented this part out. The git diff is below:
  2. Restoring vacuum normal:
    • Spun up TP1, TP2 and TP3
    • Opened up foreline of TP1 to TP2, and then opened main volume to TP1
    • Opened up annulus foreline to TP3, and then opened the individual annular volumes to TP3.
controls@c1vac:/opt/target/python/interlocks$ git diff interlock.py
diff --git a/python/interlocks/interlock.py b/python/interlocks/interlock.py
index 28d3366..46a39fc 100755
--- a/python/interlocks/interlock.py
+++ b/python/interlocks/interlock.py
@@ -52,8 +52,8 @@ class Interlock(object):
         self.pumps = []
         for pump in interlocks['pumps']:
             pm = PumpManager(pump['name'])
-            for condition in pump['conditions']:
-                pm.register_condition(*condition)
+            #for condition in pump['conditions']:
+            #    pm.register_condition(*condition)
             self.pumps.append(pm)
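
For the record, a more permanent fix than commenting the loop out would be to tolerate pumps with no conditions listed in the yaml. A sketch of the corresponding lines in interlock.py, using the same names as the diff above:

        for pump in interlocks['pumps']:
            pm = PumpManager(pump['name'])
            for condition in pump.get('conditions') or []:  # missing/empty key => no conditions
                pm.register_condition(*condition)
            self.pumps.append(pm)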

So far the pressure is coming down smoothly, see Attachment #2. I'll keep an eye on it.

PSL shutter was opened at 645pm local time. IMC locked almost immediately.

Update 11pm: The pressure has reached 8.5e-6 torr without hiccup. 

Attachment 1: Screenshot_from_2019-04-17_18-11-45.png
Screenshot_from_2019-04-17_18-11-45.png
Attachment 2: Screenshot_from_2019-04-17_18-21-30.png
Screenshot_from_2019-04-17_18-21-30.png