  40m Log
ID   Date   Author   Type   Category   Subject
  12252   Wed Jul 6 11:02:41 2016   Praful   Update   Computer Scripts / Programs   VMon Tab on Summary Pages

I've added a new tab for VMon under the SUS parent tab. I'm still working out the scale and units, but let me know if you think this is a useful addition. Here's a link to my summary page that has this tab: https://ldas-jobs.ligo.caltech.edu/~praful.vasireddy/1151193617-1151193917/sus/vmon/


I'll have another tab with VMon BLRMS up soon.

Also, the main summary pages should be back online soon now that Max has fixed a bug. I'll try to add the SUS/VMon tab to the main pages as well.

  12577   Fri Oct 21 09:28:21 2016   Steve   Update   VAC   Vac Normal reached

The dry pump of TP3 was replaced after 9.5 months of operation [45 mTorr, d3].

The annuli are being pumped.

Valve configuration: vac normal. IFO pressure 4.5E-5 Torr [1.6E-5 Torr, d3] on the new ITcc gauge. The RGA is not installed yet.

Note how fast the pressure is dropping when the vent is short.

Quote:

IFO pressure 1.7E-4 Torr on the new (not yet logged) cold cathode gauge. P1 < 7E-4 Torr.

Valve configuration: vac normal with the annuli closed off.

TP3 was turned off with a failing dry pump. It will be replaced tomorrow.

All time stamps are blank on the MEDM screens.

Attachment 1: VacNormal.png
  11292   Fri May 15 16:18:28 2015   Steve   Update   VAC   Vac Operation Guide

The Vacuum Operation Guide has been uploaded to the 40m wiki. It is an old master copy: not exact in terms of the real actions, but still a good guide to the logic.

Rana has promised to watch the N2 supply and change the cylinder when it is empty. I will be at Hanford next week.

  11262   Tue Apr 28 09:49:26 2015   Steve   Update   VAC   Vac Summary Channels

 

 

Channel               Function                              Interlock action
C1:Vac-P1_pressure    IFO vacuum envelope pressure          at 3 mTorr close V1 and the PSL shutter
C1:Vac-P2_pressure    Maglev foreline pressure              at 6 Torr close V1
C1:Vac-P3_pressure    annuli                                -
C1:Vac-CC1_pressure   IFO pressure                          at 1e-5 Torr close VM1
C1:Vac-CC4_pressure   RGA pressure                          -
C1:Vac-N2pres         valve drive pneumatics, 60-80 PSI     at 55 PSI close V1, at 45 PSI close all
(does not exist yet)  2 N2 cylinder sum pressure            -

 

  11274   Tue May 5 16:02:57 2015   Steve   Update   VAC   Vac Summary Channels with description

As requested by the boss.

It would be nice to read the valve configuration from the EPICS screen C1:Vac-state_mon ... Current State: Vacuum Normal.

Quote:

  • C1:Vac-P1_pressure - Main volume of the 40m interferometer. P = Pirani gauge, pressure range ATM (760 Torr) to 1e-4 Torr. Interlock: at 3 mTorr close V1 and the PSL shutter.
  • C1:Vac-P2_pressure - Maglev foreline pressure. The Maglev is the main pump of our vacuum system below 500 mTorr; its long-term pressure has to be < 500 mTorr. Interlock: at 6 Torr close V1.
  • C1:Vac-P3_pressure - Annuli. Each chamber has its own annulus; these small volumes are independent of the main volume. Their pressures are < 5 mTorr in the vac normal valve configuration.
  • C1:Vac-CC1_pressure - IFO main volume. CC1 = cold cathode gauge (low emission), pressure range 1e-4 to 1e-10 Torr. In the vac normal configuration CC1 = 2e-6 Torr. Interlock: at 1e-5 Torr close VM1.
  • C1:Vac-CC4_pressure - RGA pressure. In the vac normal configuration CC1 = CC4.
  • C1:Vac-N2pres - Valve drive pneumatics. The N2 supply is regulated to 60-80 PSI output at the auto cylinder changer. Interlock: at 55 PSI close V1, at 45 PSI close all.
  • 2 N2 cylinder sum pressure (channel does not exist yet) - Each cylinder pressure will be measured before the regulator and summed, for a warning message to be sent at 1000 PSI.

 

  14384   Fri Jan 4 11:06:16 2019   Jon   Omnistructure   Upgrade   Vac System Punchlist

The base Acromag vacuum system is running and performing nicely. Here is a list of remaining questions and to-do items we still need to address.

Safety Issues

  • Interlock for HV supplies. The vac system hosts a binary EPICS channel that is the interlock signal for the in-vacuum HV supplies. The channel value is OFF when the main volume pressure is in the arcing range, 3 mtorr - 500 torr, and ON otherwise (a minimal sketch of this logic follows this list). Is there something outside the vacuum system monitoring this channel and toggling the HV supplies?
  • Exposed 30-amp supply terminals. The 30-amp output terminals on the back of the Sorensen in the vac rack are exposed. We need a cover for those.
  • Interlock for AC power loss. The current vac system is protected only from transient power glitches, not an extended loss. The digital system should sense an outage and put the IFO into a safe state (pumps spun down and critical valves closed) before the UPS battery is fully drained. However, it presently has no way of sensing when power has been lost---the system just continues running normally on UPS power until the battery dies, at which point there is a sudden, uncontrolled shutdown. Is it possible for the digital system to communicate directly with the UPS to poll its activation state?
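
Referring to the first item above, here is a minimal sketch of the arcing-range logic. The polling loop, the 1 s cadence, and the interlock channel name C1:Vac-HV_interlock are assumptions for illustration only; the pressure range and the OFF/ON behavior come from the item itself, and C1:Vac-P1a_pressure is the main volume gauge used elsewhere in this log.

# Sketch only -- not the production interlock code.
import time
from epics import caget, caput

ARC_LO, ARC_HI = 3e-3, 500.0   # arcing range: 3 mtorr to 500 torr

while True:
    p = caget('C1:Vac-P1a_pressure')              # main volume pressure (assumed channel)
    in_arc_range = (p is not None) and (ARC_LO < p < ARC_HI)
    caput('C1:Vac-HV_interlock', 0 if in_arc_range else 1)   # OFF while arcing is possible
    time.sleep(1)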

Infrastructure Improvements

  • Install the new N2 tank regulator and high-pressure transducers (we have the parts; on desk across from electronics bench). Run the transducer signal wires to the Acromag chassis in the vacuum rack.
  • Replace the kludged connectors to the Hornet and SuperBee serial outputs with permanent ones (we need to order the parts).
  • Wire the position indicator readback on the manual TP1 valve to the Acromag chassis.
  • Add cable tension relief to the back of the vac rack.
  • Add the TP1 analog readback signals (rotation speed and current) to the digital system.  Digital temperature, current, voltage, and rotation speed signals have already been added for TP2 and TP3.
  • Set up a local vacuum controls terminal on the desk by the vac rack.
  • Remove gauges from the EPICS database/MEDM screens that are no longer installed or functional. Potential candidates for removal: PAN, PTP1, IG1, CC2, CC3, CC4.
  • Although it appeared on the MEDM screen, the RGA was never interfaced to the old vac system. Should it be connected to c1vac now?
  14396   Thu Jan 10 19:59:08 2019   Jon   Update   VAC   Vac System Running Normally on Turbo Pumps

[Jon, Gautam, Chub]

Summary

We continued the pumpdown of the IFO today. The main volume pressure has reached 1.9e-5 torr and is continuing to fall. The system has performed without issue all day, so we'll leave the turbos continuously running from here on in the normal pumping configuration. Both TP2 and TP3 are currently backing for TP1. Once the main volume reaches operating pressure, we can transition TP3 to pump the annuli. They have already been roughed to ~0.1 torr. At that point the speed of all three turbo pumps can also be reduced. I've finished final edits/cleanup of the interlock code and MEDM screens.

Python Code

All the python code running on c1vac is archived to the git repo: 

https://git.ligo.org/40m/vacpython

This includes both the interlock code and the serial device clients for interfacing with gauges and pumps.

MEDM Monitor/Control

We're still using the same base MEDM monitor/control screens, but they have been much improved:

  • Valves now light up in red when they are open. This makes it much easier to see at a glance what is valved in/out.
  • Every pump in the system (except CP1) is now digitally controlled from the MEDM control screen. No more need to physically push any buttons in the vacuum rack. 👍
  • The turbo pumps now show additional diagnostic readouts: speed (TP1/2/3), temperature (TP2/3), current draw (TP1/2/3), and voltage (TP2/3).
  • The foreline pressure gauge readouts for TP2/3 have been added to the digital system.
  • The two new main volume gauges, Hornet and SuperBee, have been added to the digital system as well.
  • New transducers have been added to read back the two N2 tank pressures.
  • The interlock code generates a log file of all its actions. A field in the MEDM screens specifies the location of the log file.
  • A tripped interlock (appearing as a message in the "Error message" field) must be manually cleared via the "Clear error message" button on the control screen before the system will accept any more manual valve input.

Note: The apparent glitches in the pressure and TP diagnostic channels are due to the interlock system being taken down to implement some of these changes.

Attachment 1: Screen_Shot_2019-01-10_at_7.58.24_PM.png
Attachment 2: CCs.png
Attachment 3: TPs.png
  14509   Tue Apr 2 18:40:01 2019   gautam   Update   VAC   Vac failure

While glancing at my Vacuum striptool, I noticed that the IFO pressure is 2e-4 torr. There was an "AC power loss" reported by c1vac about 4 hours ago (14:07 local time). We are investigating. I closed the PSL shutter.


Jon and I investigated at the vacuum rack. The UPS was reporting a normal status ("On Line"). Everything looked normal, so we attempted to bring the system back to the nominal state. But the TP2 dry pump was making a loud rattling noise, and the TP2 foreline pressure was not coming down at a normal rate. We wonder if the TP2 dry pump has somehow been damaged - we leave it for Chub to investigate and give a more professional assessment of the situation and the appropriate course of action.

The PSL shutter will remain closed overnight, and the main volume and annuli are valved off. We spun up TP1 and TP3 and decided to leave them on (but they have negligible load).

Attachment 1: vacFail.png
  14511   Wed Apr 3 09:07:46 2019   gautam   Update   VAC   Vac failure

Overnight pressure trends don't suggest anything went awry after the initial interlock trip. A watchdog script that monitors the vacuum pressure and closes the PSL shutter when the pressure exceeds some threshold needs to be implemented (a minimal sketch is below). Another pending task is to make sure that the backup disk for c1vac actually is bootable and is a plug-and-play replacement.
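
A minimal sketch of such a watchdog, assuming pyepics is available on a control machine and reusing the 3 mtorr level quoted elsewhere in this log. The shutter channel name follows the post-c1psl-upgrade naming noted in ELOG 15424 (it was C1:AUX-PSL_ShutterRqst at the time of this entry), and the 10 s polling interval is a placeholder.

# Sketch only -- not an installed script.
import time
from epics import caget, caput

THRESHOLD = 3e-3   # torr

while True:
    p = caget('C1:Vac-P1a_pressure')
    if p is not None and p > THRESHOLD:
        caput('C1:PSL-PSL_ShutterRqst', 0)   # request the PSL shutter closed
    time.sleep(10)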

Attachment 1: vacFailOvernight.png
  15391   Thu Jun 11 11:48:43 2020   gautam   Update   VAC   Vac failure

There appears to have been some sort of vacuum failure.

ldas-pcdev1 was down, so the summary pages weren't being generated. I have now switched over to ldas-pcdev6. I suspect some forepump failure, will check up later today unless someone else wants to take care of this.

There was no interlock action, and I don't check the vacuum status every half hour, so there was a period of time last night when there was high circulating power in the arm cavities while the main volume pressure was higher than nominal. I have now closed the PSL shutter until the issue is resolved.

Attachment 1: vacFailure.png
  15392   Thu Jun 11 16:14:03 2020   gautam   Update   VAC   Vac failure - probable cause is serial comm glitch

Summary:

It looks like the main vacuum interlock was tripped due to a serial communication error from the TP2 controller. With Rana/Koji's permission, I will open V1 and expose the main volume to TP1 again (#2 in last section).

Details:

  • The vacuum interlock log file at /opt/target/vac.log on c1vac suggests that the interlock was tripped because "TP2 is too warm".
  • Looking back at the diagnostics channels, it looks like the TP2 temperature channel registered a rise in temperature of >30 C in <0.2 seconds, see Attachment #1 - seems highly unlikely, probably some kind of glitch in the serial communication? This particular pump is relatively new from Agilent (<2 years installed I think)
  • The PSL shutter was automatically closed at ~11:50 am today, see Attachment #2. There is some EPICS logic on c1psl (Acromag server) that checks if C1:Vac-P1a_pressure is greater than 3 mTorr (or greater than 500 Torr for in-air locking of the IMC), in which case it closes the shutter, so this seems consistent with expectations.

Recommended course of action:

  1. Code in some averaging in the interlock code, so that the interlock isn't triggered on an unphysical glitch like this (a sketch follows this list). As shown in Attachment #3, this has been happening for the past 24 hours (though not before, because the interlock wasn't tripped). We probably need the derivative of the temperature as well, and the derivative should be less than 5 C/s or something physical (in addition to the temperature being high) for the interlock to trip.
  2. Re-open V1 to pump down the main volume to nominal pressure so that the interferometer locking activity can resume.
    • One option in the interim is to bypass the TP2 temperature interlock condition.
    • The pressure-based interlocks are probably sufficient to protect the main volume / pumps during the nominal operations - the temperature interlocks are mainly useful during the pumpdown where the TPs have a large load, and so we want to avoid over-stressing them.
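
A minimal sketch of what item 1 could look like: a rolling average plus a physical rate-of-change bound, so a single-sample jump cannot trip the interlock. The 5 C/s figure is the one suggested above; the trip temperature, window length, and function name are placeholders, not the production interlock code.

# Sketch only -- not the production interlock condition.
from collections import deque

TEMP_LIMIT = 75.0     # C, placeholder trip temperature
DTDT_LIMIT = 5.0      # C/s, maximum physically plausible rise rate
WINDOW = 10           # number of samples in the rolling average

readings = deque(maxlen=WINDOW)

def tp2_too_warm(new_temp, dt=1.0):
    """Trip only if the averaged temperature is high AND the averaged
    rise rate is physically plausible (rejects single-sample glitches)."""
    prev_avg = sum(readings) / len(readings) if readings else new_temp
    readings.append(new_temp)
    avg = sum(readings) / len(readings)
    dTdt = (avg - prev_avg) / dt
    return avg > TEMP_LIMIT and abs(dTdt) < DTDT_LIMIT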
Attachment 1: TP2_tempGlitch.png
Attachment 2: PSL_shutterClosed.png
Attachment 3: TP2tempGlitches.pdf
  15412   Thu Jun 18 22:33:57 2020   Jon   Omnistructure   VAC   Vac hardware purchase list

Replacement Hardware Purchase List

I've created a purchase list of hardware needed to restore the aging vacuum system. This wasn't planned as part of the BHD upgrade, but I've added it to the BHD procurement list since hardware replacements have become necessary.

The list proposes replacing the aging TP3 Varian turbo pump with the newer Agilent model which has already replaced TP2. It seems I was mistaken in believing we already had a second Agilent pump on hand. A thorough search of the lab has not turned it up, and Steve himself has told me he doesn't remember ordering a second one. Fortunately Steve did leave us a detailed Agilent parts list [ELOG 14322].

It also proposes replacing the glitching TP2 Agilent controller with a new one. The existing one can be sent back for repair and then retained as a spare. Considering that one of these controllers is already malfunctioning after < 2 years, I think it's a very good idea to have a spare on hand.

Known Hardware Issues

Below is our current list of vacuum hardware issues. Items that this purchase list will address (limited to only the most urgent) are highlighted in yellow.

  • Replace the UPS
    • Need a 240V socket for TP1 (currently TP1 is not protected from power loss)
    • Need RS232/485 comms with the interlock server (current UPS: serial readbacks have failed, battery is failing)
  • Remove/replace the failed pressure gauges (~5)
  • Add more cold cathode sensors to the main volume for sensor redundancy (currently the main-volume interlocks rely on only 1 working sensor)
  • Replace TP3 (controller is failing)
  • Replace TP2 controller (serial interface has failed)
  • Remove RP2
    • Dead and also not needed. We already have to throttle the pumpdown rate with only two roughing pumps
  • Remove/refurbish the cryopump
    • Contamination risk to have it sitting connectable to the main volume
  15502   Tue Jul 28 12:22:40 2020   Jon   Update   VAC   Vac interlock test today 1:30 pm

This afternoon Jordan is going to carry out a test of the V4 and V5 hardware interlocks. To inform the interlock improvement plan [15499], we need to characterize exactly how these work (they pre-date the 2018 upgrade). I have provided him a sequence of steps for each test and will also be backing him up on Zoom.

We will close V1 as a precaution but there should be no other impact to the IFO. The tests are expected to take <1 hour. We will advise when they are completed.

  15504   Tue Jul 28 14:11:14 2020   Jon   Update   VAC   Vac interlock test today 1:30 pm

This test has been completed. The IFO configuration has been reverted to nominal.

For future reference: yes, both the V4 and V5 hardware interlocks were found to still be connected and working. A TTL signal from the analog output port of each pump controller (TP2 and TP3) is connected to an auxiliary relay inside the main valve relay box. These serve the purpose of interrupting the (Acromag) control signal to the primary V4/5 relay. This interrupt is triggered by each pump's R1 setpoint signal, which is programmed to go low when the rotation speed falls below 80% of the low-speed setting.

Quote:

This afternoon Jordan is going to carry out a test of the V4 and V5 hardware interlocks. To inform the interlock improvement plan [15499], we need to characterize exactly how these work (they pre-date the 2018 upgrade). I have provided him a sequence of steps for each test and will also be backing him up on Zoom.

We will close V1 as a precaution but there should be no other impact to the IFO. The tests are expected to take <1 hour. We will advise when they are completed.

  14546   Tue Apr 16 22:06:51 2019   gautam   Update   VAC   Vac interlock tripped again

This happened again about 30,000 seconds ago (~2:06 pm local time according to the logfile). The cited error was the same -

2019-04-16 14:06:05,538 - C1:Vac-error_status => VA6 closed. AC power loss.

Hard to believe there was any real power loss; nothing else in the lab seems to have been affected, so I am inclined to suspect a buggy UPS communication channel. The PSL shutter was not closed - I believe the condition is for P1a to exceed 3 mtorr (it is at 1 mtorr right now), but perhaps this should be modified to close the PSL shutter in the event of any interlock tripping. Also, it is probably not a bad idea to send an email alert to the lab mailing list in the event of a vac interlock failure.

For tonight I only plan to work with the EX ALS system anyway, so I'm closing the PSL shutter. I'll work with Chub to restore the vacuum tomorrow if he deems it okay.

Attachment 1: Screenshot_from_2019-04-16_22-05-47.png
Attachment 2: Screenshot_from_2019-04-16_22-06-02.png
  14550   Wed Apr 17 18:12:06 2019   gautam   Update   VAC   Vac interlock tripped again

After getting the go ahead from Chub and Jon, I restored the Vacuum state to "Vacuum normal", see Attachment #1. Steps:

  1. Interlock code modifications
    • Backed up /opt/target/python/interlocks/interlock_conditions.yaml to /opt/target/python/interlocks/interlock_conditions_UPS.yaml
    • The "power_loss" condition was removed for every valve and pump inside /opt/target/python/interlocks/interlock_conditions.yaml
    • The interlock service was restarted using sudo systemctl restart interlock.service
    • Looking at the status of the service, I saw that it was dying ~ every 1 second.
    • Traced this down to a problem in /opt/target/python/interlocks/interlock_conditions.yaml when the "pump_managers" are initialized - the way this is coded up doesn't play nice if there are no conditions specified in the yaml file. For now, I just commented this part out (a more defensive variant is sketched after the diff). The git diff below:
  2. Restoring vacuum normal:
    • Spun up TP1, TP2 and TP3
    • Opened up foreline of TP1 to TP2, and then opened main volume to TP1
    • Opened up annulus foreline to TP3, and then opened the individual annular volumes to TP3.
controls@c1vac:/opt/target/python/interlocks$ git diff interlock.py
diff --git a/python/interlocks/interlock.py b/python/interlocks/interlock.py
index 28d3366..46a39fc 100755
--- a/python/interlocks/interlock.py
+++ b/python/interlocks/interlock.py
@@ -52,8 +52,8 @@ class Interlock(object):
         self.pumps = []
         for pump in interlocks['pumps']:
             pm = PumpManager(pump['name'])
-            for condition in pump['conditions']:
-                pm.register_condition(*condition)
+            #for condition in pump['conditions']:
+            #    pm.register_condition(*condition)
             self.pumps.append(pm)
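
For reference, a more defensive variant of the same loop (a sketch only, assuming the yaml structure implied by the diff; this is not what was committed):

# Sketch: tolerate pumps with no 'conditions' entry instead of commenting the loop out.
for pump in interlocks['pumps']:
    pm = PumpManager(pump['name'])
    for condition in pump.get('conditions') or []:
        pm.register_condition(*condition)
    self.pumps.append(pm)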

So far the pressure is coming down smoothly, see Attachment #2. I'll keep an eye on it.

The PSL shutter was opened at 6:45 pm local time. The IMC locked almost immediately.

Update 11pm: The pressure has reached 8.5e-6 torr without hiccup. 

Attachment 1: Screenshot_from_2019-04-17_18-11-45.png
Attachment 2: Screenshot_from_2019-04-17_18-21-30.png
  14574   Thu Apr 25 10:32:39 2019   Jon   Update   VAC   Vac interlocks updated

I slightly cleaned up Gautam's disabling of the UPS-predicated vac interlock and restarted the interlock service. This interlock is intended to protect the turbo pumps after a power outage, but it has proven disruptive to normal operations with too many false triggers. It will be reenabled once a new UPS has been installed. For now, as it has been since 2001, the vac pumps are unprotected against an extended power outage.

  15421   Mon Jun 22 10:43:25 2020   Jon   Configuration   VAC   Vac maintenance at 11 am

The vac system is going down at 11 am today for planned maintenance:

  • Re-install the repaired TP2 and TP3 dry pumps [ELOG 15417]
  • Incorporate an auto-mailer and flag channel into the controls code for signaling tripped interlocks [ELOG 15413]

We will advise when the work is completed.

  15424   Mon Jun 22 20:06:06 2020   Jon   Configuration   VAC   Vac maintenance complete

This work is finally complete. The dry pump replacement was finished quickly but the controls updates required some substantial debugging.

For one, the mailer code I had been given to install would not run against Python 3.4 on c1vac, the version the vac controls have been running for about a year. There were some missing dependencies that proved difficult to install (related to Debian Jessie becoming unsupported). I ultimately solved the problem by migrating the whole system to Python 3.5. Getting the Python keyring working within systemd (for email account authentication) also took some time.
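
For reference, a minimal sketch of a keyring-authenticated mailer of the kind described. The SMTP host, account name, and addresses are placeholders, and this is not the installed mailer code; it only illustrates how keyring keeps the password out of the script and the systemd unit.

# Sketch only -- placeholder addresses and SMTP host.
import smtplib
import keyring
from email.message import EmailMessage

def send_alert(subject, body,
               sender='vac-alerts@example.org',
               recipient='40m-list@example.org',
               smtp_host='smtp.example.org'):
    # Password stored once with keyring.set_password('vac_mailer', sender, ...),
    # so no plaintext credentials live in the script or the service file.
    password = keyring.get_password('vac_mailer', sender)
    msg = EmailMessage()
    msg['Subject'] = subject
    msg['From'] = sender
    msg['To'] = recipient
    msg.set_content(body)
    with smtplib.SMTP_SSL(smtp_host) as server:
        server.login(sender, password)
        server.send_message(msg)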

Edit: The new interlock flag channel is named C1:Vac-interlock_flag.

Along the way, I discovered why the interlocks had been failing to auto-close the PSL shutter: The interlock was pointed to the channel C1:AUX-PSL_ShutterRqst. During the recent c1psl upgrade, we renamed this channel C1:PSL-PSL_ShutterRqst. This has been fixed.

The main volume is being pumped down, for now still in a TP3-backed configuration. As of 8:30 pm the pressure had fallen back to the upper 1E-6 range. The interlock protection is fully restored. Any time an interlock is triggered in the future, the system will send an immediate notification to the 40m mailing list. 👍

Quote:

The vac system is going down at 11 am today for planned maintenance:

  • Re-install the repaired TP2 and TP3 dry pumps [ELOG 15417]
  • Incorporate an auto-mailer and flag channel into the controls code for signaling tripped interlocks [ELOG 15413]
Attachment 1: Pumpdown-6-22-20.png
  15425   Tue Jun 23 17:54:56 2020   rana   Configuration   VAC   Vac maintenance complete

I propose we go for all CAPS for all channel names. The lower-case names are just a holdover from Steve/Alan from the 90's. All other systems are all CAPS.

It avoids us having to force them all to UPPER in the scripts and channel lists.

  15748   Wed Jan 6 15:28:04 2021   gautam   Update   VAC   Vac rack UPS batteries replaced

[chub, gautam]

The replacement was done this afternoon. The red "Replace Battery" indicator is no longer on.

  14380   Thu Jan 3 15:08:37 2019   gautam   Omnistructure   VAC   Vac status unknown

Larry W came by the 40m, and reported that there was a campus-wide power glitch (he was here to check if our networking infrastructure was affected). I thought I'd check the status of the vacuum.

  • Attachment #1 is a screenshot of the Vac overview MEDM screen. Clearly something has gone wrong with the modbus process(es). Only the PTP2 and PTP3 gauges seem to be communicative.
  • Attachment #2 shows the minute trend of the pressure gauges for a 12-day period - it looks like there is some issue with the frame builder clock; perhaps this issue resurfaced? But checking the system time on FB doesn't suggest anything is wrong. I double-checked with dataviewer as well that the trends don't exist. Checking the status of the individual daqd processes, however, showed that the dates were off by 1 year, so I just restarted all of them and now the time seems correct. How can we fix this problem more permanently? Also, the P1b readout looks suspicious - why are there periods where it seems like we are reading values better than the LSB of the device?

I decided to check the systemctl process status on c1vac:

controls@c1vac:~$ sudo systemctl status modbusIOC.service
● modbusIOC.service - ModbusIOC Service via procServ
   Loaded: loaded (/etc/systemd/system/modbusIOC.service; enabled)
   Active: active (running) since Thu 2019-01-03 14:53:49 PST; 11min ago
 Main PID: 16533 (procServ)
   CGroup: /system.slice/modbusIOC.service
           ├─16533 /usr/bin/procServ -f -L /opt/target/modbusIOC.log -p /run/...
           ├─16534 /opt/epics/modules/modbus/bin/linux-x86_64/modbusApp /opt/...
           └─16582 caRepeater

Jan 03 14:53:49 c1vac systemd[1]: Started ModbusIOC Service via procServ.

Warning: Unit file changed on disk, 'systemctl daemon-reload' recommended.

So something did happen today that required restart of the modbus processes. But clearly not everything has come back up gracefully. A few lines of dmesg (there are many more segfaults):

[1706033.718061] python[23971]: segfault at 8 ip 000000000049b37d sp 00007fbae2b5fa10 error 4 in python2.7[400000+31d000]
[1706252.225984] python[24183]: segfault at 8 ip 000000000049b37d sp 00007fd3fa365a10 error 4 in python2.7[400000+31d000]
[1720961.451787] systemd-udevd[4076]: starting version 215
[1782064.269844] audit: type=1702 audit(1546540443.159:38): op=linkat ppid=21820 pid=22823 auid=4294967295 uid=1000 gid=1000 euid=1000 suid=1000 fsuid=1000 egid=1000 sgid=1000 fsgid=1000 tty=pts0 ses=4294967295 comm="git" exe="/usr/bin/git" res=0
[1782064.269866] audit: type=1302 audit(1546540443.159:39): item=0 name="/cvs/cds/caltech/target/c1vac/.git/objects/85/tmp_obj_uAXhPg" inode=173019272 dev=00:21 mode=0100444 ouid=1001 ogid=1001 rdev=00:00 nametype=NORMAL
[1782064.365240] audit: type=1702 audit(1546540443.255:40): op=linkat ppid=21820 pid=22823 auid=4294967295 uid=1000 gid=1000 euid=1000 suid=1000 fsuid=1000 egid=1000 sgid=1000 fsgid=1000 tty=pts0 ses=4294967295 comm="git" exe="/usr/bin/git" res=0
[1782064.365271] audit: type=1302 audit(1546540443.255:41): item=0 name="/cvs/cds/caltech/target/c1vac/.git/objects/58/tmp_obj_KekHsn" inode=173019274 dev=00:21 mode=0100444 ouid=1001 ogid=1001 rdev=00:00 nametype=NORMAL
[1782064.460620] audit: type=1702 audit(1546540443.347:42): op=linkat ppid=21820 pid=22823 auid=4294967295 uid=1000 gid=1000 euid=1000 suid=1000 fsuid=1000 egid=1000 sgid=1000 fsgid=1000 tty=pts0 ses=4294967295 comm="git" exe="/usr/bin/git" res=0
[1782064.460652] audit: type=1302 audit(1546540443.347:43): item=0 name="/cvs/cds/caltech/target/c1vac/.git/objects/cb/tmp_obj_q62Pdr" inode=173019276 dev=00:21 mode=0100444 ouid=1001 ogid=1001 rdev=00:00 nametype=NORMAL
[1782064.545449] audit: type=1702 audit(1546540443.435:44): op=linkat ppid=21820 pid=22823 auid=4294967295 uid=1000 gid=1000 euid=1000 suid=1000 fsuid=1000 egid=1000 sgid=1000 fsgid=1000 tty=pts0 ses=4294967295 comm="git" exe="/usr/bin/git" res=0
[1782064.545480] audit: type=1302 audit(1546540443.435:45): item=0 name="/cvs/cds/caltech/target/c1vac/.git/objects/e3/tmp_obj_gPI4qy" inode=173019277 dev=00:21 mode=0100444 ouid=1001 ogid=1001 rdev=00:00 nametype=NORMAL
[1782064.640756] audit: type=1702 audit(1546540443.527:46): op=linkat ppid=21820 pid=22823 auid=4294967295 uid=1000 gid=1000 euid=1000 suid=1000 fsuid=1000 egid=1000 sgid=1000 fsgid=1000 tty=pts0 ses=4294967295 comm="git" exe="/usr/bin/git" res=0
[1783440.878997] systemd[1]: Unit serial_TP3.service entered failed state.
[1784682.147280] systemd[1]: Unit serial_TP2.service entered failed state.
[1786407.752386] systemd[1]: Unit serial_MKS937b.service entered failed state.
[1792371.508317] systemd[1]: serial_GP316a.service failed to run 'start' task: No such file or directory
[1795550.281623] systemd[1]: Unit serial_GP316b.service entered failed state.
[1796216.213269] systemd[1]: Unit serial_TP3.service entered failed state.
[1796518.976841] systemd[1]: Unit serial_GP307.service entered failed state.
[1796670.328649] systemd[1]: serial_Hornet.service failed to run 'start' task: No such file or directory
[1797723.446084] systemd[1]: Unit serial_MKS937b.service entered failed state.

 

I don't know enough about the new system so I'm leaving this for Jon to debug. Attachment #3 shows that the analog readout of the P1 pressure gauge suggests that the IFO is still under vacuum, so no random valve openings were effected (as expected, since we valved off the N2 line for this very purpose).

Attachment 1: Screenshot_from_2019-01-03_15-19-51.png
Attachment 2: Screenshot_from_2019-01-03_15-14-14.png
Attachment 3: 997B13A9-CAAF-409C-A6C2-00414D30A141.jpeg
  15556   Fri Sep 4 15:26:55 2020   Jon   Update   VAC   Vac system UPS installation

The vac controls are going down now to pull and test software changes. Will advise when the work is completed.

  15557   Fri Sep 4 21:12:51 2020   Jon   Update   VAC   Vac system UPS installation

The vac work is completed. All of the vacuum equipment is now running on the new 120V UPS, except for TP1. The 230V TP1 is still running off wall power, as it always has. After talking with Tripp Lite support today, I believe there is a problem with the 230V UPS. I will post a more detailed note in the morning.

Quote:

The vac controls are going down now to pull and test software changes. Will advise when the work is completed.

  15558   Sat Sep 5 12:01:10 2020   Jon   Update   VAC   Vac system UPS installation

Summary

Yesterday's UPS switchover was mostly a success. The new Tripp Lite 120V UPS is fully installed and is communicating with the slow controls system. The interlocks are configured to trigger a controlled shutdown upon an extended power outage (> ~30 s), and they have been tested. All of the 120V pumpspool equipment (the full c1vac/LAN/Acromag system, pressure gauges, valves, and the two small turbo pumps) has been moved to the new UPS. The only piece of equipment which is not 120V is TP1, which is intended to be powered by a separate 230V UPS. However that unit is still not working, and after more investigation and a call to Tripp Lite, I suspect it may be defective. A detailed account of the changes to the system follow below.

Unfortunately, I think I damaged the Hornet (the only working cathode ionization gauge in the main volume) by inadvertently unplugging it while switching over equipment to the new UPS. The electronics are run from multiple daisy-chained power strips in the bottom of the rack and it is difficult to trace where everything goes. After the switchover, the Hornet repeatedly failed to activate (either remotely or manually) with the error "HV fail." Its compatriot, the Pirani SuperBee, also failed about a year ago under similar circumstances (or at least its remote interface did, making it useless for digital monitoring and control). I think we should replace them both, ideally with ones with some built-in protection against power failures.

New EPICS channels

Four new soft channels per UPS have been created, although the interlocks are currently predicated on only C1:Vac-UPS120V_status.
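
For illustration, a minimal sketch of the kind of shutdown predicate built on that status channel. The ~30 s holdoff follows the summary above, and the status string convention ('OL' = on line) follows the NUT output shown later in this entry; the polling loop and everything else are assumptions, not the installed interlock code.

# Sketch only -- not the installed interlock code.
import time
from epics import caget

OUTAGE_HOLDOFF = 30.0   # seconds on battery before a controlled shutdown

def wait_for_extended_outage(poll=2.0):
    on_battery_since = None
    while True:
        status = caget('C1:Vac-UPS120V_status', as_string=True) or ''
        if 'OL' in status:                    # on line power
            on_battery_since = None
        else:                                  # on battery, e.g. 'OB DISCHRG'
            if on_battery_since is None:
                on_battery_since = time.time()
            elif time.time() - on_battery_since > OUTAGE_HOLDOFF:
                return                         # caller closes valves and spins pumps down
        time.sleep(poll)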

Channel                    Type      Description            Units
C1:Vac-UPS120V_status      stringin  Operational status     -
C1:Vac-UPS120V_battery     ai        Battery remaining      %
C1:Vac-UPS120V_line_volt   ai        Input line voltage     V
C1:Vac-UPS120V_line_freq   ai        Input line frequency   Hz
C1:Vac-UPS240V_status      stringin  Operational status     -
C1:Vac-UPS240V_battery     ai        Battery remaining      %
C1:Vac-UPS240V_line_volt   ai        Input line voltage     V
C1:Vac-UPS240V_line_freq   ai        Input line frequency   Hz

These new readbacks are visible in the MEDM vacuum control/monitor screens, as circled in Attachment 1:

Continuing issues with 230V UPS

Yesterday I brought with me a custom power cable for the 230V UPS. It adapts from a 208/120V three-phase outlet (L21-20R) to a standard outlet receptacle (5-15P) which can mate with the UPS's C14 power cable. I installed the cable and confirmed that, at the UPS end, 208V AC was present split-phase (i.e., two hot wires separated 120 deg in phase, each at 120V relative to ground). This failed to power on the unit. Then Jordan showed up and suggested to try powering it instead from a single-phase 240V outlet (L6-20R). However we found that the voltage present at this outlet was exactly the same as what the adapter cable provides: 208V split-phase.

This UPS nominally requires 230V single-phase. I don't understand well enough how the line-noise-isolation electronics work internally, so I can think of three possible explanations:

  1. 208V AC is insufficient to power the unit.
  2. The unit requires a true neutral wire (i.e., not a split-phase configuration), in which case it is not compatible with the U.S. power grid.
  3. The unit is defective.

I called Tripp Lite technical support. They thought the unit should work as powered in the configuration I described, so this leads me to suspect #3.

@Chub and Jordan: Can you please look into somehow replacing this unit, potentially with a U.S.-specific model? Let's stick with the Tripp Lite brand though, as I already have developed the code to interface those.

UPS-host computer communications

Unlike our older equipment, which communicates serially with the host via RS232/485, the new UPS units can be connected with a USB 3.0 cable. I found a great open-source package for communicating directly with the UPS from within Python, Network UPS Tools (NUT), which eliminates the dependency on Tripp Lite's proprietary GUI. The package is well documented, supports hundreds of power-management devices, and is available in the Debian package manager from Jessie (Debian 8) up. It consists of a large set of low-level, device-specific drivers which communicate with a "server" running as a systemd service. The NUT server can then be queried using a uniform set of programming commands across a huge number of devices.

I document the full set-up procedure below, as we may want to use this with more USB devices in the future.

How to set up

First, install the NUT package and its Python binding:

$ sudo apt install nut python-nut

This automatically creates (and starts) a set of systemd processes which fail, as expected, since we have not yet set up the config files defining our USB devices. Stop these services, delete their default definitions, and replace them with the modified definitions from the vacuum git repo:

$ sudo systemctl stop nut-*.service
$ sudo rm /lib/systemd/system/nut-*.service
$ sudo cp /opt/target/services/nut-*.service /etc/systemd/system
$ sudo systemctl daemon-reload

Next copy the NUT config. files from the vacuum git repo to the appropriate system location (this will overwrite the existing default ones). Note that the file ups.conf defines the UPS device(s) connected to the system, so for setups other than c1vac it will need to be edited accordingly.

$ sudo cp /opt/target/services/nut/* /etc/nut

Now we are ready to start the NUT server, and then enable it to automatically start after reboots:

$ sudo systemctl start nut-server.service
$ sudo systemctl enable nut-server.service

If it succeeds, the start command will return without printing any output to the terminal. We can test the server by querying all the available UPS parameters with

$ upsc 120v

which will print to the terminal screen something like

battery.charge: 100
battery.runtime: 1215
battery.type: PbAC
battery.voltage: 13.5
battery.voltage.nominal: 12.0
device.mfr: Tripp Lite 
device.model: Tripp Lite UPS 
device.type: ups
driver.name: usbhid-ups
driver.parameter.pollfreq: 30
driver.parameter.pollinterval: 2
driver.parameter.port: auto
driver.parameter.productid: 2010
driver.parameter.vendorid: 09ae
driver.version: 2.7.2
driver.version.data: TrippLite HID 0.81
driver.version.internal: 0.38
input.frequency: 60.1
input.voltage: 120.3
input.voltage.nominal: 120
output.frequency.nominal: 60
output.voltage.nominal: 120
ups.beeper.status: enabled
ups.delay.shutdown: 20
ups.mfr: Tripp Lite 
ups.model: Tripp Lite UPS 
ups.power.nominal: 1000
ups.productid: 2010
ups.status: OL
ups.timer.reboot: 65535
ups.timer.shutdown: 65535
ups.vendorid: 09ae
ups.watchdog.status: 0

Here 120v is the name assigned to the 120V UPS device in the ups.conf file, so it will vary for setups on other systems.

If all succeeds to this point, what we have set up so far is a set of command-line tools for querying (and possibly controlling) the UPS units. To access this functionality from within Python scripts, a set of official Python bindings is provided by the python-nut package. However, at the time of writing, these bindings only exist for Python 2.7. For Python 3 applications (like the vacuum system), I have created a Python 3 translation which is included in the vacuum git repo. Refer to the UPS readout script for an illustration of its usage.
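
For illustration, a minimal sketch of what such a readout loop can look like, assuming the Python 3 translation exposes the standard PyNUT-style client. The class and method names follow the upstream python-nut binding and the key/value types are assumed to be plain strings; this is not the actual UPS readout script.

# Sketch only -- not the actual UPS readout script.
import time
import PyNUT                    # assumed import name of the Python 3 translation
from epics import caput

client = PyNUT.PyNUTClient()    # talks to the local NUT server

while True:
    ups = client.GetUPSVars('120v')       # '120v' as named in ups.conf
    caput('C1:Vac-UPS120V_status',    ups['ups.status'])
    caput('C1:Vac-UPS120V_battery',   float(ups['battery.charge']))
    caput('C1:Vac-UPS120V_line_volt', float(ups['input.voltage']))
    caput('C1:Vac-UPS120V_line_freq', float(ups['input.frequency']))
    time.sleep(10)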

Attachment 1: vac_medm.png
  14456   Fri Feb 15 11:58:45 2019   Jon   Update   VAC   Vac system is back up

The problem encountered with the vac controls was indeed resolved via the recommendation I posted yesterday. The Acromags had gone into a protective state (likely caused by an electrical transient in one of the signals) that could only be cleared by power cycling the units. After resetting the system, the main volume pressure dropped quickly and is now < 2e-5 torr, so normal operations can resume. For future reference, below is the procedure to safely reset these units from a trouble state.

Vacromag Reset Procedure

  • TP2 and TP3 can be left running, but isolate them by closing valves V4 and V5.
  • TP1 can also be left running, but manually flip the operation mode on the front of the controller from REMOTE to LOCAL. This prevents the pump from receiving a "stop" command when its control Acromag shuts down.
  • Close all the pneumatic valves in the system (they'll otherwise close automatically when their control Acromags shut down).
  • On c1vac, stop the modbusIOC service. Sometimes this takes ~1 min to actually terminate.
  • Turn off the Acromags by flipping the "24 V" switch on the back of the chassis.
  • Wait ~10 sec, then turn them back on.
  • Start the modbusIOC service. It may take up to ~1 min for all the readings on the MEDM screen to initialize.
  • Ensure that the rotation speeds of TP1/2/3 are all still nominal.
  • If pumps are OK, open V4, V5, and V7, then open V1. This restores the system to the "Maximum pumping speed" state.
  • Flip the TP1 controller operation state back to REMOTE.
  14458   Fri Feb 15 18:41:18 2019   rana   Update   VAC   Vac system is back up

If the acromags lock up whenever there is an electrical spike, shouldn't we have them on UPS to smooth out these ripples? And wasn't the idea to have some handshake/watchdog system to avoid silently dying computers?

Quote:

The problem encountered with the vac controls was indeed resolved via the recommendation I posted yesterday. The Acromags had gone into a protective state (likely caused by an electrical transient in one of the signals)

  14460   Fri Feb 15 19:50:09 2019   rana   Update   VAC   Vac system is back up

The acromags are on the UPS. I suspect the transient came in on one of the signal lines. Chub tells me he unplugged one of the signal cables from the chassis around the time things died on Monday, although we couldn't reproduce the problem doing that again today.

In this situation it wasn't the software that died, but the Acromag units themselves. I have an idea to detect future occurrences using a "blinker" signal: one Acromag outputs a periodic signal which is directly sensed by another Acromag. This can be implemented as another polling condition enforced by the interlock code.
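
A minimal sketch of the blinker as a polling condition (both channel names are hypothetical; the real check would live inside the interlock code's condition framework):

# Sketch only -- hypothetical channel names.
import time
from epics import caget, caput

def acromags_alive():
    """Drive a heartbeat on one Acromag output and confirm another Acromag's
    input, physically wired to it, follows. If the units have locked up,
    the sensed input stops tracking the commanded output."""
    new_state = 0 if caget('C1:Vac-blinker_out') else 1
    caput('C1:Vac-blinker_out', new_state)
    time.sleep(0.5)                       # allow the loopback to settle
    return caget('C1:Vac-blinker_in') == new_state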

Quote:

If the acromags lock up whenever there is an electrical spike, shouldn't we have them on UPS to smooth out these ripples? And wasn't the idea to have some handshake/watchdog system to avoid silently dying computers?

Quote:

The problem encountered with the vac controls was indeed resolved via the recommendation I posted yesterday. The Acromags had gone into a protective state (likely caused by an electrical transient in one of the signals)

 

  14387   Mon Jan 7 11:54:12 2019   Jon   Configuration   Computer Scripts / Programs   Vac system shutdown

I'm making a controlled shutdown of the vac controls to add new ADC channels. Will advise when it's back up.

  14388   Mon Jan 7 19:21:45 2019   Jon   Configuration   Computer Scripts / Programs   Vac system shutdown

ADC work finished for the day. The vac controls are back up, with all valves CLOSED and all pumps OFF.

Quote:

I'm making a controlled shutdown of the vac controls to add new ADC channels. Will advise when it's back up.

 

  3343   Sat Jul 31 22:35:01 2010   Koji   Update   VAC   Vac-P1 still 1.2 mtorr

I resumed the pumping from 19:00.

Now the valve RV1 is fully open, but the pumping is really slow as we are using only one RP.

After 3 hrs of pumping, P1 reached 1.2 mtorr, but we still need ~2 more hours of pumping...

I stopped pumping at 22:30.

  14452   Thu Feb 14 15:37:35 2019   gautam   Update   VAC   Vacromag failure

[chub, gautam]

Summary:

One of the XT1111 units (XT1111a) in the new vacuum system has malfunctioned. So all valves are closed, PSL shutter is also closed, until this is resolved.

Details:

  1. Chub alerted me he had changed the main N2 line pressure, but this did not show up in the trend data. In fact, the trend data suggested that all 3 N2 gauges had stopped logging data (they just held the previous value) since sometime on Monday, see Attachment #1.
  2. We verified that the gauges were being powered, and that the analog voltage outputs of the gauges made sense in the drill press room ---> so this suggested something was wrong at the vacuum electronics rack.
  3. Went to the vacuum rack, saw no obvious indicator lights signalling a fault.
  4. So I restarted the modbus process on c1vac using sudo systemctl restart modbusIOC.service. The way Jon has this set up, this service controls all the sub-processes talking to gauges and TPs, so restarting this master process should have brought everything back.
  5. This tripped the interlock, and all valves got closed.
  6. Once the modbus service restarted, most things came back normally. However, V1, V3, V4 and V5 readbacks were listed as "UNDEF".
  7. The way the interlock code works, it checks a valve state change request against the monitor channel, so all these valves could not be opened.
  8. We confirmed that the valves themselves were operational by bypassing the interlock logic and directly actuating on the valve - but this is not a safe way of running overnight, so we decided to shut everything down.
  9. We also confirmed that the problem is with one particular Acromag unit - switching the readback Dsub connector to another channel (e.g. V1 --> VM2) showed the expected readback.
  10. As a further check - I connected a Windows laptop with the Acromag software installed to the suspected XT1111 - it reported an error message saying "USB device may be damaged". Plugging into another XT1111 in the crate, I was able to access the unit in the normal way.
  11. The phoenix connector architecture of the Acromags makes it possible to replace this single unit (we have spare XT1111 units) without disturbing the whole system - so barring objections, we plan to do this at 9am tomorrow. The replacement plan is summarized in Attachment #2.

Pressure of the main volume seems to have stabilized - see Attachment #3, so it should be fine to leave the IFO in this state overnight.

Questions:

  1. What caused the original failure of the writing to the ADC channels hooked up to the N2 gauges? There isn't any logging setup from the modbus processes afaik.
  2. What caused the failure of the XT1111? What is the failure mode even? Because some other channels on the same XT1111 are working...
  3. Was it user error? The only operation carried out by me was restarting the modbus services - how did this damage the readback channels for just four valves? I think Chub also re-arranged some wires at the end, but unplugging/re-connecting some cables shouldn't produce this kind of response...

The whole point of the upgrade was to move to a more reliable system - but it seems quite flaky already.

Attachment 1: Screenshot_from_2019-02-14_15-40-36.png
Attachment 2: IMG_7320.JPG
Attachment 3: Screenshot_from_2019-02-14_20-43-15.png
  14453   Thu Feb 14 18:16:24 2019   Jon   Update   VAC   Vacromag failure

I sent Gautam instructions to first try stopping the modbus service, power cycling the Acromag chassis, then restarting the service. I've seen the Acromags go into an unresponsive state after a strong electrical transient or shorted signal wires, and the unit has to be power cycled to be reset.

If this doesn't resolve it, I'll come in tomorrow to help with the Acromag replacement. We have plenty of spares.

Quote:

[chub, gautam]

Sumary:

One of the XT1111 units (XT1111a) in the new vacuum system has malfunctioned. So all valves are closed, PSL shutter is also closed, until this is resolved.

 

  14309   Mon Nov 19 23:38:41 2018   Jon   Omnistructure   Vacuum Acromag Channel Assignments

I've completed bench testing of all seven vacuum Acromags installed in a custom rackmount chassis. The system contains five XT1111 modules (sinking digital I/O) used for readbacks of the state of the valves, TP1, CP1, and the RPs. It also contains two XT1121 modules (sourcing digital I/O) used to pass 24V DC control signals to the AC relays actuating the valves and RPs. The list of Acromag channel assignments is attached.

I tested each input channel using a manual flip-switch wired between signal pin and return, verifying the EPICS channel readout to change appropriately when the switch is flipped open vs. closed. I tested each output channel using a voltmeter placed between signal pin and return, toggling the EPICS channel on/off state and verifying the output voltage to change appropriately. These tests confirm the Acromag units all work, and that all the EPICS channels are correctly addressed.

Attachment 1: Binary_IO_Channel_Assignments.pdf
  14296   Wed Nov 14 21:34:44 2018   Jon   Omnistructure   Vacuum Acromags installed and tested

All 7 Acromag units are now installed in the vacuum chassis. They are connected to 24V DC power and Ethernet.

I have merged and migrated the two EPICS databases from c1vac1 and c1vac2 onto the new machine, with appropriate modifications to address the Acromags rather than VME crate.

I have tested all the digital output channels with a voltmeter, and some of the inputs. Still more channels to be tested.

I’ll follow up with a wiring diagram for channel assignments.

Attachment 1: IMG_3003.jpg
  14375   Thu Dec 20 21:29:41 2018   Jon   Omnistructure   Upgrade   Vacuum Controls Switchover Completed

[Jon, Chub, Koji, Gautam]

Summary

Today we carried out the first pumpdown with the new vacuum controls system in place. It performed well. The only problem encountered was with software interlocks spuriously closing valves as the Pirani gauges crossed 1E-4 torr. At that point their readback changes from a number to "L OE-04", which the system interpreted as a gauge failure instead of "<1E-4". This posed no danger and was fixed on the spot. The main volume was pumped to ~10 torr using roughing pumps 1 and 3. We were limited only by time, as we didn't get started pumping the main volume until after 1 pm. The three turbo pumps were also run and tested in parallel, but were isolated to the pumpspool volume. At the end of the day, we closed every pneumatic valve and shut down all five pumps. The main volume is sealed off at ~10 torr, and the pumpspool volume is at ~1e-6 torr. We are leaving the system parked in this state for the holidays.
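
For reference, a minimal sketch of a readback parser that treats the gauge's under-range string as "<1E-4 torr" rather than as a failure. The exact reply strings and the handling of other fault codes are assumptions beyond what is described above, so this only illustrates the idea of the on-the-spot fix.

# Sketch only -- not the deployed fix.
def parse_pirani(reply):
    """Return the pressure in torr; map an 'L OE-04'-style under-range reply
    to the 1e-4 torr floor, and return None only for a truly unreadable gauge."""
    reply = reply.strip()
    if reply.upper().startswith('L'):     # gauge reads below its 1e-4 torr range
        return 1e-4
    try:
        return float(reply)
    except ValueError:
        return None                       # treat as a genuine gauge failure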

Main Volume Pumpdown Procedure

In pumping down the main volume, we carried out the following procedure.

  1. Initially: All valves closed (including manual valves RV1 and VV1); all pumps OFF.
  2. Manually connected roughing pump line to pumpspool via KF joint.
  3. Turned ON RP1 and RP2.
  4. Waited until roughing pump line pressure (PRP) < 0.5 torr.
  5. Opened V3.
  6. Waited until roughing pump line pressure (PRP) < 0.5 torr.
  7. Manually opened RV1 throttling valve to main volume until pumpdown rate reached ~3 torr/min (~3 hours on roughing pumps).
  8. Waited until main volume pressure (P1a/P1b) < 0.5 torr.

We didn't quite reach the end of step 8 by the time we had to stop. The next step would be to valve out the roughing pumps and to valve in the turbo pumps.

Hardware & Channel Assignments

All of the new hardware is now permanently installed in the vacuum rack. This includes the SuperMicro rack server (c1vac), the IOLAN serial device server, a vacuum subnet switch, and the Acromag chassis. Every valve/pump signal cable that formerly connected to the VME bus through terminal blocks has been refitted with a D-sub connector and screwed directly onto feedthroughs on the Acromag chassis.

The attached pdf contains the master list of assigned Acromag channels and their wiring.

Attachment 1: 40m_vacuum_acromag_channels.pdf
  14493   Thu Mar 21 18:36:59 2019   Jon   Omnistructure   Upgrade   Vacuum Controls Switchover Completed

Updated vac channel list is attached. There are several new ADC channels.

Quote:

Hardware & Channel Assignments

All of the new hardware is now permanently installed in the vacuum rack. This includes the SuperMicro rack server (c1vac), the IOLAN serial device server, a vacuum subnet switch, and the Acromag chassis. Every valve/pump signal cable that formerly connected to the VME bus through terminal blocks has been refitted with a D-sub connector and screwed directly onto feedthroughs on the Acromag chassis.

The attached pdf contains the master list of assigned Acromag channels and their wiring.

Attachment 1: 40m_Vacuum_Acromag_Channels_20190321.pdf
  14315   Sun Nov 25 17:41:43 2018   Jon   Omnistructure   Vacuum Controls Upgrade - Status and Plans

New hardware has been installed in the vacuum controls rack. It is shown in the below post-install photo.

  • Supermicro server (c1vac) which will be replacing c1vac1 and c1vac2.
  • 16-port Ethernet switch providing a closed local network for all vacuum devices.
  • 16-port IOLAN terminal server for multiplexing/Ethernetizing all RS-232 serial devices.

Below is a high-level summary of where things stand, and what remains to be done.

Completed:

 Set up of replacement controls server (c1vac).

  • Supermicro 1U rackmount server, running Debian 8.5.
  • Hosting an EPICS modbus IOC, scripted to start/restart automatically as a system service.
  • First Ethernet interface put on the martian network at 192.168.113.72.
  • Second Ethernet interface configured to host a LAN at 192.168.114.xxx for communications with all vacuum electronics. It connects to a 16-port Ethernet switch installed in the vacuum electronics rack.
  • Server installed in vacuum electronics rack (see photo).

 Set up of Acromag terminals.

  • 6U rackmount chassis frame assembled; 15V DC, 24V DC, and Ethernet wired.
  • Acromags installed in chassis and configured for the LAN (5 XT1111 units, 2 XT1121 units).

 EPICS database migration.

  • All vacuum channels moved to the modbus IOC, with the database updated to address the new Acromags. [The new channels are running concurrently at "C1:Vac2-...." to avoid conflict with the existing system.]
  • Each hard channel was individually tested on the electronics bench to confirm correct addressing and Acromag operation.

 Set up of 16-port IOLAN terminal server (for multiplexing/Ethernetizing the serial devices).

  • Configured for operation on the LAN. Each serial device port is assigned a unique IP address, making the terminal server transparent to client TCP applications.
  • Most of the pressure gauges are now communicating with the controls server via TCP.

Ongoing this week:

  • [Jon] Continue migrating serial devices to ports on the terminal server. Still left are the turbo pumps, N2 gauge, and RGA.
  • [Jon] Continue developing Python code for communicating with gauges and pumps via TCP sockets (a minimal sketch follows this list). A beta version of gauge readout code is running now.
  • [Chub] Install feedthrough panels on the Acromag chassis. Connect the wiring from feedthrough panels to the assigned Acromag slots.
  • [Chub/Jon] Test all the hard EPICS channels on the electronics bench, prior to installing the crate in the vacuum rack.
  • [Chub/Jon] Install the crate in the vacuum rack; connect valve/pump readbacks and actuators; test each hard EPICS channel in situ.
  • [Jon] Once all the signal connections have been made, in situ testing of the Python interlock code can begin.
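
A minimal sketch of the kind of TCP gauge query mentioned in the second item above. The IOLAN port address, TCP port, and ASCII command are placeholders, not the actual gauge protocol; the point is only that the terminal server makes each serial device look like a plain TCP socket.

# Sketch only -- placeholder address, port, and command string.
import socket

def read_gauge(ip='192.168.114.20', port=4001, command=b'PR1\r\n'):
    """Open a TCP connection to one IOLAN serial port, send a read command,
    and return the raw reply."""
    with socket.create_connection((ip, port), timeout=2.0) as sock:
        sock.sendall(command)
        return sock.recv(256).decode(errors='replace').strip()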
Attachment 1: rack_photo.jpg
  13179   Wed Aug 9 16:34:46 2017   rana   Update   VAC   Vacuum Document recovered

Steve and I found the previous draft of the 40m Vacuum Document. Someone in 2015 had browsed into the Docs history and then saved the old 2013 version as the current one.

We restored the version from 2014 which has all of Steve's edits. I have put that version (which is now the working copy) into the DCC:  https://dcc.ligo.org/E1500239.

The latest version is in our Google Docs place as usual. Steve is going to have a draft ready for us to read by Tuesday, so please take a look then and we can discuss what needs doing at next Wednesday's 40m meeting.

  16508   Wed Dec 15 15:06:08 2021   Jordan   Update   VAC   Vacuum Feedthru Install

Jordan, Chub

We installed the 4x DB25 feedthru flange on the North-West port of ITMX chamber this afternoon. It is ready to go.

  9017   Fri Aug 16 09:35:18 2013   Steve   Update   VAC   Vacuum Normal state recognition is back

Quote:

Quote:

Quote:

Quote:

Apparently all of the ION pump valves (VIPEE, VIPEV, VIPSV, VIPSE) opened, which vented the main volume up to 62 mTorr.  All of the annulus valves (VAVSE, VAVSV, VAVBS, VAVEV, VAVEE) also appeared to be open.  One of the roughing pumps was also turned on.  Other stuff we didn't notice?  Bad. 

 Several of the suspensions were kicked pretty hard (600+ mV on some sensors) as a result of this quick vent wind.  All of the suspensions are damped now, so it doesn't look like we suffered any damage to suspensions.

CLOSE CALL on the vacuum system:

Jamie and I disabled V1, VM2 and VM3 gate valves by disconnecting their 120V solenoid actuator before the swap of the VME crate.

The vacuum controller unexpectedly lost control during the swap, as Jamie described. We were lucky not to do any damage! The ion pumps were cold and clean. We have not used them for years, so their outgassing possibly accumulated to reach ~10-50 Torr.

I disconnected, immobilized, and labelled the following 6 valves: the 4 large ion pump gate valves and VC1, VC2 of the cryo pump. Note: the valves on the cryo pump stayed closed. It is crucial that a warm cryo pump is kept closed!

This will not allow the same thing to happen again and protect the IFO from warm cryo contamination.

The downside of this is that the computer can no longer identify vacuum states.

This vacuum system badly needs an upgrade. I will make a list.

 While I was doing the oil change of the roughing pumps I accidentally touched the 24 V adjustment knob on the power supply.

All valves closed to the default condition. I realized that the current indicator was red at 0.2 A and the voltage fluctuated from 3-13 V.

Increased the current limit to 0.4 A and set the voltage to 24 V. I think this was the reason for the chaos of valve switching during the VME swap.

 

 Based on the facts above I reconnected VC1 and VC2 valves.  State recognition is working.  Ion pumps are turned off and their gate valves are disabled. 

We learned that even with their gate valves closed off while at atmosphere, the ion pumps outgas hydrocarbons at the 1e-6 Torr level. We have not used them for this reason in the past 9 years.

 

I need help with implementing a V1 interlock triggered by the Maglev failure signal and/or P2 pressure.

MEDM screen agrees with vacuum rack signs.

Attachment 1: VacuumNormal.png
Attachment 2: vacValvesDisabled.jpg
  16980   Fri Jul 8 14:03:33 2022 JCHowToVACVacuum Preparation for Power Shutdown

[Koji, JC]

Koji and I have prepared the vacuum system for the power outage on Saturday.

  1. Closed V1 to isolate the main volume.
  2. Closed off VASE, VASV, VABSSCI, VABS, VABSSCO, VAEV, and VAEE.
  3. Closed V6, then closed VM3 to isolate the RGA.
  4. Turn off TP1 (you must check the RPMs on the TP1 Turbo Controller Module; a monitoring sketch follows after this list).
  5. Close V5
  6. Turn off TP3 (There is no way to check the RPMs, so be patient)
  7. Close V4 (System State changes to 'All pneumatic valves are closed').
  8. Turn off TP2 (There is no way to check the RPMs, so be patient)
  9. Close the vacuum valves (on TP2 and TP3) which connect to the AUX pump.
  10. Turn off the AUX pump at the breaker-switch wall plug.
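The RPM check mentioned in step 4 could in principle be scripted; below is a minimal sketch, assuming pyepics on c1vac and a TP1 rotation-speed readback channel (the channel name and the "stopped" threshold are assumptions, not taken from the live database):

# TP1 spin-down monitor sketch (channel name and threshold are assumptions)
from epics import caget
import time

TP1_SPEED = 'C1:Vac-TP1_rot'      # hypothetical rotation-speed channel (krpm)

while True:
    speed = caget(TP1_SPEED)
    print('TP1 speed: %s krpm' % speed)
    if speed is not None and speed < 0.1:   # effectively stopped
        break
    time.sleep(10)
print('TP1 has spun down.')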

From here, we shut down the electronics.

  1. Run /sbin/shutdown -h now on c1vac to shut the host down.
  2. Manually turn off power to electronic modules on the rack.
    • GP316a
    • GP316b
    • Vacuum Acromags
    • PTP3
    • PTP2
    • TP1
    • TP2 (Unplugged)
    • TP3 (Unplugged)

 

Attachment 1: Screen_Shot_2022-07-12_at_7.02.14_AM.png
Screen_Shot_2022-07-12_at_7.02.14_AM.png
  14308   Mon Nov 19 22:45:23 2018 JonOmnistructure Vacuum System Subnetwork

I've set up a closed subnetwork for interfacing the vacuum hardware (Acromags and serial devices) with the new controls machine (c1vac; 192.168.113.72). The controls machine has two Ethernet interfaces, one which faces outward into the martian network and another which faces the internal subnetwork, 192.168.114.xxx. The second network interface was configured via the following procedure.

1. Add the following lines to /etc/network/interfaces:

allow-hotplug eth1
iface eth1 inet static
address 192.168.114.9
netmask 255.255.255.0

2. Restart the networking services:

$sudo /etc/init.d/networking restart

3. Enable DNS lookup on the martian network by adding the following lines to /etc/resolv.conf:

search martian
nameserver 192.168.113.104

4. Enable IP forwarding from eth1 to eth0:

$sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'

5. Configure IP tables to allow outgoing connections, while keeping the LAN invisible from outside the gateway (c1vac):

$sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
$sudo iptables -A FORWARD -i eth0 -o eth1 -m state --state RELATED,ESTABLISHED -j ACCEPT
$sudo iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT

6. Finally, because the EPICS 3.14 server binds to all network interfaces, client applications running on c1vac now see two instances of the EPICS server---one at the outward-facing address and one at the LAN address. To resolve this ambiguity, two additional environment variables must be set that specify to local clients which server address to use. Add the following lines to /home/controls/.bashrc:

export EPICS_CA_AUTO_ADDR_LIST=NO
export EPICS_CA_ADDR_LIST=192.168.113.72
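A quick sanity check that the client-side configuration works (a sketch assuming pyepics is installed; C1:Vac-P1_pressure is used only as an example channel):

# Confirm that a CA client on c1vac resolves the server via EPICS_CA_ADDR_LIST
from epics import caget
value = caget('C1:Vac-P1_pressure', timeout=5.0)
print('P1 pressure readback:', value)   # a numeric value means the connection works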

A list of IP addresses so far assigned on the subnetwork follows.

Device              IP Address
Acromag XT1111a     192.168.114.1
Acromag XT1111b     192.168.114.2
Acromag XT1111c     192.168.114.3
Acromag XT1111d     192.168.114.4
Acromag XT1111e     192.168.114.5
Acromag XT1121a     192.168.114.6
Acromag XT1121b     192.168.114.7
Perle IOLAN SDS16   192.168.114.8
c1vac               192.168.114.9
  5180   Wed Aug 10 22:47:22 2011 ranaSummaryVACVacuum Workstation (linux3) re-activated

For some reason the workstation at the vac rack was off and unplugged. Nicole and I plugged its power back in to the EX rack.

I turned it on and it booted up fine; it's not dead. To get it onto the network I just made the conversion from 131.215 to 192.168 that Joe had done on all the other computers several months ago.

Now it is showing the Vacuum overview screen correctly again and so Steve no longer has to monopolize one of the Martian laptops over there.

  11352   Wed Jun 10 15:54:14 2015 SteveUpdateVACVacuum comp. rebooted

Koji and Steve succeeded in rebooting c1vac1 and c1vac2; the pressure readings are working now.

More tomorrow .........

 

Attachment 1: afterReb061015.png
afterReb061015.png
  11353   Thu Jun 11 19:40:59 2015 KojiUpdateVACVacuum comp. rebooted

The serial connections to the vacuum gauges were recovered by rebooting c1vac1 and c1vac2.

Steve claimed that the vacuum screen had shown "NO COMM" at the vacuum pressure values.
The EPICS connection to c1vac was fine. We could log in to c1vac1 with telnet too, although c1vac2 had no response.

After some inspection, we decided to reboot the slow machines. Steve manually XXXed YYY valves (to be described)
to prepare for any possible unwanted switching. Initially Koji thought only c1vac2 could be rebooted, but that was wrong:
if the reset button is pushed, all of the modules on the same crate are reset. So everything was reset. After ~3 min we still
did not have the connection to c1vac1 restored, so we decided to do another reboot. This time I pushed the c1vac1 reset button.
After waiting about two minutes, the ADCs started to show green lights and the switch box started scanning.
We recovered the telnet connection to c1vac1 and the EPICS functions. c1vac2 is still not responding to telnet, and
the values associated with c1vac2 are still blank.

Steve restored the valves and everything was back to normal.

  11354   Fri Jun 12 08:40:17 2015 SteveUpdateVACVacuum comp. rebooted

Koji and Steve,

One computer expert and one vacuum expert required.

Quote:

The serial connections to the vacuum gauges were recovered by rebooting c1vac1 and c1vac2.

Steve claimed that the vacuum screen had shown "NO COMM" at the vacuum pressure values.
The EPICS connection to c1vac was fine. We could log in to c1vac1 with telnet too, although c1vac2 had no response.

After some inspection, we decided to reboot the slow machines. Steve manually XXXed YYY valves (to be described)
to prepare for any possible unwanted switching. Initially Koji thought only c1vac2 could be rebooted, but that was wrong:
if the reset button is pushed, all of the modules on the same crate are reset. So everything was reset. After ~3 min we still
did not have the connection to c1vac1 restored, so we decided to do another reboot. This time I pushed the c1vac1 reset button.
After waiting about two minutes, the ADCs started to show green lights and the switch box started scanning.
We recovered the telnet connection to c1vac1 and the EPICS functions. c1vac2 is still not responding to telnet, and
the values associated with c1vac2 are still blank.

Steve restored the valves and everything was back to normal.

Atm 1, problem condition: the gauges have not been reading for a week, the error message is "NO COMM", and all computer LEDs are green

Atm 2, preparation for a safe reboot:

            a, close V1, disconnect its power cable and turn off the Maglev, wait till rotation stops

            b, close the PSL shutter (take adrenaline if needed)

            c, close the V4, V5, VA6 valves and disconnect their cables. A "Moving" error message indicates this condition.

               V1 is not showing "Moving" because only its power cable is disconnected! It would show it if its position indicator cable were disconnected too. There is no need for that.

               With these valves closed and disabled, accidental venting of the main volume is not possible.

            d, push reset; resetting c1vac2 will reset c1vac1 also; wait ~6 minutes

"Vacuum Normal" valve configuration was restored after succesful reboot as follows:

             a, reconnect the cables and open V4 and V5 when P2 & P3 < 1e-1 Torr

             b, observe that P2 < 1e-3 Torr and restart the Maglev

             c, wait till the Maglev reaches its full speed of 560 Hz, then reconnect and open V1

             d, reconnect and open VA6 when P3 < 1e-3 Torr

NOTE: the VM1 valve was locked in the open position and was not responding before or after the reboot

          The error message in Atm 2 indicates this locked condition: "opening VM1 will vent IFO"

          This is a false message. The valve is frozen in the open position. We need a software expert's help.

 

 

Attachment 1: vacMonNoGauges.png
vacMonNoGauges.png
Attachment 2: prepReboot.png
prepReboot.png
  1673   Mon Jun 15 15:17:33 2009 josephb, SteveConfigurationVACVacuum control and monitor screens

We updated the vacuum control and monitor screens  (C0VAC_MONITOR.adl and C0VAC_CONTROL.adl).  We also updated the /cvs/cds/caltech/target/c1vac1/Vac.db file.

1) We changed the C1:Vac-TP1_lev channel to the C1:Vac-TP1_ala channel, since it is now an alarm readback on the new turbo pump rather than an indication of levitation.  The logic for printing the "X" was changed from X is printed on a 1 (= OK status) to X is printed on a 0 (= problem status).  All references within the Vac.db file to C1:Vac-TP1_lev were changed.  The MEDM screens are also now labeled Alarm instead of Levitating.

2) We changed the text displayed by the CP1 channel (C1:Vac-CP1_mon in Vac.db) from "On" and "Off" to "Cold - On" and "Warm - OFF".

3) We restarted the c1vac1 front end as well as the framebuilder after these changes.

  15499   Thu Jul 23 15:58:24 2020 JonSummaryVACVacuum controls refurbishment plan

This year we've struggled with vacuum controls unreliability (e.g., spurious interlock triggers) caused by decaying hardware. Here are details of the vacuum refurbishment plan I described on the 40m call this week.

 Refurbish TP2 and TP3 dry pumps. Completed [ELOG 15417].

 Automated notifications of interlock-trigger events. Email to 40m list and a new interlock flag channel. Completed [ELOG 15424].

Replace failing UPS.

  • Two new Tripp Lite units on order, 110V and 230V [ELOG 15465].
  • Jordan will install them in the vacuum rack once received.
  • Once installed, Jon will come test the new units, set up communications, and integrate them into the interlock system following this plan [ELOG 15446].
  • Jon will move the pumps and other equipment to the new UPS units only after completing the above step.

Remove interlock dependencies on TP2/TP3 serial readbacks. Due to persistent glitching [ELOG 15140, ELOG 15392].

Unlike TP2 and TP3, the TP1 readbacks are real analog signals routed to Acromags. As these have caused us no issues at all, the plan is to eliminate dependence on the TP2/3 digital readbacks in favor of the analog controller outputs. All the digital readback channels will continue to exist, but the interlock system will no longer depend on them. This will require adding 2 new sinking BI channels each for TP2 and TP3 (for a total of 4 new channels). We have 8 open Acromag XT1111 channels in the c1vac system [ELOG 14493], so the new channels can be accommodated. The table below summarizes the proposed changes; a sketch of how the interlock might use the new channels follows after the table.

Channel              Type   Status   Description                                     Interlock
C1:Vac-TP1_current   AI     exists   Current draw (A)                                keep
C1:Vac-TP1_fail      BI     exists   Critical fault has occurred                     keep
C1:Vac-TP1_norm      BI     exists   Rotation speed is within +/-10% of set point    new
C1:Vac-TP2_rot       soft   exists   Rotation speed (krpm)                           remove
C1:Vac-TP2_temp      soft   exists   Temperature (C)                                 remove
C1:Vac-TP2_current   soft   exists   Current draw (A)                                remove
C1:Vac-TP2_fail      BI     new      Critical fault has occurred                     new
C1:Vac-TP2_norm      BI     new      Rotation speed is >80% of set point             new
C1:Vac-TP3_rot       soft   exists   Rotation speed (krpm)                           remove
C1:Vac-TP3_temp      soft   exists   Temperature (C)                                 remove
C1:Vac-TP3_current   soft   exists   Current draw (A)                                remove
C1:Vac-TP3_fail      BI     new      Critical fault has occurred                     new
C1:Vac-TP3_norm      BI     new      Rotation speed is >80% of set point             new
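As a sketch of how the interlock might consume the new channels (this is not the deployed interlock code; the channel polarities and the use of pyepics are assumptions):

# TP2 health check from the two new binary-input channels (sketch only)
from epics import caget

def tp2_ok():
    """True if TP2 shows no critical fault and is near its speed set point."""
    fail = caget('C1:Vac-TP2_fail')   # assumed: 1 = critical fault has occurred
    norm = caget('C1:Vac-TP2_norm')   # assumed: 1 = rotation speed > 80% of set point
    return (fail == 0) and (norm == 1)

if not tp2_ok():
    print('TP2 interlock condition not satisfied')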
  14419   Fri Jan 25 16:14:51 2019 gautamUpdateVACVacuum interlock code, N2 warning

I reset the remote of this git repo to the 40m version instead of Jon's personal one, to ensure consistency between what's on the vacuum machine and in the git repo. There is now an N2 checker Python mailer that will email the 40m list if all the tank pressures are below 600 PSI (>12 hours left for someone to react before the main N2 line pressure drops and the interlocks kick in). For now, the script just runs as a cron job every 3 hours, but perhaps we should integrate it with the interlock process?
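For reference, the checker amounts to something like the sketch below (an illustration only, assuming pyepics; the tank-pressure channel names, the mail command, and the address are placeholders, not the script actually installed on c1vac). A cron entry running it every 3 hours would reproduce the current behavior.

# N2 tank pressure checker sketch -- channel names and mail address are placeholders
import subprocess
from epics import caget

TANKS = ['C1:Vac-N2T1_pressure', 'C1:Vac-N2T2_pressure']   # hypothetical channels
THRESHOLD_PSI = 600

pressures = [caget(ch) for ch in TANKS]
if all(p is not None and p < THRESHOLD_PSI for p in pressures):
    msg = 'N2 tank pressures are all below %d PSI: %s' % (THRESHOLD_PSI, pressures)
    subprocess.run(['mail', '-s', '40m N2 warning', '40m-list@example.org'],
                   input=msg.encode())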

Quote:

All the python code running on c1vac is archived to the git repo: 

https://git.ligo.org/40m/vacpython
