ID   Date   Author   Type   Category   Subject
  15526   Fri Aug 14 10:10:56 2020   Jon   Configuration   VAC   Vacuum repairs today

The vac system is going down now for planned repairs [ELOG 15499]. It will likely take most of the day. Will advise when it's back up.

  15527   Sat Aug 15 02:02:13 2020   Jon   Configuration   VAC   Vacuum repairs today

Vacuum work is completed. The TP2 and TP3 interlocks have been overhauled as proposed in ELOG 15499 and seem to be performing reliably. We're now back in the nominal system state, with TP2 again backing for TP1 and TP3 pumping the annuli. I'll post the full implementation details in the morning.

I did not get to setting up the new UPS units. That will have to be scheduled for another day.

Quote:

The vac system is going down now for planned repairs [ELOG 15499]. It will likely take most of the day. Will advise when it's back up.

  15528   Sat Aug 15 15:12:22 2020   Jon   Configuration   VAC   Overhaul of small turbo pump interlocks

Summary

Yesterday I completed the switchover of small turbo pump interlocks as proposed in ELOG 15499. This overhaul altogether eliminates the dependency on RS232 readbacks, which had become unreliable (glitchy) in both controllers. In their place, the V4(5) valve-close interlocks are now predicated on an analog controller output whose voltage goes high when the rotation speed is >= 80% of the nominal setpoint. The critical speed is 52.8 krpm for TP2 and 40 krpm for TP3. There already exist hardware interlocks of V4(5) using the same signals, which I have also tested.

Interlock signal

Unlike the TP1 controller, which exposes simple relays whose open/closed states are sensed by Acromags, what the TP2(3) controllers output is an energized 24V signal for controlling such a relay (output circuit pictured below). I hadn't appreciated this difference and it cost me time yesterday. The ultimate solution was to route the signals through a set of new 24V Phoenix Contact relays installed inside the Acromag chassis. However, this required removing the chassis from the rack and bringing it to the electronics bench (rather than doing the work in situ, as I had planned). The relays are mounted to the second DIN rail opposite the Acromags. Each TP2(3) signal controls the state of a relay, which in turn is sensed using an Acromag XT1111.

Signal routing

The TP2(3) "normal-speed" signals are already in use by hardware interlocks of V4(5). Each signal is routed into the main AC relay box, where it controls an "interrupter" relay through which the Acromag control signal for the main V4(5) relay is passed. These signals are now shared with the digital controls system using a passive DB15 Y-splitter. The signal routing is shown below.

Interlock conditions

The new turbo-pump-related interlock conditions and their channel predicates are listed below. The full up-to-date channel list and wiring assignments for c1vac are maintained here.

Channel Type New? Interlock-triggering condition
C1:Vac-TP1_norm BI No Rotation speed < 90% nominal setpoint (29 krpm)
C1:Vac-TP1_fail BI No Critical fault occurrence
C1:Vac-TP1_current AI No Current draw > 4 A
C1:Vac-TP2_norm BI Yes Rotation speed < 80% nominal setpoint (52.8 krpm)
C1:Vac-TP3_norm BI Yes Rotation speed < 80% nominal setpoint (40 krpm)

There are two new channels, both of which provide a binary indication of whether the pump speed is outside its nominal range. I did not have enough 24V relays to also add the C1:Vac-TP2(3)_fail channels listed in ELOG 15499. However, these signals are redundant with the existing interlocks, and the existing serial "Status" readback will already print failure messages to the MEDM screens. All of the TP2(3) serial readback channels remain, which monitor voltage, current, operational status, and temperature. The pump on/off and low-speed mode on/off controls remain implemented with serial signals as well.
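For reference, the resulting software condition is just a predicate on the binary channels above. Below is a minimal sketch of the V4 case using pyepics; it is illustrative only (the real interlock service in the vac git repo is structured differently, and the V4 command channel name and the 0/1 polarity are assumptions):

from epics import caget, caput

# Sketch of the TP2 -> V4 software interlock predicate.
# Assumptions: C1:Vac-TP2_norm reads 0 when the rotation speed is below 80% of
# the nominal setpoint, and 'C1:Vac-V4' stands in for the real V4 command channel.
def tp2_below_normal_speed():
    return caget('C1:Vac-TP2_norm') == 0

def enforce_v4_interlock():
    if tp2_below_normal_speed():
        caput('C1:Vac-V4', 'CLOSE')  # close V4 if TP2 has fallen below normal speed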

The new analog readbacks have been added to the MEDM controls screens, circled below:

Other incidental repairs

  • I replaced the (dead) LED monitor at the vac controls console. In the process of finding a replacement, I came across another dead spare monitor as well. Both have been labeled "DEAD" and moved to Jordan's desk for disposal.
  • I found the current TP3 Varian V70D controller to be glitchy in its analog outputs as well. That likely indicates a problem with the microprocessor itself, not just the serial communications card as I had previously suspected. I replaced the controller with the spare unit, which was mounted right next to it in the rack [ELOG 13143]. The new unit has not glitched since I installed it around 10 pm last night.
  15537   Mon Aug 24 08:13:56 2020   Jon   Update   VAC   UPS installation

I'm in the lab this morning to interface the two new UPS units with the digital controls system. Will be out by lunchtime. The disruptions to the vac system should be very brief this time.

  15538   Mon Aug 24 11:25:07 2020   Jon   Update   VAC   UPS installation

I'm leaving the lab shortly. We're not ready to switch over the vac equipment to the new UPS units yet.

The 120V UPS is now running and interfaced to c1vac via a USB cable. The unofficial tripplite python package is able to detect and connect to the unit, but then read queries fail with "OS Error: No data received." The firmware has a different version number from what the developers say is known to be supported.

The 230V UPS is actually not correctly installed. For input power, it has a standard C14 inlet which is currently plugged into a 120V power strip. However, this unit has to be powered from a 230V outlet. We'll have to identify and buy the correct adapter cable.

With the 120V unit now connected, I can continue to work on interfacing it with python remotely. The next implementation I'm going to try is item #2 of this plan [ELOG 15446].

Quote:

I'm in the lab this morning to interface the two new UPS units with the digital controls system. Will be out by lunchtime. The disruptions to the vac system should be very brief this time.

  15541   Wed Aug 26 15:48:31 2020   gautam   Update   VAC   Control screen left open on vacuum workstation

I found that the control MEDM screen was left open on the c1vac workstation. This should be closed every time you leave the workstation, to avoid accidental button pressing and such.

The network outage meant that the EPICS data from the pressure gauges wasn't recorded until I reset everything ~noon. So there isn't really a plot of the outgassing/leak rate. But the pressure rose to ~2e-4 torr, over ~4 hours. The pumpdown back to nominal pressure (9e-6 torr) took ~30 minutes.

  15556   Fri Sep 4 15:26:55 2020   Jon   Update   VAC   Vac system UPS installation

The vac controls are going down now to pull and test software changes. Will advise when the work is completed.

  15557   Fri Sep 4 21:12:51 2020   Jon   Update   VAC   Vac system UPS installation

The vac work is completed. All of the vacuum equipment is now running on the new 120V UPS, except for TP1. The 230V TP1 is still running off wall power, as it always has. After talking with Tripp Lite support today, I believe there is a problem with the 230V UPS. I will post a more detailed note in the morning.

Quote:

The vac controls are going down now to pull and test software changes. Will advise when the work is completed.

  15558   Sat Sep 5 12:01:10 2020   Jon   Update   VAC   Vac system UPS installation

Summary

Yesterday's UPS switchover was mostly a success. The new Tripp Lite 120V UPS is fully installed and is communicating with the slow controls system. The interlocks are configured to trigger a controlled shutdown upon an extended power outage (> ~30 s), and they have been tested. All of the 120V pumpspool equipment (the full c1vac/LAN/Acromag system, pressure gauges, valves, and the two small turbo pumps) has been moved to the new UPS. The only piece of equipment which is not 120V is TP1, which is intended to be powered by a separate 230V UPS. However that unit is still not working, and after more investigation and a call to Tripp Lite, I suspect it may be defective. A detailed account of the changes to the system follow below.

Unfortunately, I think I damaged the Hornet (the only working cathode ionization gauge in the main volume) by inadvertently unplugging it while switching over equipment to the new UPS. The electronics are run from multiple daisy-chained power strips in the bottom of the rack and it is difficult to trace where everything goes. After the switchover, the Hornet repeatedly failed to activate (either remotely or manually) with the error "HV fail." Its compatriot, the Pirani SuperBee, also failed about a year ago under similar circumstances (or at least its remote interface did, making it useless for digital monitoring and control). I think we should replace them both, ideally with ones with some built-in protection against power failures.

New EPICS channels

Four new soft channels per UPS have been created, although the interlocks are currently predicated on only C1:Vac-UPS120V_status.

Channel Type Description Units
C1:Vac-UPS120V_status stringin Operational status -
C1:Vac-UPS120V_battery ai Battery remaining %
C1:Vac-UPS120V_line_volt ai Input line voltage V
C1:Vac-UPS120V_line_freq ai Input line frequency Hz
C1:Vac-UPS240V_status stringin Operational status -
C1:Vac-UPS240V_battery ai Battery remaining %
C1:Vac-UPS240V_line_volt ai Input line voltage V
C1:Vac-UPS240V_line_freq ai Input line frequency Hz

These new readbacks are visible in the MEDM vacuum control/monitor screens, as circled in Attachment 1:

Continuing issues with 230V UPS

Yesterday I brought with me a custom power cable for the 230V UPS. It adapts from a 208/120V three-phase outlet (L21-20R) to a standard outlet receptacle (5-15P) which can mate with the UPS's C14 power cable. I installed the cable and confirmed that, at the UPS end, 208V AC was present split-phase (i.e., two hot wires separated 120 deg in phase, each at 120V relative to ground). This failed to power on the unit. Then Jordan showed up and suggested to try powering it instead from a single-phase 240V outlet (L6-20R). However we found that the voltage present at this outlet was exactly the same as what the adapter cable provides: 208V split-phase.

This UPS nominally requires 230V single-phase. I don't understand well enough how the line-noise-isolation electronics work internally, so I can think of three possible explanations:

  1. 208V AC is insufficient to power the unit.
  2. The unit requires a true neutral wire (i.e., not a split-phase configuration), in which case it is not compatible with the U.S. power grid.
  3. The unit is defective.

I called Tripp Lite technical support. They thought the unit should work as powered in the configuration I described, so this leads me to suspect #3.

@Chub and Jordan: Can you please look into somehow replacing this unit, potentially with a U.S.-specific model? Let's stick with the Tripp Lite brand though, as I already have developed the code to interface those.

UPS-host computer communications

Unlike our older equipment, which communicates serially with the host via RS232/485, the new UPS units can be connected with a USB 3.0 cable. I found a great open-source package for communicating directly with the UPS from within Python, Network UPS Tools (NUT), which eliminates the dependency on Tripp Lite's proprietary GUI. The package is well documented, supports hundreds of power-management devices, and is available in the Debian package manager from Jessie (Debian 8) up. It consists of a large set of low-level, device-specific drivers which communicate with a "server" running as a systemd service. The NUT server can then be queried using a uniform set of programming commands across a huge number of devices.

I document the full set-up procedure below, as we may want to use this with more USB devices in the future.

How to set up

First, install the NUT package and its Python binding:

$ sudo apt install nut python-nut

This automatically creates (and starts) a set of systemd processes which are expected to fail, since we have not yet set up the config files defining our USB devices. Stop these services, delete their default definitions, and replace them with the modified definitions from the vacuum git repo:

$ sudo systemctl stop nut-*.service
$ sudo rm /lib/systemd/system/nut-*.service
$ sudo cp /opt/target/services/nut-*.service /etc/systemd/system
$ sudo systemctl daemon-reload

Next, copy the NUT config files from the vacuum git repo to the appropriate system location (this will overwrite the existing default ones). Note that the file ups.conf defines the UPS device(s) connected to the system, so for setups other than c1vac it will need to be edited accordingly.

$ sudo cp /opt/target/services/nut/* /etc/nut

Now we are ready to start the NUT server, and then enable it to automatically start after reboots:

$ sudo systemctl start nut-server.service
$ sudo systemctl enable nut-server.service

If it succeeds, the start command will return without printing any output to the terminal. We can test the server by querying all the available UPS parameters with

$ upsc 120v

which will print to the terminal screen something like

battery.charge: 100
battery.runtime: 1215
battery.type: PbAC
battery.voltage: 13.5
battery.voltage.nominal: 12.0
device.mfr: Tripp Lite 
device.model: Tripp Lite UPS 
device.type: ups
driver.name: usbhid-ups
driver.parameter.pollfreq: 30
driver.parameter.pollinterval: 2
driver.parameter.port: auto
driver.parameter.productid: 2010
driver.parameter.vendorid: 09ae
driver.version: 2.7.2
driver.version.data: TrippLite HID 0.81
driver.version.internal: 0.38
input.frequency: 60.1
input.voltage: 120.3
input.voltage.nominal: 120
output.frequency.nominal: 60
output.voltage.nominal: 120
ups.beeper.status: enabled
ups.delay.shutdown: 20
ups.mfr: Tripp Lite 
ups.model: Tripp Lite UPS 
ups.power.nominal: 1000
ups.productid: 2010
ups.status: OL
ups.timer.reboot: 65535
ups.timer.shutdown: 65535
ups.vendorid: 09ae
ups.watchdog.status: 0

Here 120v is the name assigned to the 120V UPS device in the ups.conf file, so it will vary for setups on other systems.

If all succeeds to this point, what we have set up so far is a set of command-line tools for querying (and possibly controlling) the UPS units. To access this functionality from within Python scripts, a set of official Python bindings is provided by the python-nut package. However, at the time of writing, these bindings only exist for Python 2.7. For Python 3 applications (like the vacuum system), I have created a Python 3 translation which is included in the vacuum git repo. Refer to the UPS readout script for an illustration of its usage.
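For illustration, querying the server from Python 3 might look like the snippet below. This is only a sketch, assuming the Python 3 translation keeps the same PyNUTClient interface as the upstream PyNUT bindings; the readout script in the vac repo is the actual reference.

import PyNUT  # Python 3 translation of the python-nut bindings (assumed interface)

client = PyNUT.PyNUTClient(host='localhost', port=3493)  # NUT server defaults
ups_vars = client.GetUPSVars('120v')  # same parameters as shown by `upsc 120v`
# Depending on the translation, keys/values may come back as bytes rather than str.
print(ups_vars.get('ups.status'), ups_vars.get('battery.charge'), ups_vars.get('input.voltage'))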

  15577   Wed Sep 16 12:03:07 2020   Jon   Update   VAC   Replacing pressure gauges

Below is the list of dead pressure gauges. Their locations are also circled in Attachment 1.

Gauge Type Location
CC1 Cold cathode Main volume
CC3 Cold cathode Pumpspool 
CC4 Cold cathode RGA chamber
CCMC Cold cathode IMC beamline near MC2
P1b Pirani Main volume
PTP1 Pirani TP1 foreline

For replacements, I recommend we consider the Agilent FRG-700 Pirani Inverted Magnetron Gauge. It uses dual sensing techniques to cover a broad pressure range from 3e-9 torr to atmosphere in a single unit. Although these are more expensive, I think we would net save money by not having to purchase two separate gauges (Pirani + hot/cold cathode) for each location. It would also simplify the digital controls and interlocking to have a streamlined set of pressure readbacks.

For controllers, there are two options with either serial RS232/485 or Ethernet outputs. We probably want the Agilent XGS-600, as it can handle all the gauges in our system (up to 12) in a single controller and no new software development is needed to interface it with the slow controls.

  15582   Sat Sep 19 18:07:35 2020   Koji   Update   VAC   TP3 RP failure

I came to the campus, and Gautam notified me that he had just received the alert from the vac watchdog.

I checked the vac status at c1vac. PTP3 went up to ~10 torr, and this pushed the differential pressure across TP3 over 1 torr. Then the watchdog kicked in.

To check the TP3 functionality, AUX RP was turned on and the manual valve (MV in the figure) was opened to pump the foreline of TP3. This easily made PTP3 <0.2 torr and TP3 happy (I didn't try to open V5 though).

So the conclusion is that RP for TP3 has failed. Presumably, the tip-seal needs to be replaced.

TP3 has now been turned off and is ready for the tip-seal replacement. V5 has been closed since the watchdog tripped.

  15586   Sat Sep 19 19:37:16 2020   not Koji   Update   VAC   TP3 RP failure

Disconcerting because those tip seals were just replaced [15417]. Maybe they were just defective, but if there is a more serious problem with the pump, there is a spare Varian roughing pump (the old TP2 dry pump) sitting at the X-end.

I reset the interlock error to unfreeze the vac controls (leaving V5 closed).

Quote:

So the conclusion is that RP for TP3 has failed. Presumably, the tip-seal needs to be replaced.

Right now TP3 was turned off and is ready for the tip-seal replacement. V5 was closed since the watchdog tripped.

  15591   Mon Sep 21 15:57:08 2020   Jordan   Update   VAC   TP3 Forepump Replacement and Vac reset

I removed the forepump (Varian SH-110) for TP3 today to see why it had failed over the weekend. I tested it in the C&B lab and the ultimate pressure was only ~40 torr. I checked the tip seals and they were destroyed. The scroll housing also pulled easily off of the motor drive shaft, which is indicative of bad bearings. The excess travel in the bearings likely led to a significant increase in tip-seal wear. This pump will need to be scrapped or rebuilt.

I tested the spare Varian SH-110 pump located at the X-end and the ultimate pressure was ~98 mtorr. This pump had tip seals replaced on 11/5/18, and is currently at 55163 operating hours. It has been installed as the TP3 forepump.

Once installed, restarting the pump line proceeded as follows: V5 closed, VA6 closed, VASE closed, VASV closed, VABSSCI closed, VABS closed, VABSSCO closed, VAEV closed, VAEE closed; TP3 was restarted, and once it was at normal operation, the valves were opened in the same order.

The pressure-differential interlock condition for V5 was temporarily changed to 10 torr (by Gautam) so that the valves could be opened in a controlled manner. Once the vacuum system was back to its normal state, the V5 interlock condition was set back to the nominal 1 torr. The vacuum system is now running normally.

  15599   Wed Sep 23 08:57:18 2020   gautam   Update   VAC   TP2 running HOT

The interlocks tripped at ~630am local time. Jordan reported that TP2 was supposedly running at 52 C (!).

V1 was already closed, but TP2 was still running. With him standing by the rack, I remotely executed the following sequence:

  • VM1 closed (isolates RGA volume).
  • VA6 closed (isolates annuli from being pumped).
  • V7 opened (TP3 now backs TP1, temporarily, until I'm in the lab to check things out further).
  • TP2 turned off.

Jordan confirmed (by hand) that TP2 was indeed hot and this is not just some serial readback issue. I'll do the forensics later.

  15600   Wed Sep 23 10:06:52 2020   Koji   Update   VAC   TP2 running HOT

Here is the timeline. It suggests a failure of the TP2 backing roughing pump (RP).

1st line: TP2 foreline pressure went up. Accordingly TP2 P, current, voltage, and temp went up. TP2 rotation went down.

2nd line: TP2 temp triggered the interlock. The TP2 foreline pressure was still high (10 torr), so TP2 struggled and was running at 1 torr.

3rd line: Gautam's operation. TP2 was isolated and stopped.

Between the 1st and 2nd lines, the TP2 pressure (= TP1 foreline pressure) went up to 1 torr. This made the TP1 current increase from 0.55 A to 0.68 A (not shown in the plot), but the TP1 rotation was not affected.

  15602   Wed Sep 23 15:06:54 2020   Jordan   Update   VAC   TP2 Forepump Re-install

I removed the forepump to TP2 this morning after the vacuum failure, and tested in the C&B lab. I pumped down on a small volume 10 times, with no issue. The ultimate pressure was ~30 mtorr.

I re-installed the forepump in the afternoon, and restarted TP2, leaving V4 closed. This will run overnight to test, while TP3 backs TP1.

In order to open V1, with TP3 backing TP1, the interlock system had to be reset since it is expecting TP2 as a backing pump. TP2 is running normally, and pumping of the main volume has resumed.


gautam 2030:

  1. The monitor (LCD display) at the vacuum rack doesn't work - this has been the case since Monday at least. I usually use my laptop to ssh in, so I didn't notice; it could have been busted from before. But for anyone wishing to use the workstation arrangement at 1X8, this is not great. Today we borrowed the vertex laptop to ssh in; it has since been returned to its nominal location.
  2. The modification to the interlock condition was made by simply commenting out the line requiring V4 to be open for V1 to be opened. I made a copy of the original .yaml file which we can revert to once we go back to the normal config.
  3. I also opened VM1 to allow the RGA scans to continue to be meaningful.
  4. At the time of writing, all systems seem nominal. See Attachment #2. The vertical line indicates when we started pumping on the main volume again earlier today, with TP3 backing TP1.

It's unclear why the TP2 foreline pump failed in the first place; it has been running fine for several hours now (although TP2 has no load, since V4 isolates it from the main volume). Koji's plots show that the TP2 foreline pressure did not recover even after the interlock tripped and V4 was closed (i.e., the same conditions TP2 sees right now).

  15615   Tue Oct 6 14:35:16 2020   Jordan   Update   VAC   Spare forepumps

I have placed 3 new-in-box IDP-7 forepumps along the X arm of the interferometer. These are to be used as spares for both the 40m and Clean and Bake.

  15668   Tue Nov 10 11:59:37 2020   gautam   Update   VAC   Stuck RV2

I've uploaded some more photos here. I believe the problem is a worn out thread where the main rotary handle attaches to the shaft that operates the valve.

This morning, I changed the valve config such that TP2 backs TP1 and that combo continues to pump on the main volume through the partially open RV2. TP3 was reconfigured to pump the annuli - initially, I backed it with the AUX drypump but since the load has decreased now, I am turning the AUX drypump off. At some point, if we want to try it, we can try pumping the main volume via the RGA line using TP2/TP3 and see if that allows us to get to a lower pressure, but for now, I think this is a suitable configuration to continue the IFO work.

There was a suggestion at the meeting that the saturation of the main volume pressure at 1 mtorr could be due to a leak - to test this, I closed V1 for ~5 hours and saw the pressure increase by 1.5 mtorr, which is in line with our estimates from the past. So I think we can discount that possibility.
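For the record, that corresponds to a rise rate of roughly 0.3 mtorr/hr. A trivial back-of-envelope check (converting this to a gas load would need the main-volume size, which is left as a placeholder rather than guessed):

dP_torr = 1.5e-3        # observed pressure rise with V1 closed [torr]
dt_hr = 5.0             # duration of the test [hours]
print(dP_torr / dt_hr)  # ~3e-4 torr/hr, i.e. ~0.3 mtorr/hr
# Gas load Q = V * dP/dt requires the main volume V; left unspecified here.
V_liters = None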

  15681   Wed Nov 18 17:51:50 2020   gautam   Update   VAC   Agilent pressure gauge controller delivered

It is stored along with the cables that arrived a few weeks ago, awaiting the gauges which are now expected next week sometime.

  15686   Mon Nov 23 16:33:10 2020   gautam   Update   VAC   More vacuum deliveries

Five Agilent pressure gauges were delivered to the 40m. They are stored with the controller and cables in the office area. This completes the inventory for the gauge replacement - we have all the ordered parts in hand (though not necessarily all the adaptor flanges, etc.). I'll see if I can find some cabinet space in the VEA to store these; the clutter is getting out of hand again...
 

In addition, the spare gate valve from LHO was also delivered to the 40m today. It is stored at EX with the other spare valves.

Quote:

It is stored along with the cables that arrived a few weeks ago, awaiting the gauges which are now expected next week sometime.

  15692   Wed Dec 2 12:27:49 2020   Jon   Update   VAC   Replacing pressure gauges

Now that the new Agilent full-range gauges (FRGs) have been received, I'm putting together an installation plan. Since my last planning note in Sept. (ELOG 15577), two more gauges appear to be malfunctioning: CC2 and PAN. Those are taken into account, as well. Below are the proposed changes for all the sensors in the system.

In summary:

  • Four of the FRGs will replace CC1/2/3/4.
  • The fifth FRG will replace CCMC if the 15.6 m cable (the longest available) will reach that location.
  • P2 and P3 will be moved to replace PTP1 and PAN, as they will be redundant once the new FRGs are installed.

Required hardware:

  • 3x CF 2.75" blanks
  • 10x CF 2.75" gaskets
  • Bolts and nut plates
Volume Sensor Status Proposed Action
Main P1a functioning leave
Main P1b local readback only leave
Main CC1 dead replace with FRG
Main CCMC dead replace with FRG*
Pumpspool PTP1 dead replace with P2
Pumpspool P2 functioning replace with 2.75" CF blank
Pumpspool CC2 intermittent replace with FRG
Pumpspool PTP2 functioning leave
Pumpspool P3 functioning replace with 2.75" CF blank
Pumpspool CC3 dead replace with FRG
Pumpspool PTP3 functioning leave
Pumpspool PRP functioning leave
RGA P4 functioning leave
RGA CC4 dead replace with FRG
RGA IG1 dead replace with 2.75" CF blank
Annuli PAN intermittent replace with P3
Annuli PASE functioning leave
Annuli PASV functioning leave
Annuli PABS functioning leave
Annuli PAEV functioning leave
Annuli PAEE functioning leave

 

Quote:

For replacements, I recommend we consider the Agilent FRG-700 Pirani Inverted Magnetron Gauge. It uses dual sensing techniques to cover a broad pressure range from 3e-9 torr to atmosphere in a single unit. Although these are more expensive, I think we would net save money by not having to purchase two separate gauges (Pirani + hot/cold cathode) for each location. It would also simplify the digital controls and interlocking to have a streamlined set of pressure readbacks.

For controllers, there are two options with either serial RS232/485 or Ethernet outputs. We probably want the Agilent XGS-600, as it can handle all the gauges in our system (up to 12) in a single controller and no new software development is needed to interface it with the slow controls.

 

  15698   Thu Dec 3 10:33:00 2020   gautam   Update   VAC   TrippLite UPS delivered

The latest greatest UPS has been delivered. I will move it to near the vacuum rack in its packaging for storage. It weighs >100lbs so care will have to be taken when installing - can the rack even support this?

  15703   Thu Dec 3 14:53:58 2020   Jon   Update   VAC   Replacing pressure gauges

Update to the gauge replacement plan (15692), based on Jordan's walk-through today. He confirmed:

  • All of the gauges being replaced are mounted via 2.75" ConFlat flange. The new FRGs have the same footprint, so no adapters are required.
  • The longest Agilent cable (50 ft) will NOT reach the CCMC location. The fifth FRG will have to be installed somewhere closer to the X-end.

Based on this info (and also info from Gautam that the PAN gauge is still working), I've updated the plan as follows. In summary, I now propose we install the fifth FRG in the TP1 foreline (PTP1 location) and leave P2 and P3 where they are, as they are no longer needed elsewhere. Any comments on this plan? I plan to order all the necessary gaskets, blanks, etc. tomorrow.

Volume Sensor Status Proposed Action
Main P1a functioning leave
Main P1b local readback only leave
Main CC1 dead replace with FRG
Main CCMC dead remove; cap with 2.75" CF blank
Pumpspool PTP1 dead replace with FRG
Pumpspool P2 functioning leave
Pumpspool CC2 dead replace with FRG
Pumpspool PTP2 functioning leave
Pumpspool P3 functioning leave
Pumpspool CC3 dead replace with FRG
Pumpspool PTP3 functioning leave
Pumpspool PRP functioning leave
RGA P4 functioning leave
RGA CC4 dead replace with FRG
RGA IG1 dead remove; cap with 2.75" CF blank
Annuli PAN functioning leave
Annuli PASE functioning leave
Annuli PASV functioning leave
Annuli PABS functioning leave
Annuli PAEV functioning leave
Annuli PAEE functioning leave
  15721   Wed Dec 9 20:14:49 2020   gautam   Update   VAC   UPS failure

Summary:

  1. The (120V) UPS at the vacuum rack is faulty.
  2. The drypump backing TP2 is faulty.
  3. Current status of vacuum system: 
    • The old UPS is now powering the rack again. Some time ago, I noticed the "replace battery" indicator light on this unit was on, but it is no longer on. So I judged this to be the best course of action. At least this UPS hasn't randomly failed before...
    • main vol is being pumped by TP1, backed by TP3.
    • TP2 remains off.
    • The annular volumes are isolated for now while we figure out what's up with TP2.
    • The pressure went up to ~1 mtorr (cf. the nominal ~600 utorr with the stuck RV2) during the whole episode, but it is coming back down now.
  4. Steve seems to have taken the reliability of the vacuum system with him.

Details:

Around 7pm, the UPS at the vacuum rack seems to have failed. Don't ask me why I decided to check the vacuum screen 10 mins after the failure happened, but the point is, this was a silent failure so the protocols need to be looked into.

Going to the rack, I saw (unsurprisingly) that the 120V UPS was off. 

  • Pushed the power-on button - the LCD screen would briefly light up, say the line voltage was 120 V, and then turn itself off. Not great.
  • I traced the power connection to the UPS itself to a power strip under the rack - then I moved the plug from one port to another. Now the UPS stays on. Okay...
  • But after ~3 mins, while I'm hunting for a VGA cable, I hear an incessant beeping. The UPS display has the "Fault" indicator lit up.
  • I decided to shift everything back to the old UPS. After the change was made, I was able to boot up the c1vac machine again, and began the recovery process.
  • When I tried to start TP2, the drypump was unusually noisy, and I noticed PTP2 bottomed out at ~500 torr (yes, torr). So clearly something is not right here. This pump supposedly had its tip seal replaced by Jordan just 3 months ago. This is not a normal lifetime for a tip seal - we need to investigate in more detail what's going on here...
  • Decided that an acceptable config is to pump the main volume (so that we can continue working on other parts of the IFO). The annuli are all <10mtorr and holding, so that's just fine I think.

Questions:

  1. Are the failures of TP2 drypump and UPS related? Or coincidence? Who is the chicken and who is the egg?
  2. What's up with the short tip seal lifetime?
  3. Why did all of this happen without any of our systems catching it and sending an alert??? I have left the UPS connected to the USB/ethernet interface in case anyone wants to remotely debug this.

For now, I think this is a safe state to leave the system in. Unless I hear otherwise, I will leave it so - I will be in the lab another hour tonight (~10pm).

Some photos and a screen-cap of the Vac medm screen attached.

  15722   Thu Dec 10 11:07:24 2020   Chub   Update   VAC   UPS fault

Is that a fault code that you can decipher in the manual, or just a light telling you nothing but your UPS is dead?

  15723   Thu Dec 10 11:17:50 2020   Chub   Update   VAC   UPS fault

I can't find anything in the manual that describes the nature of the FAULT message. In fact, it's not mentioned at all. If the unit detects a fault at its output, I would expect a bit more information. This unit also has a programmable level of input error protection, usually set at 100%. Still, there is no indication in the manual whether an input issue would be described as a fault; that usually means a short or lifted ground at the output.

Quote:

Is that a fault code that you can decipher in the manual, or just a light telling you nothing but your UPS is dead?

  15724   Thu Dec 10 13:05:52 2020   Jon   Update   VAC   UPS failure

I've investigated the vacuum controls failure that occurred last night. Here's what I believe happened.

From looking at the system logs, it's clear that there was a sudden loss of power to the control computer (c1vac). Also, the system was actually down for several hours. The syslog shows normal EPICS channel writes (pressure readback updates, etc., and many of them per minute) which suddenly stop at 4:12 pm. There are no error or shutdown messages in the syslog or in the interlock log. The next activity is the normal start-up messaging at 7:39 pm. So this is all consistent with the UPS suddenly failing.

According to the Tripp Lite manual, the FAULT icon indicates "the battery-supported outlets are overloaded." The failure of the TP2 dry pump appears to have caused this. After the dry pump failure, the rising pressure in the TP2 foreline caused TP2's current draw to increase way above its normal operating range. Attachment 1 shows anomalously high TP2 current and foreline pressure in the minutes just before the failure. The critical system-wide failure is that this overloaded the UPS before overloading TP2's internal protection circuitry, which would have shut down the pump, triggering interlocks and auto-notifications.

Preventing this in the future:

First, there are too many electronics on the 1 kVA UPS. The reason I asked us to buy a dual 208/120V UPS (which we did buy) is to relieve the smaller 120V UPS. I envision moving the turbo pumps, gauge controllers, etc. all to the 5 kVA unit and reserving the smaller 1 kVA unit for the c1vac computer and its peripherals. We now have the dual 208/120V UPS in hand. We should make it a priority to get that installed.

Second, there are 1 Hz "blinker" channels exposed for c1vac and all the slow controls machines, each reporting the machine's alive status. I don't think they're being monitored by any auto-notification program (running on a central machine), but they could be. Maybe there already exists code that could be co-opted for this purpose? There is an MEDM screen displaying the slow machine statuses at Sitemap > CDS > SLOW CONTROLS STATUS, pictured in Attachment 2. This is the only way I know to catch sudden failures of the control computer itself.
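As a sketch of what such a watchdog could look like (the channel name below is a stand-in for the actual 1 Hz blinker channel, and the alert hook is a stub, not an existing notification system):

import time
from epics import caget

BLINKER = 'C1:Vac-heartbeat'  # hypothetical name; substitute the real blinker channel

def send_alert(msg):
    print(msg)  # stand-in for a mailer or other notification hook

def watch(poll_s=5, stale_polls=3):
    # Alert if the blinker stops toggling for stale_polls consecutive polls.
    last, stale = caget(BLINKER), 0
    while True:
        time.sleep(poll_s)
        val = caget(BLINKER)
        stale = stale + 1 if (val is None or val == last) else 0
        if stale >= stale_polls:
            send_alert(f'{BLINKER} has not updated for {stale * poll_s} s')
            stale = 0
        last = val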

  15725   Thu Dec 10 14:29:26 2020   gautam   Update   VAC   UPS failure

I don't buy this story - P2 only briefly burped around GPStime 1291608000 which is around 8pm local time, which is when I was recovering the system.

Today, Jordan talked to Jon Feicht - apparently there is some kind of valve in the TP2 forepump which only opens ~15-20 seconds after turning the pump on. So the loud sound I was hearing yesterday was just a transient phenomenon. So this morning at ~9am, we turned on TP2. Once again, the PTP2 pressure hovered around 500 torr for about 15-20 seconds. Then it started to drop, although both Jordan and I felt that the time it took for the pressure to drop through the range 5 mtorr - 1 mtorr was unusually long. Jordan suspects some "soft-start" feature of the turbo pumps, which maybe spins up the pump in a more controlled way than usual after an event like a power failure. Maybe that explains why the pressure dropped so slowly? One thing is for sure - the TP2 controller displayed "TOO HIGH LOAD" yesterday when I tried the first restart (before migrating everything to the older UPS unit). This is what led me to interpret the loud sound on startup of TP2 as indicating some issue with the forepump - as it turns out, it was just the internal valve not yet being opened.

Anyway, we left TP2 on for a few hours, pumping only on the little volume between it and V4, and PTP2 remained stable at 20 mtorr. So we judged it's okay to open V4. For today, we will leave the system with both TP2 and TP3 backing TP1. Given the lack of any real evidence of a failure from TP2, I have no reason to believe there is elevated risk.

As for prioritising UPS swap - my opinion is that it's better to just replace the batteries in the UPS that has worked for years. We can run a parallel reliability test of the new UPS and once it has demonstrated stability for some reasonable time (>4 months), we can do the swap.


I was able to clear the FAULT indicator on the new UPS by running a "self-test". Pressing and holding the "mute" button on the front panel initiates this test according to the manual, and if all is well, it will clear the FAULT indicator, which it did. I'm still not trusting this unit and have left all units powered by the old UPS.


Update 1100 Dec 11: The config remained stable overnight so today I reverted to the nominal config of TP3 pumping the annuli and TP2 backing TP1 which pumps the main volume (through the partially open RV2).

Quote:
 

According to the Tripp Lite manual, the FAULT icon indicates "the battery-supported outlets are overloaded." The failure of the TP2 dry pump appears to have caused this. After the dry pump failure, the rising pressure in the TP2 foreline caused TP2's current draw to increase way above its normal operating range. Attachment 1 shows anomalously high TP2 current and foreline pressure in the minutes just before the failure. The critical system-wide failure is that this overloaded the UPS before overloading TP2's internal protection circuitry, which would have shut down the pump, triggering interlocks and auto-notifications.

  15748   Wed Jan 6 15:28:04 2021   gautam   Update   VAC   Vac rack UPS batteries replaced

[chub, gautam]

The replacement was done this afternoon. The red "Replace Battery" indicator is no longer on.

  16047   Mon Apr 19 09:17:51 2021   Jordan   Update   VAC   Empty N2 Tanks

When I came into the lab this morning, I noticed that both N2 tanks were empty. I had swapped one on Friday (4-16-21) before I left the lab. Looking at the logs, the right tank (T2) sprung a leak shortly after install. I leak checked the tank coupling after install but did not see a leak. There could be a leak further down the line, possibly at the pressure transducer.

The left tank (T1) emptied normally over the weekend, and I quickly swapped it for a full one, which is currently at ~2700 psi. It was my understanding that if both tanks emptied, V1 would close automatically and a mailer would be sent out to the 40m group. I did not receive an email over the weekend, and when I checked the Vac status just now, V1 was still open.

I will keep an eye on the tank pressure throughout the day, and will try to leak check the T2 line this afternoon, but someone should check the vacuum interlocks and verify.

 

  16064   Wed Apr 21 12:56:00 2021   Jordan   Update   VAC   Empty N2 Tanks

Installed T2 today and leak checked the entire line. No issues found. It could have been a bad valve on the tank itself. Monitored T2 pressure for ~2 hours to see if there was any change. All seems OK.

Quote:

When I came into the lab this morning, I noticed that both N2 tanks were empty. I had swapped one on Friday (4-16-21) before I left the lab. Looking at the logs, the right tank (T2) sprung a leak shortly shortly after install. I leak checked the tank coupling after install but did not see a leak. There could a leak further down the line, possibly at the pressure transducer.

The left tank (T1) emptied normally over the weekend, and I quickly swapped the left tank for a full one, and is curently at ~2700 psi. It was my understanding that if both tanks emptied, V1 would close automatically and a mailer would be sent out to the 40m group. I did not receive an email over the weekend, and I checked the Vac status just now and V1 was still open.

I will keep an eye on the tank pressure throughout the day, and will try to leak check the T2 line this afternoon, but someone should check the vacuum interlocks and verify.

 

 

  16090   Wed Apr 28 11:31:40 2021   Jon   Update   VAC   Empty N2 Tanks

I checked out what happened on c1vac. There are actually two independent monitoring codes running:

  1. The interlock service, which monitors the line directly connected to the valves.
  2. A separate convenience mailer, running as a cronjob, that monitors the tanks.

The interlocks did not trip because the low-pressure delivery line, downstream of the dual-tank regulator, never fell below the minimum pressure to operate the valves (65 PSI). This would have eventually occurred, had Jordan been slower to replace the tanks. So I see no problem with the interlocks.

On the other hand, the N2 mailer should have sent an email at 2021-04-18 15:00, which was the first time C1:Vac-N2T1_pressure dropped below the 600 PSI threshold. N2check.log shows these pressures were recorded at this time, but does not log that an email was sent. Why did this fail? Not sure, but I found two problems which I did fix:

  • One was that the code was checking the sensor on the low-pressure side (C1:Vac-N2_pressure; nominally 75 PSI) against the same 600 PSI threshold as the tanks. This channel should either be removed or a separate threshold (65 PSI) defined just for it. I just removed it from the list because monitoring of this channel is redundant with the interlock service. This does not explain the failure to send an email.
  • The second issue was that the pyN2check.sh script appeared to be calling Python 3 to run a Python 2 script. At least that was the situation when I tested it, and this was causing it to fail partway through. This might well explain the problem with no email. I explicitly set python --> python2 in the shell script.

The code then ran fine for me when I retested it. I don't see any further issues.
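For clarity, the per-channel-threshold idea amounts to something like the sketch below (illustrative only: the T2 channel name is assumed by analogy with C1:Vac-N2T1_pressure, the mail hook is a stub, and the real pyN2check script is structured differently):

from epics import caget

THRESHOLDS_PSI = {
    'C1:Vac-N2T1_pressure': 600,  # tank 1
    'C1:Vac-N2T2_pressure': 600,  # tank 2 (channel name assumed)
    # delivery-line channel (~65 PSI threshold) intentionally omitted; redundant with the interlocks
}

def send_mail(msg):
    print(msg)  # stand-in for the actual mailer

def check_n2():
    low = {}
    for ch, threshold in THRESHOLDS_PSI.items():
        val = caget(ch)
        if val is not None and val < threshold:
            low[ch] = val
    if low:
        send_mail(f'N2 tank pressure(s) below threshold: {low}')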

Quote:

Installed T2 today, and leaked checked the entire line. No issues found. It could have been a bad valve on the tank itself. Monitored T2 pressure for ~2 hours to see if there was any change. All seems ok.

Quote:

When I came into the lab this morning, I noticed that both N2 tanks were empty. I had swapped one on Friday (4-16-21) before I left the lab. Looking at the logs, the right tank (T2) sprung a leak shortly shortly after install. I leak checked the tank coupling after install but did not see a leak. There could a leak further down the line, possibly at the pressure transducer.

The left tank (T1) emptied normally over the weekend, and I quickly swapped the left tank for a full one, and is curently at ~2700 psi. It was my understanding that if both tanks emptied, V1 would close automatically and a mailer would be sent out to the 40m group. I did not receive an email over the weekend, and I checked the Vac status just now and V1 was still open.

I will keep an eye on the tank pressure throughout the day, and will try to leak check the T2 line this afternoon, but someone should check the vacuum interlocks and verify.

 

  16305   Wed Sep 1 14:16:21 2021   Jordan   Update   VAC   Empty N2 Tanks

The right N2 tank had a bad/loose valve and did not fully open. This morning the left tank was just about empty, and the right tank showed 2000+ psi on the gauge. Once the changeover happened, the copper line emptied, but the valve on the N2 tank was not fully open. I noticed the gauges were both reading zero at ~1 pm, just before the meeting. I swapped the left tank, but not in time: the vacuum interlocks tripped at 1:04 pm today when the N2 pressure to the vacuum valves fell below 65 psi. After the meeting, Chub tightened the valve, fully opened it, and refilled the lines. I will monitor the tank pressures today and make sure all is OK.

There used to be a mailer that was sent out when the sum pressure of the two tanks fell <600 psi, telling you to swap tanks. Does this no longer exist?

  16316   Wed Sep 8 18:00:01 2021   Koji   Update   VAC   cronjobs & N2 pressure alert

In the weekly meeting, Jordan pointed out that we didn't receive the alert for the low N2 pressure.

To check the situation, I went around the machines and summarized the cronjob situation.
[40m wiki: cronjob summary]
Note that this list does not include the vacuum watchdog and mailer as it is not on cronjob.

Now, I found that there are two N2 scripts running:

1. /opt/rtcds/caltech/c1/scripts/Admin/n2Check.sh on megatron and is running every minute (!)
2. /opt/rtcds/caltech/c1/scripts/Admin/N2check/pyN2check.sh on c1vac and is running every 3 hours.

Then, the N2 log file was checked: /opt/rtcds/caltech/c1/scripts/Admin/n2Check.log

Wed Sep 1 12:38:01 PDT 2021 : N2 Pressure: 76.3621
Wed Sep 1 12:38:01 PDT 2021 : T1 Pressure: 112.4
Wed Sep 1 12:38:01 PDT 2021 : T2 Pressure: 349.2
Wed Sep 1 12:39:02 PDT 2021 : N2 Pressure: 76.0241
Wed Sep 1 12:39:02 PDT 2021 : N2 pressure has fallen to 76.0241 PSI !

Tank pressures are 94.6 and 98.6 PSI!

This email was sent from Nodus.  The script is at /opt/rtcds/caltech/c1/scripts/Admin/n2Check.sh

Wed Sep 1 12:40:02 PDT 2021 : N2 Pressure: 75.5322
Wed Sep 1 12:40:02 PDT 2021 : N2 pressure has fallen to 75.5322 PSI !

Tank pressures are 93.6 and 97.6 PSI!

This email was sent from Nodus.  The script is at /opt/rtcds/caltech/c1/scripts/Admin/n2Check.sh

...

The errors started at 12:39 and repeated every minute until 13:01. So this was coming from the script on megatron. We were supposed to have received ~20 alert emails (but got none).
So what happened to the mails? I tested the script with my own mail address and the test mail came to me. Then I sent the test mail to the 40m mailing list. It did not arrive.
-> Decided to add the sending address (specified in /etc/mailname, I believe) to the whitelist so that the mailing list can accept it.
I ran the test again and it was successful. So I suppose the system can now send us the alerts again.
Also, alerting every minute is excessive. I changed the check frequency to every ten minutes.

What happened to the python version running on c1vac?
1) The script is running, spitting out some errors in the cron report (email on c1vac), but it seems to be working.
2) This script checks the pressures of the bottles rather than the N2 pressure downstream, so it is complementary.
3) During the incident on Sept 1, this checker did not trip because the pressure drop happened between cronjob runs and the script didn't notice it.
4) On top of that, the alert was set to send mails only to one of our former grad students. I changed it to deliver to the 40m mailing list. As the "From" address is set to some ligox...@gmail.com, which is a member of the mailing list (why?), we should now receive the alerts. (And we do for other vacuum alerts from this address.)

  16404   Thu Oct 14 18:30:23 2021   Koji   Summary   VAC   Flange/Cable Stand Configuration

Flange Configuration for BHD

We will need a total of 5 new cable stands, so Qty. 6 is the number to be ordered.


Looking at the Accuglass drawing, the in-vacuum cables are standard D-sub 25-pin cables with only the two standard fixing threads.

https://www.accuglassproducts.com/sites/default/files/PDF/Partpdf/110070_3.pdf

For SOSs, the standard 40m style cable bracket works fine. https://dcc.ligo.org/D010194-x0

However, for the OMCs, we need to make the thread holes available so that we can mate DB25 male cables to these cables.
One possibility is to improvise this cable bracket to suspend the cables using clean Cu wires or something. I think we can deal with this issue in situ.


Ha! The male side has 4-40 standoff (jack) screws. So we can hold the male side on the bracket using the standoff (jack) screws and plug in the female cables. OK! The issue is solved!

https://www.accuglassproducts.com/sites/default/files/PDF/Partpdf/110029_3.pdf

  16410   Mon Oct 18 10:02:17 2021   Koji   Update   VAC   Vent Started / Completed

[Chub, Jordan, Anchal, Koji]

- Checked the main volume is isolated.
- TP1 and TP2 were made isolated from other volumes. Stopped TP1. Closed V4 to isolate TP1 from TP2.
- TP3 was made isolated. TP3 was stopped.
- We wanted to vent the annuli, but it was not allowed as VA6 was open. We closed VA6 and vented the annuli with VAVEE.
- We wanted to vent the volume between VA6, V5, VM3, and V7 together with TP1. So V7 was opened. This did not change the TP1 pressure (P2 = 1.7 mtorr).
- We wanted to connect the TP1 volume with the main volume. But this was not allowed as TP1 was not rotating. We will vent TP1 through TP2 once the vent of the main volume is done.

- Started venting the main volume @ Oct 18, 2021, 9:45 AM PDT

- We started at 10 mTorr/min and increased the vent speed to 200 mTorr/min, then 700 mTorr/min, and now it is 1 Torr/min @ 20 Torr
- 280 Torr @ 11:50 AM
- 1 atm @ ~2 PM


We wanted to vent TP1. We re-ran TP2 and tried to slowly introduce air via TP2, but the interlock prevented the action.

Right now the magenta volume in the attachment is still ~1mTorr. Do we want to open the gate valves manually? Or stop the interlock process so that we can bypass it?

  16412   Tue Oct 19 10:59:09 2021   Koji   Update   VAC   Vent Started / Completed

[Chub, Jordan, Yehonathan, Anchal, Koji]

North door of the BS chamber opened

 

  16413   Tue Oct 19 11:30:39 2021   Koji   Update   VAC   How to vent TP1

I learned that TP1 was vented through the RGA room in the past. This can be done by opening VM2 and a manual valve (a "needle valve").
I checked the setup and realized that this will vent the RGA. But that is OK as long as we turn off the RGA during the vent and bake it once TP1 is back.

Additional note:

- It'd be nice to take a scan for the current background level before the work.
- Turn RGA EM and filament off, let it cool down overnight. 
- Vent with clean N2 or clean air. (Normal operating temp ~80C is to minimize accumulation of H-C contaminations.)
- There is a manual switch and indicators on the top of the RGA amp. It has auto protection to turn the filament off if the pressure increases above ~1e-5 torr.

  16418   Wed Oct 20 15:58:27 2021   Koji   Update   VAC   How to vent TP1

Probably the hard disk of c0rga is dead. I'll follow up in this elog later today.

Looking at the log in /opt/rtcds/caltech/c1/scripts/RGA/logs , it seemed that the last RGA scan was Sept 2, 2021, the day when we had the disk full issue of chiara.
I could not login to c0rga from control machines.
I was not aware of c0rga's existence until today, but I was able to locate it along the X arm.
The machine was not responding; it was rebooted but would not restart. It made a knocking sound. I am afraid the HDD has failed.

I think we can
- prepare a replacement linux machine for the python scripts
or
- integrate it with c1vac

  16490   Mon Dec 6 14:26:52 2021   Koji   Update   VAC   Pumping down the RGA section

Jordan reported that the RGA section needs to be pumped down to allow the analyzer to run at sufficiently low pressure (P<1e-4 torr).
The RGA section was pumped down with the TP2/TP3. The procedure is as listed below.
If the pressure goes up to P > 1e-4 torr, we need to keep the pump running until the scan is ready.

----
### Monitor / Control screen setup ###
1. On c1vac: cd /cvs/cds/caltech/target/c1vac/medm
2. medm -x C0VAC_MONITOR.adl&
3. RGA section (P4) 3.6e-1 torr / P3/P2 still atm.
4. medm -x C0VAC_CONTROL.adl

### TP2/TP3 backing ###
5. Turn on AUX RP with the circuit breaker hanging on the AC.
6. Open the manual valve for TP2/3 backing (PTP2/3 ~ 8 torr)

### TP2/TP3 starting ###
7. Turned on TP2/TP3 with the Standby OFF

### Pump down the pump spool ###
8. Connect manual RP line (Quick Connect)
9. Turned on RP1/RP3 -> quickly reached 0.4 torr
10. Open V6 for pump spool pumping -> Immediately go down to sufficiently low pressure for TP2/TP3.
(10.5 I had to close V6 at this point)
11. Open V5 to start pumping pump spool with TP3 (TP2 still stand by) -> P3 immediately goes down below 1e-4 torr. This automatically closed V6 because of the low pressure of P3 (interlocking)

### Pump down the RGA section ###
12. Open VM3 to pump down RGA section -> P4 goes down to <1e-4 torr
13. P2 is still 2e-3. So decided to open V4 to use TP2 (now it's ready) too. -> Saturated at 1.7e-3

### Shutting down ###
14. Close VM3
15. Close V4/V5 to isolate TP2/TP3
16. Stop TP2/TP3 -> Slowing down
17. Stop RP1/RP3
18. Close the manual valves for TP2/3/ backing
19. Stop AUX RP with the circuit breaker hanging on the AC.

  16493   Tue Dec 7 13:12:50 2021   Koji   Update   VAC   Pumping down the RGA section

So that Jordan can run the RGA scan this afternoon, I ran TP3 and started pumping down the RGA section.

Procedure:
- Same 1~4
- Same 5
- 6 Opened only the backing path for TP3
- 7 Turned on TP3 only

- TP3 reached the nominal full speed @75kRPM

- 11 Opened V5 to pump the pump spool -> Immediately reached P3<1e-4
- 12 Opened VM3 to pump the RGA section -> Immediately reached P4<1e-4

The pumps are kept running. I'll come back later to shut down the pumps.
=> Jordan wants to heat the filament (?) and to run the scan tomorrow.
So we decided to keep TP3 running overnight. I switched TP3 to the stand-by mode (= lower rotation speed @50kRPM)

 

  16494   Wed Dec 8 10:14:43 2021   Jordan   Update   VAC   Pumping down the RGA section

After an overnight pumpdown/RGA warm-up, I took a 100 amu scan of the RGA volume and subsequent pumping line. Attached is a screenshot along with the .txt file. Given the high argon peak (40) and the N2/O2 ratio, it looks like there is a decent-sized air leak somewhere in the volume.

Are we interested in the hydrocarbon leak rates of this volume? That will require another scan with one of the calibrated leaks opened.

Edit: Added a Torr v AMU plot to see the partial pressures

Quote:

So that Jordan can run the RGA scan this afternoon, I ran TP3 and started pumping down the RGA section.

Procedure:
- Same 1~4
- Same 5
- 6 Opened only the backing path for TP3
- 7 Turned on TP3 only

- TP3 reached the nominal full speed @75kRPM

- 11 Opened V5 to pump the pump spool -> Immediately reached P3<1e-4
- 12 Opened VM3 to pump the RGA section -> Immediately reached P4<1e-4

The pumps are kept running. I'll come back later to shut down the pumps.
=> Jordan wants to heat the filament (?) and to run the scan tomorrow.
So we decided to keep TP3 running overnight. I switched TP3 to the stand-by mode (= lower rotation speed @50kRPM)

 

 

  16501   Fri Dec 10 19:22:01 2021   Koji   Update   VAC   Pumping down the RGA section

The scan result was ~x10 higher than the previously reported scan on 2020/9/15 (https://nodus.ligo.caltech.edu:8081/40m/15570), which was sort of high from the reference taken on 2018/7/18.

This could just mean that the vacuum level at the RGA was x10 higher.
We'll just go ahead with the vacuum repair and come back to the RGA once we return to "vacuum normal".

Meanwhile, I asked Jordan to turn off the RGA to let it cool down. I shut off the RGA section and turned TP2 off.

  16508   Wed Dec 15 15:06:08 2021   Jordan   Update   VAC   Vacuum Feedthru Install

Jordan, Chub

We installed the 4x DB25 feedthru flange on the North-West port of ITMX chamber this afternoon. It is ready to go.

  16529   Tue Dec 21 16:35:39 2021   Koji   Update   VAC   ITMX NW feedthru (LO1-1) connector pin bent

I've received a report that a pin of an ITMX NW feedthru connector was bent. (Attachment 1)
The connector is #1 (upper left) and planned to be used for LO1-1.

This is Pin25 and used for the PD K of OSEM #1. This means that Coil Driver #1 (3 OSEMs) uses this pin, but Coil Driver #2 (2 OSEMs) does not.

Anyways, I tried to fix it by bending it back. With some tools, it was straightened enough for plugging in the cable connector. (Attachment 2)

It seemed that the pins were exceptionally soft compared to the ones used for usual DSUBs, probably because of the vacuum compatibility.
So it's better to approach the pins parallel to the surface and not apply mating pressure until you are sure that all 25 pins are inserted in the counterpart holes.

  16634   Mon Jan 31 10:39:19 2022   Jordan   Update   VAC   TP1 and Manual Gate Valve Removal

Jordan, Chub

Today, Chub and I removed TP1 and the failed manual gate valve off of the pumping spool.

First, P2 needed to be vented in order to remove TP1. TP1 has a purge valve on the side of the pump, which we slowly opened, bringing the P2 volume up to atmosphere. Although this was not vented using dry air/N2, using this purge valve eliminated the need to vent the RGA volume.

Then we disconnected the TP1 foreline and removed TP1 plus the 8" flange reducer, then the gate valve. All of the removed hardware looked good, so there is no need to replace bolts/nuts; only new gaskets are needed. TP1 and the failed valve are sitting on a cart, wrapped in foil, next to the pumping station.

  16643   Thu Feb 3 10:25:59 2022   Jordan   Update   VAC   TP1 and Manual Gate Valve Install

Jordan, Chub

Chub and I installed the new manual gate valve (Nor-Cal GVM-6002-CF-K79) and reinstalled TP1. The new gate valve was placed with the sealing side towards the main 40m volume, then TP1 was installed on top and the foreline was reattached to TP1.

This valve has a hard stop in the actuator to prevent over torquing.

 

  16682   Sat Feb 26 01:01:40 2022   Tega   Update   VAC   Ongoing work to get the FRG gauges readouts to EPICs channels

I will post a detailed elog later today outlining the connection from the Agilent gauge controller to the vacuum subnet and the work I have been doing over the past two days to get data from the unit into EPICS channels. For now, I just want to mention that I have plugged the XGS-600 gauge controller into the serial server on the vacuum subnet. I checked the vacuum MEDM screen and can confirm that the other sensors did not experience any issues as a result of this. I also currently have two of the FRG-700 gauges connected to the controller, but I powered the unit down after the checks.

  16683   Sat Feb 26 15:45:14 2022   Tega   Update   VAC   Ongoing work to get the FRG gauges readouts to EPICs channels

I have attached a flow diagram of my understanding of how the gauges are connected to the network.

Earlier today, I connected the XGS-600 gauge controller to the IOLAN Serial Device Server port at 192.168.114.22.

The plan is as follows:

1. Update the serial device yaml file to include this new ip entry for the XGS-600 gauge controller

2. Create a serial gauge class "serial_gauge_xgs.py" for the XGS-600 gauge controller that inherits from the serial gauge parent class for EPICS communication with a serial device via TCP sockets (a rough sketch is given after this list).

  • Might be better to use the current channels of the devices that are being replaced initially, i.e.
  • C1:Vac-FRG1_pressure C1:Vac-CC1_pressure
    C1:Vac-FRG2_pressure C1:Vac-CCMC_pressure
    C1:Vac-FRG3_pressure C1:Vac-PTP1_pressure
    C1:Vac-FRG4_pressure C1:Vac-CC4_pressure
    C1:Vac-FRG5_pressure C1:Vac-IG1_pressure

3. Modify the launcher file to include the XGS gauge controller. Following the same pattern used  to start the service for the other serial gauges, we can start the communication between the XGS-600 gauge controller and the IOLAN serial server and write data to EPICS channels using

controls@c1vac> python launcher.py XGS600

If we are able to establish communication with the XGS-600 gauge controller and write its gauge data to EPICS channels, go on to step 4.

4. Create a serial service file "serial_XGS600.service" and place it in the service folder

5. Add the new EPICS channels to the database file

6. Add the "serial_XGS600.service" to line 10 and 11 of modbusIOC.service

7. Later on, when we are ready, we can restart the updated modbusIOC service
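As a concrete illustration of step 2 above, the XGS-600 readout over the IOLAN TCP socket might look roughly like the sketch below. The parent-class structure in the vac repo, the TCP port number, and the controller query string are all assumptions here; the real command syntax must come from the Agilent XGS-600 serial-protocol manual.

import socket

class XGS600(object):
    """Minimal stand-alone sketch of an XGS-600 readout (not the repo's class)."""
    def __init__(self, ip='192.168.114.22', port=4001):  # port number is a placeholder
        self.addr = (ip, port)

    def query(self, cmd):
        # Send one command string to the controller and return the raw reply.
        with socket.create_connection(self.addr, timeout=2) as s:
            s.sendall(cmd)
            return s.recv(1024)

    def read_pressures(self):
        # Placeholder command; replace with the XGS-600 "read all pressures"
        # request from the manual, then parse the reply into floats destined
        # for the C1:Vac-FRG1..5_pressure channels.
        return self.query(b'<read-all-pressures command>\r')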

 

For vacuum signal flow and Acromag channel assignments see [1]  and [2] respectively. For the 16 port IOLAN SDS (Serial Device Server) ethernet connections, see [3]. 

[1] https://wiki-40m.ligo.caltech.edu/Vacuum-Upgrade-2018?action=AttachFile&do=view&target=40m_Vacuum_System_Signal_Flow.pdf

[2] https://wiki-40m.ligo.caltech.edu/Vacuum-Upgrade-2018?action=AttachFile&do=view&target=AcromagChannelAssignment.pdf

[3] https://git.ligo.org/40m/vac/-/blob/master/python/serial/serial_devices.yaml

  16688   Mon Feb 28 19:15:10 2022   Tega   Update   VAC   Ongoing work to get the FRG gauges readouts to EPICs channels

I decided to create an independent service for the XGS data readout so we can get this to work first before trying to integrate it into the current system. After starting the service, I noticed that the EPICS channels were not updating as expected. So I started to debug the problem and tracked it down to an IP socket connect() error, i.e., we get a connection error for the IP address assigned to the LAN port to which the XGS box was connected. After trying a few things and searching the internet, I think the error indicates that this particular LAN port is not yet configured. I reached this conclusion after noting that only a select number of LAN ports connect without issues, and these are the ports that already have devices attached, so those ports must already have been configured somehow. The next step is to look at the IOLAN manual to figure out how to configure the IP port for the XGS controller. Fingers crossed.
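As a quick way to check from c1vac whether a given IOLAN port is actually configured, a bare TCP connect test is enough (the port number below is a placeholder for whichever TCP port the serial line is mapped to):

import socket

def port_is_configured(ip='192.168.114.22', port=4001, timeout=2):
    # Configured IOLAN ports accept the connection; unconfigured ones refuse it,
    # which is the connect() error described above.
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False

print(port_is_configured())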
