ID | Date | Author | Type | Category | Subject
15668 | Tue Nov 10 11:59:37 2020 | gautam | Update | VAC | Stuck RV2

I've uploaded some more photos here. I believe the problem is a worn out thread where the main rotary handle attaches to the shaft that operates the valve.

This morning, I changed the valve config such that TP2 backs TP1 and that combo continues to pump on the main volume through the partially open RV2. TP3 was reconfigured to pump the annuli - initially, I backed it with the AUX drypump but since the load has decreased now, I am turning the AUX drypump off. At some point, if we want to try it, we can try pumping the main volume via the RGA line using TP2/TP3 and see if that allows us to get to a lower pressure, but for now, I think this is a suitable configuration to continue the IFO work.

There was a suggestion at the meeting that the saturation of the main volume pressure at 1 mtorr could be due to a leak. To test this, I closed V1 for ~5 hours and saw the pressure increase by 1.5 mtorr, which is in line with our estimates from the past. So I think we can discount that possibility.
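The rate-of-rise test above can be turned into a rough gas-load number. A minimal sketch, assuming an illustrative main-volume size (the 33 m^3 figure below is a placeholder, not a measured 40m value):

```python
# Back-of-envelope "rate of rise" leak/outgassing estimate, as in the V1 test
# above. The main-volume size is an ASSUMED placeholder, not a 40m measurement.

def rise_rate_mtorr_per_hr(delta_p_mtorr, hours):
    """Pressure rise rate with the pump valved off."""
    return delta_p_mtorr / hours

def gas_load_torr_l_per_s(delta_p_mtorr, hours, volume_m3):
    """Equivalent gas load Q = V * dP/dt in torr*L/s."""
    dp_torr = delta_p_mtorr * 1e-3
    volume_l = volume_m3 * 1000.0
    return volume_l * dp_torr / (hours * 3600.0)

rate = rise_rate_mtorr_per_hr(1.5, 5)        # 0.3 mtorr/hr, matching the log
q = gas_load_torr_l_per_s(1.5, 5, 33.0)      # assumed ~33 m^3 main volume
```

Comparing such a number against the historical rise rate is what justifies "in line with our estimates from the past".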

Attachment 1: damagedThread.001.jpeg
Attachment 2: IFOstatus.png
Attachment 3: P1a_leakTest.png
15681 | Wed Nov 18 17:51:50 2020 | gautam | Update | VAC | Agilent pressure gauge controller delivered

It is stored along with the cables that arrived a few weeks ago, awaiting the gauges which are now expected next week sometime.

15686 | Mon Nov 23 16:33:10 2020 | gautam | Update | VAC | More vacuum deliveries

Five Agilent pressure gauges were delivered to the 40m. They are stored with the controller and cables in the office area. This completes the inventory for the gauge replacement - we have all the ordered parts in hand (though not necessarily all the adaptor flanges etc.). I'll see if I can find some cabinet space in the VEA to store these; the clutter is getting out of hand again...
 

In addition, the spare gate valve from LHO was also delivered today to the 40m. It is stored at EX with the other spare valves.

Quote:

It is stored along with the cables that arrived a few weeks ago, awaiting the gauges which are now expected next week sometime.

15692 | Wed Dec 2 12:27:49 2020 | Jon | Update | VAC | Replacing pressure gauges

Now that the new Agilent full-range gauges (FRGs) have been received, I'm putting together an installation plan. Since my last planning note in Sept. (ELOG 15577), two more gauges appear to be malfunctioning: CC2 and PAN. Those are taken into account as well. Below are the proposed changes for all the sensors in the system.

In summary:

  • Four of the FRGs will replace CC1/2/3/4.
  • The fifth FRG will replace CCMC if the 15.6 m cable (the longest available) will reach that location.
  • P2 and P3 will be moved to replace PTP1 and PAN, as they will be redundant once the new FRGs are installed.

Required hardware:

  • 3x CF 2.75" blanks
  • 10x CF 2.75" gaskets
  • Bolts and nut plates
Volume    | Sensor | Status              | Proposed Action
Main      | P1a    | functioning         | leave
Main      | P1b    | local readback only | leave
Main      | CC1    | dead                | replace with FRG
Main      | CCMC   | dead                | replace with FRG*
Pumpspool | PTP1   | dead                | replace with P2
Pumpspool | P2     | functioning         | replace with 2.75" CF blank
Pumpspool | CC2    | intermittent        | replace with FRG
Pumpspool | PTP2   | functioning         | leave
Pumpspool | P3     | functioning         | replace with 2.75" CF blank
Pumpspool | CC3    | dead                | replace with FRG
Pumpspool | PTP3   | functioning         | leave
Pumpspool | PRP    | functioning         | leave
RGA       | P4     | functioning         | leave
RGA       | CC4    | dead                | replace with FRG
RGA       | IG1    | dead                | replace with 2.75" CF blank
Annuli    | PAN    | intermittent        | replace with P3
Annuli    | PASE   | functioning         | leave
Annuli    | PASV   | functioning         | leave
Annuli    | PABS   | functioning         | leave
Annuli    | PAEV   | functioning         | leave
Annuli    | PAEE   | functioning         | leave

 

Quote:

For replacements, I recommend we consider the Agilent FRG-700 Pirani Inverted Magnetron Gauge. It uses dual sensing techniques to cover a broad pressure range from 3e-9 torr to atmosphere in a single unit. Although these are more expensive, I think we would net save money by not having to purchase two separate gauges (Pirani + hot/cold cathode) for each location. It would also simplify the digital controls and interlocking to have a streamlined set of pressure readbacks.

For controllers, there are two options with either serial RS232/485 or Ethernet outputs. We probably want the Agilent XGS-600, as it can handle all the gauges in our system (up to 12) in a single controller and no new software development is needed to interface it with the slow controls.

 

15698 | Thu Dec 3 10:33:00 2020 | gautam | Update | VAC | TrippLite UPS delivered

The latest greatest UPS has been delivered. I will move it, in its packaging, to near the vacuum rack for storage. It weighs >100 lbs, so care will have to be taken when installing - can the rack even support this?

Attachment 1: DFDD4F39-3F8A-439D-888D-7C0CE2E030CF.jpeg
15703 | Thu Dec 3 14:53:58 2020 | Jon | Update | VAC | Replacing pressure gauges

Update to the gauge replacement plan (15692), based on Jordan's walk-through today. He confirmed:

  • All of the gauges being replaced are mounted via 2.75" ConFlat flange. The new FRGs have the same footprint, so no adapters are required.
  • The longest Agilent cable (50 ft) will NOT reach the CCMC location. The fifth FRG will have to be installed somewhere closer to the X-end.

Based on this info (and also info from Gautam that the PAN gauge is still working), I've updated the plan as follows. In summary, I now propose we install the fifth FRG in the TP1 foreline (PTP1 location) and leave P2 and P3 where they are, as they are no longer needed elsewhere. Any comments on this plan? I plan to order all the necessary gaskets, blanks, etc. tomorrow.

Volume    | Sensor | Status              | Proposed Action
Main      | P1a    | functioning         | leave
Main      | P1b    | local readback only | leave
Main      | CC1    | dead                | replace with FRG
Main      | CCMC   | dead                | remove; cap with 2.75" CF blank
Pumpspool | PTP1   | dead                | replace with FRG
Pumpspool | P2     | functioning         | leave
Pumpspool | CC2    | dead                | replace with FRG
Pumpspool | PTP2   | functioning         | leave
Pumpspool | P3     | functioning         | leave
Pumpspool | CC3    | dead                | replace with FRG
Pumpspool | PTP3   | functioning         | leave
Pumpspool | PRP    | functioning         | leave
RGA       | P4     | functioning         | leave
RGA       | CC4    | dead                | replace with FRG
RGA       | IG1    | dead                | remove; cap with 2.75" CF blank
Annuli    | PAN    | functioning         | leave
Annuli    | PASE   | functioning         | leave
Annuli    | PASV   | functioning         | leave
Annuli    | PABS   | functioning         | leave
Annuli    | PAEV   | functioning         | leave
Annuli    | PAEE   | functioning         | leave
15721 | Wed Dec 9 20:14:49 2020 | gautam | Update | VAC | UPS failure

Summary:

  1. The (120V) UPS at the vacuum rack is faulty.
  2. The drypump backing TP2 is faulty.
  3. Current status of vacuum system: 
  • The old UPS is now powering the rack again. Some time ago, I noticed the "replace battery" indicator light on this unit was on, but it is no longer lit. So I judged this is the best course of action. At least this UPS hasn't randomly failed before...
    • main vol is being pumped by TP1, backed by TP3.
    • TP2 remains off.
    • The annular volumes are isolated for now while we figure out what's up with TP2.
    • The pressure went up to ~1 mtorr (cf. ~600 utorr, the nominal value with the stuck RV2) during the whole episode but is coming back down now.
  4. Steve seems to have taken the reliability of the vacuum system with him.

Details:

Around 7pm, the UPS at the vacuum rack seems to have failed. Don't ask me why I decided to check the vacuum screen 10 mins after the failure happened, but the point is, this was a silent failure so the protocols need to be looked into.

Going to the rack, I saw (unsurprisingly) that the 120V UPS was off. 

  • Pushed the power-on button - the LCD screen would briefly light up, say the line voltage was 120 V, and then turn itself off. Not great.
  • I traced the power connection to the UPS itself to a power strip under the rack - then I moved the plug from one port to another. Now the UPS stays on. okay...
  • But after ~3 mins, while I was hunting for a VGA cable, I heard an incessant beeping. The UPS display had the "Fault" indicator lit up.
  • I decided to shift everything back to the old UPS. After the change was made, I was able to boot up the c1vac machine again, and began the recovery process.
  • When I tried to start TP2, the drypump was unusually noisy, and I noticed PTP2 bottomed out at ~500 torr (yes, torr). So clearly something is not right here. This pump supposedly had its tip seal replaced by Jordan just 3 months ago. This is not a normal lifetime for the tip seal - we need to investigate in more detail what's going on here...
  • Decided that an acceptable config is to pump the main volume (so that we can continue working on other parts of the IFO). The annuli are all <10mtorr and holding, so that's just fine I think.

Questions:

  1. Are the failures of TP2 drypump and UPS related? Or coincidence? Who is the chicken and who is the egg?
  2. What's up with the short tip seal lifetime?
  3. Why did all of this happen without any of our systems catching it and sending an alert??? I have left the UPS connected to the USB/ethernet interface in case anyone wants to remotely debug this.

For now, I think this is a safe state to leave the system in. Unless I hear otherwise, I will leave it so - I will be in the lab another hour tonight (~10pm).

Some photos and a screen-cap of the Vac medm screen attached.

Attachment 1: rackBeforenAfter.pdf
Attachment 2: IMG_0008.jpg
Attachment 3: IMG_0009.jpg
Attachment 4: vacStatus.png
15722 | Thu Dec 10 11:07:24 2020 | Chub | Update | VAC | UPS fault

Is that a fault code that you can decipher in the manual, or just a light telling you nothing but your UPS is dead?

15723 | Thu Dec 10 11:17:50 2020 | Chub | Update | VAC | UPS fault

I can't find anything in the manual that describes the nature of the FAULT message.  In fact, it's not mentioned at all.  If the unit detects a fault at its output, I would expect a bit more information.  This unit also has a programmable level of input error protection, usually set at 100%.  Still, there is no indication in the manual whether an input issue would be reported as a fault; that usually means a short or lifted ground at the output.

Quote:

Is that a fault code that you can decipher in the manual, or just a light telling you nothing but your UPS is dead?

15724 | Thu Dec 10 13:05:52 2020 | Jon | Update | VAC | UPS failure

I've investigated the vacuum controls failure that occurred last night. Here's what I believe happened.

From looking at the system logs, it's clear that there was a sudden loss of power to the control computer (c1vac). Also, the system was actually down for several hours. The syslog shows normal EPICS channel writes (pressure readback updates, etc., many per minute) which suddenly stop at 4:12 pm. There are no error or shutdown messages in the syslog or in the interlock log. The next activity is the normal start-up messaging at 7:39 pm. So this is all consistent with the UPS suddenly failing.

According to the Tripp Lite manual, the FAULT icon indicates "the battery-supported outlets are overloaded." The failure of the TP2 dry pump appears to have caused this. After the dry pump failure, the rising pressure in the TP2 foreline caused TP2's current draw to increase way above its normal operating range. Attachment 1 shows anomalously high TP2 current and foreline pressure in the minutes just before the failure. The critical system-wide failure is that this overloaded the UPS before overloading TP2's internal protection circuitry, which would have shut down the pump, triggering interlocks and auto-notifications.
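The failure chain described above suggests a simple software guard: trip (and notify) on anomalous TP2 current or foreline pressure before the load can overwhelm the UPS. A minimal sketch; the limits below are illustrative assumptions, not the real interlock thresholds:

```python
# Sketch of a guard on the failure chain above: shut TP2 down and alert if
# its current draw or foreline pressure goes anomalously high.
# Both limits are ASSUMED illustrative values, not the actual 40m thresholds.

TP2_CURRENT_LIMIT_A = 1.0        # assumed normal-operation ceiling
PTP2_PRESSURE_LIMIT_TORR = 1.0   # assumed foreline ceiling

def should_trip(tp2_current_a, ptp2_pressure_torr):
    """Return True if TP2 should be shut down and an alert sent."""
    return (tp2_current_a > TP2_CURRENT_LIMIT_A or
            ptp2_pressure_torr > PTP2_PRESSURE_LIMIT_TORR)

tripped = should_trip(2.0, 0.01)   # high current -> trip
nominal = should_trip(0.5, 0.01)   # normal operation -> no trip
```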

Preventing this in the future:

First, there are too many electronics on the 1 kVA UPS. The reason I asked us to buy a dual 208/120V UPS (which we did buy) is to relieve the smaller 120V UPS. I envision moving the turbo pumps, gauge controllers, etc. all to the 5 kVA unit and reserving the smaller 1 kVA unit for the c1vac computer and its peripherals. We now have the dual 208/120V UPS in hand. We should make it a priority to get that installed.

Second, there are 1 Hz "blinker" channels exposed for c1vac and all the slow controls machines, each reporting the machine's alive status. I don't think they're being monitored by any auto-notification program (running on a central machine), but they could be. Maybe there already exists code that could be co-opted for this purpose? There is an MEDM screen displaying the slow machine statuses at Sitemap > CDS > SLOW CONTROLS STATUS, pictured in Attachment 2. This is the only way I know to catch sudden failures of the control computer itself.
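A minimal sketch of how an auto-notifier could use those 1 Hz blinker channels: declare the host dead if the channel stops toggling. The EPICS read is deliberately stubbed out (a real version might poll via pyepics; the channel name in the comment is hypothetical):

```python
# Heartbeat watchdog sketch for the 1 Hz "blinker" channels described above.
# A real monitor would periodically fetch the channel value over EPICS
# (e.g. a hypothetical "C1:Vac-blinker" channel); here the samples are fed in
# directly so the stale-detection logic stands alone.

import time

class BlinkerWatchdog:
    def __init__(self, timeout_s=10):
        self.timeout_s = timeout_s
        self.last_value = None
        self.last_change = None

    def update(self, value, now=None):
        """Feed one blinker sample; return True if the host looks alive."""
        now = time.monotonic() if now is None else now
        if value != self.last_value:
            self.last_value = value
            self.last_change = now
        # Dead if the channel has not toggled within the timeout.
        return (now - self.last_change) <= self.timeout_s

wd = BlinkerWatchdog(timeout_s=10)
wd.update(0, now=0.0)
alive = wd.update(1, now=1.0)    # still toggling -> alive
stale = wd.update(1, now=20.0)   # frozen for 19 s -> dead
```

The key point is that a frozen value (not just a bad value) is what signals a dead control computer, which is exactly the failure mode that went unnoticed here.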

Attachment 1: TP2_time_history.png
Attachment 2: slow_controls_monitors.png
15725 | Thu Dec 10 14:29:26 2020 | gautam | Update | VAC | UPS failure

I don't buy this story - P2 only briefly burped around GPS time 1291608000, which is around 8pm local time - when I was recovering the system.

Today, Jordan talked to Jon Feicht - apparently there is some kind of valve in the TP2 forepump which only opens ~15-20 seconds after turning the pump on. So the loud sound I was hearing yesterday was just a transient phenomenon. So this morning at ~9am, we turned on TP2. Once again, the PTP2 pressure hovered around 500 torr for about 15-20 seconds. Then it started to drop, although both Jordan and I felt that the time it took for the pressure to drop in the range 5 mtorr to 1 mtorr was unusually long. Jordan suspects some "soft-start" feature of the turbo pumps, which maybe spins up the pump in a more controlled way than usual after an event like a power failure. Maybe that explains why the pressure dropped so slowly? One thing is for sure - the TP2 controller displayed "TOO HIGH LOAD" yesterday when I tried the first restart (before migrating everything to the older UPS unit). This is what led me to interpret the loud sound on startup of TP2 as indicating some issue with the forepump - as it turns out, this was just the internal valve not yet being opened.

Anyway, we left TP2 on for a few hours, pumping only on the little volume between it and V4, and PTP2 remained stable at 20 mtorr. So we judged it's okay to open V4. For today, we will leave the system with both TP2 and TP3 backing TP1. Given the lack of any real evidence of a failure from TP2, I have no reason to believe there is elevated risk.

As for prioritising the UPS swap - my opinion is that it's better to just replace the batteries in the UPS that has worked for years. We can run a parallel reliability test of the new UPS, and once it has demonstrated stability for some reasonable time (>4 months), we can do the swap.


I was able to clear the FAULT indicator on the new UPS by running a "self-test". Pressing and holding the "mute" button on the front panel initiates this test according to the manual, and if all is well, it clears the FAULT indicator, which it did. I'm still not trusting this unit and have left all units powered by the old UPS.


Update 1100 Dec 11: The config remained stable overnight so today I reverted to the nominal config of TP3 pumping the annuli and TP2 backing TP1 which pumps the main volume (through the partially open RV2).

Quote:
 

According to the Tripp Lite manual, the FAULT icon indicates "the battery-supported outlets are overloaded." The failure of the TP2 dry pump appears to have caused this. After the dry pump failure, the rising pressure in the TP2 foreline caused TP2's current draw to increase way above its normal operating range. Attachment 1 shows anomalously high TP2 current and foreline pressure in the minutes just before the failure. The critical system-wide failure is that this overloaded the UPS before overloading TP2's internal protection circuitry, which would have shut down the pump, triggering interlocks and auto-notifications.

Attachment 1: vacDiag1.png
15748 | Wed Jan 6 15:28:04 2021 | gautam | Update | VAC | Vac rack UPS batteries replaced

[chub, gautam]

The replacement was done this afternoon. The red "Replace Battery" indicator is no longer on.

16047 | Mon Apr 19 09:17:51 2021 | Jordan | Update | VAC | Empty N2 Tanks

When I came into the lab this morning, I noticed that both N2 tanks were empty. I had swapped one in on Friday (4-16-21) before I left the lab. Looking at the logs, the right tank (T2) sprung a leak shortly after install. I leak checked the tank coupling after install but did not see a leak. There could be a leak further down the line, possibly at the pressure transducer.

The left tank (T1) emptied normally over the weekend, and I quickly swapped it for a full one, which is currently at ~2700 psi. It was my understanding that if both tanks emptied, V1 would close automatically and a mailer would be sent out to the 40m group. I did not receive an email over the weekend, and when I checked the Vac status just now, V1 was still open.

I will keep an eye on the tank pressure throughout the day, and will try to leak check the T2 line this afternoon, but someone should check the vacuum interlocks and verify.

 

Attachment 1: N2_Pressure.PNG
16064 | Wed Apr 21 12:56:00 2021 | Jordan | Update | VAC | Empty N2 Tanks

Installed T2 today, and leak checked the entire line. No issues found. It could have been a bad valve on the tank itself. Monitored the T2 pressure for ~2 hours to see if there was any change. All seems OK.

Quote:

When I came into the lab this morning, I noticed that both N2 tanks were empty. I had swapped one in on Friday (4-16-21) before I left the lab. Looking at the logs, the right tank (T2) sprung a leak shortly after install. I leak checked the tank coupling after install but did not see a leak. There could be a leak further down the line, possibly at the pressure transducer.

The left tank (T1) emptied normally over the weekend, and I quickly swapped it for a full one, which is currently at ~2700 psi. It was my understanding that if both tanks emptied, V1 would close automatically and a mailer would be sent out to the 40m group. I did not receive an email over the weekend, and when I checked the Vac status just now, V1 was still open.

I will keep an eye on the tank pressure throughout the day, and will try to leak check the T2 line this afternoon, but someone should check the vacuum interlocks and verify.

 

 

Attachment 1: Screenshot_2021-04-21_12-53-26.png
16090 | Wed Apr 28 11:31:40 2021 | Jon | Update | VAC | Empty N2 Tanks

I checked out what happened on c1vac. There are actually two independent monitoring codes running:

  1. The interlock service, which monitors the line directly connected to the valves.
  2. A separate convenience mailer, running as a cron job, that monitors the tanks.

The interlocks did not trip because the low-pressure delivery line, downstream of the dual-tank regulator, never fell below the minimum pressure to operate the valves (65 PSI). This would have eventually occurred, had Jordan been slower to replace the tanks. So I see no problem with the interlocks.

On the other hand, the N2 mailer should have sent an email at 2021-04-18 15:00, which was the first time C1:Vac-N2T1_pressure dropped below the 600 PSI threshold. N2check.log shows these pressures were recorded at this time, but does not log that an email was sent. Why did this fail? I'm not sure, but I found two problems, which I did fix:

  • One was that the code was checking the sensor on the low-pressure side (C1:Vac-N2_pressure; nominally 75 PSI) against the same 600 PSI threshold as the tanks. This channel should either be removed or have a separate threshold (65 PSI) defined just for it. I just removed it from the list, because monitoring of this channel is redundant with the interlock service. This does not explain the failure to send an email.
  • The second issue was that the pyN2check.sh script appeared to be calling Python 3 to run a Python 2 script. At least, that was the situation when I tested it, and it was causing the script to fail partway through. This might well explain the missing email. I explicitly set python --> python2 in the shell script.

The code then ran fine for me when I retested it. I don't see any further issues.
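For illustration, the per-channel threshold logic described above can be sketched as follows. This is a minimal sketch, not the actual script: the T2 channel name is assumed by analogy with T1, and the pressure fetch is stubbed out (the real code reads EPICS channels):

```python
# Per-channel threshold check, as described in the fix above: each tank gets
# its own threshold, and the low-pressure delivery line is deliberately NOT
# compared against the 600 PSI tank threshold.
# "C1:Vac-N2T2_pressure" is an ASSUMED name, by analogy with T1.

THRESHOLDS_PSI = {
    "C1:Vac-N2T1_pressure": 600.0,  # tank 1
    "C1:Vac-N2T2_pressure": 600.0,  # tank 2 (assumed name)
    # "C1:Vac-N2_pressure" (delivery line) omitted: it is covered by the
    # interlock service with its own 65 PSI limit.
}

def low_channels(readings):
    """Return the channels whose reading is below their own threshold."""
    return [ch for ch, p in readings.items()
            if ch in THRESHOLDS_PSI and p < THRESHOLDS_PSI[ch]]

alerts = low_channels({"C1:Vac-N2T1_pressure": 550.0,
                       "C1:Vac-N2T2_pressure": 2700.0,
                       "C1:Vac-N2_pressure": 75.0})
```

With readings like these, only the T1 channel would trigger an alert; the 75 PSI delivery-line value no longer produces a spurious comparison against 600 PSI.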

Quote:

Installed T2 today, and leaked checked the entire line. No issues found. It could have been a bad valve on the tank itself. Monitored T2 pressure for ~2 hours to see if there was any change. All seems ok.


16305 | Wed Sep 1 14:16:21 2021 | Jordan | Update | VAC | Empty N2 Tanks

The right N2 tank had a bad/loose valve and did not fully open. This morning, the left tank was just about empty and the right tank showed 2000+ psi on the gauge. Once the changeover happened, the copper line emptied, because the valve on the N2 tank was not fully opened. I noticed the gauges were both reading zero at ~1pm, just before the meeting. I swapped the left tank, but not in time: the vacuum interlocks tripped at 1:04 pm today when the N2 pressure to the vacuum valves fell below 65 psi. After the meeting, Chub tightened the valve, fully opened it, and refilled the lines. I will monitor the tank pressures today and make sure all is OK.

There used to be a mailer that was sent out when the sum pressure of the two tanks fell below 600 psi, telling you to swap tanks. Does this no longer exist?

16316 | Wed Sep 8 18:00:01 2021 | Koji | Update | VAC | cronjobs & N2 pressure alert

In the weekly meeting, Jordan pointed out that we didn't receive the alert for the low N2 pressure.

To check the situation, I went around the machines and summarized the cronjob situation.
[40m wiki: cronjob summary]
Note that this list does not include the vacuum watchdog and mailer as it is not on cronjob.

Now, I found that there are two N2 scripts running:

1. /opt/rtcds/caltech/c1/scripts/Admin/n2Check.sh on megatron and is running every minute (!)
2. /opt/rtcds/caltech/c1/scripts/Admin/N2check/pyN2check.sh on c1vac and is running every 3 hours.

Then, the N2 log file was checked: /opt/rtcds/caltech/c1/scripts/Admin/n2Check.log

Wed Sep 1 12:38:01 PDT 2021 : N2 Pressure: 76.3621
Wed Sep 1 12:38:01 PDT 2021 : T1 Pressure: 112.4
Wed Sep 1 12:38:01 PDT 2021 : T2 Pressure: 349.2
Wed Sep 1 12:39:02 PDT 2021 : N2 Pressure: 76.0241
Wed Sep 1 12:39:02 PDT 2021 : N2 pressure has fallen to 76.0241 PSI !

Tank pressures are 94.6 and 98.6 PSI!

This email was sent from Nodus.  The script is at /opt/rtcds/caltech/c1/scripts/Admin/n2Check.sh

Wed Sep 1 12:40:02 PDT 2021 : N2 Pressure: 75.5322
Wed Sep 1 12:40:02 PDT 2021 : N2 pressure has fallen to 75.5322 PSI !

Tank pressures are 93.6 and 97.6 PSI!

This email was sent from Nodus.  The script is at /opt/rtcds/caltech/c1/scripts/Admin/n2Check.sh

...

The errors started at 12:39 and lasted until 13:01, every minute. So this was coming from the script on megatron. We were supposed to have ~20 alert emails (but received none).
So what happened to the mails? I tested the script with my mail address and the test mail came through. Then I sent a test mail to the 40m mailing list. It did not arrive.
-> Decided to put the sender address (specified in /etc/mailname, I believe) on the whitelist so that the mailing list can accept it.
I ran the test again and it was successful. So I suppose the system can now send us alerts again.
Also, alerting every minute is excessive, so I changed the check frequency to every ten minutes.

What happened with the Python version running on c1vac?
1) The script is running, spitting out some errors in the cron report (email on c1vac), but it seems to be working.
2) This script checks the pressures of the bottles rather than the N2 pressure downstream, so it's complementary.
3) During the incident on Sept 1, the checker did not trip because the pressure drop happened between the cron job runs, so the script didn't notice it.
4) On top of that, the alerts were set to be sent only to a former grad student. I changed them to be delivered to the 40m mailing list. As the "From" address is set to be some ligox...@gmail.com, which is a member of the mailing list (why?), we should receive the alerts. (And we do for other vacuum alerts from this address.)
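Point 3 can be made quantitative with a toy calculation: a cron job that runs every T minutes can entirely miss an excursion that lasts W < T minutes. A minimal sketch; the 20-minute excursion length used below is an illustrative assumption:

```python
# Toy model of why a slow cron-based checker misses short excursions:
# for a randomly phased excursion of duration W inside a check interval T,
# the chance that at least one check lands inside it is roughly min(1, W/T).
# The 20-minute excursion length is an ASSUMED illustrative value.

def catch_probability(window_min, interval_min):
    """Probability a fixed-interval check lands inside the excursion window."""
    return min(1.0, window_min / interval_min)

p_c1vac = catch_probability(20, 180)   # 3-hour c1vac checker: easy to miss
p_10min = catch_probability(20, 10)    # 10-minute megatron check: always caught
```

This is why the every-10-minutes megatron script and the 3-hourly c1vac script behave so differently on fast pressure drops.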

 

 

 

 

16404 | Thu Oct 14 18:30:23 2021 | Koji | Summary | VAC | Flange/Cable Stand Configuration

Flange Configuration for BHD

We will need a total of 5 new cable stands, so Qty. 6 is the number to be ordered.


Looking at the Accuglass drawing, the in-vacuum cables are standard D-sub 25-pin cables, with only two standard fixing threads.

https://www.accuglassproducts.com/sites/default/files/PDF/Partpdf/110070_3.pdf

For SOSs, the standard 40m style cable bracket works fine. https://dcc.ligo.org/D010194-x0

However, for the OMCs, we need to keep the thread holes available so that we can mate DB25 male cables to these cables.
One possibility is to improvise with this cable bracket and suspend the cables using clean Cu wires or something. I think we can deal with this issue in situ.


Ha! The male side has 4-40 standoff (jack) screws. So we can hold the male side on the bracket using the standoff (jack) screws and plug in the female cables. OK! Issue solved!

https://www.accuglassproducts.com/sites/default/files/PDF/Partpdf/110029_3.pdf

Attachment 1: 40m_flange_layout_20211014.pdf
16410 | Mon Oct 18 10:02:17 2021 | Koji | Update | VAC | Vent Started / Completed

[Chub, Jordan, Anchal, Koji]

- Checked the main volume is isolated.
- TP1 and TP2 were made isolated from other volumes. Stopped TP1. Closed V4 to isolate TP1 from TP2.
- TP3 was made isolated. TP3 was stopped.
- We wanted to vent annuli, but it was not allowed as VA6 was open. We closed VA6 and vented the annuli with VAVEE.
- We wanted to vent the volume between VA6, V5, VM3, V7 together with TP1, so V7 was opened. This did not change the TP1 pressure (P2 = 1.7 mTorr).
- We wanted to connect the TP1 volume with the main volume. But this was not allowed as TP1 was not rotating. We will vent TP1 through TP2 once the vent of the main volume is done.

- Started venting the main volume @ Oct 18, 2021 9:45AM PDT

- We started from 10mTorr/min, and increased the vent speed to 200mTorr/min, 700mTorr/min, and now it is 1Torr/min @ 20Torr
- 280Torr @11:50AM
- 1atm  @~2PM


We wanted to vent TP1. We reran TP2 and tried to slowly introduce air via TP2, but the interlock prevented the action.

Right now the magenta volume in the attachment is still ~1mTorr. Do we want to open the gate valves manually? Or stop the interlock process so that we can bypass it?

Attachment 1: Screen_Shot_2021-10-18_at_14.52.34.png
Attachment 2: Screenshot_2021-10-18_15-08-59.png
16412 | Tue Oct 19 10:59:09 2021 | Koji | Update | VAC | Vent Started / Completed

[Chub, Jordan, Yehonathan, Anchal, Koji]

North door of the BS chamber opened

 

16413 | Tue Oct 19 11:30:39 2021 | Koji | Update | VAC | How to vent TP1

I learned that TP1 was vented through the RGA room in the past. This can be done by opening VM2 and a manual valve (a "needle valve").
I checked the setup and realized that this will vent the RGA. But that is OK as long as we turn off the RGA during the vent and bake it once TP1 is back.

Additional note:

- It'd be nice to take a scan for the current background level before the work.
- Turn RGA EM and filament off, let it cool down overnight. 
- Vent with clean N2 or clean air. (The normal operating temp of ~80C is to minimize accumulation of hydrocarbon contamination.)
- There is a manual switch and indicators on top of the RGA amp. It has auto-protection to turn the filament off if the pressure increases above ~1e-5 torr.

Attachment 1: Screen_Shot_2021-10-18_at_14.52.34.png
16418 | Wed Oct 20 15:58:27 2021 | Koji | Update | VAC | How to vent TP1

Probably the hard disk of c0rga is dead. I'll follow up in this elog later today.

Looking at the log in /opt/rtcds/caltech/c1/scripts/RGA/logs, it seemed that the last RGA scan was on Sept 2, 2021, the day we had the disk-full issue on chiara.
I could not log in to c0rga from the control machines.
I was not aware of c0rga's existence until today, but I located it in the X arm.
The machine was not responding; it was rebooted but could not restart, and it made a knocking sound. I am afraid the HDD has failed.

I think we can
- prepare a replacement linux machine for the python scripts
or
- integrate it with c1vac

16490 | Mon Dec 6 14:26:52 2021 | Koji | Update | VAC | Pumping down the RGA section

Jordan reported that the RGA section needs to be pumped down to allow the analyzer to run at sufficiently low pressure (P < 1e-4 torr).
The RGA section was pumped down with TP2/TP3. The procedure is listed below.
If the pressure goes up to P > 1e-4 torr, we need to keep the pumps running until the scan is ready.

----
### Monitor / Control screen setup ###
1. On c1vac: cd /cvs/cds/caltech/target/c1vac/medm
2. medm -x C0VAC_MONITOR.adl&
3. RGA section (P4) 3.6e-1 torr / P3/P2 still atm.
4. medm -x C0VAC_CONTROL.adl

### TP2/TP3 backing ###
5. Turn on AUX RP with the circuit breaker hanging on the AC.
6. Open the manual valves for TP2/TP3 backing / PTP2/3 ~ 8 torr

### TP2/TP3 starting ###
7. Turned on TP2/TP3 with the Standby OFF

### Pump down the pump spool ###
8. Connect manual RP line (Quick Connect)
9. Turned on RP1/RP3 -> quickly reached 0.4 torr
10. Open V6 for pump spool pumping -> Immediately go down to sufficiently low pressure for TP2/TP3.
(10.5 I had to close V6 at this point)
11. Open V5 to start pumping pump spool with TP3 (TP2 still stand by) -> P3 immediately goes down below 1e-4 torr. This automatically closed V6 because of the low pressure of P3 (interlocking)

### Pump down the RGA section ###
12. Open VM3 to pump down RGA section -> P4 goes down to <1e-4 torr
13. P2 is still 2e-3. So decided to open V4 to use TP2 (now it's ready) too. -> Saturated at 1.7e-3

### Shutting down ###
14. Close VM3
15. Close V4/V5 to isolate TP2/TP3
16. Stop TP2/TP3 -> Slowing down
17. Stop RP1/RP3
18. Close the manual valves for TP2/3/ backing
19. Stop AUX RP with the circuit breaker hanging on the AC.

16493 | Tue Dec 7 13:12:50 2021 | Koji | Update | VAC | Pumping down the RGA section

So that Jordan can run the RGA scan this afternoon, I ran TP3 and started pumping down the RGA section.

Procedure:
- Same 1~4
- Same 5
- 6 Opened only the backing path for TP3
- 7 Turned on TP3 only

- TP3 reached the nominal full speed @75kRPM

- 11 Opened V5 to pump the pump spool -> Immediately reached P3<1e-4
- 12 Opened VM3 to pump the RGA section -> Immediately reached P4<1e-4

The pumps are kept running. I'll come back later to shut down the pumps.
=> Jordan wants to heat the filament (?) and to run the scan tomorrow.
So we decided to keep TP3 running overnight. I switched TP3 to the stand-by mode (= lower rotation speed @50kRPM)

 

16494 | Wed Dec 8 10:14:43 2021 | Jordan | Update | VAC | Pumping down the RGA section

After an overnight pumpdown/RGA warm up, I took a 100 amu scan of the RGA volume and subsequent pumping line. Attached is a screenshot along with the .txt file. Given the high argon peak (40) and the N2/O2 ratio, it looks like there is a decent-sized air leak somewhere in the volume.

Are we interested in the hydrocarbon leak rates of this volume? That will require another scan with one of the calibrated leaks opened.

Edit: Added a Torr v AMU plot to see the partial pressures

Quote:

So that Jordan can run the RGA scan this afternoon, I ran TP3 and started pumping down the RGA section.

Procedure:
- Same 1~4
- Same 5
- 6 Opened only the backing path for TP3
- 7 Turned on TP3 only

- TP3 reached the nominal full speed @75kRPM

- 11 Opened V5 to pump the pump spool -> Immediately reached P3<1e-4
- 12 Opened VM3 to pump the RGA section -> Immediately reached P4<1e-4

The pumps are kept running. I'll come back later to shut down the pumps.
=> Jordan wants to heat the filament (?) and to run the scan tomorrow.
So we decided to keep TP3 running overnight. I switched TP3 to the stand-by mode (= lower rotation speed @50kRPM)

 

 

Attachment 1: 40m_RGAVolume_12_8_21.PNG
40m_RGAVolume_12_8_21.PNG
Attachment 2: 40m_RGAVolume_Torr_12_8_21.PNG
40m_RGAVolume_Torr_12_8_21.PNG
  16501   Fri Dec 10 19:22:01 2021 KojiUpdateVACPumping down the RGA section

The scan result was ~x10 higher than the previously reported scan on 2020/9/15 (https://nodus.ligo.caltech.edu:8081/40m/15570), which itself was somewhat high compared to the reference taken on 2018/7/18.

This could just mean that the vacuum level at the RGA was x10 higher.
We'll go ahead with the vacuum repair and come back to the RGA once we return to "vacuum normal".

Meanwhile, I asked Jordan to turn off the RGA to let it cool down. I shut off the RGA section and turned TP2 off.

  16508   Wed Dec 15 15:06:08 2021 JordanUpdateVACVacuum Feedthru Install

Jordan, Chub

We installed the 4x DB25 feedthru flange on the North-West port of ITMX chamber this afternoon. It is ready to go.

  16529   Tue Dec 21 16:35:39 2021 KojiUpdateVACITMX NW feedthru (LO1-1) connector pin bent

I've received a report that a pin of an ITMX NW feedthru connector was bent. (Attachment 1)
The connector is #1 (upper left) and planned to be used for LO1-1.

This is Pin 25, which is used for the PD K of OSEM #1. This means that Coil Driver #1 (3 OSEMs) uses this pin, but Coil Driver #2 (2 OSEMs) does not.

Anyway, I tried to fix it by bending it back. With some tools, it was straightened enough to plug in the cable connector. (Attachment 2)

The pins seemed exceptionally soft compared to the ones used in usual D-sub connectors, probably because of the vacuum-compatible material.
So it's better to approach the pins parallel to the surface and not apply mating pressure until you are sure that all 25 pins are inserted in the counterpart holes.

Attachment 1: PXL_20211222_002019620.jpg
PXL_20211222_002019620.jpg
Attachment 2: PXL_20211222_003014068.jpg
PXL_20211222_003014068.jpg
  16634   Mon Jan 31 10:39:19 2022 JordanUpdateVACTP1 and Manual Gate Valve Removal

Jordan, Chub

Today, Chub and I removed TP1 and the failed manual gate valve off of the pumping spool.

First, P2 needed to be vented in order to remove TP1. TP1 has a purge valve on the side of the pump, which we slowly opened, bringing the P2 volume up to atmosphere. Although this volume was not vented with dry air/N2, using the purge valve eliminated the need to vent the RGA volume.

Then we disconnected the TP1 foreline, removed TP1 + the 8" flange reducer, then the gate valve. All of the removed hardware looked good, so there is no need to replace bolts/nuts; only new gaskets are needed. TP1 and the failed valve are sitting on a cart, wrapped in foil, next to the pumping station.

Attachment 1: 20220131_100637.jpg
20220131_100637.jpg
Attachment 2: 20220131_102807.jpg
20220131_102807.jpg
Attachment 3: 20220131_102818.jpg
20220131_102818.jpg
Attachment 4: 20220131_100647.jpg
20220131_100647.jpg
  16643   Thu Feb 3 10:25:59 2022 JordanUpdateVACTP1 and Manual Gate Valve Install

Jordan, Chub

Chub and I installed the new manual gate valve (Nor-Cal GVM-6002-CF-K79) and reinstalled TP1. The new gate valve was placed with the sealing side towards the main 40m volume, then TP1 was installed on top and the foreline reattached to TP1.

This valve has a hard stop in the actuator to prevent over torquing.

 

Attachment 1: 20220203_101455.jpg
20220203_101455.jpg
Attachment 2: 20220203_094831.jpg
20220203_094831.jpg
Attachment 3: 20220203_094823.jpg
20220203_094823.jpg
  16682   Sat Feb 26 01:01:40 2022 TegaUpdateVACOngoing work to get the FRG gauges readouts to EPICs channels

I will post a detailed elog later today outlining the connection from the Agilent gauge controller to the vacuum subnet and the work I have been doing over the past two days to get data from the unit into EPICS channels. For now, I just want to mention that I have plugged the XGS-600 gauge controller into the serial server on the vacuum subnet. I checked the vacuum MEDM screen and can confirm that the other sensors did not experience any issues as a result of this. I also currently have two of the FRG-700 gauges connected to the controller, but I have powered the unit down after the checks.

  16683   Sat Feb 26 15:45:14 2022 TegaUpdateVACOngoing work to get the FRG gauges readouts to EPICs channels

I have attached a flow diagram of my understanding of how the gauges are connected to the network.

Earlier today, I connected the XGS-600 gauge controller to the IOLAN Serial Device Server at 192.168.114.22 .

The plan is as follows:

1. Update the serial device yaml file to include this new ip entry for the XGS-600 gauge controller

2. Create a serial gauge class "serial_gauge_xgs.py" for the XGS-600 gauge controller that inherits from the serial gauge parent class for EPICS communication with a serial device via TCP sockets.

  • It might be better to initially reuse the current channels of the devices being replaced, i.e. map the new gauges onto the old channels:
  • C1:Vac-FRG1_pressure → C1:Vac-CC1_pressure
    C1:Vac-FRG2_pressure → C1:Vac-CCMC_pressure
    C1:Vac-FRG3_pressure → C1:Vac-PTP1_pressure
    C1:Vac-FRG4_pressure → C1:Vac-CC4_pressure
    C1:Vac-FRG5_pressure → C1:Vac-IG1_pressure

3. Modify the launcher file to include the XGS gauge controller. Following the same pattern used  to start the service for the other serial gauges, we can start the communication between the XGS-600 gauge controller and the IOLAN serial server and write data to EPICS channels using

controls@c1vac> python launcher.py XGS600

If we are able to establish communication with the XGS-600 gauge controller and write its gauge data to EPICS channels, go on to step 4.

4. Create a serial service file "serial_XGS600.service" and place it in the service folder

5. Add the new EPICS channels to the database file

6. Add "serial_XGS600.service" to lines 10 and 11 of modbusIOC.service

7. Later on, when we are ready, we can restart the updated modbusIOC service
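
A minimal sketch of the reader in step 2 could look like the following. The TCP port number and the pressure-read command string below are assumptions (they should be checked against the XGS-600 manual and the IOLAN port map); the reply format follows the `>1.000E+00,NOCBL    ,...\r` strings seen in the later communication tests:

```python
#!/usr/bin/env python
# Sketch only: the TCP port (4001) and READ_CMD are hypothetical placeholders;
# consult the XGS-600 manual and serial_devices.yaml for the real values.
import socket

XGS_ADDR = ("192.168.114.22", 4001)   # IOLAN TCP port for the XGS-600 (assumed)
READ_CMD = b"#0002\r"                 # placeholder pressure-read command (check manual)

def parse_pressures(reply):
    """Parse a reply like '>1.000E+00,NOCBL    ,...\r' into floats (None if no cable)."""
    fields = reply.strip().lstrip(">").split(",")
    out = []
    for f in fields:
        try:
            out.append(float(f))
        except ValueError:
            out.append(None)          # e.g. 'NOCBL' when no gauge is attached
    return out

def query_xgs(addr=XGS_ADDR, cmd=READ_CMD, timeout=2.0):
    """Send one read command over the IOLAN TCP socket and parse the reply."""
    with socket.create_connection(addr, timeout=timeout) as s:
        s.sendall(cmd)
        return parse_pressures(s.recv(256).decode())
```

`parse_pressures` maps `NOCBL` (no gauge attached) to `None`; the serial gauge class in step 2 would then push this list into the EPICS channels above.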

 

For vacuum signal flow and Acromag channel assignments see [1]  and [2] respectively. For the 16 port IOLAN SDS (Serial Device Server) ethernet connections, see [3]. 

[1] https://wiki-40m.ligo.caltech.edu/Vacuum-Upgrade-2018?action=AttachFile&do=view&target=40m_Vacuum_System_Signal_Flow.pdf

[2] https://wiki-40m.ligo.caltech.edu/Vacuum-Upgrade-2018?action=AttachFile&do=view&target=AcromagChannelAssignment.pdf

[3] https://git.ligo.org/40m/vac/-/blob/master/python/serial/serial_devices.yaml

Attachment 1: Vac-gauges-flow-diagram.png
Vac-gauges-flow-diagram.png
  16688   Mon Feb 28 19:15:10 2022 TegaUpdateVACOngoing work to get the FRG gauges readouts to EPICs channels

I decided to create an independent service for the XGS data readout so we can get this working first before trying to integrate it into the current system. After starting the service, I noticed that the EPICS channels were not updating as expected. I tracked the problem down to an IP socket connect() error, i.e. we get a connection error for the IP address assigned to the LAN port to which the XGS box is connected. After trying a few things and searching the internet, I believe the error indicates that this particular LAN port is not yet configured. I reached this conclusion after noting that only a select number of LAN ports connect without issues, and these are the ports that already have devices attached, so those ports must have been configured at some point. The next step is to look at the IOLAN manual to figure out how to configure the IP port for the XGS controller. Fingers crossed.
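
This kind of check can be reproduced from c1vac with a plain TCP probe of each IOLAN port (the actual TCP port assignments live in the IOLAN configuration; none are assumed here):

```python
# Minimal TCP probe: True if host:port accepts a connection, False otherwise.
# An unconfigured IOLAN LAN port should fail this check the same way the
# XGS port did, while ports with configured devices should succeed.
import socket

def check_tcp(host, port, timeout=1.0):
    """Try to open a TCP connection; turn refusals/timeouts into False."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Running this against each candidate port would separate "port not configured on the IOLAN" from "device not responding on a configured port".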

  16691   Tue Mar 1 20:38:49 2022 TegaUpdateVACOngoing work to get the FRG gauges readouts to EPICs channels

During my investigation, I inadvertently overwrote the serial port configuration for the connected devices. So I am now working to get it all back. I have attached screenshots of the config settings that brought back communication that is not garbled. There is no physical connection to port 6, which I guess was initially used for the UPS serial communication but not anymore. Also, ports 9 and 10 are connected to Hornet and SuperBee, both of which have not been communicating for a while and are to be replaced, so there is no way to confirm communication with them. Otherwise, the remaining devices seem to be communicating as before.

I still could not establish communication with the XGS-600 controller using the serial port settings given in the manual (which do work via a Serial-to-USB adapter), so I will revisit the problem later. My immediate plan is to do a Serial-to-Ethernet, then Ethernet-to-Serial, then Serial-to-USB connection to see if the USB code still works. If it does, then at least I know the problem is not coming from the Serial-to-Ethernet adapters. Then I will replace the controller with my laptop and see what signal comes through when I send a message to the controller via the IOLAN serial device server. Hopefully I can discover what's wrong at that point.

 

Note to self: Before doing anything, do a sanity check by comparing the settings on the IOLAN SDS and the config settings that worked for the Serial to USB communication and post an elog for this for reference.

Attachment 1: Working_Serial_Port_List_1.png
Working_Serial_Port_List_1.png
Attachment 2: Working_Serial_Port_List_2.png
Working_Serial_Port_List_2.png
Attachment 3: Working_Config_Ports#1-5.png
Working_Config_Ports#1-5.png
Attachment 4: Working_Config_Ports#7-8.png
Working_Config_Ports#7-8.png
  16692   Wed Mar 2 11:50:39 2022 TegaUpdateVACOngoing work to get the FRG gauges readouts to EPICs channels

Here is the IOLAN SDS TCP socket setting and the USBserial setting for comparison.

I have also included the python script and output from the USBserial test from earlier.

Attachment 1: XGS600_IOLAN_settings_1.png
XGS600_IOLAN_settings_1.png
Attachment 2: XGS600_IOLAN_settings_2.png
XGS600_IOLAN_settings_2.png
Attachment 3: XGS600_USBserial_settings.png
XGS600_USBserial_settings.png
Attachment 4: XGS600_comm_test.py
#!/usr/bin/env python

#Created 2/24/22 by Tega Edo
'''Script to read/write to the XGS-600 Gauge Controller'''

import serial
import sys,os,math,time

ser = serial.Serial('/dev/cu.usbserial-1410') # open serial port 

... 74 more lines ...
Attachment 5: XGS600_comm_test_result.txt
----- Multiple Sensor Read Commands -----

Sent to XGS-600 -> #0001\r : Read XGS contents
response : >FE4CFE4CFE4C

Sent to XGS-600 -> #0003\r : Read Setpoint States
response : >0000

Sent to XGS-600 -> #0005\r : Read software revision
response : >0206,0200,0200,0200
... 69 more lines ...
  16693   Wed Mar 2 12:40:08 2022 TegaUpdateVACOngoing work to get the FRG gauges readouts to EPICs channels

Connector Test:

A quick test to rule out any issue with the Ethernet to Serial adapter was done using the setup shown in Attachment 1. The results rule out any connector problem.

 

IOLAN COMM test (as per Koji's suggestion):

The next step was to swap the controller with a laptop set up to receive serial commands using the same settings as the XGS600 controller: basically, run a slightly modified version of the python script that goes into listening mode, then send commands to the TCP socket on the IOLAN SDS unit and check what data makes its way to the laptop's USB-serial terminal. After working on this for a bit, I realized that we do not need to do anything special on the c1vac machine; we only need to start the service as it would normally run. So I wrote a small python script implementing a basic XGS-600 controller emulator, see Attachment 4. The outputs from the laptop and c1vac terminals are Attachments 5 and 6 respectively. 

These results show that we can communicate via the assigned IP address "192.168.114.22" and that the commands sent from c1vac reach the laptop in the correct format. Furthermore, the serial_XGS service, part of the modbusIOC_XGS service, which usually exits with an error, seems fine now after successfully communicating with the laptop. I don't know why it did not die after the tests. I also found a bug in my code as a result of the test: the status field for the fourth gauge wasn't being written to. 
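
The heart of such an emulator is just formatting replies the way the real controller does. A sketch of that formatting, modeled on the captured terminal output (the exact field padding of the real controller is not reproduced here and is an assumption):

```python
# Sketch of the reply formatting used by an XGS-600 emulator: a list of gauge
# values becomes the '>'-prefixed, comma-separated, CR-terminated string seen
# on the c1vac side. None stands in for a missing gauge ('NOCBL').
def format_xgs_reply(values):
    fields = ["%.3E" % v if v is not None else "NOCBL" for v in values]
    return ">" + ",".join(fields) + "\r"
```

On the laptop side, the emulator loop then just reads a command from the serial port and writes `format_xgs_reply(...)` back, which is what Attachment 4 does with pyserial.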

 

Pressure reading issue:

I noticed that the pressure reading was not giving the atmospheric value of ~760 Torr as expected. Looking through my previous readouts, it seems the unit showed this atmospheric value of ~761 Torr when the first gauge was attached. However, a closer look revealed a transient behavior: when the unit is turned on, the reading dips to the atmospheric value but eventually rises up to 1000 Torr. I don't think this is a calibration problem because 1000 Torr is the maximum value of the gauge range. I also found that when the XGS controller has been running for a while, a power cycle does not show this transient behavior. So maybe a faulty capacitor somewhere? I have attached a short video clip that shows what happens when the XGS controller unit is turned on.

Attachment 1: IMG_20220302_123529382.jpg
IMG_20220302_123529382.jpg
Attachment 2: XGS600_Serial2Ethernet2Serial2USB_comm_test_result.txt
$ python3 XGS600_comm_test.py  

----- Multiple Sensor Read Commands -----

Sent to XGS-600 -> #0001\r : Read XGS contents
response : >FE4CFE4CFE4C

Sent to XGS-600 -> #0003\r : Read Setpoint States
response : >0000

... 73 more lines ...
Attachment 3: VID-20220302-WA0001.mp4
Attachment 4: comm_test_c1vac_to_laptop_via_iolansds.py
#!/usr/bin/env python

#Created 3/2/22 by Tega Edo
'''Script to emulate XGS-600 controller using laptop USBserial port'''

import serial
import sys,os,math,time

ser = serial.Serial('/dev/cu.usbserial-1410') # open serial port 

... 19 more lines ...
Attachment 5: laptop_terminal.txt
(base) tega.edo@Tegas-MBP serial % python3 comm_test_c1vac_to_laptop_via_iolansds.py

----- Listen for USBserial command and asynchronously send data in XGS600 format -----

Command received from c1vac [1] : 

Data sent to c1vac [1] : >1.000E+00,NOCBL    ,NOCBL    ,NOCBL    ,2.00E+00,NOCBL\r
Command received from c1vac [2] : 

Data sent to c1vac [2] : >2.000E+00,NOCBL    ,NOCBL    ,NOCBL    ,3.00E+00,NOCBL\r
... 54 more lines ...
Attachment 6: c1vac_terminal.txt
controls@c1vac:/opt/target/python/serial$ caget C1:Vac-FRG1_status && caget C1:Vac-FRG2_status && caget C1:Vac-FRG3_status && caget C1:Vac-FRG4_status && caget C1:Vac-FRG5_status
C1:Vac-FRG1_status             1.530E+02
C1:Vac-FRG2_status             OFF
C1:Vac-FRG3_status             OFF
C1:Vac-FRG4_status             NO COMM
C1:Vac-FRG5_status             1.55E+02
controls@c1vac:/opt/target/python/serial$ caget C1:Vac-FRG1_status && caget C1:Vac-FRG2_status && caget C1:Vac-FRG3_status && caget C1:Vac-FRG4_status && caget C1:Vac-FRG5_status
C1:Vac-FRG1_status             1.630E+02
C1:Vac-FRG2_status             OFF
C1:Vac-FRG3_status             OFF
... 70 more lines ...
  16701   Fri Mar 4 18:12:44 2022 KojiUpdateVACRGA pumping down

1. Jordan reported that the newly installed Pirani gauge for P2 shows 850 Torr while PTP2 shows 680 Torr. Because of this, the vacuum interlock fails when we try to open V4.

2. Went to c1vac. Copied the interlock setting file interlock_conditions.yaml to interlock_conditions_220304.yaml
3. Deleted diffpressure line and pump_underspeed line for V4
4. Restarted the interlock service

controls@c1vac:/opt/target/python/interlocks$ sudo systemctl status interlock.service  
controls@c1vac:/opt/target/python/interlocks$ sudo systemctl restart interlock.service
controls@c1vac:/opt/target/python/interlocks$ sudo systemctl status interlock.service

5. Steps 2~4 above were unnecessary. Start over.


Let RP1/3 pump down TP1 section through the pump spool. Then let TP2 pump down TP1 and RGA.

1. Open V7. This made P2 (which is alive) and P3 go a bit lower.
2. Connected the main RP tube to the RP port.
3. Started RP1/3. PRP quickly reached 0.4 Torr.
4. Opened V6; this brought P3 and O2 below 1 Torr.
5. Closed V6. Shut down RP1/3. Disconnected the RP tube.
6. Turned on the aux RP at the wall power.
7. Turned on TP2. Waited for it to start up.
8. Opened V4. Once the pressure was below the Pirani range, opened VM3.
9. Keep it running over the weekend.

10. Once TP2 reached the nominal speed, the "StandBy" button was clicked to lower the rotation speed (for longer TP2 life)

Attachment 1: Screenshot_2022-03-04_19-38-11.png
Screenshot_2022-03-04_19-38-11.png
  16704   Sun Mar 6 18:14:45 2022 TegaUpdateVACOngoing work to get the FRG gauges readouts to EPICs channels

Following repeated failure to establish communication between c1vac and the XGS600 controller via the Perle IOLAN serial device server, I decided to monitor the signal voltage of the communication channels (pin #2, pin #3 and pin #5) using an oscilloscope. The result of this investigation is presented in the attached pdf document. In summary, it seems I have used a cross-wired RS232 serial cable instead of a normal RS232 serial cable, so the c1vac read request command is being relayed on the wrong comm channel (pin #2 instead of pin #3). I will swap out the cable to see if this resolves the problem.  

Attachment 1: iolan_xgs_comm_investigation.pdf
iolan_xgs_comm_investigation.pdf iolan_xgs_comm_investigation.pdf iolan_xgs_comm_investigation.pdf iolan_xgs_comm_investigation.pdf
  16706   Mon Mar 7 13:53:40 2022 TegaUpdateVACOngoing work to get the FRG gauges readouts to EPICs channels

So it appears that my deduction from the pictures that a cable swap was needed was correct; however, it turns out that the installed cable was actually a normal RS232 cable and what we need instead is an RS232 null-modem cable. After the swap, the communication between c1vac and the XGS600 controller became active. Although the data makes it all the way to c1vac without any issues, the scope view shows that it mainly utilizes the upper half of the voltage range, which is just over 50% of the available range. I don't know what to make of this.

 

I guess the only remaining issue now is the incorrect atmospheric pressure reading of 1000 Torr. 

 

Quote:

Following repeated failure to establish communication between c1vac and the XGS600 controller via the Perle IOLAN serial device server, I decided to monitor the signal voltage of the communication channels (pin #2, pin #3 and pin #5) using an oscilloscope. The result of this investigation is presented in the attached pdf document. In summary, it seems I have used a cross-wired RS232 serial cable instead of a normal RS232 serial cable, so the c1vac read request command is being relayed on the wrong comm channel (pin #2 instead of pin #3). I will swap out the cable to see if this resolves the problem.  

 

Attachment 1: iolan_xgs_comm_live.pdf
iolan_xgs_comm_live.pdf
  16707   Mon Mar 7 14:52:34 2022 KojiUpdateVACOngoing work to get the FRG gauges readouts to EPICs channels

Great trouble shoot!

> I guess the only remaining issue now is the incorrect atmospheric pressure reading of 1000 Torr. 

This is just a calibration issue. The controller should have the calibration function.
(The other Pirani showing 850 Torr was also a calibration issue, although I didn't bother to correct it. I think Piranis typically have a large distribution of calibration values and require individual calibration.)

  16713   Tue Mar 8 12:08:47 2022 TegaUpdateVACOngoing work to get the FRG gauges readouts to EPICs channels

OMG, it worked! It was indeed a calibration issue and all I had to do was press the "OK" button after selecting the "CAL" tab beside the pressure reading. Wow.

Quote:

Great trouble shoot!

> I guess the only remaining issue now is the incorrect atmospheric pressure reading of 1000 Torr. 

This is just a calibration issue. The controller should have the calibration function.
(The other Pirani showing 850 Torr was also a calibration issue, although I didn't bother to correct it. I think Piranis typically have a large distribution of calibration values and require individual calibration.)

 

Attachment 1: XGS600_calibration.pdf
XGS600_calibration.pdf
  16717   Wed Mar 9 10:23:28 2022 JordanUpdateVACLeak Testing of New Manual Gate Valve - Attempt #1

Jordan, Chub, Paco

Chub and I went into the lab this morning to leak check the new gate valve after pumping over the weekend. This was done through the RGA while spraying helium around the newly installed flanges: the RGA is set to monitor the partial pressure of helium versus time, and we visually watch for any spikes in pressure that would indicate an air leak.

So, I mistakenly thought the gate valve was open and both sides were being pumped on; this was not the case. The valve was closed, so there was a pressure differential, and when I turned the handle it tripped the interlocks and closed V4. I closed VM3 to the RGA volume to prevent the filament from being damaged.

Then the MEDM screen for the vac controls started flashing rapidly. I closed the window and reopened the controls to find all the panels were white, but we could still see the read-only vac screen. So Paco restarted c1vac and the controls were restored. V7 closed, but was then reopened.

We now need to restart pumping with the odd pressure differentials between V4 and VM3.

Plan

- TP2 turned off along with the AUX pump

- VM3 opened to vent RGA volume (RGA turned off, filament cooled)

- Open the manual vent screw on TP1 to bring the RGA+TP1 volume back to atmosphere, now no pressure diff. on V4

- Open V4, restart AUX pump to rough out the volume

- Connect RP1/3 line to the pump spool, turn on RP1/3; RGA+TP1 volume went to mtorr

- Slowly open the manual gate valve

- Connect AUX pump to TP2 and rough out

- Restart TP2, once at speed open V4, close V6 and turn off RP1/3

Now we are pumping on the RGA/TP1 volume with TP2, leak check attempt #2 will happen tomorrow morning

  16721   Thu Mar 10 09:39:59 2022 JordanUpdateVACLeak Testing of New Manual Gate Valve - Attempt #2

This morning Chub and I leak checked the manual gate valve with the RGA and helium. There was no change in the helium partial pressure while spraying helium around the flanges; all looks good.

 

I also took a 100 AMU analog scan after the filament had warmed up overnight, and the plot was quite noisy even at the lowest scan speed. I recommend this unit go back to SRS for a filament replacement/recalibration. I am worried yesterday's "vent" of the RGA volume may have burned the filament. See the comparison of yesterday's analog scan to today's below.

Attachment 1: 3-09-2022.PNG
3-09-2022.PNG
Attachment 2: 3-10-2022.PNG
3-10-2022.PNG
  16747   Tue Mar 29 15:07:11 2022 TegaSummaryVACOpening of MC and ETM chambers

[Anchal, Chub, Ian, Paco, Tega]

Today, we opened the MC chamber, ETMX chamber and ETMY chamber.

  16769   Mon Apr 11 11:00:30 2022 JancarloUpdateVACC1VAC Reboot and Nitrogen tanks

[Paco, JC, Ian, Jordan, Chub]

During the morning check, I walked over to the nitrogen tanks to check the levels. I noticed one tank was empty, so I swapped it out. Chub came over to check the levels and to take note of how many tanks were left available for use (none). Chub then put in a work order for a set of full nitrogen tanks. We should be set on nitrogen until Thursday of this week (4/14/22).

As for c1vac: this morning, Paco and I attempted to open the PSL shutter, but the interlock system was tripped so we didn't get any light into the IFO. We traced the issue down to c1vac being unresponsive. We discussed whether the interlock may have tripped as a result of the nitrogen tanks running out, but we do not believe this was the issue since we would have received an email. We tried troubleshooting as much as possible while avoiding a reboot, but were unable to solve the issue, so we ran the idea of a reboot by Jordan and Ian; everyone was in agreement, and the reboot fixed the system. Restarting c1vac seems to have closed V4, but this didn't cause any issues with the current state of the vacuum system.

After opening the PSL shutter again, we saw the laser down the IFO, so we resumed alignment work.

  16772   Tue Apr 12 09:05:21 2022 JordanUpdateVACNew Pressure Gauge Install/Pump Spool Vent

Today, Tega and I would like to vent the pump spool and install the new FRG-400 Agilent pressure gauges (per elog 15703). The attached picture shows the volume that needs to be vented highlighted in red, and the gauges that need to be replaced/removed (purple dot next to the name).

The vent plan is as follows:

Open RV2

Open VM3

Open V7

Open V4

Shut down TP2

Install new gauges

Will update this post after the vent.

Attachment 1: Screenshot_2022-04-12_08-42-33.png
Screenshot_2022-04-12_08-42-33.png
Attachment 2: 81CB0936-1B19-4722-8A32-C3DC1D1FBC21.heic
Attachment 3: 2262ECEF-6200-4E95-8E32-C83CB9EB4F17.heic
Attachment 4: B5712B34-ECF6-43BE-BCB2-38CD775CF653.heic
Attachment 5: BEE14A07-976A-45C2-82A0-1774D377941E.heic
Attachment 6: 84557D6E-6AAE-47CF-A3AD-1DF329FEB550.heic
  16773   Wed Apr 13 12:50:07 2022 TegaUpdateVACNew Pressure Gauge Install/Pump Spool Vent

[Jordan, JC, Tega]

We have installed all the FRGs and updated the VAC medm screens to display their sensor readings. The replacement map is CC# -> FRG#, where # is in [1..4], and PRP1 -> FRG5. We now need to clean up the c1vac python code so that it is not overloaded with non-functional gauges (CC1, CC2, CC3, CC4, PRP1). We also need to remove the connection cables for the old replaced gauges.

Attachment 1: VAC_FRG_UPGRADE.png
VAC_FRG_UPGRADE.png
  16777   Thu Apr 14 09:04:30 2022 JordanUpdateVACRGA Volume RGA Scans

Prior to venting the RGA volume on Tuesday (4/12/2022), I took an RGA scan of the volume to be vented (RGA + TP1 volume + manual gate valve) to see if there was a difference after replacing the manual gate valve. Attached is the plot from 4/12/22, and an overlay plot comparing 4/12/22 to 12/10/2021, when the same volume was scanned with the old (defective) manual gate valve.

There is a significant drop in the O2 peak relative to the nitrogen peak, and reduced argon (AMU 40), which indicates there is no longer a large air leak.

12/10/21 N2/O2 ratio ~ 4 (Air 78%N2 / 21%O2)

4/12/22 N2/O2 ratio ~ 10      

There is one significant (above noise level) peak above AMU 46, which is at AMU 58. This could possibly be acetone (AMU 43 and 58), but overall the new RGA volume scans look significantly better after the manual gate valve replacement. Well done!
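
The ratio argument above is just arithmetic on the partial pressures: air at 78% N2 / 21% O2 gives N2/O2 ≈ 3.7, so a measured ratio near 4 is consistent with an air leak, while a ratio well above that means the O2 is depleted relative to air. A toy check (the numbers fed in are illustrative, not the scan values):

```python
# Illustrative only: compare a measured N2/O2 partial-pressure ratio to air's.
AIR_N2_O2 = 78.0 / 21.0        # ~3.7 for atmospheric air

def n2_o2_ratio(p_n2, p_o2):
    """Ratio of N2 to O2 partial pressures from an RGA scan."""
    return p_n2 / p_o2

def looks_like_air_leak(p_n2, p_o2, tolerance=1.5):
    """Crude test: ratio close to air's value suggests an air leak."""
    return abs(n2_o2_ratio(p_n2, p_o2) - AIR_N2_O2) < tolerance
```

With the logged values, 12/10/21's ratio of ~4 passes this crude air-leak test while 4/12/22's ratio of ~10 does not, matching the conclusion above.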

Attachment 1: 40mRGA_Overlay.pdf
40mRGA_Overlay.pdf
Attachment 2: RGAVolume_4_12_22.PNG
RGAVolume_4_12_22.PNG
  16794   Thu Apr 21 11:31:35 2022 JCUpdateVACGauges P3/P4

[Jordan, JC]

It was brought to our attention during yesterday's meeting that the pressures in the vacuum system were not equivalent although the valves were open. So this morning, Jordan and I reviewed pressure gauges P3 and P4. We attempted to recalibrate them, but the gauges were unresponsive. Following this, we connected new gauges on the outside to test the calibration. The two gauges successfully calibrated at atmospheric pressure. We then removed the old gauges and installed the new ones. 

Attachment 1: IMG_0560.jpeg
IMG_0560.jpeg
Attachment 2: IMG_0561.jpeg
IMG_0561.jpeg
  16796   Thu Apr 21 16:36:56 2022 TegaUpdateVACcleanup work for vacuum git repo

git repo - https://git.ligo.org/40m/vac

Finally incorporated the FRGs into the main modbusIOC service, and everything seems to be working fine. I have also removed the old sensors (CC1, CC2, CC3, CC4, PTP1, IG1) from the serial client list, along with their corresponding EPICS channels. Furthermore, the interlock service python script has been updated so that every occurrence of an old sensor (turns out to be only CC1) is replaced by its corresponding new FRG sensor (FRG1), and a redundancy was also enacted for P1a: the interlock condition is replicated with P1a replaced by FRG1, since they both sense the main-volume pressure.
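
The P1a redundancy step can be sketched as follows, assuming the interlock conditions are held as a list of dicts each carrying the channel it watches (the actual structure of interlock_conditions.yaml may differ; the channel names and the `channel`/`limit` keys here are illustrative):

```python
import copy

# Hypothetical structure: each interlock condition names the EPICS channel it
# watches. Every P1a condition is cloned with FRG1 substituted, since both
# gauges read the main-volume pressure.
def add_frg1_redundancy(conditions,
                        old="C1:Vac-P1a_pressure",
                        new="C1:Vac-FRG1_pressure"):
    extra = []
    for cond in conditions:
        if cond.get("channel") == old:
            clone = copy.deepcopy(cond)   # keep thresholds/actions identical
            clone["channel"] = new
            extra.append(clone)
    return conditions + extra
```

Loading the yaml into such a list, running this once, and dumping it back would reproduce the replication by hand that was done in the repo.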
