ID | Date | Author | Type | Category | Subject
  15421 | Mon Jun 22 10:43:25 2020 | Jon | Configuration | VAC | Vac maintenance at 11 am

The vac system is going down at 11 am today for planned maintenance:

  • Re-install the repaired TP2 and TP3 dry pumps [ELOG 15417]
  • Incorporate an auto-mailer and flag channel into the controls code for signaling tripped interlocks [ELOG 15413]

We will advise when the work is completed.

  15424 | Mon Jun 22 20:06:06 2020 | Jon | Configuration | VAC | Vac maintenance complete

This work is finally complete. The dry pump replacement was finished quickly, but the controls updates required some substantial debugging.

For one, the mailer code I had been given to install would not run against Python 3.4 on c1vac, the version the vac controls have been running for about a year. There were some missing dependencies that proved difficult to install (related to Debian Jessie becoming unsupported). I ultimately solved the problem by migrating the whole system to Python 3.5. Getting the Python keyring working within systemd (for email account authentication) also took some time.

Edit: The new interlock flag channel is named C1:Vac-interlock_flag.

Along the way, I discovered why the interlocks had been failing to auto-close the PSL shutter: The interlock was pointed to the channel C1:AUX-PSL_ShutterRqst. During the recent c1psl upgrade, we renamed this channel C1:PSL-PSL_ShutterRqst. This has been fixed.

The main volume is being pumped down, for now still in a TP3-backed configuration. As of 8:30 pm the pressure had fallen back to the upper 1E-6 range. The interlock protection is fully restored. Any time an interlock is triggered in the future, the system will send an immediate notification to the 40m mailing list. 👍
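
For reference, a minimal sketch of what the mailer half of this looks like (all addresses, hostnames, and keyring service/user names below are placeholders, not the actual values; the real script lives in the vacuum git repo):

import smtplib
import keyring
from email.mime.text import MIMEText

def notify_interlock_trip(reason):
    """Email the 40m list when a vacuum interlock trips (sketch only)."""
    msg = MIMEText('Tripped interlock condition: %s' % reason)
    msg['Subject'] = 'c1vac: vacuum interlock tripped'
    msg['From'] = 'c1vac@example.org'     # placeholder sender address
    msg['To'] = '40m-list@example.org'    # placeholder list address

    # The account password lives in the system keyring (hence the keyring +
    # systemd wrangling described above); nothing is hard-coded in the script.
    password = keyring.get_password('c1vac_mailer', 'c1vac')  # hypothetical names
    with smtplib.SMTP_SSL('smtp.example.org') as smtp:        # placeholder SMTP host
        smtp.login('c1vac', password)
        smtp.send_message(msg)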

Quote:

The vac system is going down at 11 am today for planned maintenance:

  • Re-install the repaired TP2 and TP3 dry pumps [ELOG 15417]
  • Incorporate an auto-mailer and flag channel into the controls code for signaling tripped interlocks [ELOG 15413]
  15425 | Tue Jun 23 17:54:56 2020 | rana | Configuration | VAC | Vac maintenance complete

I propose we go for all CAPS for all channel names. The lower-case names are just a holdover from Steve/Alan from the '90s. All other systems are all CAPS.

It avoids having to force them all to UPPER in the scripts and channel lists.

  15748 | Wed Jan 6 15:28:04 2021 | gautam | Update | VAC | Vac rack UPS batteries replaced

[chub, gautam]

The replacement was done this afternoon. The red "Replace Battery" indicator is no longer on.

  14380 | Thu Jan 3 15:08:37 2019 | gautam | Omnistructure | VAC | Vac status unknown

Larry W came by the 40m, and reported that there was a campus-wide power glitch (he was here to check if our networking infrastructure was affected). I thought I'd check the status of the vacuum.

  • Attachment #1 is a screenshot of the Vac overview MEDM screen. Clearly something has gone wrong with the modbus process(es). Only the PTP2 and PTP3 gauges seem to be communicative.
  • Attachment #2 shows the minute trend of the pressure gauges for a 12-day period - it looks like there is some issue with the frame builder clock; perhaps this issue resurfaced? But checking the system time on FB doesn't suggest anything is wrong... I double-checked with dataviewer as well that the trends don't exist. Checking the status of the individual daqd processes, however, showed that the dates were off by 1 year, so I just restarted all of them and now the time seems correct. How can we fix this problem more permanently? Also, the P1b readout looks suspicious - why are there periods where it seems like we are reading values better than the LSB of the device?

I decided to check the systemctl process status on c1vac:

controls@c1vac:~$ sudo systemctl status modbusIOC.service
● modbusIOC.service - ModbusIOC Service via procServ
   Loaded: loaded (/etc/systemd/system/modbusIOC.service; enabled)
   Active: active (running) since Thu 2019-01-03 14:53:49 PST; 11min ago
 Main PID: 16533 (procServ)
   CGroup: /system.slice/modbusIOC.service
           ├─16533 /usr/bin/procServ -f -L /opt/target/modbusIOC.log -p /run/...
           ├─16534 /opt/epics/modules/modbus/bin/linux-x86_64/modbusApp /opt/...
           └─16582 caRepeater

Jan 03 14:53:49 c1vac systemd[1]: Started ModbusIOC Service via procServ.

Warning: Unit file changed on disk, 'systemctl daemon-reload' recommended.

So something did happen today that required restart of the modbus processes. But clearly not everything has come back up gracefully. A few lines of dmesg (there are many more segfaults):

[1706033.718061] python[23971]: segfault at 8 ip 000000000049b37d sp 00007fbae2b5fa10 error 4 in python2.7[400000+31d000]
[1706252.225984] python[24183]: segfault at 8 ip 000000000049b37d sp 00007fd3fa365a10 error 4 in python2.7[400000+31d000]
[1720961.451787] systemd-udevd[4076]: starting version 215
[1782064.269844] audit: type=1702 audit(1546540443.159:38): op=linkat ppid=21820 pid=22823 auid=4294967295 uid=1000 gid=1000 euid=1000 suid=1000 fsuid=1000 egid=1000 sgid=1000 fsgid=1000 tty=pts0 ses=4294967295 comm="git" exe="/usr/bin/git" res=0
[1782064.269866] audit: type=1302 audit(1546540443.159:39): item=0 name="/cvs/cds/caltech/target/c1vac/.git/objects/85/tmp_obj_uAXhPg" inode=173019272 dev=00:21 mode=0100444 ouid=1001 ogid=1001 rdev=00:00 nametype=NORMAL
[1782064.365240] audit: type=1702 audit(1546540443.255:40): op=linkat ppid=21820 pid=22823 auid=4294967295 uid=1000 gid=1000 euid=1000 suid=1000 fsuid=1000 egid=1000 sgid=1000 fsgid=1000 tty=pts0 ses=4294967295 comm="git" exe="/usr/bin/git" res=0
[1782064.365271] audit: type=1302 audit(1546540443.255:41): item=0 name="/cvs/cds/caltech/target/c1vac/.git/objects/58/tmp_obj_KekHsn" inode=173019274 dev=00:21 mode=0100444 ouid=1001 ogid=1001 rdev=00:00 nametype=NORMAL
[1782064.460620] audit: type=1702 audit(1546540443.347:42): op=linkat ppid=21820 pid=22823 auid=4294967295 uid=1000 gid=1000 euid=1000 suid=1000 fsuid=1000 egid=1000 sgid=1000 fsgid=1000 tty=pts0 ses=4294967295 comm="git" exe="/usr/bin/git" res=0
[1782064.460652] audit: type=1302 audit(1546540443.347:43): item=0 name="/cvs/cds/caltech/target/c1vac/.git/objects/cb/tmp_obj_q62Pdr" inode=173019276 dev=00:21 mode=0100444 ouid=1001 ogid=1001 rdev=00:00 nametype=NORMAL
[1782064.545449] audit: type=1702 audit(1546540443.435:44): op=linkat ppid=21820 pid=22823 auid=4294967295 uid=1000 gid=1000 euid=1000 suid=1000 fsuid=1000 egid=1000 sgid=1000 fsgid=1000 tty=pts0 ses=4294967295 comm="git" exe="/usr/bin/git" res=0
[1782064.545480] audit: type=1302 audit(1546540443.435:45): item=0 name="/cvs/cds/caltech/target/c1vac/.git/objects/e3/tmp_obj_gPI4qy" inode=173019277 dev=00:21 mode=0100444 ouid=1001 ogid=1001 rdev=00:00 nametype=NORMAL
[1782064.640756] audit: type=1702 audit(1546540443.527:46): op=linkat ppid=21820 pid=22823 auid=4294967295 uid=1000 gid=1000 euid=1000 suid=1000 fsuid=1000 egid=1000 sgid=1000 fsgid=1000 tty=pts0 ses=4294967295 comm="git" exe="/usr/bin/git" res=0
[1783440.878997] systemd[1]: Unit serial_TP3.service entered failed state.
[1784682.147280] systemd[1]: Unit serial_TP2.service entered failed state.
[1786407.752386] systemd[1]: Unit serial_MKS937b.service entered failed state.
[1792371.508317] systemd[1]: serial_GP316a.service failed to run 'start' task: No such file or directory
[1795550.281623] systemd[1]: Unit serial_GP316b.service entered failed state.
[1796216.213269] systemd[1]: Unit serial_TP3.service entered failed state.
[1796518.976841] systemd[1]: Unit serial_GP307.service entered failed state.
[1796670.328649] systemd[1]: serial_Hornet.service failed to run 'start' task: No such file or directory
[1797723.446084] systemd[1]: Unit serial_MKS937b.service entered failed state.

 

I don't know enough about the new system so I'm leaving this for Jon to debug. Attachment #3 shows that the analog readout of the P1 pressure gauge suggests that the IFO is still under vacuum, so no random valve openings were effected (as expected, since we valved off the N2 line for this very purpose).

  15556 | Fri Sep 4 15:26:55 2020 | Jon | Update | VAC | Vac system UPS installation

The vac controls are going down now to pull and test software changes. Will advise when the work is completed.

  15557 | Fri Sep 4 21:12:51 2020 | Jon | Update | VAC | Vac system UPS installation

The vac work is completed. All of the vacuum equipment is now running on the new 120V UPS, except for TP1. The 230V TP1 is still running off wall power, as it always has. After talking with Tripp Lite support today, I believe there is a problem with the 230V UPS. I will post a more detailed note in the morning.

Quote:

The vac controls are going down now to pull and test software changes. Will advise when the work is completed.

  15558 | Sat Sep 5 12:01:10 2020 | Jon | Update | VAC | Vac system UPS installation

Summary

Yesterday's UPS switchover was mostly a success. The new Tripp Lite 120V UPS is fully installed and is communicating with the slow controls system. The interlocks are configured to trigger a controlled shutdown upon an extended power outage (> ~30 s), and they have been tested. All of the 120V pumpspool equipment (the full c1vac/LAN/Acromag system, pressure gauges, valves, and the two small turbo pumps) has been moved to the new UPS. The only piece of equipment which is not 120V is TP1, which is intended to be powered by a separate 230V UPS. However, that unit is still not working, and after more investigation and a call to Tripp Lite, I suspect it may be defective. A detailed account of the changes to the system follows below.

Unfortunately, I think I damaged the Hornet (the only working cathode ionization gauge in the main volume) by inadvertently unplugging it while switching over equipment to the new UPS. The electronics are run from multiple daisy-chained power strips in the bottom of the rack and it is difficult to trace where everything goes. After the switchover, the Hornet repeatedly failed to activate (either remotely or manually) with the error "HV fail." Its compatriot, the Pirani SuperBee, also failed about a year ago under similar circumstances (or at least its remote interface did, making it useless for digital monitoring and control). I think we should replace them both, ideally with ones with some built-in protection against power failures.

New EPICS channels

Four new soft channels per UPS have been created, although the interlocks are currently predicated on only C1:Vac-UPS120V_status.

Channel | Type | Description | Units
C1:Vac-UPS120V_status | stringin | Operational status | -
C1:Vac-UPS120V_battery | ai | Battery remaining | %
C1:Vac-UPS120V_line_volt | ai | Input line voltage | V
C1:Vac-UPS120V_line_freq | ai | Input line frequency | Hz
C1:Vac-UPS240V_status | stringin | Operational status | -
C1:Vac-UPS240V_battery | ai | Battery remaining | %
C1:Vac-UPS240V_line_volt | ai | Input line voltage | V
C1:Vac-UPS240V_line_freq | ai | Input line frequency | Hz

These new readbacks are visible in the MEDM vacuum control/monitor screens, as circled in Attachment 1.

Continuing issues with 230V UPS

Yesterday I brought with me a custom power cable for the 230V UPS. It adapts from a 208/120V three-phase outlet (L21-20R) to a standard outlet receptacle (5-15P) which can mate with the UPS's C14 power cable. I installed the cable and confirmed that, at the UPS end, 208V AC was present split-phase (i.e., two hot wires separated 120 deg in phase, each at 120V relative to ground). This failed to power on the unit. Then Jordan showed up and suggested trying to power it instead from a single-phase 240V outlet (L6-20R). However, we found that the voltage present at this outlet was exactly the same as what the adapter cable provides: 208V split-phase.

This UPS nominally requires 230V single-phase. I don't understand well enough how the line-noise-isolation electronics work internally, so I can think of three possible explanations:

  1. 208V AC is insufficient to power the unit.
  2. The unit requires a true neutral wire (i.e., not a split-phase configuration), in which case it is not compatible with the U.S. power grid.
  3. The unit is defective.

I called Tripp Lite technical support. They thought the unit should work as powered in the configuration I described, so this leads me to suspect #3.

@Chub and Jordan: Can you please look into somehow replacing this unit, potentially with a U.S.-specific model? Let's stick with the Tripp Lite brand though, as I have already developed the code to interface with those.

UPS-host computer communications

Unlike our older equipment, which communicates serially with the host via RS232/485, the new UPS units can be connected with a USB 3.0 cable. I found a great open-source package for communicating directly with the UPS from within Python, Network UPS Tools (NUT), which eliminates the dependency on Tripp Lite's proprietary GUI. The package is well documented, supports hundreds of power-management devices, and is available in the Debian package manager from Jessie (Debian 8) up. It consists of a large set of low-level, device-specific drivers which communicate with a "server" running as a systemd service. The NUT server can then be queried using a uniform set of programming commands across a huge number of devices.

I document the full set-up procedure below, as we may want to use this with more USB devices in the future.

How to set up

First, install the NUT package and its Python binding:

$ sudo apt install nut python-nut

This automatically creates (and starts) a set of systemd processes which fail, as expected, since we have not yet set up the config. files defining our USB devices. Stop these services, delete their default definitions, and replace them with the modified definitions from the vacuum git repo:

$ sudo systemctl stop nut-*.service
$ sudo rm /lib/systemd/system/nut-*.service
$ sudo cp /opt/target/services/nut-*.service /etc/systemd/system
$ sudo systemctl daemon-reload

Next copy the NUT config. files from the vacuum git repo to the appropriate system location (this will overwrite the existing default ones). Note that the file ups.conf defines the UPS device(s) connected to the system, so for setups other than c1vac it will need to be edited accordingly.

$ sudo cp /opt/target/services/nut/* /etc/nut

Now we are ready to start the NUT server, and then enable it to automatically start after reboots:

$ sudo systemctl start nut-server.service
$ sudo systemctl enable nut-server.service

If it succeeds, the start command will return without printing any output to the terminal. We can test the server by querying all the available UPS parameters with

$ upsc 120v

which will print to the terminal screen something like

battery.charge: 100
battery.runtime: 1215
battery.type: PbAC
battery.voltage: 13.5
battery.voltage.nominal: 12.0
device.mfr: Tripp Lite 
device.model: Tripp Lite UPS 
device.type: ups
driver.name: usbhid-ups
driver.parameter.pollfreq: 30
driver.parameter.pollinterval: 2
driver.parameter.port: auto
driver.parameter.productid: 2010
driver.parameter.vendorid: 09ae
driver.version: 2.7.2
driver.version.data: TrippLite HID 0.81
driver.version.internal: 0.38
input.frequency: 60.1
input.voltage: 120.3
input.voltage.nominal: 120
output.frequency.nominal: 60
output.voltage.nominal: 120
ups.beeper.status: enabled
ups.delay.shutdown: 20
ups.mfr: Tripp Lite 
ups.model: Tripp Lite UPS 
ups.power.nominal: 1000
ups.productid: 2010
ups.status: OL
ups.timer.reboot: 65535
ups.timer.shutdown: 65535
ups.vendorid: 09ae
ups.watchdog.status: 0

Here 120v is the name assigned to the 120V UPS device in the ups.conf file, so it will vary for setups on other systems.

If all succeeds to this point, what we have set up so far is a set of command-line tools for querying (and possibly controlling) the UPS units. To access this functionality from within Python scripts, a set of official Python bindings is provided by the python-nut package. However, at the time of writing, these bindings only exist for Python 2.7. For Python 3 applications (like the vacuum system), I have created a Python 3 translation which is included in the vacuum git repo. Refer to the UPS readout script for an illustration of its usage.
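
As a quick illustration of the binding's usage (this sketch follows the PyNUT client API; the class and method names in our Python 3 translation may differ slightly):

import PyNUT

# Connect to the NUT server running locally on c1vac (default port 3493)
client = PyNUT.PyNUTClient()

# '120v' is the UPS name assigned in ups.conf, as described above
ups_vars = client.GetUPSVars('120v')

# Depending on the binding version, keys/values may come back as bytes
status = ups_vars['ups.status']              # 'OL' = on line power, 'OB' = on battery
battery = float(ups_vars['battery.charge'])  # percent remaining
if status != 'OL' or battery < 50:
    print('UPS trouble: status=%s, battery=%s%%' % (status, battery))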

  14456 | Fri Feb 15 11:58:45 2019 | Jon | Update | VAC | Vac system is back up

The problem encountered with the vac controls was indeed resolved via the recommendation I posted yesterday. The Acromags had gone into a protective state (likely caused by an electrical transient in one of the signals) that could only be cleared by power cycling the units. After resetting the system, the main volume pressure dropped quickly and is now < 2e-5 torr, so normal operations can resume. For future reference, below is the procedure to safely reset these units from a trouble state.

Vacromag Reset Procedure

  • TP2 and TP3 can be left running, but isolate them by closing valves V4 and V5.
  • TP1 can also be left running, but manually flip the operation mode on the front of the controller from REMOTE to LOCAL. This prevents the pump from receiving a "stop" command when its control Acromag shuts down.
  • Close all the pneumatic valves in the system (they'll otherwise close automatically when their control Acromags shut down).
  • On c1vac, stop the modbusIOC service. Sometimes this takes ~1 min to actually terminate.
  • Turn off the Acromags by flipping the "24 V" switch on the back of the chassis.
  • Wait ~10 sec, then turn them back on.
  • Start the modbusIOC service. It may take up to ~1 min for all the readings on the MEDM screen to initialize.
  • Ensure that the rotation speeds of TP1, TP2, and TP3 are all still nominal.
  • If pumps are OK, open V4, V5, and V7, then open V1. This restores the system to the "Maximum pumping speed" state.
  • Flip the TP1 controller operation state back to REMOTE.
  14458 | Fri Feb 15 18:41:18 2019 | rana | Update | VAC | Vac system is back up

If the acromags lock up whenever there is an electrical spike, shouldn't we have them on UPS to smooth out these ripples? And wasn't the idea to have some handshake/watchdog system to avoid silently dying computers?

Quote:

The problem encountered with the vac controls was indeed resolved via the recommendation I posted yesterday. The Acromags had gone into a protective state (likely caused by an electrical transient in one of the signals)

  14460 | Fri Feb 15 19:50:09 2019 | rana | Update | VAC | Vac system is back up

The acromags are on the UPS. I suspect the transient came in on one of the signal lines. Chub tells me he unplugged one of the signal cables from the chassis around the time things died on Monday, although we couldn't reproduce the problem doing that again today.

In this situation it wasn't the software that died, but the Acromag units themselves. I have an idea to detect future occurrences using a "blinker" signal: one Acromag outputs a periodic signal which is directly sensed by another Acromag. This can be implemented as another polling condition enforced by the interlock code.
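
A minimal sketch of that polling condition, assuming pyepics and hypothetical channel names for the blinker pair:

import time
from epics import caget, caput

def blinker_alive(period=1.0):
    """Toggle one Acromag output and confirm the sensing Acromag follows."""
    for expected in (1, 0):
        caput('C1:Vac-blinker_out', expected)       # hypothetical output channel
        time.sleep(period)
        if caget('C1:Vac-blinker_in') != expected:  # hypothetical readback channel
            return False    # readback frozen => Acromags presumed unresponsive
    return True

# Inside the interlock polling loop, something like:
#     if not blinker_alive():
#         close_all_valves()    # placeholder for the interlock action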

Quote:

If the acromags lock up whenever there is an electrical spike, shouldn't we have them on UPS to smooth out these ripples? And wasn't the idea to have some handshake/watchdog system to avoid silently dying computers?

Quote:

The problem encountered with the vac controls was indeed resolved via the recommendation I posted yesterday. The Acromags had gone into a protective state (likely caused by an electrical transient in one of the signals)

 

  14387 | Mon Jan 7 11:54:12 2019 | Jon | Configuration | Computer Scripts / Programs | Vac system shutdown

I'm making a controlled shutdown of the vac controls to add new ADC channels. Will advise when it's back up.

  14388 | Mon Jan 7 19:21:45 2019 | Jon | Configuration | Computer Scripts / Programs | Vac system shutdown

ADC work finished for the day. The vac controls are back up, with all valves CLOSED and all pumps OFF.

Quote:

I'm making a controlled shutdown of the vac controls to add new ADC channels. Will advise when it's back up.

 

  3343 | Sat Jul 31 22:35:01 2010 | Koji | Update | VAC | Vac-P1 still 1.2mtorr

I resumed the pumping from 19:00.

Now the valve RV1 is full open. But the pumping is really slow as we are using only one RP.

After 3 hrs of pumping, P1 reached 1.2 mtorr, but we still need ~2 hrs of pumping...

I stopped pumping at 22:30.

  14452 | Thu Feb 14 15:37:35 2019 | gautam | Update | VAC | Vacromag failure

[chub, gautam]

Summary:

One of the XT1111 units (XT1111a) in the new vacuum system has malfunctioned. All valves are closed, and the PSL shutter is also closed, until this is resolved.

Details:

  1. Chub alerted me that he had changed the main N2 line pressure, but this did not show up in the trend data. In fact, the trend data suggested that all 3 N2 gauges had stopped logging data (they just held the previous value) since sometime on Monday; see Attachment #1.
  2. We verified that the gauges were being powered, and that the analog voltage output of the gauges made sense in the drill press room ---> so this suggested something was wrong at the vacuum electronics rack.
  3. Went to the vacuum rack, saw no obvious indicator lights signalling a fault.
  4. So I restarted the modbus process on c1vac using sudo systemctl restart modbusIOC.service. The way Jon has this set up, this service controls all the sub-processes talking to gauges and TPs, so restarting this master process should have brought everything back.
  5. This tripped the interlock, and all valves got closed.
  6. Once the modbus service restarted, most things came back normally. However, the V1, V3, V4 and V5 readbacks were listed as "UNDEF".
  7. The way the interlock code works, it checks a valve state-change request against the monitor channel, so none of these valves could be opened.
  8. We confirmed that the valves themselves were operational by bypassing the interlock logic and directly actuating the valves - but this is not a safe way of running overnight, so we decided to shut everything down.
  9. We also confirmed that the problem is with one particular Acromag unit - switching the readback D-sub connector to another channel (e.g. V1 --> VM2) showed the expected readback.
  10. As a further check, I connected a Windows laptop with the Acromag software installed to the suspected XT1111 - it reported an error message saying "USB device may be damaged". Plugging into another XT1111 in the crate, I was able to access the unit in the normal way.
  11. The phoenix connector architecture of the Acromags makes it possible to replace this single unit (we have spare XT1111 units) without disturbing the whole system - so barring objections, we plan to do this at 9am tomorrow. The replacement plan is summarized in Attachment #2.

Pressure of the main volume seems to have stabilized - see Attachment #3, so it should be fine to leave the IFO in this state overnight.

Questions:

  1. What caused the original failure of the writing to the ADC channels hooked up to the N2 gauges? There isn't any logging set up for the modbus processes, afaik.
  2. What caused the failure of the XT1111? What even is the failure mode? Some other channels on the same XT1111 are still working...
  3. Was it user error? The only operation carried out by me was restarting the modbus services - how did this damage the readback channels for just four valves? I think Chub also re-arranged some wires at the end, but unplugging/re-connecting some cables shouldn't produce this kind of response...

The whole point of the upgrade was to move to a more reliable system - but it seems quite flaky already.

  14453 | Thu Feb 14 18:16:24 2019 | Jon | Update | VAC | Vacromag failure

I sent Gautam instructions to first try stopping the modbus service, power cycling the Acromag chassis, then restarting the service. I've seen the Acromags go into an unresponsive state after a strong electrical transient or shorted signal wires, and the unit has to be power cycled to be reset.

If this doesn't resolve it, I'll come in tomorrow to help with the Acromag replacement. We have plenty of spares.

Quote:

[chub, gautam]

Summary:

One of the XT1111 units (XT1111a) in the new vacuum system has malfunctioned. All valves are closed, and the PSL shutter is also closed, until this is resolved.

 

  14309 | Mon Nov 19 23:38:41 2018 | Jon | Omnistructure | | Vacuum Acromag Channel Assignments

I've completed bench testing of all seven vacuum Acromags installed in a custom rackmount chassis. The system contains five XT1111 modules (sinking digital I/O) used for readbacks of the state of the valves, TP1, CP1, and the RPs. It also contains two XT1121 modules (sourcing digital I/O) used to pass 24V DC control signals to the AC relays actuating the valves and RPs. The list of Acromag channel assignments is attached.

I tested each input channel using a manual flip-switch wired between signal pin and return, verifying that the EPICS channel readout changes appropriately when the switch is flipped open vs. closed. I tested each output channel using a voltmeter placed between signal pin and return, toggling the EPICS channel on/off state and verifying that the output voltage changes appropriately. These tests confirm that the Acromag units all work, and that all the EPICS channels are correctly addressed.
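
The EPICS side of such a bench test can also be scripted; here is a sketch assuming pyepics and illustrative channel names (the real assignments are in the attached list):

from epics import caget, caput

# Input channel (XT1111, sinking): flip the manual test switch between
# signal pin and return, then confirm the readback follows.
readback = 'C1:Vac2-V1_status'    # illustrative channel name
print('readback =', caget(readback))

# Output channel (XT1121, sourcing): toggle the EPICS channel, then check
# the voltage between signal pin and return with the voltmeter.
control = 'C1:Vac2-V1_request'    # illustrative channel name
caput(control, 1)   # expect ~24 V at the terminals
caput(control, 0)   # expect ~0 V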

  14296 | Wed Nov 14 21:34:44 2018 | Jon | Omnistructure | | Vacuum Acromags installed and tested

All 7 Acromag units are now installed in the vacuum chassis. They are connected to 24V DC power and Ethernet.

I have merged and migrated the two EPICS databases from c1vac1 and c1vac2 onto the new machine, with appropriate modifications to address the Acromags rather than the VME crate.

I have tested all the digital output channels with a voltmeter, and some of the inputs. Still more channels to be tested.

I’ll follow up with a wiring diagram for channel assignments.

  14375 | Thu Dec 20 21:29:41 2018 | Jon | Omnistructure | Upgrade | Vacuum Controls Switchover Completed

[Jon, Chub, Koji, Gautam]

Summary

Today we carried out the first pumpdown with the new vacuum controls system in place. It performed well. The only problem encountered was with software interlocks spuriously closing valves as the Pirani gauges crossed 1E-4 torr. At that point their readback changes from a number to "LO E-04", which the system interpreted as a gauge failure instead of "<1E-4". This posed no danger and was fixed on the spot. The main volume was pumped to ~10 torr using roughing pumps 1 and 3. We were limited only by time, as we didn't get started pumping the main volume until after 1pm. The three turbo pumps were also run and tested in parallel, but were isolated to the pumpspool volume. At the end of the day, we closed every pneumatic valve and shut down all five pumps. The main volume is sealed off at ~10 torr, and the pumpspool volume is at ~1e-6 torr. We are leaving the system parked in this state for the holidays.
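
For illustration, the fix amounts to treating the under-range reply as a pressure bound rather than a fault; a sketch, assuming the reply format shown above (the actual function in the interlock code may differ):

def parse_pirani(reply):
    """Convert a Pirani gauge reply string to a pressure in torr."""
    reply = reply.strip().rstrip(',')
    if reply.upper().startswith('LO'):
        return 1.0e-4   # under-range: report the bound "<1E-4", not a gauge failure
    return float(reply)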

Main Volume Pumpdown Procedure

In pumping down the main volume, we carried out the following procedure.

  1. Initially: All valves closed (including manual valves RV1 and VV1); all pumps OFF.
  2. Manually connected roughing pump line to pumpspool via KF joint.
  3. Turned ON RP1 and RP2.
  4. Waited until roughing pump line pressure (PRP) < 0.5 torr.
  5. Opened V3.
  6. Waited until roughing pump line pressure (PRP) < 0.5 torr.
  7. Manually opened RV1 throttling valve to main volume until pumpdown rate reached ~3 torr/min (~3 hours on roughing pumps).
  8. Waited until main volume pressure (P1a/P1b) < 0.5 torr.

We didn't quite reach the end of step 8 by the time we had to stop. The next step would be to valve out the roughing pumps and to valve in the turbo pumps.

Hardware & Channel Assignments

All of the new hardware is now permanently installed in the vacuum rack. This includes the SuperMicro rack server (c1vac), the IOLAN serial device server, a vacuum subnet switch, and the Acromag chassis. Every valve/pump signal cable that formerly connected to the VME bus through terminal blocks has been refitted with a D-sub connector and screwed directly onto feedthroughs on the Acromag chassis.

The attached pdf contains the master list of assigned Acromag channels and their wiring.

  14493 | Thu Mar 21 18:36:59 2019 | Jon | Omnistructure | Upgrade | Vacuum Controls Switchover Completed

Updated vac channel list is attached. There are several new ADC channels.

Quote:

Hardware & Channel Assignments

All of the new hardware is now permanently installed in the vacuum rack. This includes the SuperMicro rack server (c1vac), the IOLAN serial device server, a vacuum subnet switch, and the Acromag chassis. Every valve/pump signal cable that formerly connected to the VME bus through terminal blocks has been refitted with a D-sub connector and screwed directly onto feedthroughs on the Acromag chassis.

The attached pdf contains the master list of assigned Acromag channels and their wiring.

  14315 | Sun Nov 25 17:41:43 2018 | Jon | Omnistructure | | Vacuum Controls Upgrade - Status and Plans

New hardware has been installed in the vacuum controls rack. It is shown in the below post-install photo.

  • Supermicro server (c1vac) which will be replacing c1vac1 and c1vac2.
  • 16-port Ethernet switch providing a closed local network for all vacuum devices.
  • 16-port IOLAN terminal server for multiplexing/Ethernetizing all RS-232 serial devices.

Below is a high-level summary of where things stand, and what remains to be done.

Completed:

 Set up of replacement controls server (c1vac).

  • Supermicro 1U rackmount server, running Debian 8.5.
  • Hosting an EPICS modbus IOC, scripted to start/restart automatically as a system service.
  • First Ethernet interface put on the martian network at 192.168.113.72.
  • Second Ethernet interface configured to host a LAN at 192.168.114.xxx for communications with all vacuum electronics. It connects to a 16-port Ethernet switch installed in the vacuum electronics rack.
  • Server installed in vacuum electronics rack (see photo).

 Set up of Acromag terminals.

  • 6U rackmount chassis frame assembled; 15V DC, 24V DC, and Ethernet wired.
  • Acromags installed in chassis and configured for the LAN (5 XT1111 units, 2 XT1121 units).

 EPICS database migration.

  • All vacuum channels moved to the modbus IOC, with the database updated to address the new Acromags. [The new channels are running concurrently at "C1:Vac2-...." to avoid conflict with the existing system.]
  • Each hard channel was individually tested on the electronics bench to confirm correct addressing and Acromag operation.

 Set up of 16-port IOLAN terminal server (for multiplexing/Ethernetizing the serial devices).

  • Configured for operation on the LAN. Each serial device port is assigned a unique IP address, making the terminal server transparent to client TCP applications.
  • Most of the pressure gauges are now communicating with the controls server via TCP.
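
For example, reading a gauge then reduces to opening a plain TCP socket; a sketch (the IP address, port, and query string are illustrative and depend on the specific gauge controller):

import socket

def query_gauge(ip, command, port=4001, timeout=2.0):
    """Send one command to a serial device behind the IOLAN and return its reply."""
    with socket.create_connection((ip, port), timeout=timeout) as s:
        s.sendall(command)
        return s.recv(256).decode().strip()

# e.g. pressure = query_gauge('192.168.114.10', b'@253PR4?;FF')  # hypothetical query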

Ongoing this week:

  • [Jon] Continue migrating serial devices to ports on the terminal server. Still left are the turbo pumps, N2 gauge, and RGA.
  • [Jon] Continue developing Python code for communicating with gauges and pumps via TCP sockets. A beta version of gauge readout code is running now.
  • [Chub] Install feedthrough panels on the Acromag chassis. Connect the wiring from feedthrough panels to the assigned Acromag slots.
  • [Chub/Jon] Test all the hard EPICS channels on the electronics bench, prior to installing the crate in the vacuum rack.
  • [Chub/Jon] Install the crate in the vacuum rack; connect valve/pump readbacks and actuators; test each hard EPICS channel in situ.
  • [Jon] Once all the signal connections have been made, in situ testing of the Python interlock code can begin.
  13179 | Wed Aug 9 16:34:46 2017 | rana | Update | VAC | Vacuum Document recovered

Steve and I found the previous draft of the 40m Vacuum Document. Someone in 2015 had browsed into the Docs history and then saved the old 2013 version as the current one.

We restored the version from 2014 which has all of Steve's edits. I have put that version (which is now the working copy) into the DCC:  https://dcc.ligo.org/E1500239.

The latest version is in our Google Docs place as usual. Steve is going to have a draft ready for us to read by Tuesday, so please take a look then and we can discuss what needs doing at next Wednesday's 40m meeting.

  16508 | Wed Dec 15 15:06:08 2021 | Jordan | Update | VAC | Vacuum Feedthru Install

Jordan, Chub

We installed the 4x DB25 feedthru flange on the north-west port of the ITMX chamber this afternoon. It is ready to go.

  9017 | Fri Aug 16 09:35:18 2013 | Steve | Update | VAC | Vacuum Normal state recognition is back

Quote:

Quote:

Quote:

Quote:

Apparently all of the ION pump valves (VIPEE, VIPEV, VIPSV, VIPSE) opened, which vented the main volume up to 62 mTorr.  All of the annulus valves (VAVSE, VAVSV, VAVBS, VAVEV, VAVEE) also appeared to be open.  One of the roughing pumps was also turned on.  Other stuff we didn't notice?  Bad. 

 Several of the suspensions were kicked pretty hard (600+ mV on some sensors) as a result of this quick vent wind.  All of the suspensions are damped now, so it doesn't look like we suffered any damage to suspensions.

CLOSE CALL on the vacuum system:

Jamie and I disabled the V1, VM2 and VM3 gate valves by disconnecting their 120V solenoid actuators before the swap of the VME crate.

The vacuum controller unexpectedly lost control during the swap, as Jamie described. We were lucky not to do any damage! The ion pumps were cold and clean. We have not used them for years, so their outgassing possibly accumulated to reach ~10-50 Torr.

I disconnected, immobilized, and labelled the following 6 valves: the 4 large ion pump gate valves, and VC1 and VC2 of the cryo pump. Note: the valves on the cryo pump stayed closed. It is crucial that a warm cryo pump is kept closed!

This will not allow the same thing to happen again, and will protect the IFO from warm cryo contamination.

The downside of this is that the computer can no longer identify vacuum states.

This vacuum system badly needs an upgrade. I will make a list.

 While I was doing the oil change of the roughing pumps I accidentally touched the 24 V adjustment knob on the power supply.

All valves closed to the default condition. I realized that the current indicator was red at 0.2 A and the voltage fluctuated from 3-13 V.

Increased the current limiter to 0.4 A and set the voltage to 24 V. I think this was the reason for the chaos of valve switching during the VME swap.

 

 Based on the facts above, I reconnected the VC1 and VC2 valves. State recognition is working. The ion pumps are turned off and their gate valves are disabled.

We learned that even with closed-off gate valves at atmosphere, ion pumps outgas hydrocarbons at the 1e-6 Torr level. We have not used them for this reason in the past 9 years.

 

I need help with implementing a V1 interlock triggered by the Maglev failure signal and/or P2 pressure.

The MEDM screen agrees with the signs at the vacuum rack.

  16980 | Fri Jul 8 14:03:33 2022 | JC | HowTo | VAC | Vacuum Preparation for Power Shutdown

[Koji, JC]

Koji and I have prepared the vacuum system for the power outage on Saturday.

  1. Close V1 to isolate the main volume.
  2. Close off VASE, VASV, VABSSCI, VABS, VABSSCO, VAEV, and VAEE.
  3. Close V6, then close VM3 to isolate the RGA.
  4. Turn off TP1 (you must check the RPMs on the TP1 turbo controller module).
  5. Close V5.
  6. Turn off TP3 (there is no way to check the RPMs, so be patient).
  7. Close V4 (the system state changes to 'All pneumatic valves are closed').
  8. Turn off TP2 (there is no way to check the RPMs, so be patient).
  9. Close the vacuum valves (on TP2 and TP3) which connect to the AUX pump.
  10. Turn off the AUX pump with the breaker switch at the wall plug.

From here, we shut down the electronics.

  1. Run /sbin/shutdown -h now on c1vac to shut the host down.
  2. Manually turn off power to electronic modules on the rack.
    • GP316a
    • GP316b
    • Vacuum Acromags
    • PTP3
    • PTP2
    • TP1
    • TP2 (Unplugged)
    • TP3 (Unplugged)

 

  14308 | Mon Nov 19 22:45:23 2018 | Jon | Omnistructure | | Vacuum System Subnetwork

I've set up a closed subnetwork for interfacing the vacuum hardware (Acromags and serial devices) with the new controls machine (c1vac; 192.168.113.72). The controls machine has two Ethernet interfaces, one which faces outward into the martian network and another which faces the internal subnetwork, 192.168.114.xxx. The second network interface was configured via the following procedure.

1. Add the following lines to /etc/network/interfaces:

allow-hotplug eth1
iface eth1 inet static
address 192.168.114.9
netmask 255.255.255.0

2. Restart the networking services:

$ sudo /etc/init.d/networking restart

3. Enable DNS lookup on the martian network by adding the following lines to /etc/resolv.conf:

search martian
nameserver 192.168.113.104

4. Enable IP forwarding from eth1 to eth0:

$ echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward

5. Configure IP tables to allow outgoing connections, while keeping the LAN invisible from outside the gateway (c1vac):

$ sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
$ sudo iptables -A FORWARD -i eth0 -o eth1 -m state --state RELATED,ESTABLISHED -j ACCEPT
$ sudo iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT

6. Finally, because the EPICS 3.14 server binds to all network interfaces, client applications running on c1vac now see two instances of the EPICS server---one at the outward-facing address and one at the LAN address. To resolve this ambiguity, two additional environment variables must be set that specify to local clients which server address to use. Add the following lines to /home/controls/.bashrc:

export EPICS_CA_AUTO_ADDR_LIST=NO
export EPICS_CA_ADDR_LIST=192.168.113.72

A list of IP addresses so far assigned on the subnetwork follows.

Device | IP Address
Acromag XT1111a | 192.168.114.1
Acromag XT1111b | 192.168.114.2
Acromag XT1111c | 192.168.114.3
Acromag XT1111d | 192.168.114.4
Acromag XT1111e | 192.168.114.5
Acromag XT1121a | 192.168.114.6
Acromag XT1121b | 192.168.114.7
Perle IOLAN SDS16 | 192.168.114.8
c1vac | 192.168.114.9
  5180 | Wed Aug 10 22:47:22 2011 | rana | Summary | VAC | Vacuum Workstation (linux3) re-activated

For some reason the workstation at the vac rack was off and unplugged. Nicole and I plugged its power back in to the EX rack.

I turned it on and it booted up fine; it's not dead. To get it onto the network I just made the conversion from 131.215 to 192.168 that Joe had done on all the other computers several months ago.

Now it is showing the Vacuum overview screen correctly again and so Steve no longer has to monopolize one of the Martian laptops over there.

  11352 | Wed Jun 10 15:54:14 2015 | Steve | Update | VAC | Vacuum comp. rebooted

Koji and Steve succeeded in rebooting c1vac1 and c1vac2, and the pressure readings are working now.

More tomorrow .........

 

  11353 | Thu Jun 11 19:40:59 2015 | Koji | Update | VAC | Vacuum comp. rebooted

The serial connections to the vacuum gauges were recovered by rebooting c1vac1 and c1vac2.

Steve claimed that the vacuum screen had shown "NO COMM" for the vacuum pressure values.
The EPICS connection to c1vac was fine. We could log in to c1vac1 with telnet too, although c1vac2 had no response.

After some inspection, we decided to reboot the slow machines. Steve manually XXXed YYY valves (to be described)
to prepare for any possible unwanted switching. Initially Koji thought only c1vac2 could be rebooted, but that was wrong.
If the reset button is pushed, all of the modules on the same crate are reset. So everything was reset. After ~3 min we still
did not have the connection to c1vac1 restored. We decided to do another reboot. This time I pushed the c1vac1 reset button.
After waiting about two minutes, the ADCs started to show green lights and the switch box started scanning.
We recovered the telnet connection to c1vac1 and the EPICS functions. c1vac2 is still not responding to telnet, and
the values associated with c1vac2 are still blank.

Steve restored the valves and everything was back to normal.

  11354 | Fri Jun 12 08:40:17 2015 | Steve | Update | VAC | Vacuum comp. rebooted

Koji and Steve,

One computer expert and one vacuum expert required.

Quote:

The serial connections to the vacuum gauges were recovered by rebooting c1vac1 and c1vac2.

Steve claimed that the vacuum screen had shown "NO COMM" for the vacuum pressure values.
The EPICS connection to c1vac was fine. We could log in to c1vac1 with telnet too, although c1vac2 had no response.

After some inspection, we decided to reboot the slow machines. Steve manually XXXed YYY valves (to be described)
to prepare for any possible unwanted switching. Initially Koji thought only c1vac2 could be rebooted, but that was wrong.
If the reset button is pushed, all of the modules on the same crate are reset. So everything was reset. After ~3 min we still
did not have the connection to c1vac1 restored. We decided to do another reboot. This time I pushed the c1vac1 reset button.
After waiting about two minutes, the ADCs started to show green lights and the switch box started scanning.
We recovered the telnet connection to c1vac1 and the EPICS functions. c1vac2 is still not responding to telnet, and
the values associated with c1vac2 are still blank.

Steve restored the valves and everything was back to normal.

Atm 1, problem condition: gauges have not been reading for a week, error message "NO COMM", and all computer LEDs are green

Atm 2, prepare for a safe reboot:

            a, close V1, disconnect its power cable and turn off the Maglev, wait till rotation stops

            b, close PSL shutter ( take adrenaline if needed )

            c, close the V4, V5, VA6 valves and disconnect their cables. The "Moving" error message indicates this condition.

               V1 is not showing "Moving" because only its power cable is disconnected! It will show it if its position indicator cable is disconnected too. There is no need for that.

               With these valves closed and disabled, accidental venting of the main volume is not possible.

            d, push reset; resetting c1vac2 will reset c1vac1 also, wait ~6 minutes

"Vacuum Normal" valve configuration was restored after succesful reboot as follows:

             a, reconnect cable and open V4 and V5 at P2 & P3 <1e-1 Torr

             b, observe that P2 < 1e-3 Torr and restart the Maglev

             c, wait till Maglev reaches full speed of 560 Hz and reconnect-open V1

             d, reconnect-open VA6 at P3 <1e-3 Torr

NOTE: the VM1 valve was locked in the open position and it was not responding before or after the reboot

          The error message on Atm 2 indicates this locked condition: "opening VM1 will vent IFO"

          This is a false message. The valve is frozen in the open position. We need a software expert's help.

 

 

  1673 | Mon Jun 15 15:17:33 2009 | josephb, Steve | Configuration | VAC | Vacuum control and monitor screens

We updated the vacuum control and monitor screens  (C0VAC_MONITOR.adl and C0VAC_CONTROL.adl).  We also updated the /cvs/cds/caltech/target/c1vac1/Vac.db file.

1) We changed the C1:Vac-TP1_lev channel to the C1:Vac-TP1_ala channel, since it now is an alarm readback on the new turbo pump rather than an indication of levitation. The logic for printing the "X" was changed from X printed on 1 (= OK status) to X printed on 0 (= problem status). All references within the Vac.db file to C1:Vac-TP1_lev were changed. The MEDM screens are also now labeled Alarm, instead of Levitating.

2) We changed the text displayed by the CP1 channel (C1:Vac-CP1_mon in Vac.db) from "On" and "Off" to "Cold - On" and "Warm - OFF".

3) We restarted the c1vac1 front end as well as the framebuilder after these changes.

  15499 | Thu Jul 23 15:58:24 2020 | Jon | Summary | VAC | Vacuum controls refurbishment plan

This year we've struggled with vacuum controls unreliability (e.g., spurious interlock triggers) caused by decaying hardware. Here are details of the vacuum refurbishment plan I described on the 40m call this week.

 Refurbish TP2 and TP3 dry pumps. Completed [ELOG 15417].

 Automated notifications of interlock-trigger events. Email to 40m list and a new interlock flag channel. Completed [ELOG 15424].

Replace failing UPS.

  • Two new Tripp Lite units on order, 110V and 230V [ELOG 15465].
  • Jordan will install them in the vacuum rack once received.
  • Once installed, Jon will come test the new units, set up communications, and integrate them into the interlock system following this plan [ELOG 15446].
  • Jon will move the pumps and other equipment to the new UPS units only after completing the above step.

Remove interlock dependencies on TP2/TP3 serial readbacks, due to persistent glitching [ELOG 15140, ELOG 15392].

Unlike TP2 and TP3, the TP1 readbacks are real analog signals routed to Acromags. As these have caused us no issues at all, the plan is to eliminate dependence on the TP2/3 digital readbacks in favor of the analog controller outputs. All the digital readback channels will continue to exist, but the interlock system will no longer depend on them. This will require adding 2 new sinking BI channels each for TP2 and TP3 (for a total of 4 new channels). We have 8 open Acromag XT1111 channels in the c1vac system [ELOG 14493], so the new channels can be accommodated. The below table summarizes the proposed changes.

Channel | Type | Status | Description | Interlock
C1:Vac-TP1_current | AI | exists | Current draw (A) | keep
C1:Vac-TP1_fail | BI | exists | Critical fault has occurred | keep
C1:Vac-TP1_norm | BI | exists | Rotation speed is within +/-10% of set point | new
C1:Vac-TP2_rot | soft | exists | Rotation speed (krpm) | remove
C1:Vac-TP2_temp | soft | exists | Temperature (C) | remove
C1:Vac-TP2_current | soft | exists | Current draw (A) | remove
C1:Vac-TP2_fail | BI | new | Critical fault has occurred | new
C1:Vac-TP2_norm | BI | new | Rotation speed is >80% of set point | new
C1:Vac-TP3_rot | soft | exists | Rotation speed (krpm) | remove
C1:Vac-TP3_temp | soft | exists | Temperature (C) | remove
C1:Vac-TP3_current | soft | exists | Current draw (A) | remove
C1:Vac-TP3_fail | BI | new | Critical fault has occurred | new
C1:Vac-TP3_norm | BI | new | Rotation speed is >80% of set point | new
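
As an illustration, the pump-health condition the interlocks would then poll reduces to something like this sketch (assuming pyepics; the actual predicate in the interlock code may differ):

from epics import caget

def turbo_pumps_ok():
    """Pump-health check using only the BI channels kept/added in the table above."""
    for pump in ('TP1', 'TP2', 'TP3'):
        if caget('C1:Vac-%s_fail' % pump) == 1:   # critical fault flagged
            return False
        if caget('C1:Vac-%s_norm' % pump) == 0:   # rotation speed out of range
            return False
    return True
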
  14419 | Fri Jan 25 16:14:51 2019 | gautam | Update | VAC | Vacuum interlock code, N2 warning

I reset the remote of this git repo to the 40m version instead of Jon's personal one, to ensure consistency between what's on the vacuum machine and in the git repo. There is now a N2 checker python mailer that will email the 40m list if all the tank pressures are below 600 PSI (>12 hours left for someone to react before the main N2 line pressure drops and the interlocks kick in). For now, the script just runs as a cron job every 3 hours, but perhaps we should integrate it with the interlock process?
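
A minimal sketch of such a checker (the tank channel names here are hypothetical; the real script and its mailer hookup live in the vacpython repo):

from epics import caget

TANKS = ('C1:Vac-N2T1_pressure', 'C1:Vac-N2T2_pressure')  # hypothetical channel names
THRESHOLD = 600.0  # PSI

pressures = [caget(ch) for ch in TANKS]
if all(p is not None and p < THRESHOLD for p in pressures):
    # here the real script emails the 40m list (cf. the mailer in ELOG 15424)
    print('WARNING: all N2 tanks below %.0f PSI: %s' % (THRESHOLD, pressures))

A crontab line like 0 */3 * * * /usr/bin/python3 /path/to/n2check.py (path hypothetical) gives the 3-hour cadence.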

Quote:

All the python code running on c1vac is archived to the git repo: 

https://git.ligo.org/40m/vacpython

  15283 | Wed Mar 25 15:15:55 2020 | gautam | Update | VAC | Vacuum interlock code, N2 warning update

The email address in the N2 checking script wasn't right - I have now updated it to email the 40m list if the sum of reserve tank pressures falls below 800 PSI. The checker itself is only run every 3 hours (via cron on c1vac).

Quote:

I reset the remote of this git repo to the 40m version instead of Jon's personal one, to ensure consistency between what's on the vacuum machine and in the git repo. There is now a N2 checker python mailer that will email the 40m list if all the tank pressures are below 600 PSI (>12 hours left for someone to react before the main N2 line pressure drops and the interlocks kick in). For now, the script just runs as a cron job every 3 hours, but perhaps we should integrate it with the interlock process

  15501 | Mon Jul 27 15:48:36 2020 | Jon | Summary | VAC | Vacuum parts ordered

To carry out the next steps of the vac refurbishment plan [ELOG 15499], I've ordered the parts necessary for interfacing the UPS units and the analog TP2/3 controller outputs with c1vac. The purchase list is appended to the main BHD list and is located here. Some parts we already had in the boxes of Acromag materials. Jordan is gathering what we already have and staging it on the vacuum controls console table - please don't move them or put them away.

Quote:

Replace failing UPS.

Remove interlock dependencies on TP2/TP3 serial readbacks. Due to persistent glitching [ELOG 15140, ELOG 15392].

  16312 | Thu Sep 2 21:21:14 2021 | Koji | Summary | Computers | Vacuum recovery 2

Attachment 1:
We are pumping the main volume with TP2. Once P1a reached ~2.2 mtorr, we could open the PSL shutter. The TP2 voltage went up once but came down to ~20 V. It's close to nominal now.
We wondered if we should use TP3 or not. I checked the vacuum pressure trends and found that the annulus pressures were going up. So we decided to open the annulus valves.

Attachment 2:
The current vacuum status is as shown in the MEDM screenshot.

There is no trend data of the valve status (sad)

  7272 | Fri Aug 24 16:03:39 2012 | Steve | Update | VAC | Vacuum related work at atm

Vacuum related work at atmosphere:

Atm1, check the tightness of all chamber dog clamps with a torque wrench.

Atm2, replace the old black molybdenum-disulfide bolts and nuts with new silicon-bronze nuts and clean SS bolts.

Atm3, replace the CC1 cold cathode gauges: horizontal and vertical.

  15526 | Fri Aug 14 10:10:56 2020 | Jon | Configuration | VAC | Vacuum repairs today

The vac system is going down now for planned repairs [ELOG 15499]. It will likely take most of the day. Will advise when it's back up.

  15527 | Sat Aug 15 02:02:13 2020 | Jon | Configuration | VAC | Vacuum repairs today

Vacuum work is completed. The TP2 and TP3 interlocks have been overhauled as proposed in ELOG 15499 and seem to be performing reliably. We're now back in the nominal system state, with TP2 again backing for TP1 and TP3 pumping the annuli. I'll post the full implementation details in the morning.

I did not get to setting up the new UPS units. That will have to be scheduled for another day.

Quote:

The vac system is going down now for planned repairs [ELOG 15499]. It will likely take most of the day. Will advise when it's back up.

  8752 | Wed Jun 26 01:30:31 2013 | rana | Summary | PEM | Variation in 10-30 Hz seismic RMS

For quite a while (no one knows how long), we've seen fluctuations in the 10-30 Hz seismic motion. This shows up as the purple trace on the seismic BLRMS on the wall projector.

The second plot shows that this is not only a periodic increase in the usual 29.5 Hz HVAC peak, but also an anomalous 32.2 Hz peak. Probably some malfunctioning machinery - maybe in the 40m or maybe on the roof.

  828 | Tue Aug 12 12:21:13 2008 | josephb | Configuration | Cameras | Variation in fit over 140 images for GC650 and GC750
Used MATLAB to calculate Gaussian fits on 145 GC650 images and 142 GC750 images. These were individual images (no averaging) looking at the PSL output from May 29th, 2008. The GC650 and GC750 were looking at a split, but had different exposure values, slightly different distances to the nominal waist of the beam, and were not centered on the beam identically. Mostly this is a test of the fluctuations in the fit from image to image.

Note: the mm values refer to the size or position on the CCD or CMOS detector itself.
GC650

Statistic | Amplitude | X center position (mm) | Y center position (mm) | X waist (mm) | Y waist (mm) | Background offset from zero
Mean | 0.3743 | 1.7378 | 2.6220 | 0.7901 | 0.8650 | 0.0047
Standard deviation | 0.0024 | 0.0006 | 0.0005 | 0.0005 | 0.0003 | 0.00001
Std/Mean x100 (percent) | 0.6% | 0.03% | 0.02% | 0.06% | 0.04% | 0.29%

GC750

Statistic | Amplitude | X center position (mm) | Y center position (mm) | X waist (mm) | Y waist (mm) | Background offset from zero
Mean | 0.2024 | 2.5967 | 1.4458 | 0.8245 | 0.9194 | 0.0418
Standard deviation | 0.0011 | 0.0005 | 0.0005 | 0.0003 | 0.0005 | 0.00003
Std/Mean x100 (percent) | 0.6% | 0.02% | 0.04% | 0.04% | 0.05% | 0.07%
  8882 | Fri Jul 19 22:35:06 2013 | Koji | Summary | LSC | Various Arm signal (Yarm)

The StripTool plot attached below shows various arm signals measured with the Y arm cavity swept using ALS.

Yellow: TRY

Blue: ALS additive OFFSET to the error signal

Red: Raw PDH error signal (POY11I)

Purple: Linearized PDH error (POY11/TRY)

Green: 1/Sqrt(TRY)-5 (No normalization)

Inverse Sqrt of the TRY had been implemented when this LSC controller was first coded.
It is confirmed that the calculation is working correctly.

  9685 | Mon Mar 3 17:35:10 2014 | Koji | Update | LSC | Various demod phase measurement

I wanted to check what the refl signals looked like.
I decided to measure the demod phase where PRCL and MICH appear, one by one.

The method I used is to actuate PRCL or MICH at a fixed frequency and rotate the demod phase such that
the signal at the actuating frequency disappears.

For the PRCL actuation, PRM was actuated by the lock-in oscillator with an amplitude of 100 cnt.
For MICH, ITMX and ITMY were actuated with amplitudes of 1000 cnt and 1015 cnt respectively.

The script I used was something like this

ezcaread C1:LSC-REFL11_PHASE_R    # record the starting demod phase
# null the peak: servo the demod phase using the lock-in output as the readback
ezcaservo -r C1:CAL-SENSMAT_CARM_REFL11_Q_I_OUTPUT C1:LSC-REFL11_PHASE_R -g 100 -t 60
ezcaread C1:LSC-REFL11_PHASE_R    # read back the optimized phase

"11" should be changed according to the PD you want to test.
"Q" should be changed to "I" depending on form which quadrature you want to eliminate the signal

The option "-g" specifies the servo gain. This specifies which slope (up or down) of the sinusoidal curve the signal is locked.
Therefore, it is important to flip the signal angle 180degree if a negative gain is used.


Note: Original phase settings before touching them

REFL11  - 19.2
REFL33   135.4
REFL55    48.0
RELF165 -118.5

 

Here in the measurement PRMI was locked with AS55Q (MICH) and REFL55I (PRCL)


For no particular reason I injected a peak at 503.1 Hz. This peak is not notched out by the servo, so there may have been
some residual effect of the feedback loops.

PRCL: By eliminating the peak from the Q quadrature, we optimize the I phase for PRCL.

REFL11,   minimize PRCL in "Q", gain, -1, -19.3659 deg
REFL33,   minimize PRCL in "Q", gain, -1, 132.813 deg
REFL55,   minimize PRCL in "Q", gain, -1, 20.9747 deg
REFL165, minimize PRCL in "Q", gain, -1, -119.004 deg

MICH: By eliminating the peak from the I quadrature, we optimize the Q phase for MICH.
If PRCL and MICH appear at the same phase, the resulting angles show an identical number.

REFL11,   minimize MICH in "I", gain, -1, -28.4526 deg
REFL33,   minimize MICH in "I", gain, -1, 65.9148 deg
REFL55,   minimize MICH in "I", gain, -1, 12.4051 deg
REFL165, minimize MICH in "I", gain, -0.1, -143.75 deg


Then, the signal frequency was changed to 675Hz where the notch filters in the servo is active.

PRCL: By eliminating the peak from the Q quadrature, we optimize the I phase for PRCL.

REFL11,   minimize PRCL in "Q", gain, 1, -19.5224 deg
REFL33,   minimize PRCL in "Q", gain, -1, 135.868 deg
REFL55,   minimize PRCL in "Q", gain, 1, 48.5716 deg
REFL165, minimize PRCL in "Q", gain, 1, -122.398 deg

MICH: By eliminating the peak from the I quadrature, we optimize the Q phase for MICH.
If PRCL and MICH appear at the same phase, the resulting angles show an identical number.

REFL11,   minimize MICH in "I", gain, -10, -73.7153 deg
REFL33,   minimize MICH in "I", gain, -10, 135.5 deg
REFL55,   minimize MICH in "I", gain, 10, -2.55868 deg
REFL165, minimize MICH in "I", gain, -5, -156.135 deg


 

 

This is just a test of the REFL channels for the arm signals. ETMX or ETMY were actuated.

YARM

REFL11, minimize ETMY in "Q", gain 100 => C1:LSC-REFL11_PHASE_R = 145.694
REFL55, minimize ETMY in "Q", gain 100 => C1:LSC-REFL55_PHASE_R = -60.1512

XARM

REFL11, minimize ETMX in "Q", gain 100 => C1:LSC-REFL11_PHASE_R = 142.365
REFL55, minimize ETMX in "Q", gain 100 => C1:LSC-REFL55_PHASE_R = -68.6521

  2556 | Mon Feb 1 18:33:10 2010 | steve | Update | MOPA | Ve half the lazer!

The 2W NPRO from Valera arrived today and I haf hidden it somewere in the 40m lab!

 

Rana was so kind to make this entry for me

  17036 | Tue Jul 26 19:50:25 2022 | Deeksha | Update | Computer Scripts / Programs | Vector fitting

Trying to vectfit the data taken from the DFD previously, but failing horribly. I will update this post as soon as I get anything semi-decent. For now, here is this fit.

  17038 | Tue Jul 26 21:16:41 2022 | Koji | Update | Computer Scripts / Programs | Vector fitting

I think the fit fails because the measurement quality is not good enough.

 

  16955   Tue Jun 28 16:26:58 2022 CiciSummaryGeneralVector fitting open loop transfer function/Audio cancellation of optical table enclosure

[Deeksha, Cici]

We attempted to use vectfit to fit our earlier transfer function data and were generally unsuccessful (see vectfit_firstattempt.png), but we are much closer to understanding vectfit than before. There are a couple of problems to address: finding the right set of initial poles to start with has been very hard, and whatever vectfit uses to plot the phase data unwraps it, which makes the phase plots generally unreadable. We are still working out how to adjust the plots vectfit generates automatically. In general, our data is very messy (this is old transfer-function data from last week), so we took more data today to see if our coherence was the problem (see TFSR785_28-06-2022_161937.pdf). As is visible from the graph, our coherence is terrible: above 1 kHz it is almost entirely below 0.5 (or even 0.2) on both channels. Figuring out why this is and fixing it is our first priority.

In the process of taking new data, we also found that the optical table enclosure at the end of the X arm does a decent job of sound isolation (see enclosure_open.mp4 and enclosure_closed.mp4). The clicking from the shutter is visible on a spectrogram at high frequencies when the enclosure is open, but not when it is closed. We also discovered that the script that toggles the shutter can run indefinitely, which can break the shutter, so we need to fix that problem!
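Two small numpy helpers that may address the plotting and screening issues above (a sketch; the array names are placeholders for whatever the measurement script produces):

    import numpy as np

    def wrap_phase(phase_deg):
        # Re-wrap an unwrapped phase trace into [-180, 180) deg so the
        # Bode plot stays readable.
        return (np.asarray(phase_deg) + 180.0) % 360.0 - 180.0

    def drop_low_coherence(freq, tf, coherence, threshold=0.5):
        # Discard TF points whose coherence is too low to trust before
        # fitting; the 0.5 threshold echoes the number quoted above.
        good = np.asarray(coherence) > threshold
        return np.asarray(freq)[good], np.asarray(tf)[good]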

  14064   Fri Jul 13 10:54:55 2018 aaronUpdateVACVent 80

[aaron, steve]

Steve gave me a venting tutorial. I'll record this in probably a bit more detail than is strictly necessary, so I can keep track of some of the minor details for future reference.

Here is Steve's checklist:

  • Check that all jam nuts are tightened
  • all viewports are closed
  • op levs are off
  • take a picture of the MEDM screens
  • Check particle counts
  • Check that the cranes work & are wiped down
  • Check that HV is off

Gautam already did the pre-vent checks, and Steve took a screenshot of the IFO alignment, IMC alignment, master op lev screen, suspension condition, and shutter status to get a reference point. We later added the TT_CONTROL screen. Steve turned off all op levs.

We then went inside to do the mechanical checks

  • N2 cylinders in the 40m antechamber are all full enough (have ~700psi/day of nitrogen)
  • We manually record the particle count
    • this should be <10,000 on the 0.5um particles to be low enough to vent, otherwise we will contaminate the system
    • note: need to multiply the reading on the particle counter by 10 to get the true count
    • the temperature inside the PSL enclosure should be 23-24C +/- 3 degrees
    • We recorded the particle counts at ~830 and ~930, and the 0.5um count was up to ~3000
  • We put a beam stop in front of the laser at the PSL table
  • Checked that all HV supplies are either off or supplying something in air
    • we noticed four HV supplies on 1X1 that were on. Two were accounted for on the PSL table (FSS), and the other two were for C1IOO_ASC but ran along the upper cable rack. We got ahold of Gautam (sorry!) and he told us these go to the TT driver on OMC_SOUTH, where we verified the HV cables are disconnected. We took this to mean these HV supplies are not powering anything, and proceeded without turning these HV off.
    • There are HV supplies which were all either off or supplying something in-air at: 1Y4, 1Y2, OMC N rack, 1X9 (green steering HV)
  • Checked that the crane works--both move up and down
    • vertex crane switch is on the wall at the inner corner of the IFO
    • y arm crane switch is on the N wall at the Y end
    • turn off the cranes at the control strip after verifying they work
  • While walking around checking HV, we checked that the jam nuts and viewports are all closed
    • we replaced one viewport at the x arm that was open for a camera

After completing these checks, we grabbed a nitrogen cylinder and hooked it up to the VV1 filter. Steve gave me a rundown of how the vacuum system works. For my own memory: the oil pumps, which provide the first level of roughing, backstream below 500 mtorr, so we typically turn on the turbo pumps (TP) before reaching that level; just in case, there is a calibrated leak to keep the pressure above 350 mtorr at the oil pumps. TP2 has broken, so during this vent we will install a manual valve to narrow the aperture that TP1 sees at V1, letting us hand off to the turbo at 500 mtorr without overwhelming it. When the turbos have brought the pressure low enough, we open the mag-lev pump. Close V1 if things screw up, to protect the IFO. This 6" ID manual gate valve will let us throttle the load on the small turbo while the mag-lev takes over the pumping: the mismatch in pumping speed is 390/70 l/s [mag-lev/Varian D70], so we need to close down the conductive intake of TP1 with the manual gate valve so the ~6x smaller turbo does not get overloaded.

We checked CC1, which read 7.2utorr.

Open the medm c0/ce/VacControl_BAK.adl to control the valves.

Steve tells me we are starting from the vacuum-normal state, but some things are broken so it doesn't exactly match the state as described. In particular, VA6 reads 'moving' because it has been disconnected and permanently closed to avoid pumping on the annulus. During this vent we will also keep pumping on the RGA, since it is a short vent; Steve logged the RGA yesterday.

We began the vent by following the vacuum normal to chamber open procedure.

  1. VM1 closed
  2. We didn't open VM3, because we want to keep the RGA on
  3. Closed V1
  4. Connect the N2 to the VV1 filter
    1. first purged the line with nitrogen
    2. We confirmed visually that V1 is closed
  5. We opened VM2 to pump on the RGA with the mag lev pump.
    1. This is a nonstandard step because we are keeping the RGA pumped down.
    2. The current on TP3 is ~0.19A, which is a normal, low load on the pump
  6. VV1 opened to begin the vent at ~10:30am
    1. use a crescent wrench to open, and the torque wrench on the wheel to close
    2. Keep the pressure regulator below 10 psi for the vent. We started the vent at about 2 psi, then increased to 8 psi after confirming that the SUS sensors looked OK.
  7. We checked the pressure plot and ITMX/ETMX motion to make sure we weren't venting too quickly or kicking the optics
    1. Should look at eg C1:SUS-ITMX_SENSOR_LL, as well as C1:Vac-P1_pressure
  8. Once the pressure reached 25 torr, we switched over to dry air
    1. wipe off the dolly wheels outside with a wet rag, and exit through the x-arm door to get the air. Sweep off the area outside the door, and wipe off the new air containers with the rag.
    2. Bring the cylinder inside, get the regulator ready/purged, and swap relatively quickly.
    3. We increased the vent speed to 10 psi.
    4. Steve says a vent typically takes four 300 cf cylinders of Airgas "Ultra Zero" air (AI UZ300), which contains 0.1 ppm of THC.

Everything looks good, so I'm monitoring the vent and swapping out cylinders.
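A minimal watch-loop sketch for this, using the same ezcaread tool as in the scripts earlier in this log (the channel names are the ones quoted in step 7; the cadence is my choice):

    import subprocess, time

    # Poll the main-volume pressure and one SUS shadow sensor once a
    # minute during the vent; at ~2 Torr/min this cadence is plenty.
    while True:
        for chan in ("C1:Vac-P1_pressure", "C1:SUS-ITMX_SENSOR_LL"):
            value = subprocess.check_output(["ezcaread", chan]).decode().strip()
            print(time.strftime("%H:%M:%S"), value)
        time.sleep(60)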

At 12:08 pm, the pressure was at 257 torr and I swapped in a new cylinder.

Steve: Do not overpressurize the vacuum envelope! Stop around 720 Torr and let lab air do the rest. Our bellows are thin-walled for seismic isolation.

  14066   Fri Jul 13 16:26:52 2018 SteveUpdateVACVent 80 is completing...

Steve and Aaron,

The 6 hr vent is reaching equilibrium with room air. It took three and a half instrument-grade air cylinders [AI UZ300, as labelled] at 10 psi. Average vent speed: ~2 Torr/min

Valve configuration: IFO at atmosphere; the RGA is pumped through VM2 by the TP1 maglev.

 

  14081   Wed Jul 18 03:14:48 2018 AnnalisaUpdateGeneralVent 80 recovery

[Gautam, Johannes, Koji, Annalisa]

Tonight we increased the power of the PSL laser and achieved lock of both arms at high power.

The AUX beam alignment to the Y arm was recovered and the PLL restored (using the Marconi as LO).

We made a quick measurement of the phase noise and the results will be posted tomorrow.

The beam on the PSL has been blocked, as well as the AUX beam on the AS table. The Marconi has been switched off.


gautam:

  1. Before turning up PSL power, I placed a block in front of MC refl to avoid any PD burning. Replaced HR Y1 2" optic with the usual 10% reflective BS to direct MC REFL to the locking PD.
  2. Waveplate was rotated back to 180 deg (original position before the vent). After optimizing PMC transmission, I measured 1.05 W going into the IMC (pre-vent value was 1.07 W, probably within the power meter's absolute accuracy).
  3. IMC autolocker restored to usual high power version on megatron.
  4. There seems to be some kind of vacuum interlock in effect that prevents me from opening the PSL shutter via EPICS - I had to toggle the position on the shutter controller under the table. After tonight's work, I returned the controller to the NC state, to avoid any further interference with this interlock code that may prevent pumping in the AM.
  5. PLL gain was re-adjusted to achieve maximum stability (judged by eye) of the beat-note in lock triggered on the Marconi LO signal. Alignment onto the NF beatPD was also tweaked to squeeze out as much beat as possible.
  6. The main objective tonight was to send the AUX beam in, recover the transmission beat, scan the AUX frequency, and resolve some peaks (MAX HOLD scanning technique, magnitude only for now, no phase info). Thanks to JE's expert fiber alignment and beatnote maximization, we achieved this. Annalisa will post a plot tomorrow.
  7. For unknown reasons, the Y arm ASS does not maximize TRY, so we are in the unfortunate situation of neither arm having a working ASS servo. To be worked on later.