  14364   Tue Dec 18 11:42:40 2018 | Chub | Update | General | Acromag box wired

The Auxiliary DAQ Chassis, or Acromag box, is now wired and ready for testing.  I will be sorting the cables at the vacuum rack to make connection to the box easier.

 

  14365   Tue Dec 18 18:13:32 2018 | aaron | Update | OMC | OMC L HV Piezo driver tests (again)

I tested the OMC-L HV driver box again, and made the following observations:

  • Drove the HV diff pins (2,7) with a 5V triangle wave
    • Observed that with a ~0.4V offset on the drive, the HV output (measured directly with a 10x probe) has a 0-(almost)200V triangle wave (for 200V HV in), saturating near 200V and near 0V somewhat before reaching the full range of the triangle
    • The HV mon gives the same answer as measuring the HV output directly, and is reduced 100x compared to the HV output.
    • At 1Hz and above, the rolloff of the low pass still attenuates the drive a bit, and we don't reach the full range.
  • Drove the HV dither pins (1,6) with a 100mV to 10V triangle wave, around 15kHz
    • Even at 10V, the dithering is near the noise of the mon channel, so while I could see a slight peak changing on the FFT near the dither frequency, I couldn't directly observe this on a scope using the mon channel
    • However, measuring the HV directly I do see the dither applied on top of the HV signal. The amplitude of the dither is the same on the HV output as on the dither drive.

[gautam, aaron]

We searched for blips while nominally scanning the OMC length.

We sent a 0.1Hz, 10Vpp triangle wave to the OMC piezo drive diff channels, so the piezo length is seeing a slow triangle wave from 0-200V.

Then, we applied a ~15kHz dither to the OMC length. This dither is added directly onto the HV signal, so the amplitude of the dither at the OMC is the same as the amplitude of the dither into the HV driver.

We monitored the OMC REFL signal (where we saw no blips yesterday) and mixed this with the 15kHz dither signal to get an error signal. Gautam found a Pomona box with a low pass filter, so we also low passed to get rid of some unidentified high frequency noise we were seeing (possibly a ground loop at the function generator? it was present with the box off, but gone with the AC line unplugged). [So we made our own lock-in amplifier.] Photo attached.
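For reference, here is a minimal numerical sketch of the lock-in scheme described above (mix the REFL signal with the dither, then low pass); the sample rate, noise level, and filter corner are illustrative assumptions, not the settings used on the bench.

import numpy as np
from scipy import signal

fs = 1e6            # sample rate [Hz] -- assumed for illustration
f_dither = 15e3     # dither frequency [Hz], as in the measurement above
t = np.arange(0, 0.1, 1 / fs)

# Stand-in for the OMC REFL photodiode: a small component at the dither
# frequency (proportional to the length error) buried in broadband noise.
length_error = 1e-2
refl = length_error * np.sin(2 * np.pi * f_dither * t) + 0.1 * np.random.randn(t.size)

lo = np.sin(2 * np.pi * f_dither * t)     # local oscillator = the dither drive
b, a = signal.butter(2, 1e3 / (fs / 2))   # ~1 kHz low pass, playing the role of the Pomona box
error_signal = signal.filtfilt(b, a, refl * lo)

print(2 * error_signal.mean())            # recovers ~length_error

The demodulated output is proportional to the slope of the REFL power with respect to cavity length, which is why resonance flashes should show up as bipolar excursions in the mixed signal.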

We measured the transfer function of the LP and found the corner at 100kHz rather than the advertised 10kHz, so we opened the box, removed a resistor to bring the 3dB point back to 10kHz, and confirmed this by re-measuring the TF.

We didn't see flashes of error signal in the mixed reflection either, so we suspect that either the PZT is not actuating on the OMC or the alignment is bad. Based on what appears to be the shimmering of far-misaligned fringes on the AS camera, Aaron's suspicion from aligning the cavity with the card, and the lack of flashes, we suspect the alignment. To avoid being stymied by a malfunctioning PZT, we can scan the laser frequency next time rather than the PZT length.

Attachment 1: IMG_4576_copy.jpg
  14366   Wed Dec 19 00:12:46 2018 | gautam | Update | OMC | 40m OMC DCC node

I made a node to collect drawings/schematics for the 40m OMC, added the length drive for now. We should collect other stuff (TT drivers, AA/AI, mechanical drawings etc) there as well for easy reference.

Some numbers FTR:

  • OMC length PZT capacitance was measured to be 209 nF.
  • Series resistance in the HV path of the OMC length PZT driver is 10 kohm, so this creates a LP with corner 1/(2*pi*10 kohm*209 nF) ≈ 80 Hz.
  • Per Rob's thesis, the length PZT has DC actuation coefficient of 8.3 nm/V, ∼ 2 µm range. 
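A quick numerical check of the corner frequency quoted above, using the component values stated in this entry:

import numpy as np

R = 10e3     # series resistance in the HV path [ohm]
C = 209e-9   # measured OMC length PZT capacitance [F]

f_corner = 1 / (2 * np.pi * R * C)
print(f"{f_corner:.0f} Hz")   # ~76 Hz, consistent with the ~80 Hz quoted above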
  14367   Wed Dec 19 14:19:15 2018 | Koji | Summary | VAC | Plan for pumping down test

We still need an elaborated test procedure posted.

12/19 Wed

  • Jon continues to work on valve actuator tests.
  • Chub continues to work on wiring / fixing wiring.
  • At the end of the day Jon is going to send out a notification email of "GO"/"NO GO" for pumping.

 

12/20 Thu

  • 9AM: Start closing the two doors unless Jon gives us a NO GO sign.
  • 10AM: Start pumping down
    • Test roughing pump capability via new control system
    • (Independently) Test the turbo spin-up procedure. This time we will not open the gate valve between TP1 and the main volume, because we want to manage the load on the backing turbos while we gradually open the gate valve. This will take several more hours, and we will not be able to finish this test by the end of Thursday.
    • At the end of the procedure, we isolate the main volume, stop all the pumps, and vent the roughing pumps to save them from oil backstreaming.

gautam: Koji and I were just staring at the vacuum screen, and realized that the drypumps, which are the backing pumps for TP2 and TP3, are not reflected on the MEDM screen. This should be rectified.

Steve also mentioned that the new small turbo controller does not directly interface with the drypump. So we need some system to delay the starting of the turbo itself, once the drypump has been engaged. Does this system exist?

Attachment 1: Screenshot_from_2018-12-19_14-49-34.png
  14368   Wed Dec 19 15:15:56 2018 | gautam | Update | IOO | TT1/TT2 stepping

I removed the ND filter from the ETMYT camera to facilitate searching for a TRY beam. This should be replaced before we go back to high power.

  14369   Wed Dec 19 19:51:19 2018 | gautam | Update | General | Pumpdown prep

[Koji, gautam]

Summary:

We are ready to put the heavy doors back on the chambers and do some test pumpdowns tomorrow morning if Jon gives us the go-ahead. Also, Koji made the OMC resonate some of the AUX beam light we send into it.

Details:

  1. EY work:
    • IMC was locked, and we attempted to locate the beam with an IR card inside the chamber.
    • Koji found that the beam was too high, we were over-shooting the entire black-glass baffle on the EY table.
    • So I moved the TTs to try and center the beam through the aperture of aforementioned baffle.
    • Once this was done, we found that the beam was misaligned in yaw by ~1-inch in transmission on the EY optics table (there was an iris in place marking the cavity transmission axis). This explains why I couldn't find any TRY flashes while moving the TTs around.
    • We hypothesize that without the 2 degree ETM wedge in place, there isn't a compatible axis for the ITM transmission to also make it through the EY baffle and transmission iris. Over ~1m, the 2 degree wedge makes roughly 1.4 inch translation in yaw, so this seems to be a plausible hypothesis.
    • The ETMY suspension was moved from the mini-cleanroom setup back into the EY vacuum chamber. Two clamps (finger tightened only) hold it in place on the NE edge of the optical table. We decided that this is a better resting place for the cage over the holidays than an in-air cleanroom.
  2. OMC chamber work:
    • While we were in clean garb, we decided to also investigate the OMC situation a bit.
    • It quickly became apparent that it was hopeless for me to work in the tightly confined IOO chamber. So Koji went in to have a look.
    • Koji will post the detailed alignment procedure - but after some alignment of the AUX laser input beam axis using in air steering mirrors and Koji's expert tweaking of the pointing into the OMC, we observed some resonances of the OMC.
    • Attachment #1 shows the full-range triangle ramp applied to the OMC length PZT (top row) and the OMC REFL signal (bottom row), measured using a PDA520 (chosen for its large active area) connected to a scope (AC-coupled, 1Mohm impedance, averaged to make the dips more prominent).
    • The OMC transmission was also (barely) visible on an IR card.
    • So the OMC length PZT seems capable of sweeping the length of the cavity. Based on the size of the dips we saw, the MM into the cavity is sub 1-percent.
    • The transmission PDs didn't output any measurable signal - but I'm not sure that the satellite box / readout electronics have been carefully characterized on the electronics bench, so that will have to be done first.
    • We replaced the copper cover of the OMC (finger tightened for now) in case we do any test pumpdowns tomorrow. HV supply has been turned off, and the AUX laser has been reverted to standby mode.
Attachment 1: OMCscan.pdf
  14370   Wed Dec 19 21:14:50 2018 | gautam | Update | VAC | Pumpdown tomorrow

I just spoke to Jon who asked me to make this elog - we will be ready to test one or more parts of the pumpdown procedure tomorrow (12/20), so we should proceed as planned to put the heavy doors back on EY and OMC chambers at 9am tomorrow morning. Jon will circulate a more detailed procedure about the pumpdown steps later today evening.

  14371   Wed Dec 19 22:11:28 2018 | Koji | Update | General | How to align the copper OMC

The OMC input optics layout is attached.

Checked the spot position on OMMT-FM1. It was off from the center. This was causing the spot on OMMT1 off-center. This was fixed by the steering mirror for the AUX laser.

The beam alignment onto the OMC was tweaked with OMC-SM1 and OMC-SM2. This was the painful part. We had to make a sensor card that could get into the narrow space of the OMC. (Attachment 2 right)

Attachment 2 left shows the naming convention of the OMC mirrors.

For the alignment, we applied 5Vpp triangle waves at 3.1Hz to the input of the PZT amp so that the cavity is kept scanning continuously. First, check the rough spot positions on OMC-CM1 and OMC-CM2. If you carefully use the card, you can check whether the beam is returning to OMC-IC. This return beam should have roughly the same height as the incident beam. This can be adjusted with either of the steering mirrors.

Once the beam is going around the mirrors multiple times, the spot alignment can be checked at OMC-CM1. Bring a card right in front of CM1. If the card is lifted slightly above the incident spot, this automatically allows the outgoing beam to go through. Depending on the pitch alignment, the next roundtrip (1RT) will be seen on the card. As you lift the card up more, you will be able to see more round trip beams (e.g. 2RT, 3RT, in the figure). If the yaw alignment is perfect, these spots will be lined up vertically. So you can try to align the horizontal direction with the steering mirrors. Then the vertical alignment can be done with the pitch knobs.

At this point you should be able to see some super high-order transmission at the OMC trans. For today, we stopped here as we had already run out of range on multiple knobs. This is because the beam height in the mode matching telescope was not right, and the steering mirrors had to work beyond their range.

Attachment 1: 110804_40m_OMC_layout.pdf
Attachment 2: OMC_alignment.pdf
  14372   Thu Dec 20 08:38:27 2018 | Jon | Update | VAC | Pumpdown tomorrow

Linked is the pumpdown procedure, contained in the old 40m documentation. The relevant procedure is "All Off --> Vacuum Normal" on page 11.

Quote:

I just spoke to Jon who asked me to make this elog - we will be ready to test one or more parts of the pumpdown procedure tomorrow (12/20), so we should proceed as planned to put the heavy doors back on EY and OMC chambers at 9am tomorrow morning. Jon will circulate a more detailed procedure about the pumpdown steps later today evening.

 

  14373   Thu Dec 20 10:28:43 2018 | gautam | Update | VAC | Heavy doors back on for pumpdown 82

[Chub, Koji, Gautam]

We replaced the EY and IOO chamber heavy doors by 10:10 am PST. Torquing was done in one round at 25 ft-lb, then a second round at 45 ft-lb (we trust the calibration on the torque wrench, but how reliable is this? And how important are these numbers in ensuring a smooth pumpdown?). All went smoothly. The interior of the IOO chamber was found to be dirty when Koji ran a wipe along some surfaces.

For this pumpdown, we aren't so concerned with having the IFO in an operating state as we will certainly vent it again early next year. So we didn't follow the full close-up checklist.

Jon and Chub and Koji are working on starting the pumpdown now... In order to not have to wear laser safety goggles while we closed doors and pumped down, I turned off all the 1064nm lasers in the lab.

  14374   Thu Dec 20 17:17:41 2018 | gautam | Update | CDS | Logging of new Vacuum channels

Added the following channels to C0EDCU.ini:

[C1:Vac-P1b_pressure]
units=torr
[C1:Vac-PRP_pressure]
units=torr
[C1:Vac-PTP2_pressure]
units=torr
[C1:Vac-PTP3_pressure]
units=torr
[C1:Vac-TP2_rot]
units=kRPM
[C1:Vac-TP3_rot]
units=kRPM

Also modified the old P1 channel to

[C1:Vac-P1a_pressure]
units=torr

Unfortunately, we realized too late that we don't have these channels in the frames, so we don't have the data from this test pumpdown logged, but we will have future stuff. I say we should also log diagnostics from the pumps, such as temperature, current etc. After making the changes, I restarted the daqd processes.


Things to add to ASA wiki page once the wiki comes back online:

  1. What is the safe way to clean the cryo pump if we want to use it again?
  2. What are safe conditions to turn the RGA on?
  14375   Thu Dec 20 21:29:41 2018 | Jon | Omnistructure | Upgrade | Vacuum Controls Switchover Completed

[Jon, Chub, Koji, Gautam]

Summary

Today we carried out the first pumpdown with the new vacuum controls system in place. It performed well. The only problem encountered was with software interlocks spuriously closing valves as the Pirani gauges crossed 1E-4 torr. At that point their readback changes from a number to "LO E-04", which the system interpreted as a gauge failure instead of "<1E-4". This posed no danger and was fixed on the spot. The main volume was pumped to ~10 torr using roughing pumps 1 and 3. We were limited only by time, as we didn't get started pumping the main volume until after 1pm. The three turbo pumps were also run and tested in parallel, but were isolated to the pumpspool volume. At the end of the day, we closed every pneumatic valve and shut down all five pumps. The main volume is sealed off at ~10 torr, and the pumpspool volume is at ~1e-6 torr. We are leaving the system parked in this state for the holidays. 
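For reference, a minimal sketch of how the below-range readback can be handled so that a "LO" reading is treated as "below 1e-4 torr" rather than as a gauge failure. This is illustrative only -- the actual interlock code lives in the 40m/vacpython repo, and the exact below-range string is whatever the gauge emits.

def parse_pirani(readback):
    """Convert a Pirani gauge readback string to a pressure in torr.

    Below ~1e-4 torr the gauge reports a below-range string instead of a
    number; treat that as the bottom of the gauge's range rather than as a
    failure, and raise only if the string is genuinely unparseable.
    """
    text = readback.strip()
    if text.upper().startswith("LO"):
        return 1e-4                      # at or below the gauge's lower limit
    try:
        return float(text)
    except ValueError:
        raise RuntimeError(f"Unrecognized Pirani readback: {readback!r}")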

Main Volume Pumpdown Procedure

In pumping down the main volume, we carried out the following procedure.

  1. Initially: All valves closed (including manual valves RV1 and VV1); all pumps OFF.
  2. Manually connected roughing pump line to pumpspool via KF joint.
  3. Turned ON RP1 and RP2.
  4. Waited until roughing pump line pressure (PRP) < 0.5 torr.
  5. Opened V3.
  6. Waited until roughing pump line pressure (PRP) < 0.5 torr.
  7. Manually opened RV1 throttling valve to main volume until pumpdown rate reached ~3 torr/min (~3 hours on roughing pumps).
  8. Waited until main volume pressure (P1a/P1b) < 0.5 torr.

We didn't quite reach the end of step 8 by the time we had to stop. The next step would be to valve out the roughing pumps and to valve in the turbo pumps.
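As a rough illustration of how steps like 4-5 above could be scripted through EPICS (the pressure channel name is one of those added to the frames in entry 14374; the valve channel name and its open/close convention are assumptions for the sketch):

import time
from epics import caget, caput   # pyepics

def wait_below(channel, threshold_torr, poll_s=5):
    """Block until the pressure read from `channel` drops below `threshold_torr`."""
    while caget(channel) >= threshold_torr:
        time.sleep(poll_s)

# Steps 4-5 of the procedure above: wait for the roughing line, then open V3.
wait_below("C1:Vac-PRP_pressure", 0.5)
caput("C1:Vac-V3_state", 1)   # valve channel name / value convention assumed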

Hardware & Channel Assignments

All of the new hardware is now permanently installed in the vacuum rack. This includes the SuperMicro rack server (c1vac), the IOLAN serial device server, a vacuum subnet switch, and the Acromag chassis. Every valve/pump signal cable that formerly connected to the VME bus through terminal blocks has been refitted with a D-sub connector and screwed directly onto feedthroughs on the Acromag chassis.

The attached pdf contains the master list of assigned Acromag channels and their wiring.

Attachment 1: 40m_vacuum_acromag_channels.pdf
  14376   Fri Dec 21 11:11:51 2018 | gautam | Update | CDS | Logging of new Vacuum channels

The N2 pressure channel name was also wrong in C0EDCU.ini, so I updated it this morning to the correct name and units:

[C1:Vac-N2_pressure]
units=psi

Now it too is being recorded to frames.

  14377   Fri Dec 21 11:13:13 2018 | gautam | Omnistructure | VAC | N2 line valved off

Per the discussion yesterday, I valved off the N2 line in the drill press room at 11 am PST this morning so as to avoid any accidental software-induced gate-valve actuation during the holidays. The line pressure is steadily dropping...

Attachment #1 shows that while the main volume pressure was stable overnight, the pumpspool pressure has been steadily rising. I think this is to be expected, as the turbo pumps aren't running and the valves can't preserve the <1 mtorr pressure over long timescales?

Attachment #2 shows the current VacOverview MEDM screen status.

Attachment 1: VacGauges.png
Attachment 2: Screenshot_from_2018-12-21_13-02-06.png
  14379   Fri Dec 21 12:57:10 2018 | Koji | Omnistructure | VAC | N2 line valved off

Independent question: Are all the turbo forelines vented automatically? We manually did it for the main roughing line.

 

  14380   Thu Jan 3 15:08:37 2019 | gautam | Omnistructure | VAC | Vac status unknown

Larry W came by the 40m, and reported that there was a campus-wide power glitch (he was here to check if our networking infrastructure was affected). I thought I'd check the status of the vacuum.

  • Attachment #1 is a screenshot of the Vac overview MEDM screen. Clearly something has gone wrong with the modbus process(es). Only the PTP2 and PTP3 gauges seem to be communicative.
  • Attachment #2 shows the minute trend of the pressure gauges for a 12 day period - it looks like there is some issue with the frame builder clock; perhaps this issue resurfaced? Checking the system time on FB doesn't suggest anything is wrong, and I double-checked with dataviewer as well that the trends don't exist. But checking the status of the individual daqd processes showed that the dates were off by 1 year, so I just restarted all of them and now the time seems correct. How can we fix this problem more permanently? Also, the P1b readout looks suspicious - why are there periods where it seems like we are reading values better than the LSB of the device?

I decided to check the systemctl process status on c1vac:

controls@c1vac:~$ sudo systemctl status modbusIOC.service
● modbusIOC.service - ModbusIOC Service via procServ
   Loaded: loaded (/etc/systemd/system/modbusIOC.service; enabled)
   Active: active (running) since Thu 2019-01-03 14:53:49 PST; 11min ago
 Main PID: 16533 (procServ)
   CGroup: /system.slice/modbusIOC.service
           ├─16533 /usr/bin/procServ -f -L /opt/target/modbusIOC.log -p /run/...
           ├─16534 /opt/epics/modules/modbus/bin/linux-x86_64/modbusApp /opt/...
           └─16582 caRepeater

Jan 03 14:53:49 c1vac systemd[1]: Started ModbusIOC Service via procServ.

Warning: Unit file changed on disk, 'systemctl daemon-reload' recommended.

So something did happen today that required restart of the modbus processes. But clearly not everything has come back up gracefully. A few lines of dmesg (there are many more segfaults):

[1706033.718061] python[23971]: segfault at 8 ip 000000000049b37d sp 00007fbae2b5fa10 error 4 in python2.7[400000+31d000]
[1706252.225984] python[24183]: segfault at 8 ip 000000000049b37d sp 00007fd3fa365a10 error 4 in python2.7[400000+31d000]
[1720961.451787] systemd-udevd[4076]: starting version 215
[1782064.269844] audit: type=1702 audit(1546540443.159:38): op=linkat ppid=21820 pid=22823 auid=4294967295 uid=1000 gid=1000 euid=1000 suid=1000 fsuid=1000 egid=1000 sgid=1000 fsgid=1000 tty=pts0 ses=4294967295 comm="git" exe="/usr/bin/git" res=0
[1782064.269866] audit: type=1302 audit(1546540443.159:39): item=0 name="/cvs/cds/caltech/target/c1vac/.git/objects/85/tmp_obj_uAXhPg" inode=173019272 dev=00:21 mode=0100444 ouid=1001 ogid=1001 rdev=00:00 nametype=NORMAL
[1782064.365240] audit: type=1702 audit(1546540443.255:40): op=linkat ppid=21820 pid=22823 auid=4294967295 uid=1000 gid=1000 euid=1000 suid=1000 fsuid=1000 egid=1000 sgid=1000 fsgid=1000 tty=pts0 ses=4294967295 comm="git" exe="/usr/bin/git" res=0
[1782064.365271] audit: type=1302 audit(1546540443.255:41): item=0 name="/cvs/cds/caltech/target/c1vac/.git/objects/58/tmp_obj_KekHsn" inode=173019274 dev=00:21 mode=0100444 ouid=1001 ogid=1001 rdev=00:00 nametype=NORMAL
[1782064.460620] audit: type=1702 audit(1546540443.347:42): op=linkat ppid=21820 pid=22823 auid=4294967295 uid=1000 gid=1000 euid=1000 suid=1000 fsuid=1000 egid=1000 sgid=1000 fsgid=1000 tty=pts0 ses=4294967295 comm="git" exe="/usr/bin/git" res=0
[1782064.460652] audit: type=1302 audit(1546540443.347:43): item=0 name="/cvs/cds/caltech/target/c1vac/.git/objects/cb/tmp_obj_q62Pdr" inode=173019276 dev=00:21 mode=0100444 ouid=1001 ogid=1001 rdev=00:00 nametype=NORMAL
[1782064.545449] audit: type=1702 audit(1546540443.435:44): op=linkat ppid=21820 pid=22823 auid=4294967295 uid=1000 gid=1000 euid=1000 suid=1000 fsuid=1000 egid=1000 sgid=1000 fsgid=1000 tty=pts0 ses=4294967295 comm="git" exe="/usr/bin/git" res=0
[1782064.545480] audit: type=1302 audit(1546540443.435:45): item=0 name="/cvs/cds/caltech/target/c1vac/.git/objects/e3/tmp_obj_gPI4qy" inode=173019277 dev=00:21 mode=0100444 ouid=1001 ogid=1001 rdev=00:00 nametype=NORMAL
[1782064.640756] audit: type=1702 audit(1546540443.527:46): op=linkat ppid=21820 pid=22823 auid=4294967295 uid=1000 gid=1000 euid=1000 suid=1000 fsuid=1000 egid=1000 sgid=1000 fsgid=1000 tty=pts0 ses=4294967295 comm="git" exe="/usr/bin/git" res=0
[1783440.878997] systemd[1]: Unit serial_TP3.service entered failed state.
[1784682.147280] systemd[1]: Unit serial_TP2.service entered failed state.
[1786407.752386] systemd[1]: Unit serial_MKS937b.service entered failed state.
[1792371.508317] systemd[1]: serial_GP316a.service failed to run 'start' task: No such file or directory
[1795550.281623] systemd[1]: Unit serial_GP316b.service entered failed state.
[1796216.213269] systemd[1]: Unit serial_TP3.service entered failed state.
[1796518.976841] systemd[1]: Unit serial_GP307.service entered failed state.
[1796670.328649] systemd[1]: serial_Hornet.service failed to run 'start' task: No such file or directory
[1797723.446084] systemd[1]: Unit serial_MKS937b.service entered failed state.

 

I don't know enough about the new system so I'm leaving this for Jon to debug. Attachment #3 shows that the analog readout of the P1 pressure gauge suggests that the IFO is still under vacuum, so no random valve openings were effected (as expected, since we valved off the N2 line for this very purpose).

Attachment 1: Screenshot_from_2019-01-03_15-19-51.png
Attachment 2: Screenshot_from_2019-01-03_15-14-14.png
Attachment 3: 997B13A9-CAAF-409C-A6C2-00414D30A141.jpeg
  14382   Thu Jan 3 21:17:49 2019 | rana | Configuration | Computers | Workstation Upgrade: Donatella -> Scientific Linux 7.2

donatella was one of our last workstations running ubuntu12. we installed SL7 on there today

  1. had to use a DVD; wouldn't boot from USB stick
  2. made sure to use userID=1001 and groupID=1001 at the initial install part
  3. went to the Keith Thorne LLO wiki on SL7
  4. The 'yum update' command failed due to a gstreamer conflict. I did "yum remove gstreamer1-plugins-ugly-free-1.10.4-3.el7.x86_64" and then it continued a bit more.
  5. Then there are ~20 errors related to gds-crtools that look like this: Error: Package: gds-crtools-2.18.12-1.el7.x86_64 (lscsoft-production) Requires: libMatrix.so.6.14()(64bit)

  6. I re-ran the yum install .... command using the --skip-broken command and that seemed to complete, although I guess the GDS stuff will not work.
  7. Installed: terminator, inconsolata-fonts, 
  8. Installed XFCE desktop as per K Thorne:  yum groupinstall "Xfce" -y
Attachment 1: IMG_20190103_205158.jpg
  14383   Fri Jan 4 10:25:19 2019 | Jon | Omnistructure | VAC | N2 line valved off

Yes, for TP2 and TP3. They both have a small vent valve that opens automatically on shutdown.

Quote:

Independent question: Are all the turbo forelines vented automatically? We manually did it for the main roughing line.

 

 

  14384   Fri Jan 4 11:06:16 2019 | Jon | Omnistructure | Upgrade | Vac System Punchlist

The base Acromag vacuum system is running and performing nicely. Here is a list of remaining questions and to-do items we still need to address.

Safety Issues

  • Interlock for HV supplies. The vac system hosts a binary EPICS channel that is the interlock signal for the in-vacuum HV supplies. The channel value is OFF when the main volume pressure is in the arcing range, 3 mtorr - 500 torr, and ON otherwise (a sketch of this condition follows this list). Is there something outside the vacuum system monitoring this channel and toggling the HV supplies?
  • Exposed 30-amp supply terminals. The 30-amp output terminals on the back of the Sorensen in the vac rack are exposed. We need a cover for those.
  • Interlock for AC power loss. The current vac system is protected only from transient power glitches, not an extended loss. The digital system should sense an outage and put the IFO into a safe state (pumps spun down and critical valves closed) before the UPS battery is fully drained. However, it presently has no way of sensing when power has been lost---the system just continues running normally on UPS power until the battery dies, at which point there is a sudden, uncontrolled shutdown. Is it possible for the digital system to communicate directly with the UPS to poll its activation state?
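A minimal sketch of the pressure-range condition described in the first item above (illustrative only; the real logic lives in the interlock code on c1vac):

def hv_supplies_permitted(main_volume_torr):
    """True when the main volume pressure is outside the arcing range.

    The interlock channel is OFF (HV not permitted) for
    3 mtorr < P < 500 torr and ON otherwise.
    """
    return not (3e-3 < main_volume_torr < 500.0)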

Infrastructure Improvements

  • Install the new N2 tank regulator and high-pressure transducers (we have the parts; on desk across from electronics bench). Run the transducer signal wires to the Acromag chassis in the vacuum rack.
  • Replace the kludged connectors to the Hornet and SuperBee serial outputs with permanent ones (we need to order the parts).
  • Wire the position indicator readback on the manual TP1 valve to the Acromag chassis.
  • Add cable tension relief to the back of the vac rack.
  • Add the TP1 analog readback signals (rotation speed and current) to the digital system.  Digital temperature, current, voltage, and rotation speed signals have already been added for TP2 and TP3.
  • Set up a local vacuum controls terminal on the desk by the vac rack.
  • Remove gauges from the EPICS database/MEDM screens that are no longer installed or functional. Potential candidates for removal: PAN, PTP1, IG1, CC2, CC3, CC4.
  • Although it appeared on the MEDM screen, the RGA was never interfaced to the old vac system. Should it be connected to c1vac now?
  14385   Fri Jan 4 15:18:15 2019 | Koji | Update | General | Chiara disk clean up and internally mounted

[Koji Gautam]

Took the opportunity of the power glitch to take care of the disk situation on chiara.

- Unmounted /cvs/cds from nodus. This did not affect the services on nodus as they don't use /cvs/cds

- Went to chiara, shut it down, and physically checked the labels of the drives.

root = 0.5TB
/cvs/cds = 4TB HGST
backup of /cvs/cds= 6TB HGST

- These three disks are internally mounted and connected with SATA. Previously, 6TB was on USB.

- There were two other drives (2TB and 3TB) but they seemed logically or physically broken. These two disks were removed from chiara. (They came back online after reformatting on a Mac, so they seem to still be physically alive.)

controls@chiara|~> df
df: `/var/lib/lightdm/.gvfs': Permission denied
Filesystem      1K-blocks       Used  Available Use% Mounted on
/dev/sda1       461229088   10690932  427109088   3% /
udev             15915020          4   15915016   1% /dev
tmpfs             3185412        848    3184564   1% /run
none                 5120          0       5120   0% /run/lock
none             15927044        144   15926900   1% /run/shm
/dev/sdb1      5814346836 1783407788 3737912972  33% /media/40mBackup
/dev/sdc1      3845709644 1884187232 1766171536  52% /home/cds
controls@chiara|~> lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda      8:0    0 465.8G  0 disk
├─sda1   8:1    0 446.9G  0 part /
├─sda2   8:2    0     1K  0 part
└─sda5   8:5    0  18.9G  0 part [SWAP]
sdb      8:16   0   5.5T  0 disk
└─sdb1   8:17   0   5.5T  0 part /media/40mBackup
sdc      8:32   0   3.7T  0 disk
└─sdc1   8:33   0   3.7T  0 part /home/cds
sr0     11:0    1  1024M  0 rom

- Rebooted the machine and it just came back without any error. This time the control room machines were not shut down; they just recovered the NFS mounts once chiara got back.

Attachment 1: P_20190104_143336.jpg
Attachment 2: P_20190104_143357.jpg
  14386   Fri Jan 4 17:43:24 2019 | gautam | Update | CDS | Timing issues

[J Hanks (remote), koji, gautam]

Summary:

The problem stems from the way GPS timing signals are handled by the FEs and FB. We effected a partial fix:

  • Now, old frame data is no longer being overwritten
  • For the channels that are indeed being recorded now, the correct time stamp is being applied so they can be found in /frames by looking for the appropriate gpstime.

Details:

  • The usual FE/FB power cycling did not fix the problem.
  • The gps time used by FB and associated RT processes may be found by using  cat /proc/gps (i.e. this is different from the system time found by using date, or gpstime).
  • This was off by 2 years.
  • The way this worked up till now was by adding a fixed offset to this time.
    • This offset can be found as a line saying set symm_gps_offset=31622382 in daqdrc.fw (for example)
    • There were similar lines in daqdrc.rcv and daqdrc.dc - however, they were not all the same offset! We couldn't figure out why.
    • All these files live in /opt/rtcds/caltech/c1/target/daqd/.

Changes effected:

  1. First, we tried changing the offset in the daqdrc.fw file only.
    • Incremented it by 24*60*60*365 = number of seconds in a year with no leap seconds/days.
    • This did not fix the problem.
  2. So J Hanks decided to rebuild the Spectracom driver (these commands may not be comprehensive, but I think I got everything).
    • The relevant file is spectracomGPS.c (made a copy of /usr/src/symmetricom-3.3~rc1, called symmetricom-3.3~rc1-patched, this file is in /usr/src/symmetricom-3.3~rc1-patched/include/drv)
    • Added the following lines:
      /* 2018 had no leap seconds or leap days, so adjust for that */
             pHardware->gpsOffset += 31536000;
    • re-built and installed the modified symmetricom driver.
    • Checked that cat /proc/gps now yields the correct time.
    • Reset the gps time offsets in daqdrc.fw, daqdrc.rcv and daqdrc.dc to 0
    • With these steps, the frames were being written to /frames with the correct timestamp.
  3. Next, we checked the timing on the FEs
    • Basically, J Hanks rebuilt the version of the symmetricom driver that is used by the rtcds models to mimic the changes made for FB.
    • This did the trick for c1lsc and c1ioo - cat /proc/gps now returns the correct time on those FEs.
    • However, c1sus remains problematic (it initially reported a GPS time from 2 years ago, and even after re-installing the driver, it is 4 days behind) - he suspects that this is because c1sus is the only FE with a Symmetricom/Spectracom card installed in the I/O chassis. So c1sus reports a gpstime that is ~4 days behind the "correct" gpstime.
    • He is going to check with Rolf/other CDS experts to figure out if it's okay for us to simply remove the card and run the models, or if we need to take other steps.
    • As part of this work, the c1x02 IOP model was recompiled, re-installed and re-started.

The realtime models were not restarted (although all the vertex FEs are running) - we can regroup next week and decide what is the correct course of action.
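For reference, a quick check of the offset constants quoted above. The interpretation of the old 31622382 s value as a 366-day year minus the 18 accumulated GPS-UTC leap seconds is my guess, not something stated in the daqd config files.

SECONDS_PER_DAY = 24 * 60 * 60

print(365 * SECONDS_PER_DAY)        # 31536000 -> offset patched into spectracomGPS.c
print(366 * SECONDS_PER_DAY - 18)   # 31622382 -> old symm_gps_offset in daqdrc.fw (assumed interpretation)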

Quote:
 
  • Attachment #2 shows the minute trend of the pressure gauges for a 12 day period - it looks like there is some issue with the frame builder clock, perhaps this issue resurfaced? But checking the system time on FB doesn't suggest anything is wrong.. I double checked with dataviewer as well that the trends don't exist... But checking the status of the individual daqd processes indeed showed that the dates were off by 1 year, so I just restarted all of them and now the time seems correct. How can we fix this problem more permanently? Also, the P1b readout looks suspicious - why are there periods where it seems like we are reading values better than the LSB of the device?
  14387   Mon Jan 7 11:54:12 2019 | Jon | Configuration | Computer Scripts / Programs | Vac system shutdown

I'm making a controlled shutdown of the vac controls to add new ADC channels. Will advise when it's back up.

  14388   Mon Jan 7 19:21:45 2019 | Jon | Configuration | Computer Scripts / Programs | Vac system shutdown

ADC work finished for the day. The vac controls are back up, with all valves CLOSED and all pumps OFF.

Quote:

I'm making a controlled shutdown of the vac controls to add new ADC channels. Will advise when it's back up.

 

  14389   Tue Jan 8 10:27:27 2019 | gautam | Update | General | Near-term in-chamber work

Here is a list of tasks I think we should prioritize for the next two weeks. The idea is to get back to the previous state of being able to do single arm, PRMI-on-carrier and DRMI locking, before making further changes.

Once the new folding mirrors arrive, I'd like to modify the SRC length to allow locking in the signal-recycled config as opposed to RSE. Still need to do the detailed layout, but I think the in-vacuum layout will work. In that case, I'd like to also move the OMC and OMMT to the IY table, and also move the in-air AS photodiodes to the IY in-air optical table. This is why I've omitted the OMC alignment from this near-term list, but if we want to not move the OMC, then we probably should add alignment of the AS beam to the OMC to this list.

List of in-chamber tasks for 1/2019

EY
  • Clean ETMY optic and suspension
  • Put ETMY suspension back in place, recover Y-arm cavity alignment
  • Remove any residual hardware from unused heater setup
  • Restore parabolic heater setup, center radiation pattern as best as possible on ETMY
  • Check beam position on IPANG steering mirror
IY
  • Clean ITMY optic and suspension cage
  • Restore ITMY suspension, recover Y arm cavity alignment.
  • Check position of AS beam on OM1/OM2
BS/PRM (if we decide to open it)
  • Replace BS/PRM Oplev HeNe, bring the beam in and out of vacuum with the beam well centered on in-vacuum mirrors (can take this opportunity to fix the in-air layout as well to minimize unnecessary steering mirrors)
  • Check position of AS beam on OM3/OM4, adjust if necessary
  • Check position of IPPOS and IPANG beams on their respective steering optics
OMC (if we decide to open it)
  • Check position of AS beam on OM5/OM6
  • Ensure AS beam exits the vacuum cleanly
  14390   Tue Jan 8 19:13:39 2019 | Jon | Update | Upgrade | Ready for pumpdown tomorrow

Everything is set for a second pumpdown tomorrow. We'll plan to start pumping after the 1pm meeting. Since the main volume is already at 12 torr, the roughing phase won't take nearly as long this time.

I've added new channels for the TP1 analog readings (current and speed) and for the two N2 tank pressure readings. Chub finished installing the new regulator and has run the transducer signal cable to the vacuum rack. In the morning he will terminate the cable and make the final connection to the Acromag.

Gautam and I updated the framebuilder config file, adding the newly-added channels to the list of those to be logged. We also set up a git repo containing all of the Python interlock/interfacing code: https://git.ligo.org/40m/vacpython. The idea is to use the issue tracker to systematically document any changes to the interlock code.

  14391   Wed Jan 9 11:07:09 2019 | gautam | Update | VAC | New Vac channel logging

Looks like I didn't restart all the daqd processes last night, so the data was not in fact being recorded to frames. I just restarted everything, and it looks like the data for the last 3 minutes is being recorded. Is it reasonable that the TP1 current channel is reporting 0.75A of current draw now, when the pump is off? Also the temperature readback of TP3 seems a lot jumpier than that of TP2 - probably because the old controller has fewer ADC bits or something, but perhaps the SMOO needs to be adjusted.

Quote:
 

Gautam and I updated the framebuilder config file, adding the newly-added channels to the list of those to be logged.

Attachment 1: Screenshot_from_2019-01-09_11-08-28.png
  14392   Wed Jan 9 11:33:35 2019 | gautam | Update | CDS | Timing issues still persist

Summary:

The gps time mismatch between /proc/gps and gpstime seems to be resolved. However, the 0x4000 DC errors still persist. It is not clear to me why.

Details:

On the phone with J Hanks on Friday, he reminded me that c1sus seems to be the only machine with an IRIG-B timing card installed. I can't find the elog but I remembered that Jamie, ericq and I had done this work in 2016 (?), and I also remembered Jamie saying it wasn't working exactly as expected. Since the DAQ was working fine before this card was installed, and since there are no problems with the recording of channels from the other four FE machines without this card installed, I decided to simply pull out the card from the expansion chassis. The card has been stored in the CDS/FE cabinet along the Y arm for now. There was also a cable that interfaces to the card which brings over the 1pps from the GPS unit, which has also been stored in the CDS/FE cabinet.

This seems to have resolved the mismatch between the gpstime reported by cat /proc/gps and the gpstime commands - Attachment #1 (the <1 second mismatch is presumably due to the deadtime between commands). However, the 0x4000 DC errors still persist. I'll try the full power cycle of FEs and FB which has fixed this kind of error in the past, but apart from that, I'm out of ideas.

Update 1215:

Following the instructions in this elog did not fix the problem. The problem seems to be with the daqd_fw service, which reports the following:

controls@fb1:~ 0$ sudo systemctl status daqd_fw.service 
● daqd_fw.service - Advanced LIGO RTS daqd frame writer
   Loaded: loaded (/etc/systemd/system/daqd_fw.service; enabled)
   Active: failed (Result: start-limit) since Wed 2019-01-09 12:17:12 PST; 2min 0s ago
  Process: 2120 ExecStart=/usr/bin/daqd_fw -c /opt/rtcds/caltech/c1/target/daqd/daqdrc.fw (code=killed, signal=ABRT)
 Main PID: 2120 (code=killed, signal=ABRT)

Jan 09 12:17:12 fb1 systemd[1]: Unit daqd_fw.service entered failed state.
Jan 09 12:17:12 fb1 systemd[1]: daqd_fw.service holdoff time over, scheduling restart.
Jan 09 12:17:12 fb1 systemd[1]: Stopping Advanced LIGO RTS daqd frame writer...
Jan 09 12:17:12 fb1 systemd[1]: Starting Advanced LIGO RTS daqd frame writer...
Jan 09 12:17:12 fb1 systemd[1]: daqd_fw.service start request repeated too quickly, refusing to start.
Jan 09 12:17:12 fb1 systemd[1]: Failed to start Advanced LIGO RTS daqd frame writer.
Jan 09 12:17:12 fb1 systemd[1]: Unit daqd_fw.service entered failed state.
                                                     

Update 1530:

The frame-writer error was tracked down to a C0EDCU issue. Jon told me that the Hornet CC1 pressure gauge channel was renamed to C1:Vac-CC1_pressure, and I made the change in the C0EDCU file. However, it returns a value of 9990000000.0, which the frame writer is not happy about... Keeping the old channel name makes the frame-writer run again (although the actual data is bunk).

Update 1755:

J Hanks suggested adding a 1 second offset to the daqdrc config files. This has now fixed the 0x4000 errors, and we are back to the "nominal" RTCDS status screen now - Attachment #2.

Attachment 1: gpstimeSync.png
Attachment 2: Screenshot_from_2019-01-09_17-56-58.png
  14393   Wed Jan 9 20:01:25 2019 | Jon | Update | VAC | Second pumpdown completed

[Jon, Koji, Chub, Gautam]

Summary

The second pumpdown with the new vacuum system was completed successfully today. A time history is attached below.

We started with the main volume still at 12 torr from the Dec. pumpdown. Roughing from 12 to 0.5 torr took approximately two hours, at which point we valved out RP1 and RP3 and valved in TP1 backed by TP2 and TP3. We additionally used the AUX dry pump connected to the backing lines of TP2 and TP3, which we found to boost the overall pump rate by a factor of ~3. The manual hand-crank valve directly in front of TP1 was used to throttle the pump rate, to avoid tripping software interlocks. If the crank valve is opened too quickly, the pressure differential between the main volume (TP1 intake) and TP1 exhaust becomes >1 torr, tripping the V1 valve-close interlock. Once the main volume pressure reached 1e-2 torr, the crank valve could be opened fully.
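A minimal sketch of the kind of check behind the V1 valve-close interlock described above (the TP1 exhaust channel name is a placeholder; the real condition and channel list live in the interlock code on c1vac):

from epics import caget   # pyepics

MAX_DIFFERENTIAL_TORR = 1.0   # V1 is closed if TP1 intake/exhaust differ by more than this

def v1_differential_ok():
    """True while the pressure drop across TP1 is within the allowed range."""
    intake = caget("C1:Vac-P1a_pressure")            # main volume / TP1 intake
    exhaust = caget("C1:Vac-TP1_exhaust_pressure")   # placeholder channel name
    return abs(intake - exhaust) < MAX_DIFFERENTIAL_TORR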

We allowed the pumpdown to continue until reaching 9e-4 torr in the main volume. At this point we valved off the main volume, valved off TP2 and TP3, and then shut down all turbo pumps/dry pumps. We will continue pumping tomorrow under the supervision of an operator. If the system continues to perform problem-free, we will likely leave the turbos pumping on the main volume and annuli after tomorrow.

New Vac Control Station

We installed a local controls terminal for the vacuum system on the desk in front of the vacuum rack (pictured below). This console is connected directly to c1vac and can be used to monitor/control the system even during a network outage or power failure. The entire pumpdown was run from this station today.

To open a controls MEDM screen, open a terminal and execute the alias

$control

Similarly, to open a monitor-only MEDM screen, execute the alias

$monitor
Attachment 1: Screenshot_from_2019-01-09_20-00-39.png
Attachment 2: IMG_3088.jpg
  14394   Thu Jan 10 10:23:46 2019 | gautam | Update | VAC | Overnight leak rate

Overnight, the pressure increased from 247 uTorr to 264 uTorr over a period of 30000 seconds. Assuming an IFO volume of 33,000 liters, this corresponds to an average leak rate of ~20 uTorr L / s. It'd be interesting to see how this compares with the spec'd leak rates of the Viton O-ring seals and valves/ outgassing rates. The two channels in the screenshot are monitoring the same pressure from the same sensor, top pane is a digital readout while the bottom is a calibrated analog readout that is subsequently digitized into the CDS system.
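The quoted rate follows from a simple back-of-the-envelope calculation:

V = 33_000              # assumed IFO volume [liters]
dP = 264e-6 - 247e-6    # overnight pressure rise [torr]
dt = 30_000             # elapsed time [s]

leak_rate = dP * V / dt                       # [torr L / s]
print(f"{leak_rate * 1e6:.0f} uTorr L/s")     # ~19 uTorr L/s, i.e. the ~20 quoted above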

Quote:
 

We allowed the pumpdown to continue until reaching 9e-4 torr in the main volume. At this point we valved off the main volume, valved off TP2 and TP3, and then shut down all turbo pumps/dry pumps. We will continue pumping tomorrow under the supervision of an operator. If the system continues to perform problem-free, we will likely leave the turbos pumping on the main volume and annuli after tomorrow.

Attachment 1: OvernightLeak.png
  14395   Thu Jan 10 11:32:40 2019 | Chub | Update | VAC | Manual valve interfaced with CDS

Connected the manual gate valve status indicator to the Acromag box this morning.  Labeled the temporary cable (a 50' 9p DSUB, will order a proper sized cable shortly) and the panel RV2.  

  14396   Thu Jan 10 19:59:08 2019 | Jon | Update | VAC | Vac System Running Normally on Turbo Pumps

[Jon, Gautam, Chub]

Summary

We continued the pumpdown of the IFO today. The main volume pressure has reached 1.9e-5 torr and is continuing to fall. The system has performed without issue all day, so we'll leave the turbos continuously running from here on in the normal pumping configuration. Both TP2 and TP3 are currently backing for TP1. Once the main volume reaches operating pressure, we can transition TP3 to pump the annuli. They have already been roughed to ~0.1 torr. At that point the speed of all three turbo pumps can also be reduced. I've finished final edits/cleanup of the interlock code and MEDM screens.

Python Code

All the python code running on c1vac is archived to the git repo: 

https://git.ligo.org/40m/vacpython

This includes both the interlock code and the serial device clients for interfacing with gauges and pumps.

MEDM Monitor/Control

We're still using the same base MEDM monitor/control screens, but they have been much improved. Improvements:

  • Valves now light up in red when they are open. This makes it much easier to see at a glance what is valved in/out.
  • Every pump in the system (except CP1) is now digitally controlled from the MEDM control screen. No more need to physically push any buttons in the vacuum rack. 👍
  • The turbo pumps now show additional diagnostic readouts: speed (TP1/2/3), temperature (TP2/3), current draw (TP1/2/3), and voltage (TP2/3).
  • The foreline pressure gauge readouts for TP2/3 have been added to the digital system.
  • The two new main volume gauges, Hornet and SuperBee, have been added to the digital system as well.
  • New transducers have been added to read back the two N2 tank pressures.
  • The interlock code generates a log file of all its actions. A field in the MEDM screens specifies the location of the log file.
  • A tripped interlock (appearing as a message in the "Error message" field) must be manually cleared via the "Clear error message" button on the control screen before the system will accept any more manual valve input.

Note: The apparent glitches in the pressure and TP diagnostic channels are due to the interlock system being taken down to implement some of these changes.

Attachment 1: Screen_Shot_2019-01-10_at_7.58.24_PM.png
Attachment 2: CCs.png
Attachment 3: TPs.png
  14397   Fri Jan 11 16:38:57 2019 | gautam | Update | General | Some alignment checks

The pumpdown seems to be progressing smoothly, so I think we are going to stick with the plan decided on Wednesday, and vent the IFO on Monday at 8am. I decided to do some checks of the IFO alignment.

I turned on the PSL again (so goggles are advisable again inside the VEA until this work is done), re-locked the PMC, and opened the PSL shutter into the vacuum (still low power 100 mW beam going into vacuum). The IMC alignment required minor tweaking, but I recovered ~1300 cts transmission which is what it was --> so we didn't macroscopically change the input pointing into the IMC while working on the IOO table.

Centering the ITMY oplev spot, there is a spot on the AS camera roughly centered on the control room monitor, so the TT pointing must also be pretty close.

Then I centered the ITMY oplev spot to check how well-aligned or otherwise the Michelson was - the BS has no Oplev so there was considerable angular motion of the Michelson spot, but it looked like on average, it was swinging around through a well aligned place. I saved the slow bias voltages for the ITMs and BS in this config.

Then I re-aligned ETMX and checked the green transmission - it was okay, ~0.3, and I was able to increase it to ~0.4 using the EX green PZT mirrors. So far so good.

Finally, I tried to lock the X-arm on IR - after zeroing the offsets on the transmission QPD, there seems to be a few flashes as the cavity swings through resonances, but no discernible PDH error signal. Moreover the input pointing of the IR into the X arm is controlled by the BS which is swinging around all over the place right now, so perhaps locking is hopeless, but the overall alignment of the IFO seems not too bad. Once ETMY is cleaned and put back in place, perhaps the Y arm can be locked.

I shuttered the PSL and inserted a manual beam block, and also turned off the EX laser so that we can vent on Monday without laser goggles.

*Not directly related to this work: we still have to implement the vacuum interlock condition that closes the PSL shutter in the event of a vacuum failure. It's probably fine now while the PSL power is attenuated, but once we have the high power beam going in, it'd be good to revert to the old standard.

Attachment 1: pd82.png
Attachment 2: LSC_X.png
  14398   Mon Jan 14 10:06:53 2019 | gautam | Update | VAC | Vent 82 complete

[chub, gautam]

  • IFO pressure was ~2e-4 torr when we started, on account of the interlock code closing all valves because the N2 line pressure dropped below threshold (<65 psi)
  • Chub fixed the problem on the regulator in the drill-press area where the N2 tanks are, the N2 line is now at ~75 psi so that we have the ability to actuate valves if we so desire
  • We decided that there is no need to vent the pumpspool this time - avoiding an unnecessary turbo landing, so the pumpspool is completely valved off from the main volume and TPs 1-3 are left running
  • Went through the pre-vent checklist:
    • Chub measured particle count, deemed it to be okay (I think we should re-locate the particle counter to near 1X8 because that is where the air enters the IFO anyways, and that way, we can hook it up to the serial device server and have a computerized record of this number as we had in the past, instead of writing it down in a notebook)
    • Checked that the PSL was manually blocked from entering the IFO
    • Walked through the lab, visually inspected Jam Nuts and window covers, all was deemed okay
  • Moved 2 tanks of N2 into the lab on account of the rain
  • Started the vent at ~930am PST
    • There were a couple of short bursty increases in the pressure as we figured out the right valve settings but on average, things are rising at approx the same rate as we had in vent 81...
    • There was a rattling noise coming from the drypump that is the forepump for TP2 (Agilent) - turned out to be the plastic shell/casing on the drypump, moreover, the TP2 diagnostics (temperature, current etc) are all normal.
    • The CC1 gauge (Hornet) is supposed to have an auto-shutoff of its High Voltage when the pressure exceeds 10 mTorr, but it was reporting pressures in the 1 mTorr range even when the adjacent Pirani was at 25 torr. To avoid risk of damage, we manually turned the HV off. There needs to be a python script that can be executed to transition control between the remote and local control modes for the hornet, we had to Power Cycle the gauge because it wouldn't give us local control over the HV.
    • Transitioned from N2 to dry air at P1a = 25 torr. We had some trouble finding the correct regulator (left-handed thread) for the dry air cylinders; it was stored in a cabinet labelled "green optics".
    • Disconnected dry air from VV1 intake once P1b reached 700 torr, to let lab air flow into the IFO and avoid overpressuring.
    • VA* and VAV* valves were opened so as to vent the annuli as we anticipate multiple chamber openings for this vent.

As of 8pm local time, the IFO seems to have equilibriated to atmospheric pressure (I don't hear the hiss of in-rushing air near 1X8 and P1a reports 760 torr). The pumpspool looks healthy and there are no signs in the TP diagnostics channels that anything bad happened to the pumps. Chub is working on getting the N2 setup more robust, we plan to take the EY door off at 9am tomorrow morning with Bob's help.

* I took this opportunity to follow instructions on pg 29 of the manual and set the calibration for the SuperBee pirani gauge to 760 torr so that it is in better agreement with our existing P1a Pirani gauge. The correction was ~8% (820-->760).

Attachment 1: Vent82Summary.png
  14399   Tue Jan 15 10:52:38 2019 | gautam | Update | SUS | EY door opened

[chub, bob, gautam]

We took the heavy door off the EY chamber at ~930am.

Chamber work:

  • ETMY suspension cage was returned to its nominal position.
  • Unused hardware from the annular heater setup was removed.
  • The unused heater had its leads snipped close to the heater crimp point, and the exposed part of the bare wires was covered with Kapton tape (we should remove the source leads as well in air to avoid any accidental shorting)

Waiting for the table to level off now. Plan for later today / tomorrow is as follows:

  1. Lock the Y arm, recover good cavity alignment.
  2. Position parabolic heater such that clipping issue is resolved.
  3. Move optic to edge of table for FC cleaning
  4. Clean optic
  5. Return suspension cage to nominal position.
  14400   Tue Jan 15 15:27:36 2019 | gautam | Update | General | Lasers and other stuff turned back on

VEA is now a laser hazard area as usual, several 1064nm lasers in the lab have been turned back on. Apart from this

  • the IFR was reset to the nominal modulation settings of +13dBm output at 11.066209 MHz (this has to be done manually following each power failure).
  • The temeprature control unit for the EY doubling oven PID control was turned back on.
  • The EY Oplev HeNe was turned back on.
  • EY green PZT HV Kepco was turned back on.
  14401   Tue Jan 15 15:49:47 2019 | gautam | Update | SUS | EY door opened

While restoring OSEMs on ETMY, I noticed that the open voltages for the UR and LL OSEMs had significantly (>30%) changed from their values from ~2 years ago. The fact that it only occurred in 2 coils seemed to rule out gradual wear and tear, so I looked up the trends from Nov 25 - Nov 28 (Sundance visited on Nov 26 which is when we removed the cage). Not surprisingly, these are the exact two OSEMs that show a decrease in sensor voltage when the OSEMs were pulled out. I suspect that when I placed them in their little Al foil boats, I shorted out some contacts on the rear (this is reminiscent of the problem we had on PRM in 2016). I hope the problem is with the current buffer IC in the satellite box and not the physical diode, I'll test with the tester box and evaluate the problem further.


Chamber work by Chub and gautam:

  1. Table leveling was checked with a clean spirit level
    • Leveling was substantially off in two orthogonal directions, along the beam axis as well as perpendicular to it.
    • We moved almost all the weights available on the table.
    • Managed to get the leveling correct to within 1 tick on the level.
    • We are not too worried about this for now, the final leveling will be after heater repositioning, ETMY cleaning etc.
  2. ETMY OSEM re-insertion
    • OSEMs were re-inserted till their mean voltage was ~ half the open values.
    • Local damping seems to work just fine.
Attachment 1: EY_OSEMs.png
  14402   Tue Jan 15 18:16:00 2019 | gautam | Update | Optical Levers | ETMY Oplev HeNe needs replacement

Perhaps the ETMY Oplev HeNe is also giving up - the power has fallen by ~30% over 1 year (Attachment #2), nearly twice as much as ETMX but the RIN spectrum (Attachment #1, didn't even need to rotate it!) certainly seems suspicious. Some "nominal" RIN levels for HeNes can be found earlier in this thread. I can't close any of the EY Oplev loops in this condition. I'll double check to make sure I'm routing the right beam onto the QPD, but if the problem persists, I'll replace the HeNe. ITMX HeNe also looks to be near EOL.

Quote:

Finally I realized what is killing the ETMY oplev laser. Wrong power supply: it was driving the HeNe laser at 600V higher voltage than recommended. Power supply 101T-2300Vdc replaced by 101T-1700Vdc (Uniphase model 1201-1, sn 2712420)

The laser head 1103P, sn P947049, lived for 120 days and was replaced by sn P964431. New laser output 2.8 mW, quadrant sum 19,750 counts.

Attachment 1: OLRIN.pdf
Attachment 2: OLsums.png
  14403   Wed Jan 16 16:25:25 2019 | gautam | Update | SUS | Yarm locked

[chub, gautam]

Summary:

Y arm was locked at low power in air.

Details:

  1. ITMY chamber door was removed at ~10am with Bob's help.
  2. ETMY table leveling was found to have drifted significantly (3 ticks on the spirit level, while it was more or less level yesterday, should look up the calib of the spirit level into mrad). Chub moved some weights around on the table, we will check the leveling again tomorrow.
  3. IMC was locked.
  4. TT2 and brass alignemnt tool was used to center beam on ETMY.
  5. TT1 and brass alignment tool was used to center beam on ITMY. We had to do a c1susaux reboot to be able to move ITMY. Usual precautions were taken to avoid ITMX getting stuck.
  6. ETMY was used to make return beam from the ETM overlap with the in-going beam near ITMY, using a holey IR card.
  7. At this point, I was confident we would see IR flashes so I decided to do the fine alignment in the control room.

We are operating with 1/10th the input power we normally have, so we expect the IR transmission of the Y arm to max out at 1 when well aligned. However, it is hovering around 0.05 right now, and the dominant source of instability is the angular motion of ETMY due to the Oplev loop being non-functional. I am hesitant to do in-chamber work without an extra pair of eyes/hands around, so I'll defer that for tomorrow morning when Chub gets in. With the cavity axis well defined, I plan to align the green beam to this axis, and use the two to confirm that we are well clear of the Parabola.

* Paola, our vertex laptop, and indeed most of the laptops inside the VEA, are not ideal for this kind of alignment procedure; it would be good to set up some workstations on which we can easily interact with multiple MEDM screens.

Attachment 1: Yarm_locked.png
Yarm_locked.png
  14404   Fri Jan 18 12:52:07 2019 gautamUpdateOptical LeversBS/PRM Oplev HeNe replaced

I replaced the BS/PRM Oplev HeNe with one of the heads from the SP table where Steve was setting up the OL RIN/pointing noise experiment. The old one was dead. The new one outputs 3.2 mW of power; I've labelled it with this number, its serial number, and the date of replacement. The beam comes out of the vacuum chamber for both the BS and PRM, and the RIN spectra (Attachment #1) look alright. The calibration into urad and the loop gains possibly have to be tweaked. Since the beam comes out of vacuum, I say that we shouldn't open the BS/PRM chamber for this vent - we don't have a proper plan for the in-air layout yet, so we can add this to the list of to-dos for the next vent.

I think we are down to our last spare HeNe head in the lab - @Chub, please look into ordering some more; the ITMX HeNe is going to need replacement soon.

Attachment 1: OLRIN_20190118.pdf
OLRIN_20190118.pdf
  14405   Fri Jan 18 15:34:37 2019 gautamUpdateThermal CompensationElliptical reflector part number

Nobody documented this, but here is the part number with mechanical drawings of the elliptical reflector installed at EY: Optiforms E180. The heater is from Induceramics, but I can't find the exact part that matches the dimensions of the heater we have (diameter = 3.8 mm, length = 30 mm); perhaps it's a custom part?

The geometry dictates that if we want the heater to be at one focus and the ETM at the other, the separation has to be 7.1 inches. It certainly wasn't arranged this way before. It seems unrealistic to do this without clipping the main beam, so I propose we leave sufficient clearance between the main beam and the reflector, and accept the reduced heating efficiency. The calculation behind that separation is sketched below.
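The focus-to-focus distance follows from the standard ellipse relation 2c = 2*sqrt(a^2 - b^2). Here is a minimal sketch of that calculation; the semi-axis values are placeholders for illustration only, and the real numbers should be taken from the Optiforms E180 drawing:

  # Sketch only: focus separation of an elliptical reflector.
  # The semi-axes below are PLACEHOLDERS -- use the values from the E180 drawing.
  from math import sqrt

  a = 4.5   # semi-major axis [inch] (placeholder)
  b = 2.8   # semi-minor axis [inch] (placeholder)

  c = sqrt(a**2 - b**2)                          # center-to-focus distance
  print('focus-to-focus separation = %.2f inch' % (2 * c))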

Thanks to Steve for digging this up from his secret stash.

  14406   Fri Jan 18 17:44:14 2019 gautamUpdateVACPumping on RGA volume

Steve came by the lab today, and looked at the status of the upgraded vacuum system. He recommended pumping on the RGA volume, since it has not been pumped on for ~3 months on account of the vacuum upgrade. The procedure (so we may script this operation in the future) was:

  1. Start with the pumpspool completely isolated from the main IFO volume.
  2. Open V5, pump down the section between V5 and VM3. Keep an eye on PTP3.
  3. Open VM3, keep an eye on P4. It was reporting ~10 mtorr, went to "LO".
  4. Close VM3 and V5, transition pumping of the RGA volume to TP1 which is backed by TP2 (we had to open V4 as all valves were closed due to an N2 pressure drop event).
  5. Open VM2.
  6. Watch CC4.

CC4 pressure has been steadily falling. Steve recommends leaving things in this state over the weekend. He also recommends turning the RGA unit on so that its temperature rises and the RGA gets a bakeout. The temperature may be read off manually using a probe attached to it. A sketch of how the valve sequence above might eventually be scripted is below.
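Toward scripting this in the future, here is a minimal sketch of the valve sequence using pyepics. All channel names, open/close conventions, and pressure thresholds are assumptions and would need to be checked against the real c1vac database; the hardware interlocks, not a script like this, must remain the actual protection:

  # Sketch only: automated version of the RGA pumpdown sequence above.
  # Channel names, switch conventions and thresholds are ASSUMED placeholders.
  import time
  from epics import caget, caput

  def wait_below(channel, threshold, poll=5.0):
      """Poll a pressure gauge channel until it reads below threshold."""
      while True:
          val = caget(channel)
          if val is not None and val < threshold:
              return
          time.sleep(poll)

  # 1. Assume the pumpspool is already isolated from the main IFO volume.
  caput('C1:Vac-V5_open', 1)                    # open V5
  wait_below('C1:Vac-PTP3_pressure', 1e-1)      # pump the V5-VM3 section [torr]
  caput('C1:Vac-VM3_open', 1)                   # open VM3, watch P4
  wait_below('C1:Vac-P4_pressure', 1e-2)
  caput('C1:Vac-VM3_open', 0)                   # close VM3 and V5
  caput('C1:Vac-V5_open', 0)
  caput('C1:Vac-V4_open', 1)                    # TP2 backs TP1
  caput('C1:Vac-VM2_open', 1)                   # TP1 now pumps the RGA volume
  print('CC4 reads %s torr' % caget('C1:Vac-CC4_pressure'))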

Attachment 1: CC4.png
CC4.png
  14407   Fri Jan 18 21:34:18 2019 gautamUpdateSUSUnused optic on EY table

Does anyone know what the purpose of the indicated optic in Attachment #1 is? Can we remove it? Removing it would allow a little more space around the elliptical reflector...

Attachment 1: IMG_5408.JPG
IMG_5408.JPG
  14408   Sat Jan 19 05:07:45 2019 KojiUpdateSUSUnused optic on EY table

I don't think it was used. It is not on the diagram either. You can remove it.

  14409   Sat Jan 19 15:33:18 2019 gautamUpdateSUSETMY OSEMs faulty

After diagnosis with the tester box, as I suspected, the fully open DC voltages on the two problematic channels, LL and UR, were restored once I replaced the LM6321 ICs in those two channel paths. However, I've been puzzled by the inability to turn on the Oplev loops on ETMY. Furthermore, the DC bias voltages required to get ETMY to line up with the cavity axis seemed excessively large, particularly since we seemed to have improved the table levelling.

I suspected that the problem with the OSEMs hadn't been fully resolved, so on Thursday night I turned off the ETMY watchdog, kicked the optic, and let it ring down. Then I looked at the time series (Attachment #1) and spectra (Attachment #2) of the ringdowns. The LL channel clearly saturates at the lower end, at ~440 counts. Moreover, while the other channels appear to see the ringdown cleanly in the time domain, I don't see the various suspension eigenmodes in any of the sensor signals. I confirmed that all the magnets are still attached to the optic, and that the EQ stops are well clear of it, so I'm inclined to think that this behavior is due to an electrical fault rather than a mechanical one.

For now, I'll start by repeating the ringdown with the Satellite Box swapped out (using the SRM one) and see if that fixes the problem. A sketch of how such ringdown data could be checked is below.
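As an aside, here is a minimal sketch of the kind of checks one could run on such ringdown data - look for clipping at the bottom of the LL signal and for pendulum eigenmode peaks in the spectra. The NDS server, channel names, sample rate, and band edges are assumptions, not the actual workflow used for the attachments:

  # Sketch only: check post-kick OSEM sensor data for saturation and eigenmode peaks.
  # Server, channel names, sample rate and band edges are ASSUMED placeholders.
  import numpy as np
  import nds2
  from scipy.signal import welch

  t0, dur = 1231832635, 600                     # GPS start (cf. Attachment #2) and duration [s]
  fs = 2048.0                                   # assumed DQ sample rate [Hz]
  chans = ['C1:SUS-ETMY_%sSEN_OUT_DQ' % q for q in ('UL', 'UR', 'LL', 'LR', 'SD')]

  conn = nds2.connection('fb40m', 8088)         # hypothetical NDS server/port
  bufs = conn.fetch(t0, t0 + dur, chans)

  for name, buf in zip(chans, bufs):
      x = np.asarray(buf.data, dtype=float)
      print('%s: min = %.0f counts (look for clipping near ~440)' % (name, x.min()))
      f, psd = welch(x, fs=fs, nperseg=int(64 * fs))
      band = (f > 0.5) & (f < 3.0)              # pendulum eigenmodes expected around 0.7-1 Hz
      print('   largest peak in 0.5-3 Hz: %.2f Hz' % f[band][np.argmax(psd[band])])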

Quote:

While restoring OSEMs on ETMY, I noticed that the open voltages for the UR and LL OSEMs had significantly (>30%) changed from their values from ~2 years ago. The fact that it only occurred in 2 coils seemed to rule out gradual wear and tear, so I looked up the trends from Nov 25 - Nov 28 (Sundance visited on Nov 26 which is when we removed the cage). Not surprisingly, these are the exact two OSEMs that show a decrease in sensor voltage when the OSEMs were pulled out. I suspect that when I placed them in their little Al foil boats, I shorted out some contacts on the rear (this is reminiscent of the problem we had on PRM in 2016). I hope the problem is with the current buffer IC in the satellite box and not the physical diode, I'll test with the tester box and evaluate the problem further.

Attachment 1: Screen_Shot_2019-01-19_at_3.32.35_PM.png
Screen_Shot_2019-01-19_at_3.32.35_PM.png
Attachment 2: ETMY_sensors_1231832635.pdf
ETMY_sensors_1231832635.pdf
  14410   Sun Jan 20 23:41:00 2019 JonOmnistructureVACNotes on vac serial comm, adapter wiring

I've attached my handwritten notes covering all the serial communications in the vac system and the relevant wiring for all the adapters, etc. I'll work with Chub to produce the final documentation, but in the meantime this may be a useful reference.

Attachment 1: Jon_wiring_notes.tar.gz
  14411   Tue Jan 22 20:36:53 2019 gautamUpdateSUSETMY OSEMs faulty

Short update on latest Satellite box woes.

  1. I checked the resistance of all 5 OSEM coils on ETMY using a DB25 breakout board and a multimeter - all were between 16-17 ohms (measured from the cable to the vacuum flange), which I think is consistent with the expected value.
  2. Checked the bias voltage (aka slow path) from the coil driver board was reaching the coils
    • The voltages were indeed being sent out of the coil driver board - I confirmed by driving a slow sine wave and measuring at the output of the coil driver board, with all the fast outputs disabled.
    • The voltage is arriving at the 64 pin IDC connector at the Satellite box - Chub and I verified this using some mini-grabbers and leads from wirewound resistors (we don't have a breakout board for this kind of connector; it would be handy to get some!)
    • However, the voltages are not being sent out through the DB25 connectors on the side of the Satellite box, at least for the LL and UR channels. UL seems to work okay.
    • This behavior is consistent with the observation that we had to apply way larger bias voltages to get the cavity axis to line up than was the nominal values - if one or more coils weren't getting their signals, it would also explain the large PIT->YAW coupling I observed using the Oplev spot and the slow bias alignment EPICS sliders.
    • This behavior is puzzling - the Sat box is just supposed to be a feed-through for the coil driver signals, and we measured resistances of only 0.2-0.3 ohms between the 64 pin IDC connector and the corresponding DB25 pins. Nevertheless, the voltage fails to make it through - not sure what's going on here. We will investigate further on the electronics bench.

What's more - I did some Sat box switcheroo, swapping the SRM and ETM boxes back and forth in combination with the tester box, and in the process I seem to have broken the SRM sat box - all its shadow sensors are reporting close to 0 volts, and the tester box confirmed this to be an electronic problem as opposed to some magnet skullduggery. Once we get to the bottom of the ETMY sat box, we will look at the SRM one. This is more or less the last thing to look at for this vent - once we are happy that the cavity axis can be recovered reliably, we can freeze the position of the elliptical reflector and begin the F.C.ing.

  14412   Tue Jan 22 20:45:21 2019 gautamUpdateVACNew N2 setup

The N2 ran out this weekend (again with no reminder email, as I haven't found the time to set up the Python mailer yet). So all the valves Steve and I had opened closed again (rightly so, that's what the interlocks are supposed to do). Chub will post an elog about the new N2 valve setup in the drill-press room, but we now have sufficient pressure in the N2 line again, so Chub and I re-opened the valves to keep pumping on the RGA. A sketch of what the reminder mailer could look like is below.
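For when the mailer does get set up, here is a minimal sketch, assuming a pyepics-readable N2 line pressure channel and an SMTP server reachable from the control room network; the channel name, threshold, addresses, and host are all placeholders. Something like this would be run periodically from a cron job:

  # Sketch only: N2 line pressure reminder mailer.
  # Channel name, threshold, addresses and SMTP host are ASSUMED placeholders.
  import smtplib
  from email.message import EmailMessage
  from epics import caget

  N2_CHANNEL = 'C1:Vac-N2_pressure'             # hypothetical channel name
  THRESHOLD = 65.0                              # [psi], placeholder trip level

  pressure = caget(N2_CHANNEL)
  if pressure is not None and pressure < THRESHOLD:
      msg = EmailMessage()
      msg['Subject'] = '40m N2 line pressure low: %.1f psi' % pressure
      msg['From'] = 'controls@example.org'      # placeholder
      msg['To'] = '40m-list@example.org'        # placeholder
      msg.set_content('Time to swap the N2 cylinder before the vacuum interlocks close the valves.')
      with smtplib.SMTP('localhost') as s:      # placeholder SMTP host
          s.send_message(msg)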

  14413   Wed Jan 23 12:39:18 2019 gautamUpdateSUSEY chamber work

While Chub is making new cables for the EY satellite box...

  1. I removed the unused optic on the NW corner of the EY table. It is stored in a clean Al-foil lined plastic box, and will be moved to the clean hardware section of the lab (along the South arm, south of MC2 chamber).
  2. Checked table leveling - Attachment #1, looked good, and has been stable over the weekend.
  3. I moved the two oversized washers on the reflector, which I believe are only used because the screw is long and wouldn't go in all the way otherwise. As shown in Attachment #2, this reduces the risk of clipping the main IFO beam axis.
  4. Yesterday, I pulled up the 40m CAD drawing, and played around with a rectangular box that approximates the extents of the elliptical reflector, to see what would be a good place to put it. I chose to go ahead with Attachment #3. Also shown is the eventually realized layout. Note that we'd actually like the dimension marked ~7.6 inches to be more like 7.1 inches, so the optic is actually ~0.5 inch ahead of the second focus of the ellipse, but I think this is good enough. 
  5. Attachment #4 shows the view of the optic as seen from the aperture on the back of the elliptical reflector. Looks good to me.
  6. Having positioned the reflector, I then inserted the heater into the aperture such that it is ~2/3rds the way in, which was the best position found by Annalisa last summer. I then ran 0.9 A of current through the heater for ~ 5 minutes. Attachment #5 shows the optic as seen with the FLIR with no heating, and after 5 minutes of heating. I'd say this is pretty unambiguous evidence that we are indeed heating the mirror. The gradient shown is significantly less pronounced than in Annalisa's simulations (~3K as opposed to 10K), but maybe the FLIR calibration isn't so great.
  7. For completeness, Attachment #6 shows the leveling of the table after this work. Nothing has changed significantly.

While the position of the reflector could possibly be optimized further, since we are already seeing a temperature gradient on the optic, I propose pushing on with other vent activities. I'm almost certain the current positioning places the optic closer to the second focus, and we already saw shifts of the HOM resonances with the old configuration, so I'd say we run with this and revisit if needed.

If Chub gives the Sat. Box the green flag, we will work on F.C.ing the mirrors in the evening, with the aim of closing up tomorrow/Friday. 

All raw images in this elog have been uploaded to the 40m google photos.

Attachment 1: leveling.pdf
leveling.pdf
Attachment 2: IMG_5930.jpg
IMG_5930.jpg
Attachment 3: Ellipse_layout.pdf
Ellipse_layout.pdf
Attachment 4: IMG_5932.jpg
IMG_5932.jpg
Attachment 5: hotMirror.pdf
hotMirror.pdf
Attachment 6: EY_leveling_after.pdf
EY_leveling_after.pdf
  14414   Wed Jan 23 18:11:56 2019 gautamUpdateElectronicsEthernet Power Strip IP conflict

For the last week, I have been unable to turn the EY chamber illuminator on using the remote python scripts, which was turning out to be really annoying - I had to turn the light on/off manually. Today, I looked into the problem and found that there is a conflict between the IP addresses of the EY Ethernet power strip (to which Chas assigned a static IP, but without documenting the procedure in detail) and the vertex area laptop, paola. The failure of the python control of the power strip coincided exactly with when Chub and I turned on paola to work at the IY chamber - but how was I supposed to know these events were correlated? I tried shutting down paola, power cycling the Ethernet power strip, and restarting the bind9 service on chiara, but remote control of the power strip remains elusive. I suspect reconfiguring the static IP for the Ethernet power strip will require some serial-port-enabled device... A minimal sketch of a quick check for this kind of IP conflict is below.
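For reference, a minimal sketch of one way to check who is actually answering on the power strip's address from a Linux workstation; the IP and expected MAC below are placeholders:

  # Sketch only: see which MAC address is answering for the power strip's static IP.
  # The IP and expected MAC are PLACEHOLDERS -- use the values for the EY strip.
  import subprocess

  STRIP_IP = '192.168.1.100'                    # placeholder static IP
  EXPECTED_MAC = '00:11:22:33:44:55'            # placeholder, from the strip's label

  subprocess.run(['ping', '-c', '2', STRIP_IP])         # populate the ARP cache

  with open('/proc/net/arp') as f:
      next(f)                                   # skip the header line
      for line in f:
          fields = line.split()
          if fields[0] == STRIP_IP:
              mac = fields[3]
              if mac.lower() == EXPECTED_MAC.lower():
                  print('ARP entry matches the strip - no conflict visible')
              else:
                  print('%s answers with MAC %s - another host (paola?) may have claimed it' % (STRIP_IP, mac))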

  14415   Wed Jan 23 23:12:44 2019 gautamUpdateSUSPrep for FC cleaning

In preparation for the FC cleaning, I did the following:

  1. Set up mini-cleanroom at EY - this consists of the mobile HEPA unit put up against the chamber door, with films draped around the setup.
  2. After double-checking the table leveling, I EQ-stopped ETMY and moved it to the NE corner of the EY table, where it will be cleaned.
  3. Checked leveling of IY table - see Attachment #1.
  4. Took pictures of IY table, OSEM arrangement on ITMY.
  5. EQ-stopped ITMY and SRM.
  6. Removed the face OSEMs from ITMY (this required clipping off the copper wire used to hold the OSEM wires against the suspension cage). The side OSEM has not yet been removed because I left the allen key that is compatible with that particular screw inside the EY chamber. 
  7. To position ITMY at the edge of the IY table where we can easily clean it, we will need to move the OSEM cabling tower as we did last time. I've taken photos of its current position for now.

Tomorrow, I will start with the cleaning of ETMY HR. While the FC is drying, I will position ITMY at the edge of the IY table for cleaning (Chub will set up the mini-cleanroom at the IY table). The plan is to clean both HR surfaces and have the optics back in place by tomorrow evening. By my count, we have done everything on the list for the IY and EY chambers. I'd like to minimize the time between cleaning and pumpdown, so if all goes well (Sat Box problems notwithstanding), we will check the table leveling on Friday morning, put on the heavy doors, and at least rough the main volume down to 1 torr on Friday.

Attachment 1: IY_level_before.pdf
IY_level_before.pdf