ID | Date | Author | Type | Category | Subject
14373 | Thu Dec 20 10:28:43 2018 |
gautam | Update | VAC | Heavy doors back on for pumpdown 82 | [Chub, Koji, Gautam]
We replaced the EY and IOO chamber heavy doors by 10:10 am PST. Torquing was done first one round at 25 ft-lb, next at 45 ft-lb (we trust the calibration on the torque wrench, but how reliable is this? And how important are these numbers in ensuring a smooth pumpdown?). All went smoothly. The interior of the IOO chamber was found to be dirty when Koji ran a wipe along some surfaces.
For this pumpdown, we aren't so concerned with having the IFO in an operating state as we will certainly vent it again early next year. So we didn't follow the full close-up checklist.
Jon and Chub and Koji are working on starting the pumpdown now... In order to not have to wear laser safety goggles while we closed doors and pumped down, I turned off all the 1064nm lasers in the lab. |
14374 | Thu Dec 20 17:17:41 2018 |
gautam | Update | CDS | Logging of new Vacuum channels | Added the following channels to C0EDCU.ini:
[C1:Vac-P1b_pressure]
units=torr
[C1:Vac-PRP_pressure]
units=torr
[C1:Vac-PTP2_pressure]
units=torr
[C1:Vac-PTP3_pressure]
units=torr
[C1:Vac-TP2_rot]
units=kRPM
[C1:Vac-TP3_rot]
units=kRPM
Also modified the old P1 channel to
[C1:Vac-P1a_pressure]
units=torr
Unfortunately, we realized too late that we don't have these channels in the frames, so we don't have the data from this test pumpdown logged, but we will have future stuff. I say we should also log diagnostics from the pumps, such as temperature, current etc. After making the changes, I restarted the daqd processes.
Things to add to ASA wiki page once the wiki comes back online:
- What is the safe way to clean the cryo pump if we want to use it again?
- What are safe conditions to turn the RGA on?
14376 | Fri Dec 21 11:11:51 2018 |
gautam | Update | CDS | Logging of new Vacuum channels | The N2 pressure channel name was also wrong in C0EDCU.ini, so I updated it this morning to the correct name and units:
[C1:Vac-N2_pressure]
units=psi
Now it too is being recorded to frames. |
14377 | Fri Dec 21 11:13:13 2018 |
gautam | Omnistructure | VAC | N2 line valved off | Per the discussion yesterday, I valved off the N2 line in the drill press room at 11 am PST this morning so as to avoid any accidental software-induced gate-valve actuation during the holidays. The line pressure is steadily dropping...
Attachment #1 shows that while the main volume pressure was stable overnight, the pumpspool pressure has been steadily rising. I think this is to be expected, as the turbo pumps aren't running and the valves can't preserve the <1 mtorr pressure over long timescales?
Attachment #2 shows the current VacOverview MEDM screen status. |
Attachment 1: VacGauges.png
Attachment 2: Screenshot_from_2018-12-21_13-02-06.png
14380 | Thu Jan 3 15:08:37 2019 |
gautam | Omnistructure | VAC | Vac status unknown | Larry W came by the 40m, and reported that there was a campus-wide power glitch (he was here to check if our networking infrastructure was affected). I thought I'd check the status of the vacuum.
- Attachment #1 is a screenshot of the Vac overview MEDM screen. Clearly something has gone wrong with the modbus process(es). Only the PTP2 and PTP3 gauges seem to be communicative.
- Attachment #2 shows the minute trend of the pressure gauges for a 12 day period - it looks like there is some issue with the frame builder clock, perhaps this issue resurfaced? But checking the system time on FB doesn't suggest anything is wrong.. I double checked with dataviewer as well that the trends don't exist... But checking the status of the individual daqd processes indeed showed that the dates were off by 1 year, so I just restarted all of them and now the time seems correct. How can we fix this problem more permanently? Also, the P1b readout looks suspicious - why are there periods where it seems like we are reading values better than the LSB of the device?
I decided to check the systemctl process status on c1vac:
controls@c1vac:~$ sudo systemctl status modbusIOC.service
● modbusIOC.service - ModbusIOC Service via procServ
Loaded: loaded (/etc/systemd/system/modbusIOC.service; enabled)
Active: active (running) since Thu 2019-01-03 14:53:49 PST; 11min ago
Main PID: 16533 (procServ)
CGroup: /system.slice/modbusIOC.service
├─16533 /usr/bin/procServ -f -L /opt/target/modbusIOC.log -p /run/...
├─16534 /opt/epics/modules/modbus/bin/linux-x86_64/modbusApp /opt/...
└─16582 caRepeater
Jan 03 14:53:49 c1vac systemd[1]: Started ModbusIOC Service via procServ.
Warning: Unit file changed on disk, 'systemctl daemon-reload' recommended.
So something did happen today that required restart of the modbus processes. But clearly not everything has come back up gracefully. A few lines of dmesg (there are many more segfaults):
[1706033.718061] python[23971]: segfault at 8 ip 000000000049b37d sp 00007fbae2b5fa10 error 4 in python2.7[400000+31d000]
[1706252.225984] python[24183]: segfault at 8 ip 000000000049b37d sp 00007fd3fa365a10 error 4 in python2.7[400000+31d000]
[1720961.451787] systemd-udevd[4076]: starting version 215
[1782064.269844] audit: type=1702 audit(1546540443.159:38): op=linkat ppid=21820 pid=22823 auid=4294967295 uid=1000 gid=1000 euid=1000 suid=1000 fsuid=1000 egid=1000 sgid=1000 fsgid=1000 tty=pts0 ses=4294967295 comm="git" exe="/usr/bin/git" res=0
[1782064.269866] audit: type=1302 audit(1546540443.159:39): item=0 name="/cvs/cds/caltech/target/c1vac/.git/objects/85/tmp_obj_uAXhPg" inode=173019272 dev=00:21 mode=0100444 ouid=1001 ogid=1001 rdev=00:00 nametype=NORMAL
[1782064.365240] audit: type=1702 audit(1546540443.255:40): op=linkat ppid=21820 pid=22823 auid=4294967295 uid=1000 gid=1000 euid=1000 suid=1000 fsuid=1000 egid=1000 sgid=1000 fsgid=1000 tty=pts0 ses=4294967295 comm="git" exe="/usr/bin/git" res=0
[1782064.365271] audit: type=1302 audit(1546540443.255:41): item=0 name="/cvs/cds/caltech/target/c1vac/.git/objects/58/tmp_obj_KekHsn" inode=173019274 dev=00:21 mode=0100444 ouid=1001 ogid=1001 rdev=00:00 nametype=NORMAL
[1782064.460620] audit: type=1702 audit(1546540443.347:42): op=linkat ppid=21820 pid=22823 auid=4294967295 uid=1000 gid=1000 euid=1000 suid=1000 fsuid=1000 egid=1000 sgid=1000 fsgid=1000 tty=pts0 ses=4294967295 comm="git" exe="/usr/bin/git" res=0
[1782064.460652] audit: type=1302 audit(1546540443.347:43): item=0 name="/cvs/cds/caltech/target/c1vac/.git/objects/cb/tmp_obj_q62Pdr" inode=173019276 dev=00:21 mode=0100444 ouid=1001 ogid=1001 rdev=00:00 nametype=NORMAL
[1782064.545449] audit: type=1702 audit(1546540443.435:44): op=linkat ppid=21820 pid=22823 auid=4294967295 uid=1000 gid=1000 euid=1000 suid=1000 fsuid=1000 egid=1000 sgid=1000 fsgid=1000 tty=pts0 ses=4294967295 comm="git" exe="/usr/bin/git" res=0
[1782064.545480] audit: type=1302 audit(1546540443.435:45): item=0 name="/cvs/cds/caltech/target/c1vac/.git/objects/e3/tmp_obj_gPI4qy" inode=173019277 dev=00:21 mode=0100444 ouid=1001 ogid=1001 rdev=00:00 nametype=NORMAL
[1782064.640756] audit: type=1702 audit(1546540443.527:46): op=linkat ppid=21820 pid=22823 auid=4294967295 uid=1000 gid=1000 euid=1000 suid=1000 fsuid=1000 egid=1000 sgid=1000 fsgid=1000 tty=pts0 ses=4294967295 comm="git" exe="/usr/bin/git" res=0
[1783440.878997] systemd[1]: Unit serial_TP3.service entered failed state.
[1784682.147280] systemd[1]: Unit serial_TP2.service entered failed state.
[1786407.752386] systemd[1]: Unit serial_MKS937b.service entered failed state.
[1792371.508317] systemd[1]: serial_GP316a.service failed to run 'start' task: No such file or directory
[1795550.281623] systemd[1]: Unit serial_GP316b.service entered failed state.
[1796216.213269] systemd[1]: Unit serial_TP3.service entered failed state.
[1796518.976841] systemd[1]: Unit serial_GP307.service entered failed state.
[1796670.328649] systemd[1]: serial_Hornet.service failed to run 'start' task: No such file or directory
[1797723.446084] systemd[1]: Unit serial_MKS937b.service entered failed state.
I don't know enough about the new system so I'm leaving this for Jon to debug. Attachment #3 shows that the analog readout of the P1 pressure gauge suggests that the IFO is still under vacuum, so no random valve openings were effected (as expected, since we valved off the N2 line for this very purpose). |
Attachment 1: Screenshot_from_2019-01-03_15-19-51.png
Attachment 2: Screenshot_from_2019-01-03_15-14-14.png
Attachment 3: 997B13A9-CAAF-409C-A6C2-00414D30A141.jpeg
14386 | Fri Jan 4 17:43:24 2019 |
gautam | Update | CDS | Timing issues | [J Hanks (remote), koji, gautam]
Summary:
The problem stems from the way GPS timing signals are handled by the FEs and FB. We effected a partial fix:
- Now, old frame data is no longer being overwritten
- For the channels that are indeed being recorded now, the correct time stamp is being applied so they can be found in /frames by looking for the appropriate gpstime.
Details:
- The usual FE/FB power cycling did not fix the problem.
- The gps time used by FB and associated RT processes may be found by using cat /proc/gps (i.e. this is different from the system time found by using date, or gpstime).
- This was off by 2 years.
- The way this worked up till now was by adding a fixed offset to this time.
- This offset can be found as a line saying set symm_gps_offset=31622382 in daqdrc.fw (for example) - see the note after this list for a guess at where that particular number comes from
- There were similar lines in daqdrc.rcv and daqdrc.dc - however, they were not all the same offset! We couldn't figure out why.
- All these files live in /opt/rtcds/caltech/c1/target/daqd/.
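[A guess at the origin of the 31622382 offset - not something we verified with J Hanks: it looks like one leap year's worth of seconds minus the current GPS-UTC leap second count (18 s since the end of 2016), i.e. 366 days x 86400 s/day = 31,622,400 s, and 31,622,400 - 18 = 31,622,382 s, which matches the daqdrc.fw value exactly.]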
Changes effected:
- First, we tried changing the offset in the daqdrc.fw file only.
- Incremented it by 24*60*60*365 = number of seconds in a year with no leap seconds/days.
- This did not fix the problem.
- So J Hanks decided to rebuild the Spectracom driver (these commands may not be comprehensive, but I think I got everything).
- The relevant file is spectracomGPS.c (made a copy of /usr/src/symmetricom-3.3~rc1, called symmetricom-3.3~rc1-patched, this file is in /usr/src/symmetricom-3.3~rc1-patched/include/drv)
- Added the following lines:
/* 2018 had no leap seconds or leap days, so adjust for that */
pHardware->gpsOffset += 31536000;
- re-built and installed the modified symmetricom driver.
- Checked that cat /proc/gps now yields the correct time.
- Reset the gps time offsets in daqdrc.fw, daqdrc.rcv and daqdrc.dc to 0
- With these steps, the frames were being written to /frames with the correct timestamp.
- Next, we checked the timing on the FEs
- Basically, J Hanks rebuilt the version of the symmetricom driver that is used by the rtcds models to mimic the changes made for FB.
- This did the trick for c1lsc and c1ioo - cat /proc/gps now returns the correct time on those FEs.
- However, c1sus remains problematic (it initially reported a GPS time from 2 years ago, and even after the re-installed driver, is 4 days behind) - he suspects that this is because c1sus is the only FE with a Symmetricom/Spectracom card installed in the I/O chassis. So c1sus reports a gpstime that is ~4 days behind the "correct" gpstime.
- He is going to check with Rolf/other CDS experts to figure out if it's okay for us to simply remove the card and run the models, or if we need to take other steps.
- As part of this work, the c1x02 IOP model was recompiled, re-installed and re-started.
The realtime models were not restarted (although all the vertex FEs are running) - we can regroup next week and decide what is the correct course of action.
Quote: |
- Attachment #2 shows the minute trend of the pressure gauges for a 12 day period - it looks like there is some issue with the frame builder clock, perhaps this issue resurfaced? But checking the system time on FB doesn't suggest anything is wrong.. I double checked with dataviewer as well that the trends don't exist... But checking the status of the individual daqd processes indeed showed that the dates were off by 1 year, so I just restarted all of them and now the time seems correct. How can we fix this problem more permanently? Also, the P1b readout looks suspicious - why are there periods where it seems like we are reading values better than the LSB of the device?
14389 | Tue Jan 8 10:27:27 2019 |
gautam | Update | General | Near-term in-chamber work | Here is a list of tasks I think we should prioritize for the next two weeks. The idea is to get back to the previous state of being able to do single arm, PRMI-on-carrier and DRMI locking, before making further changes.
Once the new folding mirrors arrive, I'd like to modify the SRC length to allow locking in the signal-recycled config as opposed to RSE. Still need to do the detailed layout, but I think the in-vacuum layout will work. In that case, I'd like to also move the OMC and OMMT to the IY table, and also move the in-air AS photodiodes to the IY in-air optical table. This is why I've omitted the OMC alignment from this near-term list, but if we want to not move the OMC, then we probably should add alignment of the AS beam to the OMC to this list.
List of in-chamber tasks for 1/2019
Chamber | Task(s)
EY |
- Clean ETMY optic and suspension
- Put ETMY suspension back in place, recover Y-arm cavity alignment
- Remove any residual hardware from unused heater setup
- Restore parabolic heater setup, center radiation pattern as best as possible on ETMY
- Check beam position on IPANG steering mirror
IY |
- Clean ITMY optic and suspension cage
- Restore ITMY suspension, recover Y arm cavity alignment
- Check position of AS beam on OM1/OM2
BS/PRM (if we decide to open it) |
- Replace BS/PRM Oplev HeNe, bring the beam in and out of vacuum with the beam well centered on in-vacuum mirrors (can take this opportunity to fix the in-air layout as well to minimize unnecessary steering mirrors)
- Check position of AS beam on OM3/OM4, adjust if necessary
- Check position of IPPOS and IPANG beams on their respective steering optics
OMC (if we decide to open it) |
- Check position of AS beam on OM5/OM6
- Ensure AS beam exits the vacuum cleanly
14391 | Wed Jan 9 11:07:09 2019 |
gautam | Update | VAC | New Vac channel logging | Looks like I didn't restart all the daqd processes last night, so the data was not in fact being recorded to frames. I just restarted everything, and it looks like the data for the last 3 minutes are being recorded. Is it reasonable that the TP1 current channel is reporting 0.75 A of current draw now, when the pump is off? Also, the temperature readback of TP3 seems a lot jumpier than that of TP2 - probably has to do with the old controller having fewer ADC bits or something, but perhaps the SMOO needs to be adjusted.
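To make it easier to confirm that a newly added slow channel really is making it into the frames, something like the following could be run from a control room machine. This is only a minimal sketch: it assumes the nds2-client Python bindings are installed and that fb1 serves NDS on port 8088 (check the actual hostname/port), and the GPS time below is a placeholder.

import nds2

# connect to the framebuilder's NDS server (hostname/port are assumptions)
conn = nds2.connection('fb1', 8088)

# fetch one minute of a newly added slow channel and print basic stats
start = 1231000000  # placeholder GPS time - use one after the daqd restart
bufs = conn.fetch(start, start + 60, ['C1:Vac-P1a_pressure'])
data = bufs[0].data
print('npts = %d, min = %.3g torr, max = %.3g torr' % (len(data), data.min(), data.max()))

If the channel never made it into the frames, the fetch should throw an error rather than return data.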
Quote: |
Gautam and I updated the framebuilder config file, adding the newly-added channels to the list of those to be logged.
Attachment 1: Screenshot_from_2019-01-09_11-08-28.png
14392 | Wed Jan 9 11:33:35 2019 |
gautam | Update | CDS | Timing issues still persist | Summary:
The gps time mismatch between /proc/gps and gpstime seems to be resolved. However, the 0x4000 DC errors still persist. It is not clear to me why.
Details:
On the phone with J Hanks on Friday, he reminded me that c1sus seems to be the only machine with an IRIG-B timing card installed. I can't find the elog but I remembered that Jamie, ericq and I had done this work in 2016 (?), and I also remembered Jamie saying it wasn't working exactly as expected. Since the DAQ was working fine before this card was installed, and since there are no problems with the recording of channels from the other four FE machines without this card installed, I decided to simply pull out the card from the expansion chassis. The card has been stored in the CDS/FE cabinet along the Y arm for now. There was also a cable that interfaces to the card which brings over the 1pps from the GPS unit, which has also been stored in the CDS/FE cabinet.
This seems to have resolved the mismatch between the gpstime reported by cat /proc/gps and the gpstime commands - Attachment #1 (the <1 second mismatch is presumably due to the deadtime between commands). However, the 0x4000 DC errors still persist. I'll try the full power cycle of FEs and FB which has fixed this kind of error in the past, but apart from that, I'm out of ideas.
Update 1215:
Following the instructions in this elog did not fix the problem. The problem seems to be with the daqd_fw service, which reports the following:
controls@fb1:~ 0$ sudo systemctl status daqd_fw.service
● daqd_fw.service - Advanced LIGO RTS daqd frame writer
Loaded: loaded (/etc/systemd/system/daqd_fw.service; enabled)
Active: failed (Result: start-limit) since Wed 2019-01-09 12:17:12 PST; 2min 0s ago
Process: 2120 ExecStart=/usr/bin/daqd_fw -c /opt/rtcds/caltech/c1/target/daqd/daqdrc.fw (code=killed, signal=ABRT)
Main PID: 2120 (code=killed, signal=ABRT)
Jan 09 12:17:12 fb1 systemd[1]: Unit daqd_fw.service entered failed state.
Jan 09 12:17:12 fb1 systemd[1]: daqd_fw.service holdoff time over, scheduling restart.
Jan 09 12:17:12 fb1 systemd[1]: Stopping Advanced LIGO RTS daqd frame writer...
Jan 09 12:17:12 fb1 systemd[1]: Starting Advanced LIGO RTS daqd frame writer...
Jan 09 12:17:12 fb1 systemd[1]: daqd_fw.service start request repeated too quickly, refusing to start.
Jan 09 12:17:12 fb1 systemd[1]: Failed to start Advanced LIGO RTS daqd frame writer.
Jan 09 12:17:12 fb1 systemd[1]: Unit daqd_fw.service entered failed state.
Update 1530:
The frame-writer error was tracked down to a C0EDCU issue. Jon told me that the Hornet CC1 pressure gauge channel was renamed to C1:Vac-CC1_pressure, and I made the change in the C0EDCU file. However, it returns a value of 9990000000.0, which the frame writer is not happy about... Keeping the old channel name makes the frame-writer run again (although the actual data is bunk).
Update 1755:
J Hanks suggested adding a 1 second offset to the daqdrc config files. This has now fixed the 0x4000 errors, and we are back to the "nominal" RTCDS status screen now - Attachment #2. |
Attachment 1: gpstimeSync.png
Attachment 2: Screenshot_from_2019-01-09_17-56-58.png
14394 | Thu Jan 10 10:23:46 2019 |
gautam | Update | VAC | overnight leak rate | Overnight, the pressure increased from 247 uTorr to 264 uTorr over a period of 30000 seconds. Assuming an IFO volume of 33,000 liters, this corresponds to an average leak rate of ~20 uTorr L / s. It'd be interesting to see how this compares with the spec'd leak rates of the Viton O-ring seals and valves/ outgassing rates. The two channels in the screenshot are monitoring the same pressure from the same sensor, top pane is a digital readout while the bottom is a calibrated analog readout that is subsequently digitized into the CDS system.
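For the record, the arithmetic behind that number (a back-of-the-envelope estimate, assuming a linear pressure rise and taking the 33,000 liter volume figure at face value):

Q = dP x V / dt = (264 - 247) uTorr x 33,000 L / 30,000 s ≈ 19 uTorr L/s ≈ 2e-5 torr L/s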
Quote: |
We allowed the pumpdown to continue until reaching 9e-4 torr in the main volume. At this point we valved off the main volume, valved off TP2 and TP3, and then shut down all turbo pumps/dry pumps. We will continue pumping tomorrow under the supervision of an operator. If the system continues to perform problem-free, we will likely leave the turbos pumping on the main volume and annuli after tomorrow.
Attachment 1: OvernightLeak.png
14397 | Fri Jan 11 16:38:57 2019 |
gautam | Update | General | Some alignment checks | The pumpdown seems to be progressing smoothly, so I think we are going to stick with the plan decided on Wednesday, and vent the IFO on Monday at 8am. I decided to do some checks of the IFO alignment.
I turned on the PSL again (so goggles are advisable again inside the VEA until this work is done), re-locked the PMC, and opened the PSL shutter into the vacuum (still low power 100 mW beam going into vacuum). The IMC alignment required minor tweaking, but I recovered ~1300 cts transmission which is what it was --> so we didn't macroscopically change the input pointing into the IMC while working on the IOO table.
Centering the ITMY oplev spot, there is a spot on the AS camera roughly centered on the control room monitor, so the TT pointing must also be pretty close.
Then I centered the ITMY oplev spot to check how well aligned (or otherwise) the Michelson was - the BS has no Oplev, so there was considerable angular motion of the Michelson spot, but it looked like, on average, it was swinging through a well-aligned place. I saved the slow bias voltages for the ITMs and BS in this config.
Then I re-aligned ETMX and checked the green transmission - it was okay, ~0.3, and I was able to increase it to ~0.4 using the EX green PZT mirrors. So far so good.
Finally, I tried to lock the X-arm on IR - after zeroing the offsets on the transmission QPD, there seem to be a few flashes as the cavity swings through resonances, but no discernible PDH error signal. Moreover, the input pointing of the IR into the X arm is controlled by the BS, which is swinging around all over the place right now, so perhaps locking is hopeless, but the overall alignment of the IFO seems not too bad. Once ETMY is cleaned and put back in place, perhaps the Y arm can be locked.
I shuttered the PSL and inserted a manual beam block, and also turned off the EX laser so that we can vent on Monday without laser goggles.
*Not directly related to this work: we still have to implement the vacuum interlock condition that closes the PSL shutter in the event of a vacuum failure. It's probably fine now while the PSL power is attenuated, but once we have the high power beam going in, it'd be good to revert to the old standard. |
Attachment 1: pd82.png
Attachment 2: LSC_X.png
14398 | Mon Jan 14 10:06:53 2019 |
gautam | Update | VAC | Vent 82 complete | [chub, gautam]
- IFO pressure was ~2e-4 torr when we started, on account of the interlock code closing all valves because the N2 line pressure dropped below threshold (<65 psi)
- Chub fixed the problem on the regulator in the drill-press area where the N2 tanks are, the N2 line is now at ~75 psi so that we have the ability to actuate valves if we so desire
- We decided that there is no need to vent the pumpspool this time - avoiding an unnecessary turbo landing, so the pumpspool is completely valved off from the main volume and TPs 1-3 are left running
- Went through the pre-vent checklist:
- Chub measured particle count, deemed it to be okay (I think we should re-locate the particle counter to near 1X8 because that is where the air enters the IFO anyways, and that way, we can hook it up to the serial device server and have a computerized record of this number as we had in the past, instead of writing it down in a notebook)
- Checked that the PSL was manually blocked from entering the IFO
- Walked through the lab, visually inspected Jam Nuts and window covers, all was deemed okay
- Moved 2 tanks of N2 into the lab on account of the rain
- Started the vent at ~930am PST
- There were a couple of short bursty increases in the pressure as we figured out the right valve settings but on average, things are rising at approx the same rate as we had in vent 81...
- There was a rattling noise coming from the drypump that is the forepump for TP2 (Agilent) - it turned out to be the plastic shell/casing on the drypump; moreover, the TP2 diagnostics (temperature, current etc.) are all normal.
- The CC1 gauge (Hornet) is supposed to have an auto-shutoff of its High Voltage when the pressure exceeds 10 mTorr, but it was reporting pressures in the 1 mTorr range even when the adjacent Pirani was at 25 torr. To avoid risk of damage, we manually turned the HV off. There needs to be a python script that can be executed to transition control between the remote and local control modes for the Hornet; we had to power cycle the gauge because it wouldn't give us local control over the HV.
- Transitioned from N2 to dry air at P1a = 25 torr. We had some trouble finding the correct regulator (left-handed thread) for the dry air cylinders; it was stored in a cabinet labelled "green optics".
- Disconnected dry air from VV1 intake once P1b reached 700 torr, to let lab air flow into the IFO and avoid overpressuring.
- VA* and VAV* valves were opened so as to vent the annuli as we anticipate multiple chamber openings for this vent.
As of 8pm local time, the IFO seems to have equilibrated to atmospheric pressure (I don't hear the hiss of in-rushing air near 1X8 and P1a reports 760 torr). The pumpspool looks healthy and there are no signs in the TP diagnostics channels that anything bad happened to the pumps. Chub is working on getting the N2 setup more robust; we plan to take the EY door off at 9am tomorrow morning with Bob's help.
* I took this opportunity to follow instructions on pg 29 of the manual and set the calibration for the SuperBee pirani gauge to 760 torr so that it is in better agreement with our existing P1a Pirani gauge. The correction was ~8% (820-->760). |
Attachment 1: Vent82Summary.png
14399 | Tue Jan 15 10:52:38 2019 |
gautam | Update | SUS | EY door opened | [chub, bob, gautam]
We took the heavy door off the EY chamber at ~930am.
Chamber work:
- ETMY suspension cage was returned to its nominal position.
- Unused hardware from the annular heater setup was removed.
- The unused heater had its leads snipped close to the heater crimp point, and the exposed part of the bare wires was covered with Kapton tape (we should remove the source leads as well in air to avoid any accidental shorting)
Waiting for the table to level off now. Plan for later today / tomorrow is as follows:
- Lock the Y arm, recover good cavity alignment.
- Position parabolic heater such that clipping issue is resolved.
- Move optic to edge of table for FC cleaning
- Clean optic
- Return suspension cage to nominal position.
14400 | Tue Jan 15 15:27:36 2019 |
gautam | Update | General | Lasers and other stuff turned back on | VEA is now a laser hazard area as usual; several 1064nm lasers in the lab have been turned back on. Apart from this:
- the IFR was reset to the nominal modulation settings of +13dBm output at 11.066209 MHz (this has to be done manually following each power failure).
- The temperature control unit for the EY doubling oven PID control was turned back on.
- The EY Oplev HeNe was turned back on.
- EY green PZT HV Kepco was turned back on.
14401 | Tue Jan 15 15:49:47 2019 |
gautam | Update | SUS | EY door opened | While restoring OSEMs on ETMY, I noticed that the open voltages for the UR and LL OSEMs had significantly (>30%) changed from their values from ~2 years ago. The fact that it only occurred in 2 coils seemed to rule out gradual wear and tear, so I looked up the trends from Nov 25 - Nov 28 (Sundance visited on Nov 26 which is when we removed the cage). Not surprisingly, these are the exact two OSEMs that show a decrease in sensor voltage when the OSEMs were pulled out. I suspect that when I placed them in their little Al foil boats, I shorted out some contacts on the rear (this is reminiscent of the problem we had on PRM in 2016). I hope the problem is with the current buffer IC in the satellite box and not the physical diode, I'll test with the tester box and evaluate the problem further.
Chamber work by Chub and gautam:
- Table leveling was checked with a clean spirit level
- Leveling was substantially off in two orthogonal directions, along the beam axis as well as perpendicular to it.
- We moved almost all the weights available on the table.
- Managed to get the leveling correct to within 1 tick on the level.
- We are not too worried about this for now, the final leveling will be after heater repositioning, ETMY cleaning etc.
- ETMY OSEM re-insertion
- OSEMs were re-inserted till their mean voltage was ~ half the open values.
- Local damping seems to work just fine.
|
Attachment 1: EY_OSEMs.png
14402 | Tue Jan 15 18:16:00 2019 |
gautam | Update | Optical Levers | ETMY Oplev HeNe needs replacement | Perhaps the ETMY Oplev HeNe is also giving up - the power has fallen by ~30% over 1 year (Attachment #2), nearly twice as much as ETMX but the RIN spectrum (Attachment #1, didn't even need to rotate it!) certainly seems suspicious. Some "nominal" RIN levels for HeNes can be found earlier in this thread. I can't close any of the EY Oplev loops in this condition. I'll double check to make sure I'm routing the right beam onto the QPD, but if the problem persists, I'll replace the HeNe. ITMX HeNe also looks to be near EOL.
Quote: |
Finally I realized what is killing the ETMY oplev laser. Wrong power supply, it was driving the HeNe laser by 600V higher voltage than recommended. Power supply 101T-2300Vdc replaced by 101T-1700Vdc (Uniphase model 1201-1, sn 2712420)
The laser head 1103P, sn P947049 lived for 120 days and it was replaced by sn P964431 New laser output 2.8 mW, quadrant sum 19,750 counts
Attachment 1: OLRIN.pdf
Attachment 2: OLsums.png
14403 | Wed Jan 16 16:25:25 2019 |
gautam | Update | SUS | Yarm locked | [chub, gautam]
Summary:
Y arm was locked at low power in air.
Details:
- ITMY chamber door was removed at ~10am with Bob's help.
- ETMY table leveling was found to have drifted significantly (3 ticks on the spirit level, while it was more or less level yesterday, should look up the calib of the spirit level into mrad). Chub moved some weights around on the table, we will check the leveling again tomorrow.
- IMC was locked.
- TT2 and the brass alignment tool were used to center the beam on ETMY.
- TT1 and the brass alignment tool were used to center the beam on ITMY. We had to do a c1susaux reboot to be able to move ITMY. Usual precautions were taken to avoid ITMX getting stuck.
- ETMY was used to make return beam from the ETM overlap with the in-going beam near ITMY, using a holey IR card.
- At this point, I was confident we would see IR flashes so I decided to do the fine alignment in the control room.
We are operating with 1/10th the input power we normally have, so we expect the IR transmission of the Y arm to max out at 1 when well aligned. However, it is hovering around 0.05 right now, and the dominant source of instability is the angular motion of ETMY due to the Oplev loop being non-functional. I am hesitant to do in-chamber work without an extra pair of eyes/hands around, so I'll defer that for tomorrow morning when Chub gets in. With the cavity axis well defined, I plan to align the green beam to this axis, and use the two to confirm that we are well clear of the Parabola.
* Paola, our vertex laptop, and indeed most of the laptops inside the VEA, are not ideal for working on this kind of alignment procedure; it would be good to set up some workstations on which we can easily interact with multiple MEDM screens. |
Attachment 1: Yarm_locked.png
14404 | Fri Jan 18 12:52:07 2019 |
gautam | Update | Optical Levers | BS/PRM Oplev HeNe replaced | I replaced the BS/PRM Oplev HeNe with one of the heads from the SP table where Steve was setting up the OL RIN/pointing noise experiment. The old one was dead. The new one outputs 3.2 mW of power, I've labelled it with this number, serial number and date of replacement. The beam comes out of the vacuum chamber for both the BS and PRM, and the RIN spectra (Attachment #1) look alright. The calibration into urad and loop gains possibly have to be tweaked. Since the beam comes out of vacuum, I say that we shouldn't open the BS/PRM chamber for this vent - we don't have a proper plan for the in-air layout yet, so we can add this to the list of to-dos for the next vent.
I think we are down to our last spare HeNe head in the lab - @Chub, please look into ordering some more, the ITMX HeNe is going to need replacement soon. |
Attachment 1: OLRIN_20190118.pdf
14405 | Fri Jan 18 15:34:37 2019 |
gautam | Update | Thermal Compensation | Elliptical reflector part number | Nobody documented this, but here is the part number with mechanical drawings of the elliptical reflector installed at EY: Optiforms E180. Heater is from Induceramics, but I can't find the exact part which matches the dimensions of the heater we have (diameter = 3.8mm, length = 30mm), perhaps it's a custom part?
The geometry dictates that if we want the heater to be at one focus and the ETM to be at the other, the separation has to be 7.1 inches. It certainly wasn't arranged this way before. It seems unrealistic to do this without clipping the main beam, so I propose we leave sufficient clearance between the main beam and the reflector, and accept the reduced heating efficiency.
Thanks to Steve for digging this up from his secret stash. |
14406 | Fri Jan 18 17:44:14 2019 |
gautam | Update | VAC | Pumping on RGA volume | Steve came by the lab today, and looked at the status of the upgraded vacuum system. He recommended pumping on the RGA volume, since it has not been pumped on for ~3 months on account of the vacuum upgrade. The procedure (noted step by step so we may script this operation in the future - a rough scripting sketch is appended at the end of this entry) was:
- Start with the pumpspool completely isolated from the main IFO volume.
- Open V5, pump down the section between V5 and VM3. Keep an eye on PTP3.
- Open VM3, keep an eye on P4. It was reporting ~10 mtorr, went to "LO".
- Close VM3 and V5, transition pumping of the RGA volume to TP1 which is backed by TP2 (we had to open V4 as all valves were closed due to an N2 pressure drop event).
- Open VM2.
- Watch CC4.
CC4 pressure has been steadily falling. Steve recommends leaving things in this state over the weekend. He recommends also turning the RGA unit on so that the temperature rises and there is a bakeout of the RGA. The temperature may be read off manually using a probe attached to it. |
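Since the point of writing the sequence down is to eventually script it, here is a very rough sketch of what such a script might look like. This is a sketch only: the EPICS channel names for the valve controls and gauges are placeholders I made up (the real names live in the c1vac database/MEDM screens), the pressure thresholds are arbitrary, and any real version must defer to the interlock logic rather than bypass it.

import time
import epics  # pyepics

def wait_below(gauge_pv, threshold_torr, timeout_s=600):
    # poll a pressure gauge PV until it drops below threshold (or time out)
    t0 = time.time()
    while time.time() - t0 < timeout_s:
        val = epics.caget(gauge_pv)
        if val is not None and val < threshold_torr:
            return True
        time.sleep(5)
    raise RuntimeError('%s did not drop below %g torr' % (gauge_pv, threshold_torr))

# Placeholder PV names - NOT the real c1vac channel names
epics.caput('C1:Vac-V5_state', 'OPEN')          # pump the section between V5 and VM3
wait_below('C1:Vac-PTP3_pressure', 1e-2)        # keep an eye on PTP3
epics.caput('C1:Vac-VM3_state', 'OPEN')         # expose the RGA volume, watch P4
wait_below('C1:Vac-P4_pressure', 1e-2)
epics.caput('C1:Vac-VM3_state', 'CLOSE')        # hand the RGA volume over to TP1
epics.caput('C1:Vac-V5_state', 'CLOSE')
epics.caput('C1:Vac-VM2_state', 'OPEN')
print('CC4 reads %g torr' % epics.caget('C1:Vac-CC4_pressure'))

Obviously the real script would also need the same N2-pressure and gauge-health checks that the interlock code performs before touching any valve.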
Attachment 1: CC4.png
14407 | Fri Jan 18 21:34:18 2019 |
gautam | Update | SUS | Unused optic on EY table | Does anyone know what the purpose of the indicated optic in Attachment #1 is? Can we remove it? It will allow a little more space around the elliptical reflector... |
Attachment 1: IMG_5408.JPG
14409 | Sat Jan 19 15:33:18 2019 |
gautam | Update | SUS | ETMY OSEMs faulty | After diagnosis with the tester box, as I suspected, the fully open DC voltages on the two problematic channels, LL and UR, were restored once I replaced the LM6321 ICs in those two channel paths. However, I've been puzzled by the inability to turn on the Oplev loops on ETMY. Furthermore, the DC bias voltages required to get ETMY to line up with the cavity axis seemed excessively large, particularly since we seemed to have improved the table levelling.
I suspected that the problem with the OSEMs hasn't been fully resolved, so on Thursday night, I turned off the ETMY watchdog, kicked the optic, and let it ringdown. Then I looked at the time-series (Attachment #1) and spectra (Attachment #2) of the ringdowns. Clearly, the LL channel seems to saturate at the lower end at ~440 counts. Moreover, in the time domain, it looks like the other channels see the ringdown cleanly, but I don't see the various suspension eigenmodes in any of the sensor signals. I confirmed that all the magnets are still attached to the optic, and that the EQ stops are well clear of the optic, so I'm inclined to think that this behavior is due to an electrical fault rather than a mechanical one.
For now, I'll start by repeating the ringdown with a switched out Satellite Box (SRM) and see if that fixes the problem.
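For anyone repeating this kind of kick-and-ringdown test, the analysis is just a time series plus an amplitude spectral density of each shadow sensor signal, looking for the ~1 Hz suspension eigenmodes. A minimal sketch, assuming the sensor data has already been exported (e.g. via dataviewer/NDS) to a text file with one column per OSEM; the file name, sample rate, and column ordering below are assumptions.

import numpy as np
from scipy import signal
import matplotlib.pyplot as plt

fs = 16.0  # Hz, assumed sample rate of the exported channels
data = np.loadtxt('etmy_ringdown.txt')  # columns: UL, UR, LR, LL, SD (assumed)
labels = ['UL', 'UR', 'LR', 'LL', 'SD']

for i, lab in enumerate(labels):
    # Welch PSD -> ASD in counts/rtHz; eigenmodes should show up near ~1 Hz
    f, pxx = signal.welch(data[:, i], fs=fs, nperseg=1024)
    plt.loglog(f, np.sqrt(pxx), label=lab)

plt.xlabel('Frequency [Hz]')
plt.ylabel('ASD [counts/rtHz]')
plt.legend()
plt.savefig('etmy_ringdown_asd.pdf')

A channel that saturates (like LL at ~440 counts) shows up as a flattened bottom in the time series and excess harmonics in the spectrum.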
Quote: |
While restoring OSEMs on ETMY, I noticed that the open voltages for the UR and LL OSEMs had significantly (>30%) changed from their values from ~2 years ago. The fact that it only occurred in 2 coils seemed to rule out gradual wear and tear, so I looked up the trends from Nov 25 - Nov 28 (Sundance visited on Nov 26 which is when we removed the cage). Not surprisingly, these are the exact two OSEMs that show a decrease in sensor voltage when the OSEMs were pulled out. I suspect that when I placed them in their little Al foil boats, I shorted out some contacts on the rear (this is reminiscent of the problem we had on PRM in 2016). I hope the problem is with the current buffer IC in the satellite box and not the physical diode, I'll test with the tester box and evaluate the problem further.
Attachment 1: Screen_Shot_2019-01-19_at_3.32.35_PM.png
Attachment 2: ETMY_sensors_1231832635.pdf
14411 | Tue Jan 22 20:36:53 2019 |
gautam | Update | SUS | ETMY OSEMs faulty | Short update on latest Satellite box woes.
- I checked the resistance of all 5 OSEM coils on ETMY using a DB25 breakout board and a multimeter - all were between 16-17 ohms (measured from the cable to the vacuum flange), which I think is consistent with the expected value.
- Checked that the bias voltage (aka slow path) from the coil driver board was reaching the coils
- The voltages were indeed being sent out of the coil driver board - I confirmed by driving a slow sine wave and measuring at the output of the coil driver board, with all the fast outputs disabled.
- The voltage is arriving at the 64 pin IDC connector at the Satellite box - Chub and I verified this using some mini-grabbers and leads from wirewound resistors (we don't have a breakout board for this kind of connector, would be handy to get some!)
- However, the voltages are not being sent out through the DB25 connectors on the side of the Satellite box, at least for the LL and UR channels. UL seems to work okay.
- This behavior is consistent with the observation that we had to apply way larger bias voltages to get the cavity axis to line up than was the nominal values - if one or more coils weren't getting their signals, it would also explain the large PIT->YAW coupling I observed using the Oplev spot and the slow bias alignment EPICS sliders.
- This behavior is puzzling - the Sat box is just supposed to be a feed-through for the coil driver signals, and we measured the resistances between the 64 pin IDC connector and the corresponding DB25 pins to be in the range of 0.2-0.3 ohms. However, the voltage fails to make it through - not sure what's going on here... We will investigate further on the electronics bench.
What's more - I did some Sat box switcheroo, swapping the SRM and ETM boxes back and forth in combination with the tester box. In the process, I seem to have broken the SRM sat box - all the shadow sensors are reporting close to 0 volts, and this was confirmed to be an electronic problem as opposed to some magnet skullduggery using the tester box. Once we get to the bottom of the ETMY sat box, we will look at SRM. This is more or less the last thing to look at for this vent - once we are happy the cavity axis can be recovered reliably, we can freeze the position of the elliptical reflector and begin the F.C.ing. |
14412 | Tue Jan 22 20:45:21 2019 |
gautam | Update | VAC | New N2 setup | The N2 ran out this weekend (again no reminder email, but I haven't found the time to setup the Python mailer yet). So all the valves Steve and I had opened, closed (rightly so, that's what the interlocks are supposed to do). Chub will post an elog about the new N2 valve setup in the Drill-press room, but we now have sufficient line pressure in the N2 line again. So Chub and I re-opened the valves to keep pumping on the RGA. |
14413 | Wed Jan 23 12:39:18 2019 |
gautam | Update | SUS | EY chamber work | While Chub is making new cables for the EY satellite box...
- I removed the unused optic on the NW corner of the EY table. It is stored in a clean Al-foil lined plastic box, and will be moved to the clean hardware section of the lab (along the South arm, south of MC2 chamber).
- Checked table leveling - Attachment #1, looked good, and has been stable over the weekend.
- I moved the two oversized washers on the reflector, which I believe are only used because the screw is long and wouldn't go in all the way otherwise. As shown in Attachment #2, this reduces the risk of clipping the main IFO beam axis.
- Yesterday, I pulled up the 40m CAD drawing, and played around with a rectangular box that approximates the extents of the elliptical reflector, to see what would be a good place to put it. I chose to go ahead with Attachment #3. Also shown is the eventually realized layout. Note that we'd actually like the dimension marked ~7.6 inches to be more like 7.1 inches, so the optic is actually ~0.5 inch ahead of the second focus of the ellipse, but I think this is good enough.
- Attachment #4 shows the view of the optic as seen from the aperture on the back of the elliptical reflector. Looks good to me.
- Having positioned the reflector, I then inserted the heater into the aperture such that it is ~2/3rds the way in, which was the best position found by Annalisa last summer. I then ran 0.9 A of current through the heater for ~ 5 minutes. Attachment #5 shows the optic as seen with the FLIR with no heating, and after 5 minutes of heating. I'd say this is pretty unambiguous evidence that we are indeed heating the mirror. The gradient shown is significantly less pronounced than in Annalisa's simulations (~3K as opposed to 10K), but maybe the FLIR calibration isn't so great.
- For completeness, Attachment #6 shows the leveling of the table after this work. Nothing has changed significantly.
While the position of the reflector could possibly be optimized further, since we are already seeing a temperature gradient on the optic, I propose pushing on with other vent activities. I'm almost certain the current positioning places the optic closer to the second focus, and we already saw shifts of the HOM resonances with the old configuration, so I'd say we run with this and revisit if needed.
If Chub gives the Sat. Box the green flag, we will work on F.C.ing the mirrors in the evening, with the aim of closing up tomorrow/Friday.
All raw images in this elog have been uploaded to the 40m google photos. |
Attachment 1: leveling.pdf
Attachment 2: IMG_5930.jpg
Attachment 3: Ellipse_layout.pdf
Attachment 4: IMG_5932.jpg
Attachment 5: hotMirror.pdf
Attachment 6: EY_leveling_after.pdf
14414 | Wed Jan 23 18:11:56 2019 |
gautam | Update | Electronics | Ethernet Power Strip IP conflict | For the last week, I noticed that I was unable to turn the EY chamber illuminator on using the remote python scripts. This was turning out to be really annoying, having to turn the light on/off manually. Today, I looked into the problem and found that there is a conflict in the IP addresses of the EY Ethernet Strip (which Chas assigned a static IP, but did not include detailed procedures for) and the vertex area laptop, paola. The failure of the python control of the power strip coincided exactly with when Chub and I turned on paola for working at the IY chamber - but how was I supposed to know these events are correlated? I tried shutting down paola, power cycling the Ethernet power strip, and restarting the bind9 services on chiara, but remote control of the ethernet power strip remains elusive. I suspect reconfiguring the static IP for the Ethernet switch will require some serial-port-enabled device... |
14415 | Wed Jan 23 23:12:44 2019 |
gautam | Update | SUS | Prep for FC cleaning | In preparation for the FC cleaning, I did the following:
- Set up mini-cleanroom at EY - this consists of the mobile HEPA unit put up against the chamber door, with films draped around the setup.
- After double-checking the table leveling, I EQ-stopped ETMY and moved it to the NE corner of the EY table, where it will be cleaned.
- Checked leveling of IY table - see Attachment #1.
- Took pictures of IY table, OSEM arrangement on ITMY.
- EQ-stopped ITMY and SRM.
- Removed the face OSEMs from ITMY (this required clipping off the copper wire used to hold the OSEM wires against the suspension cage). The side OSEM has not yet been removed because I left the allen key that is compatible with that particular screw inside the EY chamber.
- To position ITMY at the edge of the IY table where we can easily clean it, we will need to move the OSEM cabling tower as we did last time. I've taken photos of its current position for now.
Tomorrow, I will start with the cleaning of ETMY HR. While the FC is drying, I will position ITMY at the edge of the IY table for cleaning (Chub will set up the mini-cleanroom at the IY table). The plan is to clean both HR surfaces and have the optics back in place by tomorrow evening. By my count, we have done everything listed in the IY and EY chambers. I'd like to minimize the time between cleaning and pumpdown, so if all goes well (Sat Box problems notwithstanding), we will check the table leveling on Friday morning, put on the heavy doors, and at least rough the main volume down to 1 torr on Friday. |
Attachment 1: IY_level_before.pdf
14416 | Thu Jan 24 15:32:31 2019 |
gautam | Update | SUS | Y arm cavity side first contact applied | EY:
- A clean cart was setup adjacent to the HEPA-enclosed mini cleanroom area (it cannot be inside the mini cleanroom, because of lack of space).
- The FC tools (first contact, acetone, beakers, brushes, PEEK mesh, clean scissors, clean tweezers, Canon camera, green flashlight) were laid out on this cart for easy access.
- I inspected the optic - the barrel had a few specks of dust, and the outer 1.5" annular region of the HR face looked to have some streak marks
- I was advised not to pre-wipe the HR side with any solvents
- The FC was only applied to the central ~1-1.5" of the optic
- After applying the FC, I spent a few minutes inspecting the status of the OSEMs
- Three out of the four face OSEMs, as well as the side OSEM, did not have a filter in
- I inserted filters into them.
- Closed up the chamber with light door, left HEPA unit on and the mini cleanroom setup intact for now. We will dismantle everything after the pumpdown.
IY:
- Similar setup to EY was implemented
- Removed side OSEM from ITMY.
- Double-checked that EQ stops were engaged.
- Moved the OSEM cable tower to free up some space for accommodating ITMY.
- Undid the clamps of ITMY, moved it to the NE corner of the IY table.
- Inspected the optic - it was much cleaner than the 2016 inspection, although the barrel did have some specks of dust.
- Once again, I applied first contact to the central ~1.5" of the HR surface.
- Checked status of filters on OSEMs - this time, only the UL coil required a filter.
- Attachment #3 shows the sensor voltage DC level before and after the insertion of the filter. There is ~0.1% change.
- The filters were found in a box that suggests they were made in 2002 - but Steve tells me that it is just stored in a box with that label, and that since there are >100 filters inside that box, he thinks they are the new ones we procured in 2016. The coating specs and type of glass used are different between the two versions.
The attached photo shows the two optics with FC applied.
My original plan was to attempt to close up tomorrow. However, we are still struggling with Satellite box issues. So rather than rush it, we will attempt to recover the Y arm cavity alignment on Monday, satellite box permitting. The main motivation is to reduce the deadtime between peeling off the F.C and starting the pumpdown. We will start working on recovering the cavity alignment once the Sat box issues are solved. |
Attachment 1: Yarm_FC.pdf
Attachment 2: OSEMfilter.png
14417 | Thu Jan 24 22:55:50 2019 |
gautam | Update | Electronics | Satellite box S/N 102 investigation | I had taken Satellite box S/N 102, from the SRM suspension, down to the Y-end as part of debugging. However, at some point, I stopped getting readbacks from the shadow sensor PDs, even with the Sat. Box tester hooked up (so as to rule out anything funky with the actual OSEMs). Today evening, I did a more systematic investigation. Schematic with component references is here.
- Used mini-grabbers and a bench power supply to connect +/-24V to C57 and C58.
- Checked that all ICs were getting +/- 15 V to the supply pins.
- Debugged individual channels, checking voltages at various nodes
- Found that the "PD K" bias voltage was anomalously low.
- Found that the inverting input of U3C wasn't at ground.
- The above findings are summarized in Attachment #2.
- This suggested something was wrong with the Quad OpAmp LT1125 IC, so I elected to switch it out.
- During the desoldering process, the pads for the "NC" pins came off (Attachment #1) - this has happened to me before on these old boards. I don't think I applied excess heat during the desoldering (I used 650F).
- Replaced the IC, and measured the expected 10V at the "PD K" node.
- I then connected the tester box and verified all the shadow sensor channels (LED + PD) work as expected, using the front panel J3 and the "octopus cable".
- It remains to verify that the coil driver signals get correctly routed through the Satellite box before giving this box a pass certification.
The question remains as to what caused this failure mode - I can't think of why that particular IC was damaged during the Satellite box swapping process - is this indicative of some problem elsewhere in the ETMY OSEM/coil driver electronics chain? |
Attachment 1: IMG_7294.JPG
Attachment 2: D961289-B2.pdf
14418 | Fri Jan 25 12:49:53 2019 |
gautam | Update | Electronics | Ethernet Power Strip IP conflict resolved | To avoid the annoying exercise of having to manually toggle the illuminators, I solved the IP conflict. Made a wiki page for the ethernet power strips since the documentation was woeful (the way the power strips are mounted in the racks, you can't even see the manufacturer/model/make). All chamber illuminators can now be turned on/off by the MEDM scripts. Note that there is a web interface available too, which can be useful in case of some python socket issues. The main lesson is: avoid using the "reset" button on the power strips - it destroys the static IP config.
Unrelated to this work: The EY laptop, asia, won't boot up anymore, with a "Fan Error" message being the red flag. I've temporarily recommissioned the vacuum rack laptop, belladonna, to be the EY machine for this vent. Can we get 3 netbooks that actually work and don't need to be tethered to a power strip for the VEA? |
14419 | Fri Jan 25 16:14:51 2019 |
gautam | Update | VAC | Vacuum interlock code, N2 warning | I reset the remote of this git repo to the 40m version instead of Jon's personal one, to ensure consistency between what's on the vacuum machine and in the git repo. There is now a N2 checker python mailer that will email the 40m list if all the tank pressures are below 600 PSI (>12 hours left for someone to react before the main N2 line pressure drops and the interlocks kick in). For now, the script just runs as a cron job every 3 hours, but perhaps we should integrate it with the interlock process?
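For reference, the checker amounts to something like the sketch below. This is not a copy of the script in the repo, just an illustration of the idea; the PV names, SMTP host, and email addresses are placeholders/assumptions.

import smtplib
from email.mime.text import MIMEText
import epics  # pyepics

TANK_PVS = ['C1:Vac-N2T1_pressure', 'C1:Vac-N2T2_pressure']  # placeholder PV names
THRESHOLD_PSI = 600.0

pressures = [epics.caget(pv) for pv in TANK_PVS]

# Warn only if *all* tanks are low - one full tank is enough to keep the line up
if all(p is not None and p < THRESHOLD_PSI for p in pressures):
    body = 'N2 tank pressures are %s psi, below %g psi. Swap a tank soon.' % (pressures, THRESHOLD_PSI)
    msg = MIMEText(body)
    msg['Subject'] = '40m N2 tank pressure low'
    msg['From'] = 'c1vac@example.org'      # placeholder sender
    msg['To'] = '40m-list@example.org'     # placeholder list address
    s = smtplib.SMTP('localhost')          # assumes a local MTA is available
    s.send_message(msg)
    s.quit()

The cron entry on c1vac would then just run this every 3 hours (0 */3 * * *).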
14421 | Tue Jan 29 17:19:16 2019 |
gautam | Update | Electronics | Satellite box S/N 105 repaired | [chub, koji, gautam]
Attachment #1 shows the signal routing near the Satellite box. Somehow, the female 64 pin IDC connector that brings the signals from the coil driver board wasn't mating well with the male connector on the Satellite box front panel. This is a connector-specific problem - plugging the female end into one of the male connectors inside the Satellite box yielded signal continuity. The problem was resolved by re-making both connections - by driving the EPICS bias slider through its full range, we were able to see the full voltage swing at the DB connectors going to the flange.
This kind of flakiness could be all around the lab, and could be responsible for many of the suspension "mysteries". To re-iterate, the problem seems to be the way the female sockets of the connector mate with the male pins - while the actual crimping points may look secure, there may not be signal continuity.
Now that this problem is resolved, tomorrow we will recover the cavity alignment and possibly start a pumpdown.
Unrelated to this work - the spare satellite box (S/N #100), which had a note on it that said "low voltages", was tested. The "low voltages" referred to the OSEM shadow sensor voltages being low when the LED was completely unobscured. The reason was that the mod to increase the drive current to 25 mA had not yet been implemented on this unit. I added the appropriate 806 ohm resistors, and verified that the voltages were correct, so now we have a working spare. It is stored in the "photodiode" cabinet along the east arm, together with the tester boxes. |
Attachment 1: IMG_7301.JPG
14422 | Tue Jan 29 22:12:40 2019 |
gautam | Update | SUS | Alignment prep | Since we may want to close up tomorrow, I did the following prep work:
- Cleaned up Y-end suspension electronics setup, connected the Sat Box back to the flange
- The OSEMs are just sitting on the table right now, so they are just seeing the fully open voltage
- Post filter insertion, the four face OSEMs report ~3-4% lower open-voltage values compared to before, which is compatible with the transmission spec for the filters (T>95%)
- The side OSEM is reporting ~10% lower - perhaps I just didn't put the filter on right, something to be looked at inside the chamber
- Suspension watchdog restoration
- I'd shutdown all the watchdogs during the Satellite box debacle
- However, I left ITMY, ETMY and SRM tripped as these optics are EQ-stopped / don't have the OSEMs inserted.
- Checked IMC alignment
- After some hand-alignment of the IMC, it was locked, transmission is ~1200 counts which is what I remember it being
- Checked X-arm alignment
- Strictly speaking, this has to be done after setting the Y-arm alignment as that dictates the input pointing of the IMC transmission to the IFO, but I decided to have a quick look nevertheless
- Surprisingly, ITMX damping isn't working very well it seems - the optic is clearly swinging around a lot, and the shadow sensor RMS voltage is ~10s of mV, whereas for all the other optics, it is ~1mV.
- I'll try the usual cable squishing voodoo
Rather than try and rush and close up tomorrow, I propose spending the day tomorrow cleaning the peripheral areas of the optic, suspension cage, and chamber. Then on Thursday morning, we can replace the Y-arm optics, try and recover the cavity alignment, and then aim for a Thursday afternoon pumpdown. The main motivation is to reduce the time the optics spend in air after F.C. peeling and going to vacuum. |
14423 | Wed Jan 30 11:54:24 2019 |
gautam | Update | SUS | More alignment prep | [chub, gautam]
- ETMY cage was wiped down
- Targeted potential areas where dust could drift off from and get attracted to a charged HR surface
- These areas were surprisingly dusty, even left a grey mark on the wipe [Attachment #1] - we think we did a sufficiently thorough job, but unclear if this helps the loss numbers
- More pictures are on gPhoto
- Filters on SD and LR OSEMs were replaced - the open shadow sensor voltages with filters in/out are consistent with the T>95% coating spec.
- IPANG beam position was checked
- It is already too high, missing the first steering optic by ~0.5 inch, not the greatest photo but conclusion holds [Attachment #2].
- I think we shouldn't worry about it for this pumpdown, we can fix it when we put in the new PR3.
- Cage wiping procedure was repeated on ITMY
- The cage was much dustier than ETMY
- However, the optic itself (barrel and edge of HR face) was cleaner
- All accessible areas were wiped with isopropanol
- Before/after pics are on gPhoto (even after cleaning, there are some marks on the suspension that looks like dust, but these are machining marks)
Procedure tomorrow [comments / suggestions welcome]:
- Start with IY chamber
- Peel first contact with TopGun jet flowing
- Inspect optic face with green flashlight to check for residual First Contact
- Replace ITMY suspension cage in its position, clamp it down
- Release ITMY from its EQ stops
- Replace OSEMs in ITMY cage, best effort to recover previous alignment of OSEMs in their holders (I have a photo before removal of OSEMs), which supposedly minimized the coupling of the B-R modes into the shadow sensor signals
- Best effort to have shadow sensor PD outputs at half their fully open voltages (with DC bias voltage applied)
- Quick check that we are hitting the center of the ITM with the alignment tool
- Check that the Oplev HeNe is reasonably centered on steering mirrors
- Tie down OSEM cabling to the ITMY cage with clean copper wire
- Replace the OSEM wiring tower
- Release the SRM from its EQ stops
- Check table leveling
- Take pictures of everything, check that we have not left any tools inside the chamber
- Heavy doors on
- Next, EY chamber
- Repeat first seven bullets from the IY chamber, :%s/ITMY/ETMY/g
- Confirm sufficient clearance between IFO beam axis and the elliptical reflector
- Check Oplev beam path
- Check table leveling
- Take pictures of everything, check that we have not left any tools inside the chamber
- Heavy doors on
- IFO alignment checks - basically follow the wiki, we want to be able to lock both arms (or at least see TEM00 resonances), and see that the PRC and SRC mode flashes look reasonable.
- Tighten all heavy doors up
- Pump down
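For reference, a minimal sketch of the half-light check mentioned in the list above, assuming pyepics is available on a workstation. The channel names and open-light voltages are illustrative placeholders, not the actual readbacks:

# Minimal sketch (not the actual procedure): compare each shadow sensor PD
# reading to half of its fully-open value while adjusting the OSEM in its holder.
# Channel names and open-light voltages below are illustrative placeholders.
from epics import caget

open_light = {'UL': 1.9, 'UR': 1.8, 'LL': 2.0, 'LR': 1.9, 'SD': 1.7}  # V, placeholders

for osem, v_open in open_light.items():
    v_now = caget(f'C1:SUS-ITMY_{osem}PDMon')  # assumed channel naming
    frac = v_now / v_open
    print(f"{osem}: {v_now:.2f} V ({frac:.0%} of open light; target ~50%)")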
All photos have been uploaded to google photos. |
Attachment 1: IMG_5958.JPG
|
|
Attachment 2: IMG_5962.JPG
|
|
14424
|
Wed Jan 30 19:25:40 2019 |
gautam | Update | SUS | Xarm cavity alignment | Squishing cables at the ITMX satellite box seems to have fixed the wandering ITM that I observed yesterday - the sooner we are rid of these evil connectors the better.
I had changed the input pointing of the green injection from EX to mark a "good" alignment of the cavity axis, so I used the green beam to try and recover the X arm alignment. After some tweaking of the ITM and ETM angle bias voltages, I was able to get good GTRX values [Attachment #1], and also see clear evidence of (admittedly weak) IR resonances in TRX [Attachment #2]. I can't see the reflection from ITMX on the AS camera, but I suspect this is because the ITMY cage is in the way. This will likely have to be redone tomorrow after setting the input pointing for the Y arm cavity axis, but hopefully things will converge faster and we can close up sooner. Closing the PSL shutter for now...
I also rebooted the unresponsive c1susaux to facilitate the alignment work tomorrow. |
Attachment 1: Xarm.png
|
|
Attachment 2: Xarm_IR.png
|
|
14425
|
Fri Feb 1 01:24:06 2019 |
gautam | Update | SUS | Almost ready for pumpdown tomorrow | [koji, chub, jon, rana, gautam]
Full story tomorrow, but we went through most of the required pre close-up checks/tasks (i.e. both arms were locked, PRC and SRC cavity flashes were observed). Tomorrow, it remains to
- Confirm clearance between elliptical reflector and ETMY
- Confirm leveling of ETMY table
- Take pics of ETMY table
- Put heavy door on ETMY chamber
- Pump down
The ETMY suspension chain needs to be re-characterized (neither the old settings, nor a +/- 1 gain setting worked well for us tonight), but this can be done once we are back under vacuum. |
14426
|
Fri Feb 1 13:16:50 2019 |
gautam | Update | SUS | Pumpdown 83 underway | [chub, bob, gautam]
- Steps described in previous elog were carried out
- EY heavy door was put on at about 1130am.
- Pumpdown commenced at ~noon. We are going down at ~3 torr/min.
|
14427
|
Fri Feb 1 14:44:14 2019 |
gautam | Update | SUS | Y arm FC cleaning and reinstall | [Attachment #1]: ITMY HR face after cleaning. I determined this to be sufficiently clean and re-installed the optic.
[Attachment #2]: ETMY HR face after cleaning. This is what the HR face looks like after 3 rounds of First-Contact application. After the first round, we noticed some arc-shaped lines near the center of the optic's clear aperture. We were worried this was a scratch, but we now believe it to be First-Contact residue, because we were able to remove it after drag wiping with acetone and isopropanol. However, we mistrust the quality of the solvents used - they are not any special dehydrated kind, and we are looking into acquiring some dehydrated solvents for future cleaning efforts.
[Attachment #3]: Top view of ETMY cage meant to show increased clearance between the IFO axis and the elliptical reflector.
Many more photos (including table leveling checks) on the google-photos page for this vent. The estimated time between F.C. peeling and pumpdown is ~24 hours for ITMY and ~15 hours for ETMY, but for the former, the heavy doors were put on ~1 hour after the peeling.
The first task is to fix the damping of ETMY. |
Attachment 1: IMG_5974.JPG
|
|
Attachment 2: IMG_5986.JPG
|
|
Attachment 3: IMG_5992.JPG
|
|
14428
|
Fri Feb 1 21:52:57 2019 |
gautam | Update | SUS | Pumpdown 83 underway | [jon, koji, gautam]
- IFO is at ~1 mtorr, but the pressure is slowly rising, presumably because of outgassing (we valved off the turbos from the main volume)
- Everything went smooth -
- 760 torr to 500 mtorr took ~7 hours (we deliberately kept a slow pump rate; a rough estimate of this timescale is sketched after this list)
- TP3 current was found to rise above 1 A easily as we opened RV2 during the turbo pumping phase, particularly in going from 500 mtorr to 10 mtorr, so we just ran TP2 more aggressively rather than change the interlock condition.
- The pumpspool is isolated from the main volume - TP1-3 are running (TP2 and TP3 are on Standby mode) but are only exposed to the small pumpspool and RGA volumes.
- RP1 and RP3 were turned off, and the manual roughing line was disconnected.
- We will resume the pumping on Monday.
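For the roughing-rate item above, a back-of-the-envelope sketch assuming a constant volumetric pumping speed, so that the pressure falls exponentially. The volume and speed numbers are rough assumptions, not measured values:

# Back-of-the-envelope roughing-phase estimate: with constant volumetric
# pumping speed S on a volume V, P(t) = P0 * exp(-S*t/V).
# The numbers below are illustrative assumptions, not measured values.
import numpy as np

V = 33e3             # liters, rough main-volume figure (assumption)
S = 8.0              # liters/s, effective roughing speed at the chamber (assumption)
P0, P1 = 760.0, 0.5  # torr: start and target (500 mtorr) pressures

t_hours = (V / S) * np.log(P0 / P1) / 3600.0
print(f"Estimated time to reach {P1*1e3:.0f} mtorr: {t_hours:.1f} hr")  # ~8 hr, same ballpark as observed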
I'm leaving all suspension watchdogs tripped over the weekend as part of the suspension diagonalization campaign... |
14430
|
Sun Feb 3 15:15:21 2019 |
gautam | Update | VAC | overnight leak rate | I looked into this a bit today. Did a walkthrough of the lab, didn't hear any obvious hissing (makes sense, that presumably would signal a much larger leak rate).
Attachment #1: Data from the 30 ksec we had the main vol valved off on Jan 10, but from the gauges we have running right now (the CC gauges have not had their HV enabled yet so we don't have that readback).
Attachment #2: Data from ~150 ksec from Friday night till now.
Interpretation: The number quoted from Jan 10 is from the cold-cathode gauge (~20 utorr increase). In the same period, the Pirani gauge reports an increase of ~5 mtorr (=250x the number reported by the cold-cathode gauge). So which gauge do we trust more in this regime? Additionally, the rate at which the annuli pressures are increasing seems consistent between Jan 10 and now, at ~100 mtorr every 30 ksec.
I don't think this is conclusive, but at least the leak rates between Jan 10 and now don't seem that different for the annuli pressures. Moreover, for the Jan 10 pumpdown, we had the IFO at low pressure for several days over the Christmas break, which presumably gave time for some outgassing to be cleaned up by the TPs on Jan 10, whereas for the current pumpdown we don't have that luxury.
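To put numbers on the comparison above, something like the following could pull a gauge trend from the frames and fit the rise rate. The NDS server/port, GPS times and channel name are placeholders/assumptions:

# Sketch of how the overnight rise rate could be pulled from the frames and
# converted to a mtorr/day figure. Server, port, GPS times and channel name
# are assumptions/placeholders - adjust to the actual vacuum channels.
import numpy as np
import nds2

conn = nds2.connection('fb', 8088)            # assumed 40m NDS server/port
start, stop = 1233100000, 1233250000          # placeholder GPS times (~150 ks span)
buf = conn.fetch(start, stop, ['C1:Vac-P1a_pressure'])[0]

p = np.asarray(buf.data)                      # torr
t = np.linspace(0, stop - start, len(p))      # s
slope = np.polyfit(t, p, 1)[0]                # torr/s
print(f"dP/dt ~ {slope * 1e3 * 86400:.1f} mtorr/day")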
Do we want to do a systematic leak check before resuming the pumpdown on Monday? The main differences in vacuum I can think of are
- Two pieces of Kapton tape are now in the EY chamber.
- Possible residue from cleaning solvents in the IY and EY chambers is still outgassing.
This entry by Steve says that the "expected" outgassing rate is 3-5 mtorr per day, which doesn't match either the current observation or that from Jan 10. |
Attachment 1: Jan10_data.png
|
|
Attachment 2: Feb1_data.png
|
|
14432
|
Mon Feb 4 12:23:24 2019 |
gautam | Update | VAC | pumpdown 83 - leak tests | [koji, gautam]
As planned, we valved off the main volume and the annuli from the turbo-pumps at ~730 PM PST. At this time, the main volume pressure was 30 uTorr. It started rising at a rate of ~200 uTorr/hr, which translates to ~5 mtorr/day, which is in the ballpark of what Steve said is "normal". However, the calibration of the Hornet gauge seems to be piecewise-linear (see Attachment #1), so we will have to observe overnight to get a better handle on this number.
We decided to vent the IY and EY chamber annular volumes, and check if this made any dramatic change in the main volume pressure increase rate, which would presumably signal a leak from the outside. However, we saw no such increase - so right now, the working hypothesis is still that the main volume pressure increase is being driven by outgassing of something inside the vacuum volume.
Let's leave things in this state overnight - V1 and V5 closed so that neither the main volume nor the annuli are being pumped, and get some baseline numbers for what the outgassing rate is. |
Attachment 1: PD83.png
|
|
14433
|
Mon Feb 4 20:13:39 2019 |
gautam | Update | SUS | ETMY suspension oddness | I looked at the free-swinging sensor data from two nights ago, and am struggling with the interpretation.
[Attachment #1] - Fine resolution spectral densities of the 5 shadow sensor signals (y-axis assumes 1 ct ~ 1 um); a sketch of this spectral estimate is at the end of this entry. The puzzling feature is that there are only 3 resonant peaks visible around the 1 Hz region, whereas we would expect 4 (PIT, YAW, POS and SIDE). afaik, Lydia looked into the ETMY suspension diagonalization last, in 2016. Compared to her plots (which are in the Euler basis while mine are in the OSEM basis), the ~0.73 Hz peak is nowhere to be seen. I also think the frequency resolution (<1 mHz) is good enough to resolve two closely spaced peaks, so it looks like, for some reason (mechanical or otherwise), there are only 3 independent modes being sensed around 1 Hz.
[Attachment #2] - Koji arrived and we looked at some transfer functions to see if we could make sense of all this. During this investigation, we also came to suspect that the UL coil actuator electronics chain has some problem. This test was done by driving the individual coils and looking for the 1/f^2 pendulum transfer function shape in the Oplev error signals. The ~4 dB difference between UR/LL and LR is due to a gain imbalance in the coil output filter bank; once we have solved the other problems, we can reset the individual coil balancing using this measurement technique.
[Attachment #3] - Downsampled time-series of the data used to make Attachment #1. The ringdown looks pretty clean, I don't see any evidence of any stuck magnets looking at these signals. The X-axis is in kilo-seconds.
We found that the POS and SIDE local damping loops do not result in instability building up. So one option is to use only Oplevs for angular control, while using shadow-sensor damping for POS and SIDE. |
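For completeness, a minimal sketch of the kind of spectral estimate behind Attachment #1 - long Welch segments give the sub-mHz resolution quoted above. The sample rate, file name and peak-finding threshold are assumptions/placeholders:

# Sketch of the free-swinging spectral estimate: long Welch segments give
# <1 mHz resolution, enough to separate closely spaced suspension modes.
# Sample rate, file name and thresholds below are placeholders.
import numpy as np
from scipy.signal import welch, find_peaks

fs = 16.0                                   # Hz, assumed sample rate of the sensor channel
ul = np.loadtxt('ETMY_ULSEN_freeswing.txt') # placeholder: exported UL sensor time series

nperseg = int(4096 * fs)                    # ~4096 s segments -> ~0.24 mHz resolution
f, pxx = welch(ul, fs=fs, nperseg=nperseg, window='hann')
asd = np.sqrt(pxx)                          # ~um/rtHz if 1 ct ~ 1 um

band = (f > 0.5) & (f < 1.5)
pk, _ = find_peaks(asd[band], height=10 * np.median(asd[band]))
print("Resolved modes near 1 Hz [Hz]:", np.round(f[band][pk], 4))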
Attachment 1: ETMY_sensors_1_Feb_2019_2230_PST.pdf
|
|
Attachment 2: ETMY_UL.pdf
|
|
Attachment 3: ETMY_sensors_timeDomain.pdf
|
|
14434
|
Tue Feb 5 10:11:30 2019 |
gautam | Update | VAC | leak tests complete, pumpdown 83 resumed | I guess we forgot to close V5, so we were indeed pumping on the ITMY and ETMY annuli, but the other three were isolated suggest a leak rate of ~200-300 mtorr/day, see Attachment #1 (consistent with my earlier post).
As for the main volume - according to CC1, the pressure saturates at ~250 uTorr and is stable, while the Pirani P1a reports ~100x that pressure. I guess the cold-cathode gauge is supposed to be more accurate at low pressures, but how well do we believe the calibration on either gauge? Either way, based on last night's test (see Attachment #2), we can set an upper limit of 12 mtorr/day. This is 2-3x the number Steve said is normal, but perhaps this is down to the fact that the outgassing from the main volume is higher immediately after a vent and in-chamber work. It is also a 5x lower rate of pressure increase than what was observed on Feb 2.
I am resuming the pumpdown with the turbo-pumps; let's see how long we take to get down to the nominal operating pressure of 8e-6 torr - it usually takes ~1 week. V1, VASV, VASE and VABS were opened at 10:30 am PST. Per Chub's request (see #14435), I ran RP1 and RP3 for ~30 seconds; he will check if the oil level has changed.
Quote: |
Let's leave things in this state overnight - V1 and V5 closed so that neither the main volume nor the annuli are being pumped, and get some baseline numbers for what the outgassing rate is.
|
|
Attachment 1: Annuli.png
|
|
Attachment 2: MainVol.png
|
|
14436
|
Tue Feb 5 19:30:14 2019 |
gautam | Update | VAC | Main volume at 20 uTorr | Pumpdown looks healthy, so I'm leaving the TPs on overnight. At some point, we should probably get the RGA going again. I don't know that we have a "reference" RGA trace that we can compare the scan to, should check with Steve. The high power (1 W) beam has not yet been sent into the vacuum, we should probably add the interlock condition that shuts off the PSL shutter before that. |
Attachment 1: PD83.png
|
|
14438
|
Thu Feb 7 13:55:25 2019 |
gautam | Update | VAC | RGA turned on | [chub, steve, gautam]
Steve came by the lab today. He advised us to turn the RGA on again, now that the main volume pressure is < 20 uTorr. I did this by running the RGAset.py script on c0rga - the temperature of the unit was 22 C in the morning; after ~3 hours with the filament on, it has already risen to 34 C. Steve says this is normal. We also opened VM1 (I had to edit interlocks.yaml to allow VM1 to open when CC1 < 20 uTorr instead of 10 uTorr), so that the RGA volume is exposed to the main volume. So the nightly scans should run now; Steve suggests ignoring the first few while the pumpdown is still approaching nominal pressure. Note that we probably want to migrate all the RGA stuff to the new c1vac machine.
Other notes from Steve:
- RP1 and RP3 should have their oil fully changed (as opposed to just topped up)
- VABSSCI and VABSSCO are NOT vent valves - they isolate the annuli of the IOO and OMC chambers from the BS chamber annuli. So next time we vent, we should fix this!
- Leak rate of 3-5 mTorr/day is "normal" once the system has been pumped for a few days. Steve agrees that our observations of the main volume pressure increase is expected, given that we were at atmosphere.
- Regarding the upcoming CES construction
- Steve recommends keeping the door along the east arm, as it is useful for bringing equipment into the lab (end door access is limited because of end optical tables)
- Particle counter data logging should be resumed before the construction starts, so that we can monitor if the lab is getting dirtier
- OSEM filters (new ones, i.e. made according to the specs in D000209) are in the Clean Cabinet (EX). They are individually packaged in little capsules, see Attachment #1. So the ones I installed were actually of 2002 vintage. We have 50 pcs, enough to install new ones on all the core optics + spares.
|
14440
|
Thu Feb 7 19:28:46 2019 |
gautam | Update | VAC | IFO recovery | [rana, gautam]
The full 1 W is again being sent into the IMC. We have left the PBS+HWP combo installed as Rana pointed out that it is good to have polarization control after the PMC but before the EOM. The G&H mirror setup used to route a pickoff of the post-EOM beam along the east edge of the PSL table to the AUX laser beat setup was deemed too flaky and has been bypassed. Centering on the steering mirror and subsequently on the IMC REFL photodiode was done using an IR viewer - this technique allows one to geometrically center the beam on the steering mirror and PD, to the resolution of the eye, whereas the voltage-maximization technique using the monitor port and an o'scope doesn't guarantee geometric centering. Nominal IMC transmission of ~15,000 counts has been recovered, and the IMC REFL level is also around 0.12, consistent with the pre-vent levels. |
14441
|
Thu Feb 7 19:34:18 2019 |
gautam | Update | SUS | ETMY suspension oddness | I did some tests of the electronics chain today.
- Drove a sine-wave using awggui to the UL-EXC channel, and monitored using an o'scope and a DB25 breakout board at J1 of the satellite box, with the flange cable disconnected - while driving a 3000 ct amplitude signal, I saw a 2 Vpp signal on the scope, which is consistent with expectations (a quick check of this number is sketched after this list).
- Checked resistances of the pin pairs corresponding to the OSEMs at the flange end using a breakout board - all 5 pairs read out ~16-17 ohms.
- Rana pointed out that the inductance is the unambiguous FoM here: all coils measured between 3.19 and 3.3 mH according to the LCR meter...
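A quick sanity check of the excitation-to-voltage number from the first bullet, assuming the usual 16-bit DAC spanning +/-10 V and no extra gain between the DAC output and J1 (assumptions, not verified here):

# Sanity check of the awggui excitation amplitude seen at the sat. box:
# assumes a 16-bit DAC spanning +/-10 V and unity gain in between (assumption).
dac_bits = 16
v_range = 20.0     # Vpp full scale (+/-10 V)
amp_cts = 3000     # awggui excitation amplitude (counts)

vpp = 2 * amp_cts * v_range / 2**dac_bits
print(f"Expected signal at J1: {vpp:.2f} Vpp")  # ~1.8 Vpp, consistent with the ~2 Vpp observed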
Hypothesising a bad connection between the sat box output J1 and the flange connection cable. Indeed, measuring the OSEM inductance from the DSUB end at the coil-driver board, the UL coil pins showed no inductance reading on the LCR meter, whereas the other 4 coils showed numbers between 3.2-3.3 mH. Suspecting the satellite box, I swapped it out for the spare (S/N 100). This seemed to do the trick - all 5 coil channels read out ~3.3 mH on the LCR meter when measured from the coil driver board end. What's more, the damping behavior seemed more predictable - in fact, Rana found that all the loops were heavily overdamped. For our suspensions, we want the damping loops tuned somewhere in between: overdamping injects excess sensor noise as displacement noise on the optic, while underdamping doesn't suppress the ringdown - past elogs suggest aiming for Q~5 for the pendulum resonances, so this is something to look out for when someone does a systematic investigation of the suspensions. These flaky connectors are proving pretty troublesome - let's start testing some prototype new Sat Boxes with a better connector solution. I think it's equally important to have a properly thought out monitoring connector scheme, so that we don't have to frequently plug/unplug connectors in the main electronics chain, which may lead to wear and tear.
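When we get around to tuning the damping gains, something like the following could pull a Q estimate out of a recorded kick/ringdown of a single mode. The time series here is a synthetic stand-in, not real data, and the sample rate and mode frequency are placeholders:

# Minimal sketch: estimate Q of a damped suspension mode from a ringdown,
# x(t) ~ exp(-pi*f0*t/Q) * cos(2*pi*f0*t). Synthetic stand-in data below.
import numpy as np
from scipy.signal import hilbert

fs, f0 = 16.0, 0.98                        # Hz: sample rate and mode frequency (placeholders)
t = np.arange(0, 30, 1 / fs)
x = np.exp(-np.pi * f0 * t / 5.0) * np.cos(2 * np.pi * f0 * t)  # synthetic Q=5 ringdown

env = np.abs(hilbert(x))                   # ringdown envelope ~ exp(-pi*f0*t/Q)
m = env > 0.05 * env.max()                 # fit only the well-resolved part of the decay
Q = -np.pi * f0 / np.polyfit(t[m], np.log(env[m]), 1)[0]
print(f"Estimated Q ~ {Q:.1f} (past elogs suggest aiming for Q ~ 5)")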
The input and output matrices were reset to their "naive" values - unfortunately, two eigenmodes still seem to be degenerate to within 1 mHz, as can be seen from the below spectra (Attachment #1). Next step is to identify which modes these peaks actually correspond to, but if I can lock the arm cavities in a stable way and run the dither alignment, I may prioritize measurement of the loss. At least all the coils show the expected 1/f**2 response at the Oplev error point now. The coil output filter gains varied by ~ factor of 2 among the 4 coils, but after balancing the gains, they show identical responses in the Oplev - Attachment #2. |
Attachment 1: ETMY_sensors.pdf
|
|
Attachment 2: postDiag.pdf
|
|
14442
|
Fri Feb 8 00:20:56 2019 |
gautam | Summary | Tip-TIlt | Five FiveNine Optics Optics delivered | They have been stored on the 3rd shelf from top in the clean optics cabinet at the south end. EX
|
14443
|
Fri Feb 8 02:00:34 2019 |
gautam | Update | SUS | ITMY has tendency of getting stuck | As it turns out, now ITMY has a tendency to get stuck. I found it MUCH more difficult to release the optic using the bias jiggling technique, it took me ~ 2 hours. Best to avoid c1susaux reboots, and if it has to be done, take precautions that were listed for ITMX - better yet, let's swap out the new Acromag chassis ASAP. I will do the arm locking tests tomorrow. |
Attachment 1: Screenshot_from_2019-02-08_02-04-22.png
|
|
14444
|
Fri Feb 8 20:35:57 2019 |
gautam | Summary | Tip-TIlt | Coating spec | [Attachment #1]: Computed spectral power transmissivity (according to my model) for the coating design at a few angles of incidence. Behavior lines up well with what FNO measured, although I get a transmission that is slightly lower than measured at 45 degrees. I suspect this is because of slight changes in the dispersion relation assumed and what was used for the coating in reality.
[Attachment #2]: Similar information as Attachment #1, but with the angle of incidence as the independent parameter in a continuous sweep.
Conclusion: The coating behaves in a way that is in reasonable agreement with our model. At 41.1 degrees, which is what the PR3 angle of incidence will be, T<50 ppm, which was what we specified. The larger range of angles was included because originally, we thought of using this optic as a substitute for SR3 as well. But I claim that for the shorter SRC (signal recycling as opposed to RSE), we will not end up using the new optic, but rather go for the G&H mirror. In any case, as Koji pointed out, ~50 ppm extra loss in the RC will not severely limit the recycling gain. Such large variation was not seen in the MC analysis because we only varied the angle of incidence by +/- 0.5 degrees about the nominal design value of 41.1 degrees. |
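For the record, a minimal sketch of the characteristic-matrix calculation behind curves like these. The layer structure below is a generic quarter-wave SiO2/Ta2O5 stack with placeholder indices and layer count, NOT the actual FNO coating design:

# Sketch of a thin-film transmissivity calculation (Macleod characteristic matrices).
# The stack below is a generic quarter-wave SiO2/Ta2O5 HR coating - placeholders only.
import numpy as np

def stack_T(wl, aoi_deg, n0, ns, layers, pol='s'):
    """Power transmissivity of a thin-film stack.
    layers: list of (refractive index, physical thickness [same units as wl])."""
    th0 = np.deg2rad(aoi_deg)
    eta = lambda n, cth: n * cth if pol == 's' else n / cth
    cth0 = np.cos(th0)
    cths = np.sqrt(1 - (n0 * np.sin(th0) / ns) ** 2)
    M = np.eye(2, dtype=complex)
    for n, d in layers:
        cth = np.sqrt(1 - (n0 * np.sin(th0) / n) ** 2)
        delta = 2 * np.pi * n * d * cth / wl
        e = eta(n, cth)
        M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / e],
                          [1j * e * np.sin(delta), np.cos(delta)]])
    B, C = M @ np.array([1, eta(ns, cths)])
    return 4 * eta(n0, cth0) * np.real(eta(ns, cths)) / abs(eta(n0, cth0) * B + C) ** 2

wl, nL, nH = 1064e-9, 1.45, 2.07   # placeholder indices for SiO2 / Ta2O5 at 1064 nm
bilayers = [(nH, wl / (4 * nH)), (nL, wl / (4 * nL))] * 17
for aoi in (0, 41.1, 45):
    print(f"AoI {aoi:5.1f} deg: T = {stack_T(wl, aoi, 1.0, 1.45, bilayers) * 1e6:.2f} ppm")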
Attachment 1: specRefl.pdf
|
|
Attachment 2: AoIscan.pdf
|
|
|