ID Date Author Type Category Subject
  15448   Thu Jul 2 16:51:23 2020   Jordan   Update   General   Bathroom Science

As part of an ongoing effort to improve airflow in workspaces/bathrooms on campus, I have installed an air scrubber unit in each of the bathrooms at the 40m lab.

Attachment 1: AirScrubber40m.jpg
  15447   Wed Jul 1 18:16:09 2020   gautam   Update   Computers   rossa re-re-revival

In an effort to make a second usable workstation, I did the following (remotely) on rossa today (not necessarily in this order; I wasn't maintaining a live log, so I may have forgotten some steps):

  1. Fixed /etc/resolv.conf, so that the other martian machines can be found.
  2. Copied over .bashrc file, and the appropriate lines from /etc/fstab from pianosa to rossa.
  3. Ran sudo apt install nfs-common. Then ran sudo mount -a to get /cvs/cds mounted.
  4. Made symlinks for /users, /opt/rtcds, and /ligo. All of these are used by various environment-setting scripts and I chose to preserve the structure, though why we need so many symlinks, I don't know...
  5. Set up the shell variable $NDSSERVER using export NDSSERVER=fb:8088. I'm not sure how, but I believe DTT, awggui etc. use this on startup to get the channel list (a minimal usage sketch follows this list).
  6. Followed instructions from Erik von Reis at LHO to install the cds workstation packages and dependencies. Worked like a charm 🎃
  7. As a test, I plotted the accelerometer spectra in DTT, see Attachment #1. I also launched foton from inside awggui, and confirmed that the sample rate is inherited and I could designate a filter. But I haven't yet run the noise injection to test it, I'll do that the next time I'm in the lab.
  8. Also checked that medm, StripTool and ndscope, and anaconda python all seem to work 👍🏾.
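
As a quick illustration of item 5: a minimal sketch of how an NDS client could pick up the server setting, using the nds2-client Python bindings (assuming those are installed on the workstation; the GPS times and channel name below are placeholders):

    # Fetch a few seconds of data through the NDS server set above (fb:8088).
    # Assumes the nds2-client python bindings are installed; the channel name
    # and GPS times are placeholders, not a real measurement.
    import os
    import nds2

    host, port = os.environ.get("NDSSERVER", "fb:8088").split(":")
    conn = nds2.connection(host, int(port))

    # fetch(gps_start, gps_stop, channel_list) returns a list of data buffers
    bufs = conn.fetch(1277700000, 1277700004, ["C1:LSC-TRX_OUT_DQ"])
    print(bufs[0].data[:10])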

So, in summary, rossa is now all set up for use during lock acquisition. However, until this machine has undergone a few months of testing, we should freeze the pianosa config and not mess with it.

Note that this version of the "crtools" is rather new. Please use them, and if there is an issue, report the errors! I am going to occasionally try lock acquisition using rossa.

Quote:

wiped and installed Debian 10 on rossa today

still to be done: config it as CDS workstation

please don't try to "fix" it in the meantime

Attachment 1: MCacc.pdf
  15446   Wed Jul 1 18:03:04 2020   Jon   Configuration   VAC   UPS replacements

I looked into how the new UPS devices suggested by Chub would communicate with the vac interlocks. There are several possible ways, listed in order of preference:

  • Python interlock service directly queries the UPS via a USB link using the (unofficial) tripplite package. Direct communication would be ideal because it avoids introducing a dependency on third-party software outside the monitoring/control capability of the interlock manager. However, the documentation warns that this package does not work for all models...
  • Configure Tripp Lite's proprietary software (PowerAlert Local) to send SYSLOG event messages (UDP packets) to a socket monitored by the Python interlock manager (a sketch of this option follows this list).
  • Configure the proprietary software to execute a custom script upon an event occurring. The script would, e.g., set an EPICS flag channel which the interlock manager is continually monitoring.

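As a sketch of the second option: a minimal UDP listener the interlock manager could poll for PowerAlert SYSLOG datagrams. The port and the "ON BATTERY" keyword match are assumptions; the actual PowerAlert message format would need to be checked.

    # Sketch: receive SYSLOG event messages (UDP) from PowerAlert Local.
    # Port 514 is the syslog default (binding to it requires root; any
    # unprivileged port works if PowerAlert is configured to match). The
    # "ON BATTERY" keyword is a guess at the event text, not a verified format.
    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 514))
    sock.settimeout(1.0)

    def check_ups_events():
        """Poll for UPS events; call this once per interlock-loop cycle."""
        try:
            msg, addr = sock.recvfrom(2048)
        except socket.timeout:
            return None
        text = msg.decode(errors="replace")
        if "ON BATTERY" in text.upper():
            return ("UPS_POWER_LOSS", text)
        return ("UPS_EVENT", text)
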
I recommend we proceed with ordering the Tripp Lite 36HW20 for TP1 and Tripp Lite 1AYA6 for TP2 and TP3 (and other 120V electronics). As far as I can tell, the only difference between the two 120V options is that the 6FXN4 model is TAA-compliant.

  15445   Wed Jul 1 12:50:40 2020   gautam   Update   PEM   MC1 accelerometers plugged in

I re-connected the 3 accelerometers located near the MC1/MC3 chamber. It was a bit tedious to get the cabling sorted - I estimate the cable is ~80m long, and the excess length had to be wound around a spool (see Attachment #1), which wasn't really a one-person job. It's neat-ish for now, but I'm not entirely satisfied. I think we should get shorter cables (~20m), and also mount the pre-amp/power units in a rack instead of leaving them on the floor. The pre-amp settings are x100 for all three channels. The MC2 channels are powered, but are unconnected to the accelerometers - it was too tedious to unroll the other spool yesterday. Apart from this, the cable for the "Z" channel had to be re-seated in the strain relief clamp.

I did not enable any of the CDS filters that convert the raw signal into physical units, so for now, these channels are just recording raw counts.

Update 7pm: the spectra in the current config are here - not sure what to make of the MC2_Z channel appearing to show lower noise?

Update July 13 2020 430pm: This afternoon, I hooked up the MC2 accelerometer channels too...

Attachment 1: IMG_8617.JPG
Attachment 2: IMG_8616.JPG
  15444   Wed Jul 1 08:51:52 2020   Jordan   Update   General   Presence at 40m

I will be in the Clean and Bake lab from 9am to 4pm today.

  15443   Tue Jun 30 22:00:04 2020   gautam   Update   Electronics   Glitchy POX resurfaces

This problem reared its ugly head again. I am inclined to believe the problem is electronic and not optical, since the POY channels seem immune to this issue (see Attachment #1). I will investigate in the daytime tomorrow. Note that while the POX photodiode head has ~twice the transimpedance of POY (per measurement), the POY signal gets amplified by a ZHL-500-HLN amplifier before heading to the demod electronics (nominal gain 19 dB, i.e. x9). There is also some imbalance in the light level at the photodiodes, I guess, because overall, the PDH fringe is ~twice as large for the Y arm as for the X arm. Basically, the y-axes of the attached plot cannot be directly compared between POX and POY.

Mostly this is an annoyance - right now, the POX signal is only used for locking and dither aligning the X arm cavity, and so once that is done, the locking can proceed (as long as the other channels, e.g. REFL11, aren't glitching as well...)

Attachment 1: glitchyPOX.jpg
  15442   Tue Jun 30 10:59:16 2020   gautam   Update   LSC   Three sensing matrices

Summary:

I injected some sensing lines and measured their responses in the various photodiodes, with the interferometer in a few different configurations. The results are summarized in Attachments #1 - #3. Even with the PRMI (no arm cavities) locked on 1f error signals, the MICH and PRCL signals show up in nearly the same quadrature in the REFL port photodiodes, except REFL165. I am now wondering whether the output (actuation) matrix has something to do with this - part of the MICH control signal is fed back to the PRM in order to minimize the appearance of the MICH dither in the PRCL error signal, but maybe this matrix element is somehow horribly mistuned?

Details:

Attachment #1:

  • ETMs were misaligned and the PRMI was locked with the carrier resonant in the cavity (i.e. sidebands reflected).
  • The locking scheme was AS55_Q --> MICH and REFL11_I --> PRCL.

Attachment #2:

  • The PRFPMI was locked. The vertex DoFs were still under control using 3f error signals (REFL165_I for PRCL and REFL165_Q for MICH).
  • Still, the MICH/PRCL degeneracy in all photodiodes except REFL165 persists.

Attachment #3:

  • Nearly identical configuration to Attachment #2.
  • The main difference here is that I applied some offsets to the MICH and PRCL error points.
  • The offsets were chosen so that the appearance of a ~300 Hz dither in the length of MICH/PRCL was nulled in the AS110_Q / POP22_I signals respectively.
  • For the latter, the appearance of this peak in the POP110_I signal was also nulled, as it should be if our macroscopic PRC length is set correctly.
  • The offsets that best nulled the peak were 110 cts for PRCL, 25 cts for MICH. The measured sensing response is 1e12 cts/m for PRCL in REFL165_I and 9.2e11 cts/m for MICH in REFL165_Q. So these offsets, in physical units, are: 110 pm for PRCL and 27 pm for MICH (see the quick arithmetic check after this list). They seem like reasonable numbers to me - the PRC linewidth is ~7.5 nm, so the detuning without any digital offset applied is only ~1.5% of the linewidth.
  • Note that I changed the POP22/POP110 demod phases to maximize the signal in the I quadrature. The final numbers were -124 degrees / -10 degrees respectively.
  • Yet another piece of evidence suggesting these were the correct offsets is that the DC value of POX and POY were zero on average after these offsets were applied.
  • However, the MICH/PRCL responses in the 1f REFL port photodiodes remain nearly degenerate.
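
The counts-to-meters conversion above is easy to sanity-check; a quick script using only the numbers quoted in this list:

    # Quick check of the error point offsets in physical units.
    prcl_offset_cts, prcl_response = 110, 1e12     # cts, cts/m (REFL165_I)
    mich_offset_cts, mich_response = 25, 9.2e11    # cts, cts/m (REFL165_Q)

    prcl_m = prcl_offset_cts / prcl_response       # ~1.1e-10 m = 110 pm
    mich_m = mich_offset_cts / mich_response       # ~2.7e-11 m = 27 pm

    prc_linewidth = 7.5e-9                         # m
    print("PRCL: %.0f pm = %.1f%% of linewidth" % (prcl_m * 1e12, 100 * prcl_m / prc_linewidth))
    print("MICH: %.0f pm" % (mich_m * 1e12))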

Some other mysteries that I will investigate further:

  1. While POP22 indicated stable buildup of 11 MHz power in the PRC, I couldn't make any sense of the AS110 signals at the dark port - there was large variation of the signal content in the two quadratures, so unlike the POP signals, I couldn't find a digital demod phase that consistently had all the signal in one of the two quadratures. Could this all be due to angular fluctuations?
  2. My ASC simulations suggest that the POP QPD is a poor sensor of PRM motion when the PRFPMI is locked. However, I find that turning on a feedback loop with the POP QPD as a sensor and the PRM as the actuator dramatically reduces the low-frequency fluctuations of the arm cavity carrier buildup. 🤔

I blew the long lock last night because I cleared the ASS offsets (which I should not have done) while trying to find the right settings for running the ASS system at high power. Will try again tonight...

Quote:

Lock the PRMI on carrier and measure the sensing matrix, see if the MICH and PRCL signals look sensible in 1f and 3f photodiodes.

Attachment 1: PRMI_1f_20200625sensMat.pdf
Attachment 2: PRFPMI_20200629sensMat.pdf
Attachment 3: PRFPMI_20200629sensMat_wOffset.pdf
  15441   Tue Jun 30 08:50:12 2020   Jordan   Update   General   Presence at 40m

I will be in the clean and bake lab today from 9am to 4pm.

  15440   Mon Jun 29 20:30:53 2020   Koji   Update   SUS   MC1 sat-box de-lidded

Sigh. Do we have a spare sat box?

  15439   Mon Jun 29 15:56:02 2020   gautam   Update   Electronics   RFPD characterization

A more comprehensive report has been uploaded here. I'll zip the data files and add them there too. In summary:

  1. There are several problems with the WFS heads
    • Some attenuators don't seem to work. This could be a problem with the Acromag BIO, or with the relay on the head itself.
    • The measured transimpedance at 29.5 MHz is much lower than expected. We expect ~50 kohms with no attenuation, and ~5 kohms with attenuation. I measure 100 ohm - 2 kohm with the attenuation disabled, and ~200 ohms with it enabled.
    • Quadrant #3 on both WFS heads behaves differently from the others. There is also evidence of a 200 MHz oscillation for quadrant 3.
    • For some reason, there is a relative minus sign between the TFs measured for the WFS and for the RFPDs. I don't understand where this is coming from - all the OpAmps in the LSC PDs and WFS heads are configured as non-inverting, so why should there be a minus sign? Is this indicative of the polarity of the LEMO output being somehow flipped?
  2. POX 11 photodiode does not have a notch at 22 MHz.
  3. AS55 resonance appears to have shifted closer to 60 MHz and would benefit from retuning. But the notches appear fine.
  4. PDA10CF photodiode used as the POP22/POP110 readback appears broken in some strange way. As shown in the linked document, a spare PDA10CF in the lab has a much more reasonable response, so I am going to switch out the POP22/POP110 diode with this spare.

The data, a Jupyter notebook making the plots, and the LISO fit files have been uploaded here.

I didn't do it this time but it'd be nice to also do the noise measurement and get an estimate for the shot-noise intercept current.

Quote:

While I have the data, I will fit this and post a more complete report on the wiki.

  15438   Mon Jun 29 11:55:46 2020   gautam   Update   SUS   MC1 sat-box de-lidded

There was no improvement to the situation overnight. So, I did the following today:

  1. Ramped bias voltages for SRM and MC1 to 0, shutdown watchdogs.
  2. Switched SRM and MC1 satellite boxes. The SRM satellite box lid was opened, while the MC1 lid was left open. The boxes have also been re-labelled lest there be some confusion about which box belongs where.
  3. Restored watchdogs and bias voltages. Curiously, the MC1 optic now only requires half the bias voltage it did before for the correct DC alignment. The satellite box is just supposed to be a passive conduit for the drive current, so this is indicative of some PCB traces/cabling being damaged inside what was previously the MC1 satellite box?

IMC is now locked again, I will monitor for glitching/stability.

Update 6pm PDT: as shown in Attachment #1, there is a huge difference in the stability of the lock after the sat box swap. Let's hope it stays this way for a while...

Quote:

I'll leave the MC1 box open overnight and see if that improves the situation, and if not, I'll switch in the SRM satellite box tomorrow.

Attachment 1: SatBoxSwap.jpg
  15437   Mon Jun 29 11:41:04 2020   Jordan   Update   General   Presence at 40m

I will be in the Clean and Bake lab today from 11:30am to 4pm

  15436   Sun Jun 28 17:36:35 2020   gautam   Update   SUS   MC1 sat-box de-lidded

Hmm, I can't seem to export with the colorbar - it might just be my phone though. I tried to add some "cursors" with the temperature at a few spots, but the font color contrast is poor, so you have to squint really hard to see the temperatures in the photo I attached.

I'll leave the MC1 box open overnight and see if that improves the situation, and if not, I'll switch in the SRM satellite box tomorrow.

Quote:

does the FLIR have an option to export image with a colorbar?

How about just leave the lid open? or more open? I don't know what else can be done in the near term. Maybe swap with the SRM sat box to see if that helps?

  15435   Sun Jun 28 16:29:58 2020   rana   Update   SUS   MC1 sat-box de-lidded

does the FLIR have an option to export image with a colorbar?

How about just leave the lid open? or more open? I don't know what else can be done in the near term. Maybe swap with the SRM sat box to see if that helps?

  15434   Sun Jun 28 15:30:52 2020   gautam   Update   SUS   MC1 sat-box de-lidded

Judging by the summary pages, some 18 hours after this change was made and the board re-installed, the MC1 shadow sensors began to report frequent glitches. I can't think of a plausible causal connection, especially given the 18 hour time lag, but it's also hard to believe there isn't one. As a result, the IMC is no longer able to stay locked for extended periods of time. I did the usual cable squishing, and also took off the lid to see if that helps the situation.

While the reduced series resistance means there is more current flowing through the slow path, note that:

  1. There isn't actually an increase in the net current flowing through the satellite box - this change just re-allocates the current from the fast path to the slow path, but by the time it reaches the satellite box, the current is flowing through the same conductor.
  2. afaik, the current buffers on the coil driver aren't overdriven - they are rated for 300 mA. No individual coil is drawing more than 30 mA.
  3. the resistors themselves should be running sufficiently below their rated power of 3W (I estimate (2.5 V)^2 / 100 ohms ~ 60 mW).
  4. The highest current should be through the UL and LR coils according to the voltage outputs from the Acromag. But the UL coil doesn't show significant glitching, and the LL one does despite drawing negligible DC current.

The attached FLIR camera image reinforces what we already know: the thermal environment inside the satellite box is horrible. The absolute temperature calibration may be off, but it was difficult to touch the components with a bare finger, so I'd say it's definitely > 70 C.

Quote:

I implemented this change today. We only had 100 ohm, 3W resistors in stock (no 200 ohm with adequate power rating). Assuming 10 V is dropped across this resistor, the power dissipation is V^2/R ~ 1 W, so we should have sufficient margin. DCC entry has been updated with new schematic and photo of the component side of the board. Note that the series resistance of the fast actuation path was untouched.

Attachment 1: 20200628T144138.jpg
  15433   Fri Jun 26 16:53:38 2020   gautam   Update   Electronics   RFPD characterization

Summary:

While the vacuum system was knocked out, I measured the RF transimpedance (using the AM laser setup, didn't do the shot noise intercept current measurement for now) of all the RFPDs (except PMC REFL). At the very least, the following photodiodes are suspect:

  1. WFS heads - expected transimpedance is 50 kohm unattenuated, and 5 kohm attenuated. I measure values that are x10 lower than this, and the segments are significantly imbalanced. Moreover, the attenuators for some quadrants appear to do nothing. This could be a problem with the Acromag system, I guess, but the measured transimpedance is nowhere close to the "expected" value. See Attachments #1 and #2. You can also see that the response at 55 MHz is significantly attenuated, so I'm guessing that measuring the AS port ASC sensing response is going to be difficult.

    Note that I assumed a 1kohm DC transimpedance, which is what I expect from the schematic and also is consistent with the DC voltage I measured, knowing the approximate optical power incident on the photodiode.
  2. POP 22/ POP 110 - this is a Thorlabs PDA10CF diode. It should have a flat gain profile out to ~100 MHz, but I measure some weird features. The other PDA10CF we use, at AS110, shows a more reasonable response. See Attachment #3. I don't know what kind of failure mode this is. Anyway, I'll try testing another PDA10CF, and if it looks more reasonable, I'll switch out this diode. FWIW, the measured AS110 gain is ~3 kohms, whereas the datasheet tells us to expect 5 kohms.

For the remaining photodiodes, I measure a transimpedance that is within ~20% of what is on the wiki page. The notches may benefit from some retuning. While I have the data, I will fit this and post a more complete report on the wiki.

Update July 6 1145am: WFS response plots now have legends mapping quadrants, and I've also added the response of a spare PDA10CF (which is now the new POP22/POP110 photodiode).

Attachment 1: WFS1.pdf
Attachment 2: WFS2.pdf
Attachment 3: buildupMons.pdf
  15432   Fri Jun 26 11:00:52 2020   Jordan   Update   General   Presence at 40m

I will be in the Clean and Bake lab today from 11am to 4pm.

  15431   Thu Jun 25 15:11:00 2020   gautam   Update   SUS   MC1 coil driver resistance quartered

I implemented this change today. We only had 100 ohm, 3W resistors in stock (no 200 ohm with adequate power rating). Assuming 10 V is dropped across this resistor, the power dissipation is V^2/R ~ 1 W, so we should have sufficient margin. DCC entry has been updated with new schematic and photo of the component side of the board. Note that the series resistance of the fast actuation path was untouched.

As expected, the requested voltage no longer exceeds the Acromag DAC range; it is now more like 2.5 V. However, I still notice that the MC REFL spot moves somewhat diagonally on the camera image - so maybe the coil gains are seriously imbalanced? Anyway, the WFS control signals can once again be safely offloaded to the slow bias voltages, preserving the fast ADC range for other actuation.

The Johnson current noise of the series resistor has now increased by a factor of 2, from ~6.4 pA/rtHz to ~12.8 pA/rtHz. Assuming a current-to-force coefficient of 1.6 mN/A per coil, the length noise of the cavity is expected to be 12.8e-12 * 0.064/0.25/(2*pi*100)^2 ~ 8e-18 m/rtHz at 100 Hz. In frequency units, this is 80 uHz/rtHz. I think our IMC noise is at least 10 times higher than this at 100 Hz (in any case, the noise of the coil driver is NOT dominated by the series resistance). Attachment #1 confirms that there isn't any significant MCF noise increase, and I will check with the arm cavity too. Nevertheless, we should, if possible, align the optic better and use as high a series resistance as possible.
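
For reference, a short script reproducing the numbers in the previous paragraph (the 0.25 kg mass and 0.064 N/A net actuation coefficient are simply read off the estimate above):

    # Reproduce the Johnson-noise displacement estimate above.
    import numpy as np

    kB, T = 1.38e-23, 300.0
    R = 100.0                                   # new series resistance [ohm]
    i_n = np.sqrt(4 * kB * T / R)               # ~12.8 pA/rtHz (was ~6.4 pA/rtHz before the change)

    alpha = 0.064                               # net current-to-force coefficient [N/A]
    m = 0.25                                    # effective mass [kg]
    f = 100.0                                   # Hz
    x_n = i_n * alpha / m / (2 * np.pi * f)**2  # free-mass response well above resonance
    print("i_n = %.1f pA/rtHz, x_n = %.1e m/rtHz at %d Hz" % (i_n * 1e12, x_n, f))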

The watchdog for MC1 was disabled and the board was pulled out for this work. After it was replaced, the IMC re-locks readily.

Quote:

But this does not solve the MC1 issue. All we can do right now is to halve the output resistor, for example.

Attachment 1: MCF.pdf
  15430   Thu Jun 25 11:09:01 2020   Jordan   Update   General   Presence at 40m

I will be in the Clean and Bake lab from 11am to 4pm.

  15429   Wed Jun 24 22:47:21 2020   Yehonathan   Update   Wiki   Updated phase maps webpage
I uploaded the new phase maps measurements made by GariLynn to nodus and updated the optics phase maps page.
I also added MetroPro and Matlab analysis for these phase maps.
  15428   Wed Jun 24 22:33:44 2020   gautam   Update   SUS   EQ tripped all suspensions

This earthquake tripped all suspensions and ITMX got stuck. The watchdogs were restored and the stuck optic was released. The IFO was re-aligned, POX/POY and PRMI on carrier locking all work okay.

  15427   Wed Jun 24 17:20:16 2020   gautam   Update   LSC   What should the short-term commissioning goals be?

Per the discussion at the meeting today, the plan of action is:

  1. Lock the PRMI on carrier and measure the sensing matrix, see if the MICH and PRCL signals look sensible in 1f and 3f photodiodes.
  2. Try locking CARM on POP55 (since there is currently no POP55 photodiode, can we use POX/POY as an intermediary?).
  3. For the ASC, can we hijack one of the IMC WFS heads to study what the AS port WFS signals would look like, and maybe close a feedback loop on the ETMs?
    • My guess is no, because currently, the L2A is so poorly tuned on MC2 that the CARM length control messes with the alignment of the IMC significantly.
    • So we need the IMC WFS loops to maintain the pointing.
    • Of course, the MC2 L2A can be tuned to mitigate this problem. 
    • I also believe there is something funky going on with the WFS heads. More to follow on that in a later elog.
    • Apart from these issues, for this scheme to be tested, some mods to the c1ioo model will have to be made so that we can route the servo output to the ETMs (as opposed to the IMC mirrors as is the usual case).

If I missed something, please add here.

Quote:

Summary:

I want some input about what the short-term (next two weeks) commissioning goals should be.

  15426   Wed Jun 24 10:14:56 2020   Jordan   Update   General   Presence at 40m

I will be in the Clean and Bake lab today from 10am to 4pm.

  15425   Tue Jun 23 17:54:56 2020   rana   Configuration   VAC   Vac maintenance complete

I propose we go for all CAPS for all channel names. The lower-case names are just a holdover from Steve/Alan from the 90s. All other systems are all CAPS.

It avoids us having to force them all to UPPER in the scripts and channel lists.

  15424   Mon Jun 22 20:06:06 2020   Jon   Configuration   VAC   Vac maintenance complete

This work is finally complete. The dry pump replacement was finished quickly but the controls updates required some substantial debugging.

For one, the mailer code I had been given to install would not run against Python 3.4 on c1vac, the version the vac controls have been running for about a year. There were some missing dependencies that proved difficult to install (related to Debian Jessie becoming unsupported). I ultimately solved the problem by migrating the whole system to Python 3.5. Getting the Python keyring working within systemd (for email account authentication) also took some time.
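
For reference, a minimal sketch of the keyring-based authentication pattern (the service/account names and SMTP host are placeholders, not the actual mailer configuration):

    # Sketch: pull the mailer password from the system keyring instead of
    # hard-coding it. Service, account, and host names are placeholders.
    import smtplib
    import keyring

    ACCOUNT = "vacuum@example.org"
    password = keyring.get_password("40m-vac-mailer", ACCOUNT)

    with smtplib.SMTP("smtp.example.org", 587) as server:
        server.starttls()
        server.login(ACCOUNT, password)
        server.sendmail(ACCOUNT, ["40m-list@example.org"],
                        "Subject: Vac interlock tripped\n\nCheck the vac screens.")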

Edit: The new interlock flag channel is named C1:Vac-interlock_flag.

Along the way, I discovered why the interlocks had been failing to auto-close the PSL shutter: The interlock was pointed to the channel C1:AUX-PSL_ShutterRqst. During the recent c1psl upgrade, we renamed this channel C1:PSL-PSL_ShutterRqst. This has been fixed.

The main volume is being pumped down, for now still in a TP3-backed configuration. As of 8:30 pm the pressure had fallen back to the upper 1E-6 range. The interlock protection is fully restored. Any time an interlock is triggered in the future, the system will send an immediate notification to 40m mailing list. 👍

Quote:

The vac system is going down at 11 am today for planned maintenance:

  • Re-install the repaired TP2 and TP3 dry pumps [ELOG 15417]
  • Incorporate an auto-mailer and flag channel into the controls code for signaling tripped interlocks [ELOG 15413]
Attachment 1: Pumpdown-6-22-20.png
  15423   Mon Jun 22 17:51:50 2020   gautam   Update   CDS   c1iscaux was down

The machine needed a hard reboot as it was un-ssh-able. 

The exact time that the machine went down is unknown because the blinkys were not DQ-ed. I've now added these to the EDCU to make these channels actually useful, and we may look back on the reliability (or otherwise) of the Acromag system. To my memory, this is the ~5th time one of the new Acromag servers has needed a hard reboot. While this may be less frequent (?) than the VME machines, perhaps there is some other reason for these dropouts. Maybe something to do with the martian network?

Anyway the machine is back up and running now.

  15422   Mon Jun 22 13:16:38 2020   Jordan   Update   General   Presence at 40m

I will be at the 40m today from 11am to 4pm.

  15421   Mon Jun 22 10:43:25 2020   Jon   Configuration   VAC   Vac maintenance at 11 am

The vac system is going down at 11 am today for planned maintenance:

  • Re-install the repaired TP2 and TP3 dry pumps [ELOG 15417]
  • Incorporate an auto-mailer and flag channel into the controls code for signaling tripped interlocks [ELOG 15413]

We will advise when the work is completed.

  15420   Fri Jun 19 19:21:25 2020   gautam   Update   General   PSL shutter re-opened

The PSL shutter was closed from the vacuum interlock trip. Today, I did the following:

  • Re-aligned input beam to PMC to recover high transmission / low reflection.
  • Re-set the LSC offsets.
  • ETMX watchdog was tripped. Reset it.
  • Opened the PSL shutter, IMC autolocker was able to lock the cavity almost immediately.
  • Tested POX/POY locking, ran the ASS to maximize single arm transmission.

All looks good for now. I will probably get back to PRFPMI locking Monday.

  15419   Fri Jun 19 17:06:50 2020   gautam   Update   LSC   What should the short-term commissioning goals be?

Summary:

I want some input about what the short-term (next two weeks) commissioning goals should be.

Details:

Before the vacuum fracas, the locking was pretty robust. With some human servoing of the input beam, I could maintain locks for ~1 hour. My primary goals were:

  1. Transition the vertex length DoF control from 3f signals to 1f signals.
  2. Turn on some MICH-->DARM feedforward cancellation, because the noise between ~100 Hz and ~1kHz is dominated by this cross-coupling.

I didn't succeed in either so far.

  1. I find that there is poor separation of the length DoFs in the 1f sensors, which makes this transition hopeless.
    • Why should this be? I can't get any sensing matrix in Finesse to line up with what I measure in-lock.
    • One hypothesis I came up with (but haven't yet tested) is that the offsets from the 3f photodiodes are changing from time to time, which somehow changes the projection of the various DoFs onto the photodiode quadratures. 
    • The attached GIF shows the variation in the measured sensing matrix on two days - while the sensing of MICH/PRCL in the 3f photodiodes have hardly changed, they are significantly different in the 1f photodiodes. Note that the I and Q have changed for REFL11 and REFL55 between the two days because I changed the demod phase.
    • I also thought that maybe the CARM suppression isn't sufficient for REFL11 to be used as a PRCL sensor - but even after engaging a CM board SuperBoost, I was unable to realize the PRCL 3f-->1f transition, even though the CARM-->REFL11 coupling did get smaller in the measured sensing matrix (red line in the GIF). I don't think we can juice up the CARM gain much more without modifying the CM board boosts, see Attachment #1.
  2. I was able to measure the MICH CTRL --> DARM ERR transfer function with somewhat high coherence (~0.98).
    • I then used the infrastructure available in the LSC model to try and implement some cancellation, but didn't really see any effect.
    • Perhaps the TF needs to be measured with higher coherence.
    • It may also be that if I am able to successfully execute the 3f-->1f transition, the coupling gets smaller because the 1f sensing noise is lower?

I guess apart from this, we want to run the ALS scan to try and infer something about the absorption-induced thermal lens. I guess at this point, the costs outweigh the benefits in trying to bring in the SRC as well, since we will be changing the SRC config?

Attachment 1: CARM_superBoost.pdf
  15418   Fri Jun 19 16:30:09 2020   gautam   Update   ASC   Some thoughts about ASC

Summary:

In ELOG 15368, I had claimed that the POP QPD based feedback servo actuating on the PRM stabilized the lock. I now believe this scheme of sensing using the POP QPD and feeding back to the PRM is not a good topology for stabilizing the PRC angular motion.

Details:

  • I was never able to get a measurement of the OLTF of this loop that made sense 
    • the loop was initially commissioned with the PRMI locked on the carrier, and the settings hence inferred to give a ~5 Hz UGF loop were used in the PRFPMI lock.
    • In the PRFPMI configuration, however, the loop gain seemed way too low when I measured using the usual IN1/IN2 method.
    • So it is critical for the lock stability that the angular feedforward works well, which it kind of does now (not that I have changed anything, but the glitches in the seismometer have not resurfaced recently).
    • Hopefully, this becomes less of an issue once we replace the TTs with SOS and OSEM based damping.
  • To get some more insight, I did some Finesse modeling (a minimal pykat sketch appears after this list)
    • Attachment #1 shows the sensing response at the QPDs we have available currently (POP and TR). 
    • I included the telescopes (propagation distances, in-air lenses) to these QPDs as best as I could.
    • A simplified model (3 mirror coupled cavity) is used, so there isn't really a common/differential mode in this picture, but we still get some insight I think.
    • Specifically, once the full lock is realized, the PRC optic motion isn't sensed well with our QPDs, and so it was some fluke that turning on these PRC angular feedback loops worked. 
    • Attachment #2 shows the same info as Attachment #1, but with the pendulum transfer functions (and radiation pressure effects) included. The SOS suspensions are modelled as f0=0.7/0.8 Hz (for P/Y), Q=5, while the tip-tilts have f0~5 Hz, Q~10. The high frequency phase is 0 degrees and not 180 as expected because of the pendulum complex pole pair because of the way the quantity is computed in Finesse.
  • The current scheme I use is:
    • DC couple the ITM oplevs, using their individual Oplev QPDs.
    • Use the TR QPDs, mixed to actuate on the ETMs in a common/differential way.
    • I think the system is under-determined with the sensors we currently have - we want to sense the 10 angular modes - PIT and YAW for the PRC, Csoft, Chard, Dsoft and Dhard (using the terminology from Kate's thesis), but we only have 6 sensors of the same field (POP, TRX and TRY QPDs, PIT and YAW from each).
    • So we need more sensors?
  • One thing that can easily be improved I think is to make the ASS system work at high power. 
    • I think this should be as simple as scaling the loop gains to work at high power.
    • Then we can counteract the input pointing drift at least.
    • But the ITM Oplev DC coupling would need to be turned OFF and then ON again, I'm not sure if this will introduce some transient that will destroy the lock...

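For anyone wanting to reproduce this kind of study, a minimal pykat sketch of a 3-mirror coupled-cavity model is below. All mirror and length parameters are placeholders, not the values used for the attachments (which also included the QPD telescopes and, for Attachment #2, the suspension transfer functions).

    # Minimal pykat sketch of a 3-mirror coupled cavity. All parameters
    # are placeholders, not the 40m values used for the attachments.
    from pykat import finesse

    kat = finesse.kat()
    kat.parseCommands("""
    l laser 1 0 n0
    s s0 1 n0 n1
    m prm 0.9 0.1 0 n1 n2       % recycling mirror (placeholder R/T)
    s lprc 6.7 n2 n3            % recycling cavity length (placeholder)
    m itm 0.986 0.014 0 n3 n4
    s larm 37.8 n4 n5           % arm length (placeholder)
    m etm 0.99999 1e-5 0 n5 n6
    pd pcirc n4                 % arm circulating power
    xaxis prm phi lin -90 90 400
    """)
    out = kat.run()
    print(out["pcirc"].max())
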
I would also like to bring up the topic of implementing some WFS for the interferometer fields again, there doesn't seem to be any mention of this in the procurement/planning for the BHD. It is not obvious to me yet that we need WFS and not just DC QPDs from a noise point of view, but at least we should discuss this.

Attachment 1: sensingResponse.pdf
Attachment 2: sensingResponse_torque.pdf
  15417   Fri Jun 19 14:03:50 2020   Jordan   Update   VAC   Forepump Tip Seal Replacement

Tip Seals were replaced on the forepumps for TP2 and TP3, and both are ready to be installed back onto the forelines.

TP2 Forepump Ultimate Pressure: 180 mtorr

TP3 Forepump Ultimate Pressure: 120 mtorr

  15416   Fri Jun 19 11:02:10 2020   Chub   Update   General   custom feedthrough flanges are here!

The four 4x25DSUB and single 8x25DSUB feedthrough flanges have arrived and will be picked up from the dock and brought to the 40M lab.

  15415   Fri Jun 19 09:57:35 2020   gautam   Update   VAC   Questions/comments on vacuum

For this particular email service, ideally the email should be sent out as soon as the interlock is tripped, so this would require a line of code to be added to the main interlock code, which I guess would require a restart of the interlock service. So let me know when you guys plan to do the dry-pump tip seal replacement operation (when I presume valves will be closed anyway) so that we can do this in a minimally invasive way.

Quote:

Ok, this can be added pretty easily. Its value will just be toggled between 1 and 0 every time the interlock server raises/clears the existing string channel. Adding the channel will require restarting the whole vac IOC, so I'll do it at a time when Jordan is on hand in case something fails to come back up.

  15414   Fri Jun 19 08:47:10 2020   Jordan   Update   General   Presence at 40m

I will be at the 40m today from 9am to 3pm.

  15413   Fri Jun 19 07:40:49 2020   Jon   Update   VAC   Questions/comments on vacuum

I think we should discuss interlock possibilities at a 40m meeting. I'm reluctant to make the system more complicated, but perhaps we can find ways to reduce the reliance on the turbo pump readbacks. I agree they've proven to be the least reliable.

While we may be able to improve the tolerance to certain kinds of hardware malfunctions (and if so, we should), I don't see interlocks triggering on abnormal behavior of critical equipment as the root problem. As I see it, our bigger problem is with all the malfunctioning, mostly end-of-lifetime pieces of vacuum equipment still in use. If we can address the hardware problems, as I'm trying to do with replacements [ELOG 15412], I think that in itself will make the interlocking much less of an issue.

Quote:

So why not just have a special mode for the interlock code during pumpdown and venting, and during normal operation we expect the main volume pressure to be <100uTorr so the interlock trips if this condition is violated? These can just be EPICS buttons on the Vac control MEDM screen. Both of these procedures are not "business as usual", and even if we script them in the future, it's likely to have some operator supervising, so I don't think it's unreasonable to have to switch between these modes. I just think the pressure gauges have demonstrated themselves to be much more reliable than these TP serial readbacks (as you say, they worked once upon a time, but that is already evidence of its flakiness?). The Pirani gauges are not ultra-reliable, they have failed in the past, but at least less frequently than this serial comm glitching. In fact, if these readbacks are so flaky, it's not impossible that they don't signal a TP shutdown? I just think the real power of having these multi-channel diagnostics is lost without some AND logic - a turbopump failure is likely to result in an increase in pump current and temperature increase and pump speed decrease, so it's not the individual channel values that should be determining if an interlock is tripped.

Ok, this can be added pretty easily. Its value will just be toggled between 1 and 0 every time the interlock server raises/clears the existing string channel. Adding the channel will require restarting the whole vac IOC, so I'll do it at a time when Jordan is on hand in case something fails to come back up.

Quote:

It would be better to have a flag channel, might be useful for the summary pages too. I will make it if it is too much trouble.

  15412   Thu Jun 18 22:33:57 2020   Jon   Omnistructure   VAC   Vac hardware purchase list

Replacement Hardware Purchase List

I've created a purchase list of hardware needed to restore the aging vacuum system. This wasn't planned as part of the BHD upgrade, but I've added it to the BHD procurement list since hardware replacements have become necessary.

The list proposes replacing the aging TP3 Varian turbo pump with the newer Agilent model which has already replaced TP2. It seems I was mistaken in believing we already had a second Agilent pump on hand. A thorough search of the lab has not turned it up, and Steve himself has told me he doesn't remember ordering a second one. Fortunately Steve did leave us a detailed Agilent parts list [ELOG 14322].

It also proposes replacing the glitching TP2 Agilent controller with a new one. The existing one can be sent back for repair and then retained as a spare. Considering that one of these controllers is already malfunctioning after < 2 years, I think it's a very good idea to have a spare on hand.

Known Hardware Issues

Below is our current list of vacuum hardware issues. Items that this purchase list will address (limited to only the most urgent) are highlighted in yellow.

  • Replace the UPS
    • Need a 240V socket for TP1 (currently TP1 is not protected from power loss)
    • Need RS232/485 comms with the interlock server (current UPS: serial readbacks have failed, battery is failing)
  • Remove/replace the failed pressure gauges (~5)
  • Add more cold cathode sensors to the main volume for sensor redundancy (currently the main-volume interlocks rely on only 1 working sensor)
  • Replace TP3 (controller is failing)
  • Replace TP2 controller (serial interface has failed)
  • Remove RP2
    • Dead and also not needed. We already have to throttle the pumpdown rate with only two roughing pumps
  • Remove/refurbish the cryopump
    • Contamination risk to have it sitting connectable to the main volume
  15411   Thu Jun 18 16:56:34 2020   Jordan   Update   VAC   TP2 and TP3 Forepump removal
Quote:

I removed the backing pumps for TP2 and TP3 today to test ultimate pressure and determine if they need a tip seal replacement. This was done with Jon backing me on Zoom. We closed off TP3 and powered down TP3 and the auxiliary pump, in order to remove the forepumps from the exhaust line.

  1. Close V1
  2. Close V5
  3. Turn off TP3
  4. Turn off aux dry pump (manually)
  5. Once the PTP3 foreline pressure has come up to atmosphere, you can disconnect the TP3 dry pump and cap the exhaust line with a KF blank.
  6. Restore the vac configuration in reverse order: dry pump ON, TP3 ON, open V5, open V1

Once pumps were removed I connected a Pirani gauge to the pump directly and pumped down, results as follows:

TP2 Forepump (Agilent IDP 7):

  • Ultimate Pressure: 123 mtorr
  • Hours: 10903

TP3 Forepump (Varian SH 110):

  • Ultimate pressure: ~70 torr
  • Hours: 60300

TP3's forepump definitely needs a new tip seal. While the ultimate pressure of the TP2 forepump was good, a significant amount of particulate came out of the exhaust line, so a new tip seal might not be strictly needed, but it is recommended.

I agree with your assessment, Jordan. If I'm not mistaken, the scroll pump for TP2 is new; we had a very early failure of the tip seals on the last new scroll pump (the forepump for TP3) at just over 5000 hours. Glad to see my replacement seals held up for over 60k hours. If this is the trend with these pumps, we can simply run them to around 60,000 hours and replace the seals at that time, rather than waiting for failure! - Chub

  15410   Thu Jun 18 15:46:34 2020   gautam   Update   VAC   Questions/comments on vacuum

So why not just have a special mode for the interlock code during pumpdown and venting, and during normal operation we expect the main volume pressure to be <100uTorr so the interlock trips if this condition is violated? These can just be EPICS buttons on the Vac control MEDM screen. Both of these procedures are not "business as usual", and even if we script them in the future, it's likely to have some operator supervising, so I don't think it's unreasonable to have to switch between these modes. I just think the pressure gauges have demonstrated themselves to be much more reliable than these TP serial readbacks (as you say, they worked once upon a time, but that is already evidence of its flakiness?). The Pirani gauges are not ultra-reliable, they have failed in the past, but at least less frequently than this serial comm glitching. In fact, if these readbacks are so flaky, it's not impossible that they don't signal a TP shutdown? I just think the real power of having these multi-channel diagnostics is lost without some AND logic - a turbopump failure is likely to result in an increase in pump current and temperature increase and pump speed decrease, so it's not the individual channel values that should be determining if an interlock is tripped.
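
To make the AND-logic idea concrete, a sketch of the kind of predicate meant here (the thresholds and nominal values are illustrative, not the real interlock settings):

    # Sketch: trip the turbo-pump interlock only if multiple physically
    # correlated symptoms agree, so a single glitched serial readback
    # cannot shut the system down. All thresholds are illustrative.
    def tp_failure(current_A, temp_C, speed_krpm):
        overcurrent = current_A > 1.0
        overtemp = temp_C > 55.0
        slowdown = speed_krpm < 0.9 * 33.6   # placeholder nominal speed
        return sum([overcurrent, overtemp, slowdown]) >= 2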

I definitely think that protecting the vacuum envelope is a priority - but I don't think it should be at the expense of commissioning time. But if you think these extra interlocks are essential to the safety of the vacuum system, I withdraw my request.

Quote:

I don't disagree that the pressure gauges would register the change. What I'm not sure about is whether the change would violate any of the existing interlock conditions, triggering a shutdown. Looking at what we have now, the only non-pump-related conditions I see that might catch it are the diffpres conditions:

It would be better to have a flag channel, might be useful for the summary pages too. I will make it if it is too much trouble.

Quote:

There's already a channel C1:Vac-error_status, where if the value is anything other than an empty string, there is an interlock tripped. Does that work?
  15409   Thu Jun 18 15:25:08 2020   Jordan   Update   VAC   TP2 and TP3 Forepump removal

I removed the backing pumps for TP2 and TP3 today to test ultimate pressure and determine if they need a tip seal replacement. This was done with Jon backing me on Zoom. We closed off TP3 and powered down TP3 and the auxiliary pump, in order to remove the forepumps from the exhaust line.

  1. Close V1
  2. Close V5
  3. Turn off TP3
  4. Turn off aux dry pump (manually)
  5. Once the PTP3 foreline pressure has come up to atmosphere, you can disconnect the TP3 dry pump and cap the exhaust line with a KF blank.
  6. Restore the vac configuration in reverse order: dry pump ON, TP3 ON, open V5, open V1

Once pumps were removed I connected a Pirani gauge to the pump directly and pumped down, results as follows:

TP2 Forepump (Agilent IDP 7):

  • Ultimate Pressure: 123 mtorr
  • Hours: 10903

TP3 Forepump (Varian SH 110):

  • Ultimate pressure: ~70 torr
  • Hours: 60300

TP3's forepump definitely needs a new tip seal. While the ultimate pressure of the TP2 forepump was good, a significant amount of particulate came out of the exhaust line, so a new tip seal might not be strictly needed, but it is recommended.

  15408   Thu Jun 18 14:13:03 2020   Jon   Update   VAC   Questions/comments on vacuum
Quote:

I agree there were MEDM fields, but I can't find any record of these channels being recorded till 2018 December, so I don't agree that they were being digitally monitored. You can also look back in the elog (e.g. here and here) and see that the display fields are just blank. I would then assume that no interlocks were dependent on these channels, because otherwise the vacuum interlocks would be perpetually tripped.

Right, I doubt they were ever recorded or used for interlocks. But the readbacks did work at one point in the past. There's a photo of the old vac monitor screen on p. 19 of E1500239 (last updated 2017) which shows the fields once alive.

Quote:

Sorry, but I'm having trouble imagining a scenario in which the pressure gauges wouldn't register this before the IFO volume is compromised. Is there some back-of-the-envelope calculation I can do to understand this? Since both the pressure gauges and the TP diagnostic channels are being monitored via EPICS, the refresh rate is similar, so I don't see how we can have a pump temperature / speed / current threshold tripped but NOT have this be registered on all the pressure gauges; it seems like a bit of a contrived scenario to me. Our thresholds currently seem to be arbitrary numbers anyway, or are they based on some expected backstreaming rate? Isn't this scenario degenerate with a leak elsewhere in the vacuum envelope that would be caught by the differential pressure interlocks?

I don't disagree that the pressure gauges would register the change. What I'm not sure about is whether the change would violate any of the existing interlock conditions, triggering a shutdown. Looking at what we have now, the only non-pump-related conditions I see that might catch it are the diffpres conditions:

  • abs(P2 - PTP2) > 1 torr (for a TP2 failure)

  • abs(P3 - PTP3) > 1 torr (for a TP3 failure)

  • abs(P1a - P2) > 1 torr (for either a TP2 or TP3 failure)

For the P1a-P2 differential, the threshold of 1 torr is the smallest value that in practice still allows us to pump down the IFO without having to disable the interlocks (P1a-P2 is the TP1 intake/exhaust differential). The purpose of the P2-PTP2/P3-PTP3 differentials is to prevent V4/5 from opening and suddenly exposing the spinning turbo to high pressure. I'm not aware of a real damage threshold calculation that anyone has done; I think < 1 torr is lore passed down by Steve.

If a turbo pump fails, the rate at which it would backstream is unknown (to me, at least) and likely depends on the failure mode. The scenario I'm concerned about is if the backstreaming is slower than the conduction through the pumpspool and into the main volume. In that case, the pressure gauges will rise more or less together all the way up to atmosphere, likely never crossing the 1 torr differential pressure thresholds.

Quote:

For the email alert, can you expose a soft channel that is a flag - if this flag is not 1, then the service will send out an email.

There's already a channel C1:Vac-error_status, where if the value is anything other than an empty string, there is an interlock tripped. Does that work?
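
A sketch of how the mailer could watch that channel with pyepics (the notification body is elided; only the callback wiring is shown):

    # Sketch: watch C1:Vac-error_status and fire the mailer whenever it
    # becomes a non-empty string (i.e. an interlock has tripped).
    import time
    import epics

    def notify(message):
        pass  # hand off to the email-sending code

    def on_change(pvname=None, char_value=None, **kw):
        if char_value:  # non-empty string => interlock tripped
            notify("Vac interlock tripped: " + char_value)

    pv = epics.PV("C1:Vac-error_status", callback=on_change)
    while True:
        time.sleep(1)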

  15407   Thu Jun 18 12:00:36 2020   gautam   Update   VAC   Questions/comments on vacuum

I agree there were MEDM fields, but I can't find any record of these channels being recorded till 2018 December, so I don't agree that they were being digitally monitored. You can also look back in the elog (e.g. here and here) and see that the display fields are just blank. I would then assume that no interlocks were dependent on these channels, because otherwise the vacuum interlocks would be perpetually tripped.

Quote:

Looking at images of the old vac screens, the TP2/3 rotation speed and status string were digitally monitored. However I don't know if there were software interlocks predicated on those.

Sorry, but I'm having trouble imagining a scenario in which the pressure gauges wouldn't register this before the IFO volume is compromised. Is there some back-of-the-envelope calculation I can do to understand this? Since both the pressure gauges and the TP diagnostic channels are being monitored via EPICS, the refresh rate is similar, so I don't see how we can have a pump temperature / speed / current threshold tripped but NOT have this be registered on all the pressure gauges; it seems like a bit of a contrived scenario to me. Our thresholds currently seem to be arbitrary numbers anyway, or are they based on some expected backstreaming rate? Isn't this scenario degenerate with a leak elsewhere in the vacuum envelope that would be caught by the differential pressure interlocks?

Quote:

The temperature and current interlocks are implemented precisely because the pumps can shut themselves off. The concern is not about damaging the pumps (their internal logic protects against that); it's that a pump could automatically shut down and back-vent the IFO to atmosphere. Another interlock (e.g., the pressure differentials) might catch it, but it would depend on the back-vent rate and the scenario has never been tested. The temperature and current interlocks are set to trip just before the pump reaches its internal shut-down threshold.

For the email alert, can you expose a soft channel that is a flag - if this flag is not 1, then the service will send out an email.

Quote:

That would be awesome if you're willing to volunteer. I agree this would be great to have.
  15406   Thu Jun 18 11:00:24 2020   Jon   Update   VAC   Questions/comments on vacuum
Quote:
  • Isn’t it true that we didn’t digitally monitor any of the TP diagnostic channels before 2018 December? I don’t have the full history but certainly there wasn’t any failure of the vacuum system connected to pump current/temp/speed from Sep 2015-Dec2018, whereas we have had 2 interruptions in 6 months because of flaky serial communications.

Looking at images of the old vac screens, the TP2/3 rotation speed and status string were digitally monitored. However I don't know if there were software interlocks predicated on those.

Quote:
  • According to the manuals, the turbo-pumps have their own internal logic to shut off the pump when either bearing temperature exceeds 60C or current exceeds 1.5A. I agree its good to have some redundancy, but do we really expect that our outer interlock loops will function if the internal ones fail?

The temperature and current interlocks are implemented precisely because the pumps can shut themselves off. The concern is not about damaging the pumps (their internal logic protects against that); it's that a pump could automatically shut down and back-vent the IFO to atmosphere. Another interlock (e.g., the pressure differentials) might catch it, but it would depend on the back-vent rate and the scenario has never been tested. The temperature and current interlocks are set to trip just before the pump reaches its internal shut-down threshold.

One way we might be able to reduce our reliance on the flaky serial readbacks is to implement rotation-speed hardware interlocks. The old vac documentation alludes to these, but as far as Chub and I could determine in 2018, they never actually existed. The older turbo controllers, at least, had an analog output proportional to speed which could be used to control a relay to interrupt the V4/5 control signals. I'll look into this for the new controllers. If it could be done, we could likely eliminate the layer of serial-readback interlocks altogether.

 
Quote:

  • I also think we should finally implement the email alert in the event the vacuum interlock is tripped. I can implement this if no one else volunteers.

That would be awesome if you're willing to volunteer. I agree this would be great to have.

  15405   Thu Jun 18 09:46:03 2020   Jordan   Update   General   Presence at 40m

I will be at the 40m today from 9:30am to 4pm.

  15404   Wed Jun 17 16:27:51 2020   gautam   Update   VAC   Questions/comments on vacuum

I missed the vacuum discussion on the call today, but I have some questions/comments:

  • Isn’t it true that we didn’t digitally monitor any of the TP diagnostic channels before 2018 December? I don’t have the full history but certainly there wasn’t any failure of the vacuum system connected to pump current/temp/speed from Sep 2015-Dec2018, whereas we have had 2 interruptions in 6 months because of flaky serial communications.
  • According to the manuals, the turbo-pumps have their own internal logic to shut off the pump when either bearing temperature exceeds 60C or current exceeds 1.5A. I agree its good to have some redundancy, but do we really expect that our outer interlock loops will function if the internal ones fail?
  • In what scenario do we expect that all our pressure gauge readbacks fail, but not the TP readbacks? If so, won’t the differential pressure conditions protect the vacuum envelope, while the TPs’ internal shutoffs protect the pumps? Except during the pumpdown phase perhaps, when we want to give a little more headroom to the small TPs to stress them less?

At the very least, I think we should consider making the interlock code have levels (like interrupts on a microcontroller). So if the pressure gauges are communicating and are reporting acceptable pressure readings, we should be able to reject unphysical readbacks from the TP controllers.
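
A sketch of the readback-rejection half of that idea (the plausible-range limits are made up for illustration):

    # Sketch: treat readbacks outside a physically plausible window as
    # comms glitches - log and ignore them rather than letting them trip
    # an interlock. Limits here are made up for illustration.
    PLAUSIBLE = {"temp_C": (0.0, 120.0), "current_A": (0.0, 5.0)}

    def filtered(name, value, last_good):
        lo, hi = PLAUSIBLE[name]
        if lo <= value <= hi:
            return value
        print("ignoring unphysical %s readback: %r" % (name, value))
        return last_good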

I still don’t understand why TP2 can’t back TP1, with all the software interlock conditions contingent on TP2 readbacks simply disabled. This pump is far newer than TP3, and unless I’ve misunderstood something major about the vacuum infrastructure, I don’t really see why we should trust these flaky serial readbacks for any actionable interlocks, at least without some AND logic (since temperature, current and speed aren’t really independent variables).

I also think we should finally implement the email alert in the event the vacuum interlock is tripped. I can implement this if no one else volunteers.

This might also be a good reminder to get the documentation in order about the new vacuum system.

  15403   Tue Jun 16 16:05:26 2020   Jordan   Update   General   N2 Replacement

I replaced an empty N2 cylinder, there are now two empty tanks in the outside rack.

  15402   Tue Jun 16 13:35:03 2020   Jon   Update   VAC   Temporary vac fix / IFO usable again

[Jon, Jordan, Koji]

Today Jordan reconfigured the vac system to allow pumping of the main volume to resume, with Jon and Koji remotely advising. All clear to resume normal IFO activities. However, the vac system is operating in a temporary configuration that will have to be reverted as we locate replacement components. Details below.

Procedure

Since serial readback of the TP2 controller seems to be failing, we reconfigured the system with TP3 now backing for TP1. TP2 was valved off (at V4) and shut down until we can replace its controller.

TP3 has its own problems, however. It was valved off in January after its temperature readback began glitching and spuriously triggering the interlocks [ELOG 15140]. However, the problem appears to be limited to that one readback (rotation speed, current, and voltage are fine), and there is enough redundancy in the pump-dependent interlock conditions to safely connect it to the main volume.

We also discovered that sometime since January, the TP3 dry pump has failed. The foreline pressure had risen to 165 torr. Since the TP2 and TP3 dry pumps are not interchangeable (Agilent vs. Varian), we instead valved in the auxiliary dry pump and disconnected the failed dry pump using a KF blank. This is a temporary arrangement until the permanent dry pump can be repaired. Jordan removed it to replace the tip seals and will test it in the bake lab before reinstalling.

With this configuration in place, we proceeded to pump down the main volume without issue (attachment 1). We monitored the pumpdown for about 45 min., until the pressure had reached ~1E-5 torr and TP3 had been transitioned to standby (low-speed) mode.

Summary of topology changes:

  • TP2 valved off and shut down until controller can be replaced
  • TP3 temporarily backing for TP1
  • Auxiliary dry pump temporarily backing for TP3
  • TP3 dry pump has been removed for repairs
Attachment 1: Pumpdown.png
  15401   Tue Jun 16 13:05:36 2020   Koji   Update   COC   ITM spares and New PR3 mirrors transported to Downs for phasemap measurement

ITMU01 / ITMU02 as well as the five E1800089 mirrors came back to the 40m. In exchange, the two ETM spares (ETMU06 / ETMU08) were delivered to GariLynn.
Jordan handled the transportation.
Jordan worked on transportation.

Note that the E1800089 mirrors are together with the ITM container in the precious optics cabinet.

Attachment 1: 40m_Optics.jpg
  15400   Tue Jun 16 08:58:11 2020   Jordan   Update   General   Presence at 40m

I will be at the 40m today at 10am to deliver optics to Downs and to replace the TP2 controller.

  15399   Fri Jun 12 19:33:31 2020   gautam   Update   VAC   Pumpspool UPS needs battery replacement

Didn't mean to sound whiny. I will wait until the vacuum team tells me it is okay.

Quote:

The vacuum safety policy and design are not clear to me, and I don't know what the first and second lines of defense are. Since we had limited time and bandwidth during the remotely-supported recovery work today, we wanted to work step by step.

The pressure rise rate is 20 mtorr/day, and turning on TP3 early next week will resume the main-volume pumping without too much hassle. If you need the IFO time now, contact Jon and use TP3 for backing.
