40m Log, Page 115 of 339
ID   Date   Author   Type   Category   Subject
  14978   Fri Oct 18 18:13:55 2019   Koji   Update   safety   Laser interlock looks OK

I've checked the state of the laser interlock switch and everything looked normal.

  15003   Wed Oct 30 23:12:27 2019   Koji   Update   SUS   PRM suspension issues

Sigh... hard luck

  Draft   Wed Nov 6 20:34:08 2019   Koji   Update   IOO   EOM resonant box installed

 

Quote:

[Mirko / Kiwamu]

 The resonant box has been installed together with a 3 dB attenuator.

The demodulation phase of the MC lock was readjusted and the MC is now happily locked.

 

(Background)

We needed more modulation depth at each modulation frequency, so we installed the resonant box to amplify the signal levels.

Since the resonant box isn't well impedance-matched, it creates some RF reflection (#5339).

To reduce some of this RF reflection, we decided to put a 3 dB attenuator between the generation box and the resonant box.
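As a back-of-envelope check of why the attenuator helps (a sketch: the 3 dB value is from this entry, the rest is generic RF bookkeeping):

```python
# A matched attenuator between the source and a mismatched load is
# traversed twice by the reflected wave (once forward, once back), so the
# reflection returning to the generation box is suppressed by twice the
# attenuation, while the drive signal only pays the attenuation once.
att_db = 3.0
forward_loss_db = att_db                # the modulation drive loses 3 dB
reflection_suppression_db = 2 * att_db  # reflections drop by 6 dB
```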

 

(what we did)

 + attached the resonant box directly to the EOM input with a short SMA connector.

 + put stacked black plates underneath the resonant box to support the weight of the box and to relieve the strain on the cable between the EOM and the box.

 + put a 3 dB attenuator just after the RF power combiner to reduce RF reflections.

 + readjusted the demodulation phase of the MC lock.

 

(Adjustment of MC demodulation phase)

 The demodulation phase was readjusted by adding more cable length in the local oscillator line.

After some iterations an additional cable length of about 30 cm was inserted to maximize the Q-phase signal.

So for the MC lock we are using the Q signal, which is the same as it had been before.
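For scale, the phase shift from the added cable can be estimated as follows (a sketch: the ~29.5 MHz modulation frequency and the 0.66 velocity factor are assumed typical values, not measurements from this entry):

```python
# Electrical phase added by extra coax length at the LO frequency.
# Assumptions (illustrative): f_lo ~ 29.5 MHz, coax velocity factor ~ 0.66.
c = 299792458.0          # speed of light [m/s]
f_lo = 29.5e6            # assumed LO frequency [Hz]
vf = 0.66                # assumed velocity factor of the coax
length = 0.30            # added cable length [m]
phase_deg = 360.0 * f_lo * length / (vf * c)  # ~16 degrees
```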

 

 Before the installation of the resonant box, the amplitude of the MC PDH signal was measured at the demodulation board's monitor pins.

The amplitude was about 500 mV peak-to-peak (see the attached pictures of the I-Q projection on an oscilloscope). After the installation the amplitude decreased to 400 mV peak-to-peak.

Therefore the amplitude of the PDH signal decreased by 20%, which is not as bad as I expected, since a previous measurement indicated a 40% reduction (#2586).

 

 

  15019   Wed Nov 6 20:34:28 2019   Koji   Update   IOO   Power combiner loss (EOM resonant box installed)

Gautam and I were talking about modulation and demodulation, and wondered what the power-combining situation is for the triple resonant EOM installed 8 years ago. We noticed that the current setup has an additional ~5 dB loss associated with the 3-to-1 power combiner. (Figure a)

N-to-1 broadband power combiners have an intrinsic loss of 10 log10(N). You can think of the reciprocal process, power splitting (Figure b): a 2 W input to a 2-port power splitter gives us two 1 W outputs. The opposite process is power combining, as shown in Figure c. In this case the two identical signals are constructively added in the combiner, but the output is not 20 Vpk but 14 Vpk. Considering the linearity, when one of the ports is terminated, the output is going to be half. So we expect 27 dBm output for a 30 dBm input (Figure d). This fact is frequently overlooked, particularly when one combines signals at multiple frequencies (Figure e). We can avoid this kind of loss by using a frequency-dependent power combiner such as a diplexer or a triplexer.
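A minimal numerical sketch of the loss bookkeeping above:

```python
import math

# Intrinsic loss of an N-to-1 broadband power combiner: even identical,
# coherent inputs lose 10*log10(N) relative to lossless voltage addition.
def combiner_loss_db(n_ports):
    return 10 * math.log10(n_ports)

two_way = combiner_loss_db(2)    # ~3 dB: 30 dBm in -> ~27 dBm out (Figure d)
three_way = combiner_loss_db(3)  # ~4.8 dB: the "~5 dB" of the 3-to-1 combiner

# Two coherent 10 Vpk inputs combine to sqrt(2)*10 ~ 14 Vpk, not 20 Vpk:
v_out_pk = math.sqrt(2) * 10.0
```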

Attachment 1: power_combiner.pdf
  15043   Thu Nov 21 13:14:33 2019   Koji   Update   LSC   CM board study

One of the differences between the direct POY and the CM_SLOW POY is the presence of the CM Servo gain stages. So this might mean that you need to move some of the whitening gain to the CM IN1 gain.

  15063   Tue Dec 3 00:10:15 2019   Koji   Update   ALS   EY uPDH post mixer LPF

I got confused. Why don't we see that too-high-Q pole in the OLTF? 

  15196   Fri Feb 7 02:41:28 2020   Koji   Update   General   office area temperature

Not sure what's wrong, but the workstation desk is freezing cold again and the room temp is 18degC (64degF).

  15200   Fri Feb 7 19:39:10 2020   Koji   Update   LSC   More high BW POY experiments

This measurement tells you how the gain balance between the SLOW_CM and AO paths should be. Basically, what you need is to adjust the overall gain before the branch of the paths.

Except for the presence of the additional pole-zero in the optical gain because of the power recycling.

You have compensated for this with a filter (z = 120 Hz, p = 5 kHz) in the CM path. However, the AO path still doesn't know about it. Does this change the behavior of the crossover?
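For reference, the response of such a zero/pole pair can be sketched as below (a generic lead filter with z = 120 Hz and p = 5 kHz; the sign conventions and overall gain of the actual CM board are not reproduced here):

```python
# s-domain lead filter H(f) = (1 + j f/f_z) / (1 + j f/f_p):
# unity gain well below the zero, and a gain of f_p/f_z (~42, i.e. ~32 dB)
# well above the pole.
def lead_filter(f, f_zero=120.0, f_pole=5e3):
    return (1 + 1j * f / f_zero) / (1 + 1j * f / f_pole)

dc_gain = abs(lead_filter(0.0))   # 1.0
hf_gain = abs(lead_filter(1e6))   # approaches 5000/120 ~ 41.7
```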

If the servo is not unconditionally stable when the AO gain is set low, can we just turn on the AO path at the nominal gain? This would cause some glitch, but if the servo is stable, you have a chance to recover the CARM control before everything explodes, maybe?

  15219   Fri Feb 21 13:02:53 2020   Koji   Update   ALS   PDH error signals?

Check out this elog: ELOG 4354

If this summing box is still used as is, it is probably giving the demod phase adjustment.

  15252   Wed Mar 4 21:02:49 2020   Koji   Update   Electronics   More cabling removed

We are going to replace the old Sun c1ioo with a modernized supermicro. At the opportunity, remove the DAC and BIO cards to use them with the new machines. BTW I also have ~4 32ch BIO cards in my office.

  15267   Wed Mar 11 21:03:57 2020   Koji   Update   BHD   SOS packages from Syracuse

I opened the packages sent from Syracuse.

- The components are not vacuum clean. We need C&B.
- Some large parts are there, but many parts are missing to build complete SOSs.

- No OSEMs.
- Left and right panels for 6 towers
- 3 base blocks
- 1 suspension block
- 8 OSEM plates. (1 SOS needs 2 plates)

- The parts look like old versions. The side panels need insert pins to hold the OSEMs in place. We need to check what needs to be inserted there.

- An unrelated tower was also included.

Attachment 1: P_20200311_203449_vHDR_On.jpg
  15280   Wed Mar 18 22:10:41 2020   Koji   Update   VAC   Main vol pressure jump

I was in the lab at the time. But did not notice anything (like turbo sound etc). I was around ETMX/Y (1X9, 1Y4) rack and SUS rack (1X4/5), but did not go into the Vac region.

  15293   Thu Apr 2 22:19:18 2020   Koji   Update   CDS   C1AUXEY wiring + channel list

We want to migrate the end shutter controls from c1aux to the end acromags. Could you include them in the list if they aren't there yet?

This will let us remove c1aux from the rack, I believe.

 

  15301   Mon Apr 13 15:28:07 2020   Koji   Update   General   Power Event and recovery

[Larry (on site), Koji & Gautam (remote)]

Network recovery (Larry/KA)

  • Asked Larry to get into the lab. 

  • 14:30 Larry went to the lab office area. He restarted (power cycled) the edge-switch (on the rack next to the printer). This recovered the ssh-access to nodus. 

  • Also Larry turned on the CAD WS. Koji confirmed the remote access to the CAD WS.

Nodus recovery (KA)

  • Apr 12, 22:43 nodus was restarted.

  • Apache (dokuwiki, svn, etc.) was recovered with the systemctl command listed on the wiki

  • ELOG recovered by running the script

Control Machines / RT FE / Acromag server Status

  • Judging by uptime, basically only the machines that are on UPS (all control room workstations + chiara) survived the power outage. All RT FEs are down. Apart from c1susaux, the acromag servers are back up (but the modbus processes have NOT been restarted yet). Vacuum machine is not visible on the network (could just be a networking issue and the local subnet to valves/pumps is connected, but no way to tell remotely).

  • KA imagines that FB took some finite time to come up, while the RT machines required FB to be up in order to download their OS. That left the RTs down. If so, all we need to do is power cycle them.

  • Acromag: unknown state

The power was lost at Apr 12 22:39:42, according to the vacuum pressure log. The outage lasted a few minutes.

  15303   Tue Apr 14 23:50:06 2020   Koji   Update   General   40m power glitch recovery

[Koji / Gautam (Remote)]

Lab status

  • Gray Panel: The lab AC was off. Turned on all three (N/S, CTRL RM, E/W)
  • The control room AC was running.

Work stations

  • Control Room: All the control machines were running. We knew that nodus/chiara/fb were running
  • 1X6/7:
    • JETSTOR was making a beeping sound: “Power #1 failed” / “Power #2 failed”
    • Optimus & megatron were off -> turned on -> up and running now
  • 1X1/2:
    • Power cycled the netgear at the top of the IOO rack (maybe not necessary)
    • Turned on c1ioo -> up and running now
  • 1X4/5: Rebooted c1sus / c1lsc -> up and running now
  • 1X9: Rebooted c1iscex -> up and running now
  • 1Y4: Rebooted c1iscey -> up and running now

Vacuum status

  • Looked like everything was running as if it did not see the power glitch
  • TP1 normal: Set speed 33.6k rpm / Actual speed 33.6k rpm 
  • TP2 normal: 66k rpm / PTP2 16.0 mtorr
  • TP3 normal: 31k rpm / PTP3 45.4mtorr
  • P1 LOW / P2 1.7mtorr / CC2 1.1e-6 / P3 7.6e-2 / P4 LO
  • Annuli: 2.7~3torr
  • CC1 9.6e-6 / SUPER BEE 0.9mtorr

C1VAC recovery

  • c1vac was alive, but was isolated from the martian network
  • Checked the network I/F status with /sbin/ifconfig -a
    • eth0 had no IP
    • eth1 had the vac subnet IP (192.168.114.9)
  • Ran sudo /sbin/ifdown eth0 then  sudo /sbin/ifup eth0
  • The I/F eth0 started running and c1vac became visible from martian
  • Later checked the vacuum screen: The pressure values and valve statuses looked normal.
    The interlock state was “running”. The system state was “unrecognized”.

End RTS recovery 

  • The end slow machines (auxex and auxey) were already running
  • Restarting end RT models:
    • c1iscey -> rtcds start --all
    • c1iscex -> rtcds start --all
  • Confirmed that the models can damp the SUSs

Vertex RTS recovery

  • We wanted to use the reboot script. (/opt/rtcds/caltech/c1/scripts/cds/rebootC1LSC.sh)
  • c1susaux​​
    • To be safe, we wanted to bring up c1susaux first.
    • c1susaux does not make the network I/Fs up automatically upon reboot.
      -> Connect an LCD display / keyboard / mouse to c1susaux
      -> Ran sudo /sbin/ifup eth0 and sudo /sbin/ifup eth1
    • Now c1susaux is visible from martian.
    • Logged in to c1susaux and ran:  
      sudo systemctl start modbusIOC.service 
      -> c1susaux epics is up and running now
    • ...Meanwhile c1susaux lost its eth1 somehow. This made the slow values of the 8 vertex SUSs all zero
      -> Ran sudo /sbin/ifdown eth1 and sudo /sbin/ifup eth1 again on c1susaux ->  this resolved the issue
  • c1psl
    • Logged in to c1psl and ran:  
      sudo systemctl start modbusIOC.service 
      -> c1psl epics is up and running now
  • Prepared for the rebooting script
    • Ran /opt/rtcds/caltech/c1/scripts/cds/rebootC1LSC.sh
    • Rebooting was done successfully. All the suspensions looked free and healthy.
    • Burtrestored c1susaux (used Apr 12 21:19 snapshot)

Hardware

  • PSL laser / Xend AUX laser / Yend AUX laser were off -> turned on
  • The PMC was immediately automatically locked.
  • The main Marconi was off -> forgot to turn it on
  • The end temp controllers for the SHG crystals were on but not enabled -> now enabled

RTS recovery ~ part 2

  • FB: FB status of all the RTS models were still red
  • Timing: c1x01/2/3/5 were 1 sec behind FB and c1x04 was 2 sec behind
  • -> Remedy:  https://nodus.ligo.caltech.edu:8081/40m/14349
    • Software rebooting of FB
    • Manually start the open-mx and mx services using
    • sudo systemctl start open-mx.service 
    • sudo systemctl start mx.service
    • Check that the system time returned by gpstime matches the gpstime reported by internet sources. e.g. http://leapsecond.com/java/gpsclock.htm
    • Manually start the daqd processes using
      sudo systemctl start daqd_*
  • This made all the FB(FE) indicators green!
  • Ran the reboot script again -> All green!

IMC recovery

  • The IMC status was checked
  • No autolocker, but it could be manually locked, i.e. MC1/2/3 were not much misaligned
  • Autolocker/Slow FSS recovery along with https://nodus.ligo.caltech.edu:8081/40m/15121
    • sudo systemctl start MCautolocker.service
    • sudo systemctl start FSSSlow.service
  • Both of them failed to run
  • Note by Gautam: The problem with the systemctl commands failing was that the NFS mount points weren’t mounted. Which in turn was because of the familiar /etc/resolv.conf problem. I added chiara to the namespace in this file, and then manually mounted the NFS mount points. This fixed the problem.
    Now the IMC is locked and the autolocker is left running.

Burt restore

  • Used Apr 12 21:19 snapshot
  • c1psl
  • c1alsepics/c1assepics/c1asxepics/c1asyepics
  • c1aux/c1auxex/c1auxey/
  • c1iscaux/c1susaux
  • This brought the REFL and AS beams back to the CCDs. AS has small fringes.
  • Y arm has small IR flashes as well as green flashes.

JETSTOR recovery

  • JETSTOR was beeping. 
  • Shutdown megatron
  • Followed the instruction https://nodus.ligo.caltech.edu:8081/40m/13107
  • This stopped the beeping. Waited for JETSTOR to come up -> within a minute, the JETSTOR display became normal and all disks showed green.
  • Bring megatron back up again

N2 bottle

  • The left N2 bottle was empty. The right one had 1500PSI.
  • Replaced the left bottle with the spare one in the room.
  • Now the left one 2680PSI and the right one 1400PSI.

Closing

  • Closed PSL/AUX laser shutters
  • Turned off the lights in the lab, CTRL room, and the office.

Remaining Issues

  • [done] MCAutoLocker / FSSSlow scripts are not running
  • The PRM alignment slider has no effect (although the PRM is aligned…) -> SLOW DAQ frozen???
  • JETSTOR is not mounted on megatron [gautam mounted Jetstor on megatron on 4/18 at 2pm]
  15317   Sat May 2 02:35:18 2020   Koji   Update   ALS   ASY M2 PZT damaged

Yes, we are supposed to have a few spare PI PZTs.

  15327   Tue May 12 20:16:31 2020   Koji   Update   LSC   Relative importance of losses in the arm and PRC

Is \eta_A the roundtrip loss for an arm?

Thinking about the PRG=10 you saw:
- What's the current PR2/3 AR? 100ppm? 300ppm? The beam double-passes them. So (AR loss)x4 is added.
- Average arm loss is ~150ppm?

Does this explain PRG=10?
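The dependence of the PRG on the total loss can be sketched with a toy model (hypothetical numbers: T_prm = 0.056 is an assumed PRM transmission for illustration only, and L lumps together all per-round-trip carrier losses in the PRC, e.g. the PR2/PR3 AR double passes and the effective arm reflection deficit):

```python
import math

def prg(L, T_prm=0.056):
    """Toy power recycling gain vs. lumped round-trip loss L."""
    r_prm = math.sqrt(1.0 - T_prm)
    r_rest = math.sqrt(1.0 - L)  # effective amplitude reflectivity of the rest
    return T_prm / (1.0 - r_prm * r_rest) ** 2

lossless = prg(0.0)   # ~69: the loss-free upper bound for this assumed T_prm
lossy = prg(0.02)     # a 2% lumped loss already roughly halves the PRG
```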

 

  15340   Wed May 20 19:34:58 2020   Koji   Update   General   ITM spares and New PR3 mirrors transported to Downs for phasemap measurement

Two ITM spares (ITMU01/ITMU02) and five new PR3 mirrors (E1800089 Rev 7-1~Rev7-5) were transported to Downs for phasemap measurement

Attachment 1: container.jpg
  15358   Wed May 27 17:41:57 2020   Koji   Update   LSC   Power buildup diagnostics

This is very interesting. Do you have the ASDC vs PRG (~ TRXor TRY) plot? That gives you insight on what is the cause of the low recycling gain.

  15359   Wed May 27 19:36:33 2020   Koji   Update   LSC   Arm transmission RIN

My speculation for the worse RIN is:

- Unoptimized alignment -> Larger linear coupling of the RIN with the misalignment
- PRC TT misalignment (~3Hz)

Can you check the correlation between the POP QPD and the arm RIN?

  15360   Wed May 27 20:14:51 2020   Koji   Update   LSC   Lock acquisition sequence

I see. At the 40m, we have the direct transition from ALS to RF. But it's hard to compare them as the storage time is very different.

  15369   Wed Jun 3 03:29:26 2020   Koji   Update   LSC   Lock acquisition update portal

Woo hoo!

Which 1f signals are you going to use? PRCL has sign flipping at the carrier critical coupling. So if the IFO is close to that condition, 1f PRCL suffers from the sign flipping or large gain variation.

  15374   Thu Jun 4 00:21:28 2020   Koji   Summary   COC   ITM spares and New PR3 mirrors transported to Downs for phasemap measurement

GariLynn worked on the measurement of the E1800089 mirrors.

The result of the data analysis, as well as the data and the codes, have been summarized here:
https://nodus.ligo.caltech.edu:30889/40m_phasemap/#E1800089
 

  15377   Thu Jun 4 21:32:00 2020   Koji   Update   SUS   MC1 Slow Bias issues

We can limit the EPICS values by giving some parameters to the channels. cf https://epics.anl.gov/tech-talk/2012/msg00147.php
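For example, an EPICS analog-output record can be clamped with drive-limit fields (a sketch: the channel name below is hypothetical; DRVH/DRVL are the standard ao-record drive limits):

```
# Hypothetical channel name, for illustration only
record(ao, "C1:SUS-MC1_PIT_BIAS") {
    field(DRVH, "10")     # upper drive limit
    field(DRVL, "-10")    # lower drive limit
}
```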

But this does not solve the MC1 issue. The only thing we can do right now is, for example, to halve the output resistor.

  15381   Mon Jun 8 12:49:07 2020   Koji   Update   BHD   Astigmatism and scattering plots

Can you describe the mode matching  in terms of the total MM? Is MM_total = sqrt(MM_vert * MM_horiz)?

  15394   Fri Jun 12 01:23:32 2020   Koji   Update   VAC   Pumpspool UPS needs battery replacement

1. I agree that it's likely that it was the temp signal glitch.
Recom #2: I approve reopening the valves to pump down the main volume. As long as there are no frequent glitches, we can just bring the vacuum back to normal with the current software setup.

2. Recom #1 is also reasonable. You can use simple logic: e.g., activate the interlock only after 10 consecutive samples exceed the threshold. I feel we should still keep the temp interlock; switching between pumping mode and normal operation may lead to the interlock being unexpectedly omitted when it is needed.
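The consecutive-sample logic could look something like this (a minimal sketch; the threshold, sample source, and channel handling are placeholders, not the real vac code):

```python
def make_debounced_trigger(threshold, n_consecutive=10):
    """Return a feed(sample) function that reports True only after
    n_consecutive samples in a row exceed the threshold; any single
    sample at or below the threshold resets the count."""
    state = {"count": 0}

    def feed(sample):
        if sample > threshold:
            state["count"] += 1
        else:
            state["count"] = 0
        return state["count"] >= n_consecutive

    return feed
```

With this, a single glitchy sample no longer trips the interlock, while a sustained excursion still does.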

3. We should purchase the UPS battery / replacement rotary TIP seal. Once they are in hand, we can stop the vacuum and execute the replacement. Can one person (who?) accomplish everything with some remote help?

4. The lab temp: you mean, 12degC swing with the AC on!?

 

  15396   Fri Jun 12 17:32:40 2020   Koji   Update   VAC   Pumpspool UPS needs battery replacement

Jon and Koji remotely supported Jordan's resetting the TP2 controller.

Here is the instruction by Jon
From the operator's console in front of the vac rack:
  1. Open a terminal window (click the LXTerminal icon on the desktop)
  2. Type "control" + enter to open the vac controls screen
  3. Toggle all the open valves closed (edit by KA: and manually close RV2 by rotating the gate valve handle )
  4. Turn OFF TP2 by clicking the "Off' button. Make sure the status changes and the rotation speed falls to zero (you'll also hear the pump spinning down) 
  5. The other pumps (TP1, TP3) can be left running
  6. Once TP2 has stopped spinning, go to the back of the rack and locate the ethernet cable running from the back of the TP2 controller to the IOLAN server (near the top of the rack). Disconnect and reconnect the cable at each end, verifying it is firmly locked in place.
  7. From the front of the rack, power down the TP2 controller (I don't quite remember for the Agilent, but you might have to move the slider on the front from "Remote" to "Local" first)
  8. Wait about 30 seconds, then power it back on. If you had to move the slider to shut it down, revert it back to the "Remote" position.
  9. Go back to the controls screen on the console. If the pump came back up and is communicating serially again, its status will say something other than "NO COMM"
  10. Turn TP2 back on. Verify that it spins up to its nominal speed (66 kRPM)
  11. At this point you can reopen any valves you initially closed (any that were already closed before, leave closed)

TP2 was stopped, and at that moment the glitches were gone. Jordan power-cycled the TP2 controller and we brought TP2 back up to full speed.
However, the glitches came back as before. Obviously we can't go on from here, so we decided to stop the recovery process for today.


- We left TP1/2/3 running while the valves including RV2 were closed.

- When Jordan is back in the lab next week, we'll try to use TP3 as the backing of TP1 so that we can resume the main volume pumping.

- Currently, TP3 does not have interlocking and that is a risk. Jon is going to implement it.

- Meanwhile, we will try to replace the controller of TP2. We are supposed to have this in the lab. Ask Chub about the location.

- Once we confirm the stability of the diagnostic signals for TP2, we will come back to the nominal pumping scheme.

Attachment 1: Screen_Shot_2020-06-12_at_17.22.23.png
  15398   Fri Jun 12 19:23:56 2020   Koji   Update   VAC   Pumpspool UPS needs battery replacement

The vacuum safety policy and design are not clear to me, and I don't know what the first and second defense is. Since we had limited time and bandwidth during the remotely-supported recovery work today, we wanted to work step by step.

The pressure rise rate is 20 mtorr/day, and turning on TP3 early next week will resume the main-volume pumping without too much hassle. If you need the IFO time now, contact Jon and use TP3 for backing.

  15401   Tue Jun 16 13:05:36 2020   Koji   Update   COC   ITM spares and New PR3 mirrors transported to Downs for phasemap measurement

ITMU01 / ITMU02 as well as the five E1800089 mirrors came back to the 40m. Instead, the two ETM spares (ETMU06 / ETMU08) were delivered to GariLynn.
Jordan worked on transportation.

Note that the E1800089 mirrors are together with the ITM container in the precious optics cabinet.

Attachment 1: 40m_Optics.jpg
  15440   Mon Jun 29 20:30:53 2020   Koji   Update   SUS   MC1 sat-box de-lidded

Sigh. Do we have a spare sat box?

  15470   Sat Jul 11 18:24:30 2020   Koji   Update   LSC   MC2 coils need DC balancing?

> Can't we offload this DC signal to the laser crystal temperature servo?
No. The PSL already follows the MC length, so this offset comes from the difference between the MC length and the CARM length.
What you can do is to offload the MC length to the CARM DC if this helps.

  15477   Tue Jul 14 01:55:03 2020   Koji   Update   LSC   Locking with POX for CARM

The usual technique is to keep the IFO locked with the old set of signals and measure the relative gain/TF between the conventional and new signals in-lock, so that you can calibrate the new gain/demod-phase setting.

  15486   Wed Jul 15 19:51:51 2020   Koji   Update   General   Emergency light on in control room

It happened before too. Doesn't it say it does occasional self-testing or something?

  15518   Wed Aug 12 20:14:06 2020   Koji   Update   CDS   Timing distribution slot availability

I believe we will use two new chassis at most. We'll replace c1ioo from Sun to Supermicro, but we recycle the existing timing system.

  15519   Wed Aug 12 20:15:42 2020   Koji   Update   Electronics   Number of the beast

Grrr. Let's repair the unit. Let's get help from Chub & Jordan.

Do you have a second unit in the lab to get by with for a while?

  15520   Wed Aug 12 20:16:52 2020   Koji   Update   Electronics   Photodiode inventory

When I tested Q3000 for aLIGO, the failure rate was pretty high. Let's get 10pcs.

  15522   Thu Aug 13 13:35:13 2020   Koji   Update   CDS   Timing distribution slot availability

The new Dolphin will eventually help us. But the installation is an invasive change to the existing system and should be done at the installation stage of the 40m BHD.

  15551   Tue Sep 1 01:49:49 2020   Koji   Update   Electronics   Teledyne AP1053 etc were transported

Teledyne AP1053 etc were transported from Rich's office to the 40m. The box is placed on the shelf at the entrance.

My records say there are 7 AP1053s in the box. I did not check the number this time.

Attachment 1: 20200831203756_IMG_9931.jpg
Attachment 2: 20200831203826_IMG_9932.jpg
Attachment 3: 20200831205126_IMG_9934.jpg
  15559   Sat Sep 5 14:28:03 2020   Koji   Update   General   LO beam: Fiber coupling work

2PM: Arrived at the 40m. Started the work for the coupling of the RF modulated LO beam into a fiber. -> I left the lab at 10:30 PM.

The fiber coupling setup for the phase-modulated beam was made right next to the PSL injection path. (See attachment 1)

  • For the alignment of the beam, the main PSL path, including the alignment of the 2" PO mirror, has not been touched.
  • There are two PO beams with optical powers of 0.8 mW (left) and 1.6 mW (right). Both had been blocked, but the right one was designed to be used for PSL POS and ANG. For the fiber coupling, the right beam was used.
  • The alignment/mode-matching work has been done with a short (2m?) fiber patch cable from Thorlabs. The fiber is the same as the one used for LO delivery.
  • I tried to have a mode-matching telescope in the LO path, but ended up with no lens giving the best result. The resulting transmitted power is 1.21 mW out of 1.64 mW incident (~74%). These powers were measured with the Ophir power meter. (Note that Thorlabs' fiber power meter indicated 1.0 mW transmission.)

Some notes

  • After the PSL activity, the IMC locking was checked to see if I messed up the PSL alignment. It locks fine and looks fine.
    • The input shutter (left closed after Jon's vacuum work?) was opened.
    • The alignment was not optimal and had some pitch misalignment (e.g. TEM03).
    • After some MC SUS alignment, the automatic locking of TEM00 was recovered. Mainly MC3 pitch was moved (+0.17).
    • I've consulted with Gautam and he thinks this is within the level of regular drift. The AS beam was visible.
  • The IMC and MI were moving a lot, but this seemed to be just the usual Saturday night Millikan shake.
  • During the activity, the PSL HEPA was turned up to 100 and it was reverted to 33 after the work.
  • I have been wearing a mask and gloves throughout the work there.
Attachment 1: 20200905212254_IMG_9938.JPG
  15563   Tue Sep 8 01:31:43 2020   Koji   Update   BHD   A first look at RF44 scheme

- Loose fiber coupler: Sorry about that. I did not notice anything loose there, although some of the locks were not tightened.

- S incident instead of P: Sorry about that too. I completely missed that the IMC takes S-pol.

  15568   Thu Sep 10 15:56:08 2020   Koji   Update   General   HEPA & Particle Level Status

15:30
- PSL HEPA was running at 33% and is now at 100%
- South End HEPA was not on and is now running
- Yarm Portable HEPA was not running and is now running at max speed: the power is taken from beneath the ITMY table. It is better to unplug it when one uses the IFO.
- Yend Portable HEPA was not running and is now running (presumably) at max speed

Particle Levels: (Not sure about the unit. The convention here is to multiply the reading by 10)

Before running the HEPAs at their maximum
9/10/2020 15:30 / 0.3um 292180 / 0.5um 14420

(cf 9/5/2020 / 0.3um 94990 / 0.5um 6210)
==>
After running the HEPAs at their maximum
The numbers gradually went down and are now constant at about half of the initial values
9/10/2020 19:30 / 0.3um 124400 / 0.5um 7410

  15580   Sat Sep 19 01:49:52 2020   Koji   Update   General   M4.5 EQ in LA

M4.5 EQ in LA 2020-09-19 06:38:46 (UTC) / -1d 23:38:46 (PDT) https://earthquake.usgs.gov/earthquakes/eventpage/ci38695658/executive

I only checked the watchdogs. All watchdogs were tripped. ITMX and ETMY seemed stuck (or have the OSEM magnet issue). They were left tripped. The watchdogs for the other SUSs were reloaded.

  15582   Sat Sep 19 18:07:35 2020   Koji   Update   VAC   TP3 RP failure

I came to the campus and Gautam notified me that he had just received the alert from the vac watchdog.

I checked the vac status at c1vac. PTP3 went up to 10 torr-ish and this made the diff pressure for TP3 over 1torr. Then the watchdog kicked in.

To check the TP3 functionality, AUX RP was turned on and the manual valve (MV in the figure) was opened to pump the foreline of TP3. This easily made PTP3 <0.2 torr and TP3 happy (I didn't try to open V5 though).

So the conclusion is that the RP for TP3 has failed. Presumably, the tip-seal needs to be replaced.

Right now TP3 was turned off and is ready for the tip-seal replacement. V5 was closed since the watchdog tripped.

Attachment 1: vac.png
Attachment 2: Screen_Shot_2020-09-19_at_17.52.40.png
  15584   Sat Sep 19 18:46:48 2020   Koji   Update   General   Hand soap

I supplied a bottle of hand soap. Don't put water in the bottle to dilute it, as that makes the soap vulnerable to contamination.

  15585   Sat Sep 19 19:14:59 2020   Koji   Update   General   ITMX released / ETMY UR magnet knocked off

There were two SUSs which didn't look normal.

- ITMX was easily released with the bias slider -> shook the pitch slider and, while all the OSEM values were moving, turned on the damping control (with a x10 larger watchdog threshold)

- ETMY has 0 V output on the UR OSEM. This means there is no light. And this didn't change at all when the sliders were moved.
- Went to the Y table and tried to look at the coils. It seems the UR magnet is detached from the optic and stuck in the OSEM.

We need a vent to fix the suspension, but until then what we can do is redistribute the POS/PIT/YAW actuations to the remaining three coils.
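One way to think about the redistribution (a sketch only: the quadrant sign conventions below are the generic UL/LL/LR face-coil geometry, not the actual 40m output matrix):

```python
import numpy as np

# Rows = remaining coils (UL, LL, LR); columns = that coil's contribution
# to (POS, PIT, YAW). Assumed conventions: PIT is top +/bottom -, YAW is
# left +/right -, and POS drives all coils with the same sign.
M = np.array([[1.0,  1.0,  1.0],   # UL
              [1.0, -1.0,  1.0],   # LL
              [1.0, -1.0, -1.0]])  # LR

# Solve M.T @ g = e_dof so that the three coils together produce a unit,
# pure actuation for each DOF. Column k of `gains` drives DOF k.
gains = np.linalg.solve(M.T, np.eye(3))
```

With only three coils and three DOFs the solution is unique, so there is no freedom left to also balance the coil strengths.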

Attachment 1: IMG_6218.jpeg
  15595   Tue Sep 22 16:29:30 2020   Koji   Update   General   Control Room AC setting for continuous running

I came to the lab. The control room AC was off -> Now it is on.

Here is the setting of the AC meant for continuous running

Attachment 1: P_20200922_161125.jpg
  15597   Tue Sep 22 23:16:54 2020   Koji   Update   General   HEPA Inspection

Gautam reported that the PSL HEPA stopped running (ELOG 15592). So I came in today and started troubleshooting.

It looks like the AC power reaches the motors; however, both motors do not run. The problem seems to be in the capacitors, the motors, or both.
Parts specs can be found in the next ELOG.


Attachment 1 is the connection diagram of the HEPA. The AC power is distributed by the breaker panel. The PSL HEPA is assigned to the M22 breaker (Attachment 2). I checked the breaker switch and it was (and is) ON. The power goes to the junction box above the enclosure (Attachment 3). A couple of wires go to the HEPA switch (right above the enclosure light switch) and the output goes to the variac. The inside of the junction box looks like this (Attachment 4).

By the way, the wires were just twisted together and screwed into metal-threaded (but insulated) caps (Attachment 5). Is this legit? Shouldn't we use stronger crimping? Anyway, for now there was nothing wrong with the caps' connections.

I could easily trace the power up to the variac. The variac output was just fine (Attachment 6). The cord going from the variac to the junction box (and then to the HEPAs) looked scorched. The connection from the plug to the HEPAs was still OK, but this should eventually be replaced. For safety, the cable was unplugged after the following tests.

The junction box for each HEPA unit was opened to check the voltage. The supply voltage reaching the junction boxes was just fine. In Attachments 8 & 9 the voltages look low, but this is only because I had turned the variac up just a little.

At the (main) junction box, the resistances of the HEPAs were checked with the Fluke. As the HEPA units are connected to the AC in parallel, the resistances were checked individually as follows.

South HEPA SW   North HEPA SW   Resistance
OFF             OFF             High
OFF             LO              5 Ohm
OFF             HIGH            7 Ohm
LO              OFF             7 Ohm
HIGH            OFF             5 Ohm

The coils were not disconnected (... I wonder if the wiring of South HEPA was flipped? But this is not the main issue right now.)
 

By removing the pre-filters, the motors were inspected (Attachments 10 & 11). At least the north HEPA motor was warm, indicating there had been some current flowing before. One capacitor is connected per motor. When the variac was turned up a bit, one side of the capacitor saw the voltage. I could not judge whether the issue is in the capacitor or the motor.

Attachment 1: 0_PSL_HEPA.pdf
Attachment 2: 1_Breaker_Panel.JPG
Attachment 3: 2_Junction_Box.JPG
Attachment 4: 3_Junction_Box_Inside.JPG
Attachment 5: 4_Junction_Box_Inside_2.JPG
Attachment 6: 5_Variac_100%.JPG
Attachment 7: 6_VariAC_to_HEPA.JPG
Attachment 8: 7_North_HEPA_IN.JPG
Attachment 9: 8_South_HEPA_IN.JPG
Attachment 10: 9_North_Prefilter_Removed.JPG
Attachment 11: 10_South_Prefilter_Removed.JPG
  15598   Tue Sep 22 23:17:51 2020   Koji   Update   General   HEPA Inspection

Dimensions / Specs

- HEPA unit dimensions
- HEPA unit manufacturer
- Motor
- Capacitor

Attachment 1: A_HEPA_Dimention.JPG
Attachment 2: B_HEPA_Company.JPG
Attachment 3: C_North_HEPA_Spec.JPG
Attachment 4: D_South_HEPA_Spec.JPG
Attachment 5: E_Motor_Spec.JPG
Attachment 6: F_Cap_Spec.JPG
  15600   Wed Sep 23 10:06:52 2020   Koji   Update   VAC   TP2 running HOT

Here is the timeline. This suggests TP2 backing RP failure.

1st line: TP2 foreline pressure went up. Accordingly TP2 P, current, voltage, and temp went up. TP2 rotation went down.

2nd line: TP2 temp triggered the interlock. The TP2 foreline pressure was still high (10 torr), so TP2 struggled and was running at 1 torr.

3rd line: Gautam's operation. TP2 was isolated and stopped.

Between the 1st and 2nd lines, the TP2 pressure (= TP1 foreline pressure) went up to 1 torr. This made the TP1 current increase from 0.55 A to 0.68 A (not shown in the plot), but the TP1 rotation was not affected.

Attachment 1: Screen_Shot_2020-09-23_at_10.00.43.png
  15612   Mon Oct 5 00:53:16 2020   Koji   Update   BHD   Single bounce interferometer locked

🤘🤘🤘

 
