40m Log, Page 147 of 344
ID   Date   Author   Type   Category   Subject
  15222   Mon Feb 24 08:36:32 2020   Chub   Update   General   HVAC repair

The HVAC people replaced a valve and repaired the pneumatic plumbing on the roof air handler.  Temperature has been stable during the day since Thursday.  If anyone is in the control room during the evening, please make a note of the temperature.

Chub

  15270   Thu Mar 12 11:10:49 2020   Yehonathan   Update   General   PMC got unlocked

Came in this morning to find that the PMC had been unlocked since 6AM. The laser is still on, but PMC REFL PD DC shows a dead white constant 0V on the PMC screen. In fact, all the controls on the PMC screen show a constant 0V, except for PMC_ERR_OUTPUT, which is a fast channel.

 

Is PSL Acromag already failing?

 

I restarted the IOC but it didn't help.

I am now rebooting c1psl... That seemed to help. The PMC screen seems to be working again. I am able to lock the PMC now.

IMC was locking easily once some switches on the MC servo screen were put to normal states.

TTs were grossly misaligned. Once they were aligned, the arm cavities locked easily. Dither alignment for the X arm is very slow though...

  15271   Thu Mar 12 12:44:34 2020   gautam   Update   General   PMC got unlocked

Of course the reboot wiped any logs we could have used for clues as to what happened. Next time it'll be good to preserve this info. I suspect the local subnet went down.

P.S. For some reason the system logs are privileged now - I ran sudo sysctl kernel.dmesg_restrict=0 on c1psl to make them readable by any user. This change won't persist on reboot.
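
If we ever want this setting to survive a reboot, a drop-in sysctl file would do it. A minimal sketch, assuming the standard /etc/sysctl.d mechanism (the file name is arbitrary):

    # hypothetical drop-in file; any *.conf name under /etc/sysctl.d/ works
    echo 'kernel.dmesg_restrict = 0' | sudo tee /etc/sysctl.d/10-dmesg-unrestrict.conf
    sudo sysctl --system     # re-apply all sysctl configuration now, without a reboot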

Quote:

I restarted the IOC but it didn't help.

I am now rebooting c1psl... That seemed to help. PMC screen seem to be working again. I am able to lock the PMC now.

  15286   Mon Mar 30 19:02:49 2020   rana   Update   General   donated cleanroom supplies to Hospitals

Yesterday evening I took nearly all of the masks, gloves, gowns, alcohol wipes, hats, and shoe covers. These were the ones in the cleanroom cabinets at the east end of the Y-arm, as well as the many boxes under the Y-arm near those cabinets.

This photo album shows the stuff, plus some other random photos I took around the same time (6-7 PM) of the state of parts of the lab.

  15301   Mon Apr 13 15:28:07 2020   Koji   Update   General   Power Event and recovery

[Larry (on site), Koji & Gautam (remote)]

Network recovery (Larry/KA)

  • Asked Larry to get into the lab. 

  • 14:30 Larry went to the lab office area. He restarted (power cycled) the edge-switch (on the rack next to the printer). This recovered the ssh-access to nodus. 

  • Also Larry turned on the CAD WS. Koji confirmed the remote access to the CAD WS.

Nodus recovery (KA)

  • Apr 12, 22:43 nodus was restarted.

  • Apache (dokuwiki, svn, etc.) was recovered with the systemctl command documented on the wiki

  • ELOG was recovered by running the script

Control Machines / RT FE / Acromag server Status

  • Judging by uptime, basically only the machines that are on UPS (all control room workstations + chiara) survived the power outage. All RT FEs are down. Apart from c1susaux, the acromag servers are back up (but the modbus processes have NOT been restarted yet). Vacuum machine is not visible on the network (could just be a networking issue and the local subnet to valves/pumps is connected, but no way to tell remotely).

  • KA suspects that FB took some finite time to come up, while the RT machines need FB at boot to download their OS; that would explain why the RT machines are down. If so, all we need to do is power cycle them.

  • Acromag: unknown state

The power was lost at Apr 12 22:39:42, according to the vacuum pressure log. The outage lasted only a few minutes.

  15303   Tue Apr 14 23:50:06 2020   Koji   Update   General   40m power glitch recovery

[Koji / Gautam (Remote)]

Lab status

  • Gray Panel: The lab AC was off. Turned on all three (N/S, CTRL RM, E/W)
  • The control room AC was running.

Work stations

  • Control Room: All the control machines were running. We knew that nodus/chiara/fb were running
  • 1X6/7:
    • JETSTOR was making a beeping sound: “Power #1 failed” / “Power #2 failed”
    • Optimus & megatron were off -> turned on -> up and running now
  • 1X1/2:
    • Power cycled the netgear at the top of the IOO rack (maybe not necessary)
    • Turned on c1ioo -> up and running now
  • 1X4/5: Rebooted c1sus / c1lsc -> up and running now
  • 1X9: Rebooted c1iscex -> up and running now
  • 1Y4: Rebooted c1iscey -> up and running now

Vacuum status

  • Looked like everything was running as if it did not see the power glitch
  • TP1 normal: Set speed 33.6k rpm / Actual speed 33.6k rpm 
  • TP2 normal: 66k rpm / PTP2 16.0 mtorr
  • TP3 normal: 31k rpm / PTP3 45.4mtorr
  • P1 LOW / P2 1.7mtorr / CC2 1.1e-6 / P3 7.6e-2 / P4 LO
  • Annuli: 2.7~3torr
  • CC1 9.6e-6 / SUPER BEE 0.9mtorr

C1VAC recovery

  • c1vac was alive, but was isolated from the martian network
  • Checked the network I/F status with /sbin/ifconfig -a
    • eth0 had no IP
    • eth1 had the vac subnet IP (192.168.114.9)
  • Ran sudo /sbin/ifdown eth0 and then sudo /sbin/ifup eth0 (this check-and-bounce sequence is consolidated in the sketch after this list)
  • The I/F eth0 started running and c1vac became visible from martian
  • Later checked the vacuum screen: The pressure values and valve statuses looked normal.
    The interlock state was “running”. The system state was “unrecognized”.
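
For the record, a minimal consolidated sketch of the check-and-bounce sequence above, assuming the Debian-style ifdown/ifup tools present on c1vac (the final ifconfig check is an addition):

    /sbin/ifconfig eth0        # check whether the interface currently has an IP
    sudo /sbin/ifdown eth0     # take the interface down cleanly
    sudo /sbin/ifup eth0       # bring it back up; the configured IP should be re-applied
    /sbin/ifconfig eth0        # confirm the IP is back before checking visibility from martian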

End RTS recovery 

  • The end slow machines (auxex and auxey) were already running
  • Restarting end RT models:
    • c1iscey -> rtcds start --all
    • c1iscex -> rtcds start --all
  • Confirmed that the models can damp the SUSs

Vertex RTS recovery

  • We wanted to use the reboot script. (/opt/rtcds/caltech/c1/scripts/cds/rebootC1LSC.sh)
  • c1susaux
    • To be safe, we wanted to bring up c1susaux first.
    • c1susaux does not bring its network I/Fs up automatically upon reboot.
      -> Connected an LCD display / keyboard / mouse to c1susaux
      -> Ran sudo /sbin/ifup eth0 and sudo /sbin/ifup eth1
    • Now c1susaux is visible from martian.
    • Logged in to c1susaux and ran:
      sudo systemctl start modbusIOC.service
      -> the c1susaux EPICS IOC is up and running now
    • ...Meanwhile, c1susaux lost its eth1 somehow. This made the slow values of the 8 vertex SUS all zero
      -> Ran sudo /sbin/ifdown eth1 and sudo /sbin/ifup eth1 again on c1susaux -> this resolved the issue
  • c1psl
    • Logged in to c1psl and ran:
      sudo systemctl start modbusIOC.service
      -> the c1psl EPICS IOC is up and running now
  • Prepared for the rebooting script
    • Ran /opt/rtcds/caltech/c1/scripts/cds/rebootC1LSC.sh
    • Rebooting was done successfully. All the suspensions looked free and healthy.
    • Burtrestored c1susaux (used the Apr 12 21:19 snapshot). A consolidated sketch of the slow-machine recovery sequence follows this list.
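
As noted above, a minimal consolidated sketch of the slow-machine recovery used for c1susaux / c1psl (the commands are those listed above; the status check and the burtwb line are additions, and the snapshot path is a placeholder):

    # run on the slow-controls host itself (c1susaux or c1psl)
    sudo /sbin/ifup eth0                             # martian-side interface (not brought up automatically on boot)
    sudo /sbin/ifup eth1                             # Acromag-subnet interface (c1susaux)
    sudo systemctl start modbusIOC.service           # start the EPICS/modbus IOC
    systemctl status modbusIOC.service --no-pager    # confirm it stays up (addition)
    # if eth1 drops again, as it did here, bounce it:
    sudo /sbin/ifdown eth1 && sudo /sbin/ifup eth1
    # finally, restore the EPICS settings from a burt snapshot (placeholder path):
    # burtwb -f /path/to/snapshots/c1susauxepics.snap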

Hardware

  • PSL laser / Xend AUX laser / Yend AUX laser were off -> turned on
  • The PMC locked automatically right away.
  • The main Marconi was off -> forgot to turn it on
  • The end temp controllers for the SHG crystals were on but not enabled -> now enabled

RTS recovery ~ part 2

  • FB: The FB status indicators of all the RTS models were still red
  • Timing: c1x01/2/3/5 were 1 sec behind FB and c1x04 was 2 sec behind
  • -> Remedy: https://nodus.ligo.caltech.edu:8081/40m/14349 (the sequence is consolidated in the sketch after this list)
    • Software rebooting of FB
    • Manually start the open-mx and mx services using
    • sudo systemctl start open-mx.service 
    • sudo systemctl start mx.service
    • Check that the system time returned by gpstime matches the gpstime reported by internet sources. e.g. http://leapsecond.com/java/gpsclock.htm
    • Manually start the daqd processes using
      sudo systemctl start daqd_*
  • This made all the FB(FE) indicators green!
  • Ran the reboot script again -> All green!
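
As referenced in the remedy item above, a minimal sketch collecting the FB restart commands in one place (the commands are those listed above / in elog 14349, run on FB; the final status check is an addition):

    # run on FB after the software reboot
    sudo systemctl start open-mx.service     # open-mx / mx services for the DAQ network
    sudo systemctl start mx.service
    gpstime                                  # compare against e.g. http://leapsecond.com/java/gpsclock.htm
    sudo systemctl start daqd_*              # start all the daqd processes
    systemctl status daqd_* --no-pager       # check that they stayed up (addition)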

IMC recovery

  • The IMC status was checked
  • The autolocker was not running, but the IMC could be locked manually, i.e. MC1/2/3 were not badly misaligned
  • Autolocker/Slow FSS recovery along with https://nodus.ligo.caltech.edu:8081/40m/15121
    • sudo systemctl start MCautolocker.service
    • sudo systemctl start FSSSlow.service
  • Both of them failed to run
  • Note by Gautam: The problem with the systemctl commands failing was that the NFS mount points weren’t mounted, which in turn was because of the familiar /etc/resolv.conf problem. I added chiara as a nameserver in this file and then manually mounted the NFS mount points. This fixed the problem (a minimal sketch is given after this list).
    Now the IMC is locked and the autolocker is left running.
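
A minimal sketch of that fix (chiara's IP here is a placeholder to be checked; only the /etc/resolv.conf edit and the manual mounting are from the note above):

    # restore name resolution, then remount the NFS shares listed in /etc/fstab
    echo 'nameserver 192.168.113.104' | sudo tee -a /etc/resolv.conf    # placeholder IP for chiara
    sudo mount -a            # mount everything in /etc/fstab, including the NFS mount points
    mount | grep -i nfs      # confirm the NFS shares are mounted (addition)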

Burt restore

  • Used Apr 12 21:19 snapshot
  • c1psl
  • c1alsepics/c1assepics/c1asxepics/c1asyepics
  • c1aux/c1auxex/c1auxey/
  • c1iscaux/c1susaux
  • This brought the REFL and AS beams back onto the CCDs. AS has small fringes.
  • Y arm has small IR flashes as well as green flashes.

JETSTOR recovery

  • JETSTOR was beeping. 
  • Shut down megatron
  • Followed the instructions in https://nodus.ligo.caltech.edu:8081/40m/13107
  • This stopped the beeping. Waited for JETSTOR to come up -> within a minute, the JETSTOR display became normal and all disks showed green.
  • Brought megatron back up again

N2 bottle

  • The left N2 bottle was empty. The right one had 1500PSI.
  • Replaced the left bottle with the spare one in the room.
  • Now the left one reads 2680 PSI and the right one 1400 PSI.

Closing

  • Closed PSL/AUX laser shutters
  • Turned off the lights in the lab, CTRL room, and the office.

Remaining Issues

  • [done] MCAutoLocker / FSSSlow scripts are not running
  • The PRM alignment slider has no effect (although the PRM is aligned…) -> SLOW DAQ frozen???
  • JETSTOR is not mounted on megatron [gautam mounted Jetstor on megatron on 4/18 at 2pm]
  15308   Mon Apr 20 17:49:58 2020   gautam   Update   General   Some housekeeping
  • Empty N2 replaced. 
  • Logged back into zita and started the StripTool traces (even though we keep the TV off nowadays).
  • c1susaux acro-crate power cycled to re-enable PRM suspension control (all other vertex optics also now respond to slow bias voltage sliders being moved).
  • c1iscaux needed a hard reboot as it wasn’t seen on martian. I power cycled the crate for good measure.
  • Marconi turned back on with correct frequency/amplitude.
  • c0rga is now seen again on martian network. I re-enabled the RGA scanning so that it takes a scan every morning at 4am. 
  • The forepumps for TP2/TP3 are noisier than I remember. The former has ~10,000 hrs on the clock. How often does the tip seal replacement need to happen?
  • HV supplies for ASX/ASY PZTs re-energized.
  • IFO re-aligned for locking.
  • c1oaf and c1daf models restarted. c1oaf required the usual start/stop/start sequence to make the DAQ errors go away, and luckily the FE didn’t crash when the model was unloaded.
  • POX/POY/PRMI 1f carrier/green locking all was smooth.
  • For some reason, the PRC angular FF filters I trained no longer do anything good (but MCL is still good). Collected 20 mins of PRMI 1f locked data for investigations.
Update 21 Apr 2020 1200: Looking at Attachments #1 and #2, the spectra of the motion sensed by the POP QPD do indeed look very different on Apr 6 vs Apr 20. Could be some interference from the Oplev loop, or maybe some EPICS values didn't get reset correctly; needs more investigation. It doesn't seem reasonable to me that the plant changes by so much (the spectra were taken at similar times of the day, ~5pm).
Update 22 Apr 2020 1500: As suspected, the PRM oplev was disabled for whatever reason. Re-enabling it, I recovered the good performance from two weeks ago. ✅ 
Attachment 1: fDomainWF_Apr06.pdf
Attachment 2: fDomainWF_Apr20.pdf
  15340   Wed May 20 19:34:58 2020   Koji   Update   General   ITM spares and New PR3 mirrors transported to Downs for phasemap measurement

Two ITM spares (ITMU01/ITMU02) and five new PR3 mirrors (E1800089 Rev 7-1~Rev7-5) were transported to Downs for phasemap measurement

Attachment 1: container.jpg
  15344   Fri May 22 10:14:47 2020   Jordan   Update   General   Nitrogen Replacement

I was in the lab for Clean and Bake activities and replaced an empty N2 tank. The left tank is at 2600 psi, the right tank at ~1300 psi.

  15354   Tue May 26 10:04:54 2020   Jordan   Update   General   N2 Replacement

Replaced empty N2 tank, left tank at ~2000 psi, right tank ~2600 psi.

  15375   Thu Jun 4 08:45:41 2020   Jordan   Update   General   Presence at 40m

I will be at the 40m, in the Clean and bake lab today from ~9am to ~3pm.

  15378   Fri Jun 5 08:44:50 2020   Jordan   Update   General   Presence at 40m

I will be at the 40m, in the Clean and bake lab today from ~9am to ~3pm.

  15385   Tue Jun 9 09:35:02 2020   Jordan   Update   General   Presence at 40m

I will be in the Clean and Bake lab today from 9:30am to 4pm.

  15388   Wed Jun 10 14:00:33 2020   Jordan   Update   General   Presence at 40m

I will be in the Clean and Bake lab from 10am to 4pm today. I will also replace an empty N2 cylinder.

  15390   Thu Jun 11 11:14:12 2020   Jordan   Update   General   Presence at 40m

I will be in the Clean and Bake lab today from 11am to 4pm.

  15395   Fri Jun 12 11:40:14 2020   Jordan   Update   General   Presence at 40m

I will be in the Clean and Bake lab today from 12pm to 4pm.

  15400   Tue Jun 16 08:58:11 2020   Jordan   Update   General   Presence at 40m

I will be at the 40m today at 10am to deliver optics to Downs and to replace the TP2 controller.

  15403   Tue Jun 16 16:05:26 2020   Jordan   Update   General   N2 Replacement

I replaced an empty N2 cylinder; there are now two empty tanks in the outside rack.

  15405   Thu Jun 18 09:46:03 2020   Jordan   Update   General   Presence at 40m

I will be at the 40m today from 9:30am to 4pm.

  15414   Fri Jun 19 08:47:10 2020   Jordan   Update   General   Presence at 40m

I will be at the 40m today from 9am to 3pm.

  15416   Fri Jun 19 11:02:10 2020   Chub   Update   General   custom feedthrough flanges are here!

The four 4x25DSUB and single 8x25DSUB feedthrough flanges have arrived and will be picked up from the dock and brought to the 40M lab.

  15420   Fri Jun 19 19:21:25 2020   gautam   Update   General   PSL shutter re-opened

The PSL shutter was closed from the vacuum interlock trip. Today, I did the following:

  • Re-aligned input beam to PMC to recover high transmission / low reflection.
  • Re-set the LSC offsets.
  • ETMX watchdog was tripped. Reset it.
  • Opened the PSL shutter, IMC autolocker was able to lock the cavity almost immediately.
  • Tested POX/POY locking, ran the ASS to maximize single arm transmission.

All looks good for now. I will probably get back to PRFPMI locking Monday.

  15422   Mon Jun 22 13:16:38 2020   Jordan   Update   General   Presence at 40m

I will be at the 40m today from 11am to 4pm.

  15426   Wed Jun 24 10:14:56 2020   Jordan   Update   General   Presence at 40m

I will be in the Clean and Bake lab today from 10am to 4pm.

  15430   Thu Jun 25 11:09:01 2020   Jordan   Update   General   Presence at 40m

I will be in the Clean and Bake lab from 11am to 4pm.

  15432   Fri Jun 26 11:00:52 2020   Jordan   Update   General   Presence at 40m

I will be in the Clean and Bake lab today from 11am to 4pm.

  15437   Mon Jun 29 11:41:04 2020   Jordan   Update   General   Presence at 40m

I will be in the Clean and Bake lab today from 11:30am to 4pm

  15441   Tue Jun 30 08:50:12 2020   Jordan   Update   General   Presence at 40m

I will be in the clean and bake lab today from 9am to 4pm.

  15444   Wed Jul 1 08:51:52 2020   Jordan   Update   General   Presence at 40m

I will be in the Clean and Bake lab from 9am to 4pm today.

  15448   Thu Jul 2 16:51:23 2020   Jordan   Update   General   Bathroom Science

As part of an ongoing effort to improve airflow in workspaces/bathrooms on campus, I have installed an air scrubber unit in each of the bathrooms at the 40m lab.

Attachment 1: AirScrubber40m.jpg
  15453   Mon Jul 6 08:48:15 2020   Jordan   Update   General   Presence at 40m

I will be in the Clean and Bake lab today from 8:30am to 4pm

  15459   Wed Jul 8 08:51:35 2020   Jordan   Update   General   Presence at 40m

I will be in the clean and bake lab today from 9am to 3pm.

  15461   Thu Jul 9 09:22:44 2020   Jordan   Update   General   Presence at 40m

I will be in the Clean and Bake lab today from 9am to 3pm

  15467   Fri Jul 10 10:37:30 2020   Jordan   Update   General   Presence at 40m

I will be in the Clean and Bake lab today from 9am to 4pm

  15478   Tue Jul 14 09:04:53 2020   Jordan   Update   General   Presence at 40m

I will be in the Clean and Bake Lab today from 9am to 4pm.

  15485   Wed Jul 15 19:23:44 2020   gautam   Update   General   Emergency light on in control room

The emergency lamps above the exit sign on the NW entrance to the control room are on. I tried opening and closing the door, but it remains on. Probably nothing to worry about, but noting here anyway.

  15486   Wed Jul 15 19:51:51 2020   Koji   Update   General   Emergency light on in control room

It happened before too. Doesn't it say it has occasional self-testing or something?

  15487   Wed Jul 15 20:58:40 2020   gautam   Update   General   Emergency light on in control room

True - it is no longer on.

Quote:

It happened before too. Doesn't it say it has occasional self-testing or something?

  15490   Thu Jul 16 14:41:22 2020   gautam   Update   General   Fire extinguisher inspection

The (masked) tech accessed all areas in the lab (office area, control room, VEA) between ~2:30pm and 3pm. The laser safety goggles he used have been set aside for appropriate sanitation.

  15491   Fri Jul 17 00:18:13 2020   gautam   Update   General   Locking update
  1. I found that an EPICS channel wasn't reset to the correct value by burtrestore after the FE bootfest yesterday.
    • This cost me the whole of last night, found it finally tonight. 
    • I'll try and modify the locking scripts to better capture such errors, but ideally, we should just use Guardian or something since it's made for this purpose already.
    • Anyways, tonight I was able to re-acquire the PRFPMI lock in a completely scripted way.
  2. Locking CARM on POX remains out of reach.
    • I think this has to do with the fact that the zero-crossings of the CARM and REFL error signals depend on the 3f PRCL/MICH error point offsets.
    • So even if the DC gain is right, the fact that we use POX for the digital AO path and REFL for the analog AO path is leading to some conflict I think.
    • Ran out of energy tonight, I'll try again tomorrow.

The DQ channels of the ETM coils were active tonight, so I'll make the coil driver actuation budget over the next couple of days.

  15492   Fri Jul 17 09:03:58 2020   Jordan   Update   General   Presence at 40m

I will be in the Clean and Bake lab today from 9am to 4pm.

  15523   Thu Aug 13 18:10:22 2020   gautam   Update   General   Power outage

There was a power outage ~30 mins ago that knocked out CDS, PSL etc. The lights in the office area also flickered briefly. Working on recovery now. The elog was also down (since nodus presumably rebooted), I restarted the service just now. Vacuum status seems okay, even though the status string reads "Unrecognized".

The recovery was complete at 1830 local time. Curiously, the EX NPRO and the doubling oven temp controllers stayed on; usually they get knocked out as well. Also, all the slow machines and associated Acromag crates survived. I guess the interruption was so fleeting that some devices rode it out.

The control room workstation, zita, which is responsible for the IFO status StripTool display on the large TV screen, has some display driver issues I think - it crashed twice when I tried to change the default display arrangement (large TV + small monitor). It also wants to update to Ubuntu 18.04 LTS, but I decided not to for the time being (it is running Ubuntu 16.04 LTS). Anyways, after a couple of power cycles, the wall StripTools are up once again.

  15530   Mon Aug 17 21:24:43 2020   gautam   Update   General   Fire extinguisher inspection

A technician came to the lab today at ~4pm. He entered the VEA (with booties and goggles), and also the clean and bake lab. The whole procedure lasted ~10 minutes. I did not follow him around, but was available in the control room throughout the process. I think the whole episode went without incident.

BTW, this guy didn't ring the doorbell; I just happened to be here when he came by. I don't know if this is the usual practice - are we happy with technicians entering the VEA and/or clean and bake labs without supervision? AFAIK, this wasn't scheduled.

  15550   Sun Aug 30 11:29:33 2020   rana   Update   General   power blink?

My power at home winked out for a second this morning, but it looks like either nothing happened in the 40m lab or else it rode it out.

MC is locked - lost lock around 11:25 AM and then relocked.

  15559   Sat Sep 5 14:28:03 2020   Koji   Update   General   LO beam: Fiber coupling work

2PM: Arrived at the 40m. Started the work for the coupling of the RF modulated LO beam into a fiber. -> I left the lab at 10:30 PM.

The fiber coupling setup for the phase-modulated beam was made right next to the PSL injection path. (See attachment 1)

  • For the alignment of the beam, the main PSL path, including the alignment of the 2" PO mirror, has not been touched.
  • There are two PO beams, with optical powers of 0.8mW (left) and 1.6mW (right). Both had been blocked, but the right one was designed to be used for PSL POS and ANG. For the fiber coupling, the right beam was used.
  • The alignment/mode-matching work has been done with a short (2m?) fiber patch cable from Thorlabs. The fiber is the same as the one used for LO delivery.
  • I tried adding a mode-matching telescope in the LO path, but the best result was obtained with no lens at all. The resulting transmitted power is 1.21mW out of 1.64mW incident (~74%). These powers were measured with the Ophir power meter. (Note that Thorlabs' fiber power meter indicated 1.0mW transmission.)

Some notes

  • After the PSL activity, the IMC locking was checked to see if I messed up the PSL alignment. It locks fine and looks fine.
    • The input shutter (left closed after Jon's vacuum work?) was opened.
    • The alignment was not optimal and had some pitch misalignment (e.g. TEM03).
    • After some MC SUS alignment, the automatic locking of TEM00 was recovered. Mainly MC3 pitch was moved (+0.17).
    • I consulted with Gautam and he thinks this is within the level of regular drift. The AS beam was visible.
  • The IMC and MI were moving quite a lot, but this seemed to be just the usual Saturday night Millikan shake.
  • During the activity, the PSL HEPA was turned up to 100 and it was reverted to 33 after the work.
  • I have been wearing a mask and gloves throughout the work there.
Attachment 1: 20200905212254_IMG_9938.JPG
20200905212254_IMG_9938.JPG
  15568   Thu Sep 10 15:56:08 2020   Koji   Update   General   HEPA & Particle Level Status

15:30
- PSL HEPA was running at 33% and is now at 100%
- South End HEPA was not on and is now running
- Yarm Portable HEPA was not running and is now running at max speed: the power is taken from beneath the ITMY table. It is better to unplug it when one uses the IFO.
- Yend Portable HEPA was not running and is now running (presumably) at max speed

Particle Levels: (Not sure about the unit. The convention here is to multiply the reading by 10.)

Before running the HEPAs at their maximum
9/10/2020 15:30 / 0.3um 292180 / 0.5um 14420

(cf 9/5/2020 / 0.3um 94990 / 0.5um 6210)
==>
After running the HEPAs at their maximum
The numbers gradually went down and have now settled at about half of the initial values
9/10/2020 19:30 / 0.3um 124400 / 0.5um 7410

  15580   Sat Sep 19 01:49:52 2020   Koji   Update   General   M4.5 EQ in LA

M4.5 EQ in LA 2020-09-19 06:38:46 (UTC) / -1d 23:38:46 (PDT) https://earthquake.usgs.gov/earthquakes/eventpage/ci38695658/executive

I only checked the watchdogs. All watchdogs were tripped. ITMX and ETMY seemed stuck (or have the OSEM magnet issue). They were left tripped. The watchdogs for the other SUSs were reloaded.

  15581   Sat Sep 19 11:27:04 2020   rana   Update   General   M4.5 EQ in LA

The seismometers obviously saturated during the EQ, but the accelerometers captured some of it. It looks like there are different saturation levels on different sensors.

Also, it seems the mounting of the MC2 accelerometers is not so good. There's some ~10-20 Hz resonance of the mount showing up. Either it's the MC2 chamber legs or the accelerometers are clamped poorly to the MC2 baseplate.

Sun Sep 20 00:02:36 2020 edit: fixed indexing error in plots

* also assuming that the sensors are correctly calibrated in the front end to 1 count = 1 um/s^2 (this is what's used in the summ pages)

Attachment 1: Sep18-EQ.pdf
  15583   Sat Sep 19 18:08:34 2020   rana   Update   General   M4.5 EQ in LA

the EQ was ~14 km south of Caltech and 17 km deep

Quote:

The seismometers obviously saturated during the EQ, but the accelerometers captured some of it. It looks like there are different saturation levels on different sensors.

Also, it seems the mounting of the MC2 accelerometers is not so good. There's some ~10-20 Hz resonance of the mount showing up. Either it's the MC2 chamber legs or the accelerometers are clamped poorly to the MC2 baseplate.

I'm amazed at how much higher the noise is on the MC2 accelerometer. Is that really how much amplification of the ground motion we're getting? If so, it's as if the MC has no vibration isolation from the ground in that band. We should put one set on the ground and make a more direct comparison of the spectra. Also, perhaps do some seismic FF using this sensor - I'm not sure how successful we've been in this band.

Attaching the coherence plot from ldvw.ligo.caltech.edu (apparently it has access to the 40m data, so we can use that as an alternative to dtt or python for remote analysis):

Coherence (GWpy) result, image #: 288304

It would be interesting to see if we can use the ML based FF technology from this summer's SURF project by Nadia to increase the coherence by including some slow IMC alignment channels.

  15584   Sat Sep 19 18:46:48 2020   Koji   Update   General   Hand soap

I supplied a bottle of hand soap. Don't put water in the bottle to dilute it, as that makes the soap vulnerable to contamination.
