ID   Date   Author   Type   Category   Subject
  15667   Tue Nov 10 11:31:13 2020   Koji   Update   General   Pumpdown

Main volume pressure as of 11:30AM 2020/11/10

Attachment 1: Screen_Shot_2020-11-10_at_11.30.21.png
  15671   Tue Nov 10 15:13:41 2020   rana   Update   General   ETMY suspension eigenmodes

For the input matrix diagonalization, it seemed to me that when we had a significant seismic event or a re-alignment of the optic with the bias sliders, the input matrix also changed.

Meaning that our half-light voltage may not correspond to the half point inside the LED beam, but rather that we may be putting the magnet into a partially occluding state. It would be good to check this out by moving the bias to another setting and doing the ringdown there.

  15672   Tue Nov 10 17:46:06 2020   gautam   Update   General   IFO recovery

Summary:

  1. Recovery was complicated by RFM failure on c1iscey - see Attachment #1. This is becoming uncomfortably frequent. As a result, the ETMY suspension wasn't being damped properly. Needed a reboot of c1iscey machine and a restart of the c1rfm model to fix.
  2. POX/POY locking was restored. Arm alignment was tuned using the dither alignment system.
  3. AS beam was centered on its CCD (I put a total of ND=1.0 filters back on the CCD). Note that the power in the AS beam is ~4x what it was given we have removed the in-vacuum pickoff to the OMC.
  4. Green beams were aligned to the arm cavities. See Attachment #2. Both green cameras were adjusted on the PSL table to have the beam be ~centered on them.
  5. ALS noise is far too high for locking, needs debugging. See Attachment #3.
  6. AS beam was aligned onto the AS55 photodiode. With the PRM aligned, the REFL beam was centered on the various REFL photodiodes. The PRMI (resonant carrier) could be locked, see Attachment #4.

I want to test out an AS port WFS now that I have all the parts in hand - I guess the Michelson / PRMI will suffice until I make the ALS noise good again, and anyways, there is much assembly work to be done. Overnight, I'm repeating the suspension eigenmode measurement.

Attachment 1: RFMerrs.png
Attachment 2: IFOrecovery.png
Attachment 3: ALS_ool.pdf
Attachment 4: PRMIcarr.png
  15673   Thu Nov 12 14:26:35 2020   gautam   Update   General   ETMY suspension eigenmodes

The results from the ringdown are attached - in summary:

  • The peak positions have shifted <50 mHz from their in-air locations, so that's good I guess
  • The fitted Qs of the POS and SIDE eigenmodes are ~500, but those for PIT and YAW are only ~200
  • The fitting might be sub-optimal due to spurious sideband lobes around the peaks themselves - I didn't go too deep into investigating this, especially since the damping seems to work okay for now
  • There is up to a factor of 5 variation in the response at the eigenfrequencies in the various sensors - this seems rather large
  • The condition number of the matrix that would diagonalize the sensing is a scarcely believable 240, but this is unsurprising given the large variation in the response in the different sensors (a quick way to check this number is sketched below). Unclear what the implications are - I'm not messing with the input matrix for now
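For reference, a quick numpy sketch of how such a condition number can be computed from the measured sensing matrix (the matrix entries below are placeholders for illustration, not the measured values):

import numpy as np

# Rows: eigenmodes (POS, PIT, YAW, SIDE); columns: OSEM sensors (UL, UR, LR, LL, SD).
# These numbers are made up -- substitute the measured responses at each eigenfrequency.
sensing = np.array([
    [1.0,  0.9,  1.1,  1.0, 0.1],
    [1.0,  1.0, -0.9, -1.1, 0.0],
    [1.0, -1.0, -1.0,  1.0, 0.1],
    [0.2,  0.1,  0.2,  0.1, 1.0],
])

# The input matrix would be (a pseudo-)inverse of this, and its conditioning is set
# by the singular values of the sensing matrix itself.
u, s, vt = np.linalg.svd(sensing)
print("singular values:  ", s)
print("condition number: ", s.max() / s.min())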
Attachment 1: ETMY.tar.bz2
  15680   Tue Nov 17 13:24:40 2020   Chub   Update   General   big UPS on the way

Ordered 11/16 from CDW, on PO# S492940, the high voltage Tripp Lite SMART5000XFMRXL  for TP-1.  Should be arriving in about a week.

  15829   Sat Feb 20 16:20:33 2021   gautam   Update   General   Housekeeping + PRMI char

In prep to try some of these debugging steps, I did the following.

  1. ndscope updated from 0.7.9 to 0.11.3 on rossa. I've been testing/assisting the development for a few months now and am happy with it, and like the new features (e.g. PDF export). v0.7.9 is still available on the system so we can revert whenever we want.
  2. Arms locked on POX/POY, dither aligned to maximize TRX/TRY, normalization reset.
  3. PRMI locked, dither aligned to maximize POPDC.
  4. All vertex oplevs re-centered on their QPDs.

While working, I noticed that the annoying tip-tilt drift seems to be worse than it has been in the last few months. The IPPOS QPD is a good diagnostic to monitor the stability of TT1/TT2. While trying to trend the data, I noticed that from ~31 Jan (Saturday night/Sunday morning local time), the IP-POS QPD segment data streams seem "frozen", see Attachment #1. This definitely predates the CDS crash on Feb 2. I confirmed that the beam was in fact incident on the IPPOS QPD, and that at 1Y2/1Y3 I was getting voltages going into the c1iscaux Acromag crate. All manner of soft reboots (eth1 network interface, modbusIOC service) didn't fix the problem, so I power cycled the Acromag interface crate. This did the trick. I will take this opportunity to raise again the issue that we do not have a useful, reliable diagnostic for the state of our Acromag systems. The problem does not seem to have been with all the ADC cards inside the crate, as other slow ADC channels were reporting sensible numbers.

Anyways, now that the QPD is working again, you can see the drift in Attachment #2. I ran the dither alignment ~4 hours ago, and in the intervening time, the spot, which was previously centered on the AS camera CRT display, has almost drifted completely off (my rough calibration is that the spot has moved 5mm on the AS CCD camera). I was thinking we could try installing the two HAM-A coil drivers to control the TTs, which would allow us to rule out flaky electronics as the culprit, but I realize some custom cabling would be required, so maybe it's not worth the effort. The phenomenology of the drift makes me suspect the electronics - hard for me to imagine that a mechanical creep would stop creeping after 3-4 hours? How would we explain the start of such a mechanical drift? On the other hand, the fact that the drift is almost solely in pitch lends support to the cause being mechanical. This would really hamper the locking efforts: the drift is on short enough timescales that I'd need to repeatedly go back and run the dither alignment between lock attempts - not the end of the world, but it costs ~5mins per lock attempt.


On to the actual tests: before testing the hardware, I locked the PRMI (no ETMs). In this configuration, I'm surprised to see that there is nearly perfect coherence between the MICH and PRCL error signals between 100Hz-1kHz 🤔 . When the AS55 demodulated signals are whitened prior to digitization (and then de-whitened digitally), the coherence structure changes. The electronics noise (measured with the PSL shutter closed) itself is uncorrelated (as it should be), and below the level of the two aforementioned spectra, so it is some actual signal I'm measuring there with the PRMI locked, and the coherence is in the light fields on the photodiode. So it would seem that I am just injecting a ton of AS55 sensing noise into the PRCL loop via the MICH->PRM LSC output matrix element. Weird. The light level on the AS55 photodiode has increased by ~2x after the September 2020 vent when we removed all the unused output optics and the copper OMC. Nevertheless, the level isn't anywhere close to being high enough to saturate the ADC (confirmed by time domain signals in ndscope).

To get some insight into whether the whole RF system is messed up, I first locked the arm cavities with POX and POY as the error signals. Attachment #3 shows the spectra and coherence between these two DoFs (and the dark noise levels for comparison). This is the kind of coherence profile I would expect - at frequencies where the loop gain isn't so high as to squish the cavity length noise (relative to laser frequency fluctuations), the coherence is high. Below 10 Hz, the coherence is lower than between 10-100 Hz because the OLG is high, and presumably, we are close to the sensing noise level. And above ~100 Hz, the POX and POY photodiodes aren't sensing any actual relative frequency fluctuations between the arm length and laser frequency, so it's all just electronics noise, which should be incoherent.
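For reference, a minimal scipy sketch of the kind of coherence estimate being plotted here (the actual spectra came from DTT; the time series below are synthetic, with an artificial common disturbance added just to illustrate the calculation):

import numpy as np
from scipy.signal import coherence, welch

fs = 16384                                    # assumed sampling rate of the error signals
common = np.random.randn(fs * 64)             # stand-in for a real common disturbance
x = common + 0.1 * np.random.randn(fs * 64)   # e.g. the POX error signal
y = common + 0.1 * np.random.randn(fs * 64)   # e.g. the POY error signal

f, Cxy = coherence(x, y, fs=fs, nperseg=fs)   # magnitude-squared coherence
f, Pxx = welch(x, fs=fs, nperseg=fs)          # PSD of one channel, for comparison
print("coherence near 100 Hz:", Cxy[np.argmin(np.abs(f - 100))])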

The analogous plot for the PRMI lock is shown in Attachment #4. I guess this is telling me that the MICH sensing noise is getting injected into the PRCL error point between 100Hz-1kHz, where the REFL11 photodiode (=PRCL sensor) isn't dark noise limited, and so there is high coherence? I tuned the MICH-->PRM LSC output matrix element to minimize the height of a single frequency line driving the BS+PRM combo at ~313Hz in the PRCL error point. 

All the spectra are in-loop, the loop gain has not been undone to refer this to free-running noise. The OLGs themselves looked fine to me from the usual DTT swept sine measurements, with ~100 Hz UGF.

Attachment 1: IPPOSdeat.pdf
Attachment 2: TTdrift.pdf
Attachment 3: POXnPOY.pdf
Attachment 4: PRMI.pdf
  15831   Sun Feb 21 20:51:21 2021   rana   Update   General   Housekeeping + PRMI char

I'm curious to see if the demod phase for MICH in REFL & AS changes between the simple Michelson and the PRMI. If there's a change, it could point to a PRCL/f2 mismatch.

But I would still bet on demod chain funniness.

  15834   Tue Feb 23 00:10:05 2021   gautam   Update   General   Demod char part 1

I measured the conversion efficiencies for all the RFPD demod boards except the POP port ones. An RF source was used to drive the PD input on the demod board, one at a time, and the I/F outputs were monitored on a 300 MHz oscilloscope. The efficiency is measured as the ratio V_IF / V_RF.

I will upload the full report later, but basically, the numbers I measured today are within 10% of what I measured in 2017 when I previously did such a characterization. The orthogonality also seems fine. 

I believe I restored all the connections at 1Y2 correctly, and I can lock POX/POY and PRMI on 1f signals after my work. I will do the noise characterization tomorrow - but I think this test already rules out any funkiness with the demod setup (e.g. non orthogonality of the digitized "I" and "Q" signals). The whitening part of the analog chain remains untested.

Quote:

But I would still bet on demod chain funniness


Update 2/23 1215: I've broken up the results into the demod boards that do not (Attachment #1) and do (Attachment #2) have a D040179 preamp installed. Actually, the REFL11 AO path also has the preamp installed, but I forgot to capture the time domain data for those channels. The conversion efficiency inferred from the scope was ~5.23 V/V, which is in good agreement with what I measured a few years ago.

  • The scope traces were downloaded.
  • The resulting X/Y traces are fitted with ellipses to judge the gain imbalance and orthogonality.
  • The parameter phi is the rotation of the "bounding box" for the fitted ellipses - if the I and Q channels are exactly orthogonal, this should be either 0 or 90 degrees. There is significant deviation from these numbers for some of the demodulators - do we want to do something about this? (One possible way to extract these numbers is sketched after this list.) Anyways, the REFL11 and AS55 boards, which are used for PRMI locking, report reasonable values. But REFL165 shows an ellipse with significant rotation. This is probably how the CDS phase rotator should be tuned: by fitting an ellipse to the digitized I/Q data and then making the bounding box rotation angle 0 by adjusting the "Measured Diff" parameter.
  • The gain imbalance seems okay across the board, better than 1dB.
  • The POX and POY traces are a bit weird, looks like there is some non-trivial amount of distortion from the expected pure sinusoid.
  • I measured the LO input levels going into each demod board - they all lie in the range 2-3dBm (measured with the RF power meter), which is what is to be expected per the design doc. The exception is the 165 MHz LO line, which was 0.4 dBm. So this board probably needs some work.
  • As I mentioned earlier, the conversion efficiencies are consistent with what I measured in 2017. I didn't break out the Eurocards using an extender and directly probe the LO levels at various points, but the fact that the conversion efficiencies have not degraded and the values are consistent with the insertion loss of various components in the chain make me believe the problem lies elsewhere. 
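One possible way to extract the numbers quoted in the list above (sketched here assuming the scope traces are already loaded as numpy arrays and that the I/F beat frequency is known; this is a sinusoid fit rather than a direct conic fit, but it yields the same gain imbalance and orthogonality information):

import numpy as np

def fit_sine(t, v, f_beat):
    """Least-squares fit v(t) ~ a*cos(w t) + b*sin(w t) + c; returns amplitude, phase [rad]."""
    w = 2 * np.pi * f_beat
    basis = np.column_stack([np.cos(w * t), np.sin(w * t), np.ones_like(t)])
    (a, b, c), *_ = np.linalg.lstsq(basis, v, rcond=None)
    return np.hypot(a, b), np.arctan2(-b, a)

# t, v_I, v_Q: scope time vector and I/F output traces (synthetic placeholders here)
f_beat = 100.0
t = np.linspace(0, 0.1, 10000)
v_I = 1.00 * np.cos(2 * np.pi * f_beat * t)
v_Q = 0.95 * np.cos(2 * np.pi * f_beat * t - np.pi / 2 + 0.05)

amp_I, ph_I = fit_sine(t, v_I, f_beat)
amp_Q, ph_Q = fit_sine(t, v_Q, f_beat)
print("gain imbalance [dB]:", 20 * np.log10(amp_I / amp_Q))
print("departure from I/Q orthogonality [deg]:", np.degrees(ph_Q - ph_I) + 90)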

For completeness, I will measure the input terminated I/F output noise levels later today. Note also that my characterization of the optical modulation profile did not reveal anything obviously wrong (to me at least). 

Attachment 1: noPreamp.pdf
Attachment 2: withPreamp.pdf
  15839   Wed Feb 24 11:53:24 2021   gautam   Update   General   Demod char part 2

I measured the noise of the I/F outputs of all the LSC demodulators. I made the measurement in two conditions, one with the RF input to the demodulators terminated with 50 ohms to ground, and the other with the RFPD plugged in, but the PSL shutter closed (so the PD dark noise was the input to the demodulator). The LO input was driven at the nominal level for all measurements (2-3 dBm going into the LO input, measured with the RF power meter, but I don't know what the level reaching the mixer is, because there is a complicated chain of ERA amplifiers and attenuators that determines that level).

As in the previous elog, I have grouped the results into boards that do not (Attachment #1) and do (Attachment #2) have the low noise preamp installed. The top row is for the "input terminated" measurements, while the bottom is with the RFPD plugged in, but dark. I think not a single board shows the "expected" noise performance for both I and Q channels. In the case where the preamp isn't installed, and assuming the mixer is being driven with >17dBm LO, we expect the mixer to demodulate the Johnson noise of 50 ohms, which would be ~1nV/rtHz, and so with the SR785, we shouldn't measure anything in excess of the instrument noise floor. With the low noise preamp installed, the expected output noise level is ~10nV/rtHz, which should just about be measurable (I didn't use any additional low noise front end preamp for these measurements). The AS55_I channel shows noise consistent with what was measured in 2017 after it was repaired, but the Q channel shows ~twice the noise. It seemed odd to me that the Q channels show consistently higher noise levels in general, but I confirmed that the SR785 channel 2 did not show elevated instrument noise, at least when terminated with 50 ohms, so it seems like a real thing.
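For reference, the ~1nV/rtHz figure quoted above is just the Johnson noise of the 50 ohm termination, sqrt(4 k_B T R):

import numpy as np

k_B, T, R = 1.380649e-23, 300.0, 50.0     # J/K, K, ohm
v_n = np.sqrt(4 * k_B * T * R)            # thermal voltage noise of the termination
print(f"{v_n * 1e9:.2f} nV/rtHz")         # ~0.91 nV/rtHz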

While this is clearly not an ideal state of operation, I don't see how this can explain the odd PRMI sensing.

Quote:

For completeness, I will measure the input terminated I/F output noise levels later today. Note also that my characterization of the optical modulation profile did not reveal anything obviously wrong (to me at least). 

Attachment 1: noises_noPreamp.pdf
Attachment 2: noises_withPreamp.pdf
  15840   Wed Feb 24 12:11:08 2021   gautam   Update   General   Demod char part 3

I did the characterization discussed at the meeting today.

  1. RF signal at 100 Hz offset from the LO frequency was injected into the PD input on the demod boards.
  2. The digitized CDS channels were monitored. I chose to look at the C1:LSC-{PD}_I_OUT and C1:LSC-{PD}_Q_OUT channels. This undoes the effect of the analog whitening, but is before the digital phase rotation.
  3. Attachments #1 and #2 are for the case where the analog whitening is not engaged, while Attachments #3 and #4 are for when the whitening is engaged; they look the same (as they should), which rules out any crazy mismatch between the analog filter and the digital dewhitening filter.
  4. I have absorbed the flat whitening gain applied to the various PDs in the cts/V calibration indicated on these plots. So the size of the ellipse is proportional to the conversion gain.

I think this test doesn't suggest anything funky in the analog demod/whitening/AA/digitization chain. We can repeat this process after the demod boards are repaired and use the angle of rotation of the ellipse to set the "D" parameter in the CDS phase rotator part; I didn't do it today.
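For reference, the phase rotator just applies a 2x2 rotation to the (de-whitened) I/Q pair, so the fitted ellipse angle maps directly onto the rotation setting. A minimal sketch of the idea (the sign/angle convention here is an assumption and should be checked against the actual RCG phase rotator part before trusting it):

import numpy as np

def rotate_iq(i, q, phi_deg):
    """Rotate the demodulated (I, Q) pair by phi degrees, CDS-phase-rotator style."""
    phi = np.radians(phi_deg)
    i_rot =  i * np.cos(phi) + q * np.sin(phi)
    q_rot = -i * np.sin(phi) + q * np.cos(phi)
    return i_rot, q_rot

# Example: a signal sitting at 30 deg in the I/Q plane is de-rotated entirely into I
i, q = np.cos(np.radians(30)), np.sin(np.radians(30))
print(rotate_iq(i, q, 30))   # -> (1.0, ~0.0)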

Attachment 1: noPreamp.pdf
Attachment 2: withPreamp.pdf
Attachment 3: noPreamp_whitened.pdf
Attachment 4: withPreamp_whitened.pdf
  15841   Wed Feb 24 12:29:18 2021   gautam   Update   General   Input pointing recovered

While working at the LSC rack, I lost the input pointing into the IFO (the TT wiring situation is apparently very fragile, and this observation supports the hypothesis that the drifting TTs are symptomatic of some electronics issue). After careful beam walking, it was recovered and the dither alignment system was used to maximize TRX/TRY once again. No lasting damage done. If I can figure out what the pin-mapping is for the TT coils in vacuum, I'm inclined to try installing the two HAM-A coil drivers to control the TTs. Does anyone know where I can find said pin-out? The wiki page links seem broken and there isn't a schematic available there...

OK, it should be possible to back it out from the BOSEM pinout and the mapping of the in-vacuum quadrupus cable, though careful accounting of the mirroring will have to be done... The HAM-A coil driver actually already has a 15 pin output like the iLIGO coil drivers that are currently in use, but the pin mapping is different, so we can't just swap the unit in. On the bright side, this will clear up 6U of rack space in 1Y2. In fact, we can also consider hooking up the shadow sensor part of the BOSEMs if we plan to install 2 HAM-A coil drivers + 1 dual satellite amplifier combo (I'm not sure if this number of spares is available in what we ordered from Todd).

  15844   Thu Feb 25 16:50:53 2021   gautam   Update   General   PRMI sensing matrix

After all the work at the LSC rack over the last couple of days, I re-locked the PRMI (ETMs misaligned), and measured the sensing matrix once again. The PRMI was locked using 1f error signals, with AS55_Q as the MICH sensor and REFL11_I as the PRCL sensor. As shown in Attachment #1, the situation has not changed: there is still no separation between the DoFs in the REFL signals. I will measure the MC lock point offset using the error point dither technique today to see if there is something there.

Attachment 1: PRMI1f_noArmssensMat.pdf
  15845   Thu Feb 25 20:37:49 2021   gautam   Update   General   Setting modulation frequency and checking IMC offset

The Marconi frequency was tuned by looking at 

  1. The ~3.68 MHz (= 3*f1 - fIMC) peak at the IMC servo error point, TP1A, and
  2. The ~25.8 MHz (= 5*f1 - fIMC) peak at the MC REFL PD monitor port. The IMC error point is not a good place to look for this signal because of the post-demodulation low pass filter (indeed, I didn't see any peak above the analyzer noise floor).

The nominal frequency was 11.066209 MHz, and I found that both peaks were simultaneously minimized by adjusting it to 11.066195 MHz, see Attachment #1. This corresponds to a length change of ~20 microns, which I think is totally reasonable. I guess the peaks can't be nulled completely because of imbalance in the positive and negative sidebands. 

Then, I checked for possible offsets at the IMC error point by injecting a signal into the AO input of the IMC servo board (using the Siglent func gen) at ~300 Hz. I then looked at the peak height at the modulation frequency, and at the second harmonic. The former should be minimized when the cavity is exactly on resonance, while the latter is proportional to the modulation depth at the audio frequency. I found that I had to tweak the MC offset voltage slider from the nominal value of 0V to 0.12 V to null the former peak, see Attachment #2. After accounting for the internal voltage division factor of 40, and using my calibration of the IMC error point as 13 kHz/V, this corresponds to a 40 Hz (~50 microns) offset from the true resonant point. Considering the cavity linewidth of ~4 kHz, I think this is a small detuning, and it probably changes from lock to lock, or with time of day, temperature etc.
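The arithmetic behind the two numbers above, as a quick sketch (the ~13.5 m IMC length is an assumed nominal value):

# Marconi re-tuning: fractional change in f1 -> fractional change in IMC length
f1_old, f1_new = 11.066209e6, 11.066195e6   # Hz
L_imc = 13.5                                # m, assumed nominal IMC length
dL = L_imc * (f1_old - f1_new) / f1_old
print(f"equivalent length change: {dL * 1e6:.0f} um")   # ~17 um, i.e. ~20 um

# IMC error point offset: slider voltage -> detuning in Hz
V_slider, divider, cal = 0.12, 40, 13e3     # V, internal division factor, Hz/V
print(f"detuning: {V_slider / divider * cal:.0f} Hz")   # ~39 Hz, i.e. ~40 Hz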

Conclusion: I think neither of these tests suggest that the IMC is to blame for the weirdness in the PRMI sensing, so the mystery continues.

Attachment 1: modFreq.pdf
Attachment 2: IMC_offset.pdf
  15908   Fri Mar 12 03:22:45 2021   Koji   Update   General   Gaussmeter in the electronics drawer

For magnet strength measurement: There is a gaussmeter in the drawer with the Fluke meters (2nd from the top). It turns on and reacts to a whiteboard magnet.

Attachment 1: P_20210311_231104.jpg
  15989   Thu Apr 1 23:55:33 2021   Koji   Summary   General   HEPA AC cord replacement

I think the PSL HEPA fans (both units) are not running. The switches were on, and the variac was changed from 60% to 0%~100% a few times, but with no success.
I have no energy left for troubleshooting today. The main HEPA switch was turned off.

  15992   Fri Apr 2 15:17:23 2021   gautam   Summary   General   HEPA AC cord replacement

After the last failure, I had ordered 2 extra capacitors (they are placed on top of the PSL enclosure, above where the capacitors would normally be installed). If the new capacitors lasted < 6 months, that may be symptomatic of some deeper problem though, e.g. the HEPA fans themselves needing replacement. We don't really have a good diagnostic of when the failure happened, I guess, as we don't have any channel recording the state of the fans.

Quote:

I think the PSL HEPA fans (both units) are not running. The switches were on, and the variac was changed from 60% to 0%~100% a few times, but with no success.
I have no energy left for troubleshooting today. The main HEPA switch was turned off.

  15995   Mon Apr 5 08:25:59 2021   Anchal, Paco   Update   General   Restore MC from early quakes

[Paco, Anchal]

Came in a little bit after 8 and found the MC unlocked and struggling to lock for the past 3 hours. Looking at the SUS overview, both the MC1 and ITMX watchdogs had tripped, so we damped the suspensions and brought them back to a good state. The autolocker was still not able to catch lock, so we cleared the WFS filter history to remove large angular offsets in MC1, and after this the MC caught its lock again.

Looks like two EQs came in at around 4:45 AM (Pacific) suggested by a couple of spikes in the seismic rainbow, and this.

  16002   Tue Apr 6 21:17:04 2021   Koji   Summary   General   PSL HEPA investigation

- Last week we found both of the PSL HEPA units were not running.

- I replaced the capacitor of the north unit, but it did not solve the issue. (Note: I reverted the cap back later)
- It was found that the fans ran if the variac was removed from the chain.
- But I'm not certain that we can run the fans in this configuration unattended, considering the fire hazard.

@3AM: UPON LEAVING the lab, I turned off the HEPA. The AC cable was not warm, so it's probably OK, but we should hold off on continuous operation until we replace the scorched AC cable.


The capacitor replacement was not successful. So, the voltages on the fans were checked more carefully. Each fan has three switch states (HIGH/OFF/LOW). With no load (SW: OFF), the variac output was as expected. With the switch at LOW or HIGH, it looked as if the motor was shorted (i.e. no voltage difference between the two wires).

I thought the motors may have been shorted. But when the load resistance was measured with the Fluke meter, it showed some resistance:

- North Unit: SW LOW 4.6Ohm / HIGH 6.0Ohm
- South Unit: SW LOW 6.0Ohm / HIGH 4.6Ohm (I believe the internal connection is incorrect here)

So I believe the motors are alive! Then the fans were switched on with the variac removed... they ran. So I set the switch to LOW for the north unit and HIGH for the south unit.

Then I inspected the variac:

  • The AC output has some liquid leaking (oil?) (Attachment 1)
  • The AC plug on the variac out has a scorch mark (Attachments 2/3)

So, this scorched AC plug/cable is connected directly to the AC right now. I'm not 100% confident about the safety of this configuration.
Also, I am not sure what was wrong with the system.

  • Did the variac fail first, perhaps because of heat? I believe that the HEPA was running @30% most of the time. Maybe the damage was already there from the failure in Nov 2020?
  • Or did a motor stop at some point, causing the variac to fail?
  • Or was the cable bad, and the heat from it made the variac fail (in which case the problem is still there)?

So, while I'm in the lab today, I'll keep the HEPA running, but when I leave, I'll turn it off. We'll discuss what to do in the meeting tomorrow.

 

Attachment 1: 20210406211741_IMG_0554.jpeg
Attachment 2: 20210406211840_IMG_0555.jpeg
Attachment 3: 20210406211850_IMG_0556.jpeg
  16025   Wed Apr 14 12:27:10 2021   gautam   Update   General   Lab left open again

Once again, I found the door to the outside in the control room open when I came in ~1215pm. I closed it.

  16026   Wed Apr 14 13:12:13 2021   Anchal   Update   General   Sorry, it was me

Sorry about that. It must have been me. I'll make sure it doesn't happen again. I was careless not to check back; no further explanation.

  16028   Wed Apr 14 14:52:42 2021   gautam   Update   General   IFO State

The C1:IFO-STATE variable is actually a bunch of bits (16 to be precise), and the 2-byte word they form, converted to decimal, is what is written to the EPICS channel. It was reported on the call today that the nominal value of the variable when the IMC is locked was "8", while it has become "10" today. In fact, this has nothing to do with the IMC. You can see that the "PMC locked" bit is set in Attachment #1. This is done in the AutoLock.sh PMC autolocker script, which was run a few days ago. Nominally, I just lock the PMC by moving some sliders, and I neglect to set/unset this bit.

Basically, there is no anomalous behavior. This is not to say that the situation cannot be improved. Indeed, we should get rid of the obsolete states (e.g. FSS Locked, MZ locked), and add some other states like "PRMI locked". While there is nothing wrong with setting these bits at the end of execution of some script, a better way would be to configure the EPICS record to automatically set / unset itself based on some diagnostic channels. For example, the "PMC locked" bit should be set if (i) the PMC REFL is < 0.1 AND (ii) PMC TRANS is > 0.65 (the exact thresholds are up for debate). Then we are truly recording the state of the IFO and not relying on some script to write to the bit (I haven't thought through whether there are some edge cases where we need an unreasonable number of diagnostic channels to determine if we are in a certain state or not).
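As an illustration of the proposed approach, a minimal pyepics sketch (the PMC REFL/TRANS channel names and the bit position are assumptions for illustration, and this is written as a polling script rather than as the self-updating EPICS record described above):

from epics import caget, caput   # pyepics

PMC_LOCKED_BIT = 1   # assumed bit position ("8" -> "10" corresponds to bit value 2)

def pmc_is_locked():
    # thresholds are up for debate, as noted above; channel names are placeholders
    return caget("C1:PSL-PMC_RFPDDC") < 0.1 and caget("C1:PSL-PMC_PMCTRANSPD") > 0.65

def update_ifo_state():
    state = int(caget("C1:IFO-STATE"))
    if pmc_is_locked():
        state |= (1 << PMC_LOCKED_BIT)
    else:
        state &= ~(1 << PMC_LOCKED_BIT)
    caput("C1:IFO-STATE", state)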

Attachment 1: IFOSTATE.png
  16029   Wed Apr 14 15:30:29 2021   rana   Update   General   Sorry, it was me

Maybe tighten the tensioner on the door closer so that it closes by itself even in the low velocity case. Or maybe just use the front door like everyone else?

  16030   Wed Apr 14 16:46:24 2021   Anchal   Update   General   IFO State

That makes sense. I had assumed that IFO-STATE was already configured the way you propose. This could be implemented later.

Quote:
 

a better way would be to configure the EPICS record to automatically set / unset itself based on some diagnostic channels. For example, the "PMC locked" bit should be set if (i) the PMC REFL is < 0.1 AND (ii) PMC TRANS is > 0.65 (the exact thresholds are up for debate). Then we are truly recording the state of the IFO and not relying on some script to write to the bit (I haven't thought through whether there are some edge cases where we need an unreasonable number of diagnostic channels to determine if we are in a certain state or not).

 

  16039   Fri Apr 16 00:21:52 2021   Koji   Update   General   Glue Freezer completely frozen

I was looking at the laser head/amp and somehow decided to open the glue freezer. It was stuck. I managed to open it, but the upper compartment was completely frozen over.
Some of the batteries were embedded in a block of ice. I think we should throw them out.

Can the person who comes in the morning work on defrosting?

- Coordinate with Yehonathan and move the amps and the wooden crate so that you can move the freezer.

- Remove the contents to somewhere (it's OK to be room temp for a while)

- Unplug the freezer

- Leave the freezer outside with the door open. After a while, the ice will come off on its own.

- At the end of the day, move it back to the lab. Continue defrosting another day if any ice remains.

Attachment 1: P_20210416_000906.jpg
Attachment 2: P_20210416_000850.jpg
  16040   Fri Apr 16 10:58:16 2021   Yehonathan   Update   General   Glue Freezer completely frozen

{Paco, Anchal, Yehonathan}

We emptied the fridge and moved the amplifier equipment on top of the amplifier crate. We unplugged the freezer and moved it out of the lab to defrost (attachment).

Quote:

I was looking at the laser head/amp and somehow decided to open the glue freezer. It was stuck. I managed to open it, but the upper compartment was completely frozen over.
Some of the batteries were embedded in a block of ice. I think we should throw them out.

Can the person who comes in the morning work on defrosting?

- Coordinate with Yehonathan and move the amps and the wooden crate so that you can move the freezer.

- Remove the contents to somewhere (it's OK to be room temp for a while)

- Unplug the freezer

- Leave the freezer outside with the door open. After a while, the ice will come off on its own.

- At the end of the day, move it back to the lab. Continue defrosting another day if any ice remains.

 

Attachment 1: 20210416_105048.jpg
  16045   Fri Apr 16 19:07:31 2021   Yehonathan   Update   General   Glue Freezer completely frozen

There is still a huge chunk of unmelted ice in the fridge. I moved the contents of that fridge into the main fridge and put up "do not eat" warning signs.

I returned the fridge to the lab and plugged it back in to prevent flooding.

Defrosting will have to continue on Monday.

  16048   Mon Apr 19 10:52:27 2021   Yehonathan   Update   General   Glue Freezer completely frozen

{Anchal, Paco, Yehonathan}

We took the glue fridge outside.

  16051   Mon Apr 19 19:40:54 2021   Yehonathan   Update   General   Glue Freezer completely frozen

{Paco, Yehonathan}

We broke up the last chunk of ice and cleaned the fridge. We moved the fridge back inside and plugged it into the wall. The glues were moved back from the main fridge.

The batteries that were found soaking wet are now somewhat dry and were left on the cabinet drawers for future recycling.

Quote:

{Anchal, Paco, Yehonathan}

We took the glue fridge outside.

 

  16058   Wed Apr 21 05:48:47 2021   Chub   Update   General   PSL HEPA Maintenance

Yikes!  That's ONE filter.  I'll get another from storage.

  16065   Wed Apr 21 13:10:12 2021   Koji   Update   General   PSL HEPA Maintenance

It's probably too late to say, but there are/were two boxes. (Just for the record.)

 

  16113   Mon May 3 18:59:58 2021   Anchal   Summary   General   Weird gas leakage kind of noise in 40m control room

For the past few days, a weird sound, like a decaying gas leak, has been coming into the 40m control room from the southwest corner of the ceiling. Attached is an audio capture. It comes about every 10 min or so.

Attachment 1: 40mNoiseFinal.mp3
  16115   Mon May 3 23:28:56 2021   Koji   Summary   General   Weird gas leakage kind of noise in 40m control room

I also noticed some sound in the control room. (didn't open the MP3 yet)

I'm afraid that the hard disk in the control room iMac is dying.

 

  16119   Tue May 4 19:14:43 2021   Yehonathan   Update   General   OSEMs from KAGRA

I put the box containing the untested OSEMs from KAGRA near the south flow bench on the floor.

  16121   Wed May 5 13:05:07 2021   Chub   Update   General   chassis delivery from De Leone

Assembled chassis from De Leone placed in the 40 Meter Lab, along the west wall and under the display pedestal table.  The leftover parts are in smaller Really Useful boxes, also on the parts pile along the west wall.

Attachment 1: de_leone_del_5-5-21.jpg
  16156   Mon May 24 10:19:54 2021   Paco   Update   General   Zita IOO strip

Updated IOO.strip on Zita to show WFS2 pitch and yaw trends (C1:IOO-WFS2_PIT_OUT16 and C1:IOO-WFS2_YAW_OUT16), and changed the colors slightly to have all pitch trends in the yellow/brown band and all yaw trends in the pink/purple band.

No one says, "Here I am attaching a cool screenshot, becuz else where's the proof? Am I right or am I right?"

Mon May 24 18:10:07 2021 [Update]

After waiting for some traces to fill the screen, here is a cool screenshot (Attachment 1). At around 2:30 PM the MC unlocked, and the BS_Z (vertical) seismometer readout jumped. It has stayed like this for the whole afternoon... The MC eventually caught its lock and we even locked XARM without any issue, but something happened in the 10-30 Hz band. We will keep an eye on it during the evening...

Tue May 25 08:45:33 2021 [Update]

At approximately 02:30 UTC (so 07:30 PM yesterday) the 10-30 Hz seismic step dropped back... It lasted 5 hours, mostly causing BS motion along Z (vertical) as seen by the minute trend data in Attachment 2. Could the MM library have been shaking? Was the IFO snoring during its afternoon nap?

Attachment 1: Screenshot_from_2021-05-24_18-09-37.png
Attachment 2: 24and25_05_2021_PEM_BS_10_30.png
  16206   Wed Jun 16 19:34:18 2021   Koji   Update   General   HVAC

I made a flow sensor with a stick and tissue paper to check the airflow.

- The HVAC indicator was not lit, but it was just a bulb problem. The replacement bulb is inside the gray box.

- I went to the south arm. There are two big vent ducts for the outlets and intakes. Neither is flowing air.
  The current temp at 7pm was ~30degC. Max and min were 31degC and 18degC.

- Then I went to the vertex and the east arm. The outlets and intakes are flowing.

Attachment 1: HVAC_Power.jpeg
Attachment 2: South_Arm.jpeg
Attachment 3: South_End_Tenperature.jpeg
Attachment 4: Vertex.jpeg
Attachment 5: East_Arm.jpeg
  16234   Thu Jul 1 11:37:50 2021   Paco   Update   General   restarted c0rga

Physically rebooted the c0rga workstation after failing to ssh into it (even though I was able to ping it...). The RGA seems to be off though. The last log with data in it appears to date back to 2020 Nov 10, but the last reasonable spectra appear in logs from before 11-05. Gautam verified that the RGA was intentionally turned off then.

  16240   Tue Jul 6 17:40:32 2021   Koji   Summary   General   Lab cleaning

We held a lab cleaning for the first time since the campus reopening (Attachment 1).
Now we can actually use some of the desks for people to work at! Thanks for the cooperation.

We relocated a lot of items into the lab.

  • The entrance area was cleaned up. We believe that there is no 40m lab stuff left.
    • BHD BS optics was moved to the south optics cabinet. (Attachment 2)
    • DSUB feedthrough flanges were moved to the vacuum area (Attachment 3)
  • Some instruments were moved into the lab.
    • The Zurich instrument box
    • KEPCO HV supplies
    • Matsusada HV supplies
  • We moved the large pile of SUPERMICRO boxes within the lab. They are now around MC2, while the PPE boxes that were there were moved behind the beam tube in the MC2 area. (Attachment 4)
  • We also moved PPE boxes behind the beam tube on the X arm, behind the SUPERMICRO computer boxes. (Attachment 7)
  • Leftover ISC/WFS components were moved to the pile of BHD electronics.
    • Front panels (Attachment 5)
    • Components in the boxes (Attachment 6)

We still want to do some more cleaning:

- Electronics workbenches
- Stray setup (cart/wagon in the lab)
- Some leftover on the desks
- Instruments scattered all over the lab
- Ewaste removal

Attachment 1: P_20210706_163456.jpg
Attachment 2: P_20210706_161725.jpg
Attachment 3: P_20210706_145210.jpg
Attachment 4: P_20210706_161255.jpg
Attachment 5: P_20210706_145815.jpg
Attachment 6: P_20210706_145805.jpg
Attachment 7: PXL_20210707_005717772.jpg
  16245   Wed Jul 14 16:19:44 2021   gautam   Update   General   Brrr

Since the repair work, the lab is significantly cooler. Surprisingly, even at the vertex (to be more specific, inside the PSL enclosure, which for the time being is the only place where we have a logged temperature sensor; this is not attributable to any change in the HEPA speed), the temperature is a good 3 deg C cooler than it was before the HVAC work (even though Koji's wind vane suggested the vents at the vertex were working). Was the setpoint for the entire lab modified? What should the setpoint even be?

Quote:
 

- I went to the south arm. There are two big vent ducts for the outlets and intakes. Neither is flowing air.
  The current temp at 7pm was ~30degC. Max and min were 31degC and 18degC.

- Then I went to the vertex and the east arm. The outlets and intakes are flowing.

Attachment 1: rmTemp.pdf
  16246   Wed Jul 14 19:21:44 2021   Koji   Update   General   Brrr

Jordan reported on Jun 18, 2021:
"HVAC tech came today, and replaced the thermostat and a coolant tube in the AC unit. It is working now and he left the thermostat set to 68F, which was what the old one was set to."

  16250   Sat Jul 17 00:52:33 2021   Koji   Update   General   Canon camera / small silver tripod / macro zoom lens / LED ring light borrowed -> QIL

Canon camera / small silver tripod / macro zoom lens / LED ring light borrowed -> QIL

Attachment 1: P_20210716_213850.jpg
  16255   Sun Jul 25 18:21:10 2021   Koji   Update   General   Canon camera / small silver tripod / macro zoom lens / LED ring light returned / Electronics borrowed

Camera and accessories returned.

One HAM-A coil driver and one sat amp borrowed -> QIL

https://nodus.ligo.caltech.edu:8081/QIL/2616

 

  16265   Wed Jul 28 20:20:09 2021   Yehonathan   Update   General   The temperature sensors and function generator have arrived in the lab

I put the temperature sensors box on Anchal's table (attachment 1) and the function generator on the table in front of the c1auxey Acromag chassis (attachment 2).

 

Attachment 1: 20210728_201313.jpg
Attachment 2: 20210728_201607.jpg
  16269   Wed Aug 4 18:19:26 2021   paco   Update   General   Added infrasensing temperature unit to martian network

[ian, anchal, paco]

We hooked up the infrasensing unit to power and changed its default IP address from 192.168.11.160 (factory default) to 192.168.113.240 on the martian network. The sensor is online at that IP address, with user controls behind the usual password used for most workstations.

  16270   Thu Aug 5 14:59:31 2021   Anchal   Update   General   Added temperature sensors at Yend and Vertex too

I've added the other two temperature sensor modules, at the Y end (on 1Y4, IP: 192.168.113.241) and at the vertex (on 1X2, IP: 192.168.113.242). I've updated the martian host table accordingly. From inside the martian network, one can point a browser at the IP address to see the temperature sensor status. These sensors can be set to trigger alarms and send emails/SMS etc. if the temperature goes out of a defined range.

I feel something is off though. The vertex sensor shows a temperature of ~28 degrees C, the X end says 20 degrees C, and the Y end says 26 degrees C. I believe these sensors might need calibration.

The remaining tasks are the following:

  • Modbus TCP solution:
    • If we get it right, this will be the easiest solution.
    • We just need to add these sensors as streaming devices in some slow EPICS machine's .cmd file and add the temperature sensing channels to a corresponding database file.
  • Python workaround:
    • Might be faster, but dirty.
    • We run a python script on megatron which requests temperature values every second or so from the IP addresses and writes them to soft EPICS channels.
    • We would still need to create the soft EPICS channels for this and add them to the framebuilder data acquisition list.
    • An even shorter-term workaround for the near future could be to just write the temperature every 30 min to a log file somewhere.

[anchal, paco]

We made a script at scripts/PEM/temp_logger.py and ran it on megatron. The script uses the requests package to query the latest sensor data from the three sensors every 10 minutes as JSON and writes it out accordingly. This is not a permanent solution.
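A minimal sketch of that kind of poller (the JSON endpoint path and the mapping of the X end sensor to .240 are assumptions; the actual script lives at scripts/PEM/temp_logger.py):

import time
import requests

SENSORS = {
    "Xend":   "192.168.113.240",
    "Yend":   "192.168.113.241",
    "Vertex": "192.168.113.242",
}

while True:
    stamp = time.strftime("%Y-%m-%d %H:%M:%S")
    for name, ip in SENSORS.items():
        try:
            # endpoint path and JSON structure are assumed for illustration
            data = requests.get(f"http://{ip}/data.json", timeout=5).json()
            print(stamp, name, data)
        except requests.RequestException as e:
            print(stamp, name, "query failed:", e)
    time.sleep(600)   # every 10 minutes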

  16274   Tue Aug 10 17:24:26 2021   paco   Update   General   Five day trend

Attachment 1 shows a five-and-a-half-day minute trend of the three temperature sensors. Logging started last Thursday ~2 pm, when all sensors were finally deployed. While it appears that there is a 7 degree gradient along the XARM, it seems like the "vertex" (more like ITMX) sensor was just placed on top of a network switch (which feels lukewarm to the touch), so this needs to be fixed. A similar situation is observed with the ETMY sensor. I shall fix this later today.


Done. The temperature reading should now be more independent from nearby instruments.


Wed Aug 11 09:34:10 2021 I updated the plot with the full trend before and after rearranging the sensors.

Attachment 1: six_day_minute_trend.png
  16277   Thu Aug 12 11:04:27 2021   Paco   Update   General   PSL shutter was closed this morning

Thu Aug 12 11:04:42 2021 Arrived to find the PSL shutter closed. Why? Who? When? How? No elog, no fun. I opened it, IMC is now locked, and the arms were restored and aligned.

  16278   Thu Aug 12 14:59:25 2021   Koji   Update   General   PSL shutter was closed this morning

What I was afraid of was the vacuum interlock. And indeed there was a pressure surge this morning. Is this real? Why didn't we receive the alert?

Attachment 1: Screen_Shot_2021-08-12_at_14.58.59.png
  16279   Thu Aug 12 20:52:04 2021   Koji   Update   General   PSL shutter was closed this morning

I did a bit more investigation on this.

- I checked P1~P4, PTP2/3, N2, TP2, TP3. But found only P1a and P2 were affected.

- Looking at the min/mean/max of P1a and P2 (Attachment 1), the signal had a large fluctuation. It is impossible for P1a to go from 0.004 to 0 instantaneously.

- Looking at the raw data of P1a and P2 (Attachment 2), the value was not steadily large. Instead, it looks like fluctuating noise.

So my conclusion is that, for an unknown reason, an unknown noise coupled only into P1a and P2 and tripped the PSL shutter. I still don't know the status of the mail alert.

Attachment 1: Screen_Shot_2021-08-12_at_20.51.19.png
Attachment 2: Screen_Shot_2021-08-12_at_20.51.34.png
  16288   Mon Aug 23 11:51:26 2021   Koji   Update   General   Campus Wide Power Glitch Reported: Monday, 8/23/21 at 9:30am

Campus Wide Power Glitch Reported: Monday, 8/23/21 at 9:30am (more like 9:34am according to nodus log)

nodus: rebooted. ELOG/apache/svn is running. (looks like Anchal worked on it)

chiara: survived the glitch thanks to UPS

fb1: not responding -> @1pm open to login / seemed rebooted only at 9:34am (network path recovered???)

megatron: not responding

optimus: no route to host

c1aux: ping ok, ssh not responding -> needed to use telnet (vme / vxworks)
c1auxex: ssh ok
c1auxey: ping ok, ssh not responding -> needed to use telnet (vme / vxworks)
c1psl: ping NG, power cycled the switch on 1X2 -> ssh OK now
c1iscaux: ping NG -> rebooted the machine -> ssh recovered

c1iscaux2: does not exist any more
c1susaux: ping NG -> responds after 1X2 switch reboot

c1pem1: telnet ok (vme / vxworks)
c1iool0: does not exist any more

c1vac1: ethernet service restarted locally -> responding
ottavia: does not exist?
c1teststand: ping ok, ssh not responding

3:20PM we started restarting the RTS
