  40m Log, Page 339 of 349
ID   Date   Author   Type   Category   Subject
  6127   Sat Dec 17 00:00:03 2011   kiwamu   Update   Green Locking   60 Hz line noise gone

Quote from #6126
As shown in the noise budget below, the 60 Hz line noise currently dominates the arm displacement.

 The 60 Hz line noise has gone away.

It turned out that the line noise came from an oscilloscope.
The oscilloscope had been connected to an SR560, which, acting as a whitening filter, amplifies the frequency-discriminated signal before the ADC.
I still don't have a good explanation for it, but somehow connecting the oscilloscope made the line noise pretty high.
  17413   Mon Jan 23 22:51:17 2023   yuta   Summary   BHD   60 Hz harmonics side lobe investigations

[Paco, Yehonathan, Yuta]

Since we have installed BH44, we are seeing side lobes of 60 Hz + harmonics in AS55, REFL55, BH55, BH44, preventing us from locking FPMI BHD (40m/17405).

BH55 RF amp removed:
  - We have noticed that the side lobes are there in BH55 (but not in BH44) when LO-ITMX single bounce is fringing (ETMs and ITMY mis-aligned).
  - Changing whitening gains and turning on/off whitening/unwhitening filters didn't help.
  - When LO-ITMX single bounce is locked with BH55, the side lobe in BH55 reduces.
  - Dithering LO1 at 11 Hz created a 180 +/- 11 Hz signal, which confirms that these side lobes come from up-conversion of optic motion (see the worked example after this list).
  - We thought it could be from RF saturation, so we have put a 55-67 MHz bandpass filter (SBP-60+) in between the BH55 RFPD and the RF amp (ZFL-1000LN+; 40m/17195). Didn't help.
  - We then removed the RF amp. This largely reduced the side lobes (but still some at 180 Hz). We could lock LO-ITMX single bounce without the RF amp, so we decided to remove it for now.
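As a worked check of the up-conversion reading referenced in the dithering item above: an 11 Hz amplitude modulation of the 180 Hz (third harmonic) line produces exactly the observed side lobes, since

  $\cos(2\pi\,180\,t)\,\cos(2\pi\,11\,t) = \frac{1}{2}\left[\cos(2\pi\,169\,t) + \cos(2\pi\,191\,t)\right]$,

i.e. side lobes at 180 +/- 11 Hz.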

Side lobes only when one of the arms is locked:
  - When the ETMs are mis-aligned, with MICH fringing and BHD fringing, there are 60 Hz + harmonics, but the side lobes are not there.
  - But with the Xarm locked (ETMY, ITMY mis-aligned) or the Yarm locked (ETMX, ITMX mis-aligned), the side lobes appear in AS55, REFL55, BH55, BH44.
  - Changing whitening gains and turning on/off whitening/unwhitening filters didn't help.
  - As the error signals are normalized by TRX and TRY, we turned on/off the power normalization, but that didn't help.
  - Switching 60 Hz comb in BS, ITMX, ITMY, ETMX, ETMY suspension damping didn't help.

POY11 Investigations:
  - When the ETMs are mis-aligned, POX11 had relatively large 60 Hz + harmonics, but almost none in POY11 (unlike other RFPDs; see Attachment #1).
  - However, when ETMY is aligned and the Yarm is locked with POY11, the side lobe grows in POY11.
  - Changing the feedback point from ETMY to ITMY or MC2 didn't help.
  - We have unplugged the IQ demod board for BH44 from the eurorack (without removing the cables) and removed the fuse for the power supply of the RF amp for 44 MHz generation (40m/17401), but these also didn't help.
  - We have also tried locking the Yarm with REFL55 (= ~2 x POY11), BH55 (= ~10 x POY11), and ALSY (= ~2000 x POY11), but the side lobes were always there.
  
Next:
  - Disconnect cables in BH44 to open possible ground loops made during BH44 installation (especially 44 MHz generation part??).
  - Check if the noise was there before BH44 installation using past data.

Attachment 1: Screenshot_2023-01-24_11-43-40_POXPOYDark.png
  17461   Mon Feb 13 11:54:54 2023   yuta   Summary   BHD   60 Hz frequency noise is coming from MC1 coils

[JC, Yuta]

We have found that MC1 coils are causing 60 Hz noise.
Tripping watchdogs for MC1 coils reduced 60 Hz noise seen in YARM by a factor of 100.

Method:
 - Locked YARM with POY11 and measured the YARM sensitivity, to use it as a 60 Hz frequency noise monitor
 - Tripped MC1, MC2, MC3 coil output watchdogs to see if they are causing this 60 Hz frequency noise. IMC WFS were turned off.

Result:
 - Attachment #1 is YARM sensitivity and MC_F in Hz with MC1,2,3 untripped (dotted) and MC1 tripped (solid).

YARM (PSL locked vs Yarm), MC1,2,3 untripped: 6.0e2 Hz/rtHz (2.6e2 Hz RMS)
MC_F (sum of noises in IMC loop), MC1,2,3 untripped: 4.8e4 Hz/rtHz (2.1e4 Hz RMS)
YARM (PSL locked vs Yarm), MC1 tripped: 6.6e0 Hz/rtHz (2.9e0 Hz RMS) -- reduced by a factor of 100
MC_F (sum of noises in IMC loop), MC1 tripped: 4.7e4 Hz/rtHz (2.0e4 Hz RMS)
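As a quick check of the quoted factor of 100 using the numbers above:

  $\frac{6.0\times10^{2}}{6.6\times10^{0}} \approx 91$ (spectral density), $\qquad \frac{2.6\times10^{2}}{2.9\times10^{0}} \approx 90$ (RMS),

both consistent with a factor of ~100 in YARM, while MC_F is essentially unchanged.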

 - We have also tried tripping MC2 and MC3 coils, but they didn't make much difference.
 - Untripping only one of MC1 face coils created 60 Hz frequency noise, so all the face coils seem to have the same level of 60 Hz noise.

Next:
 - Inspect around MC1 coil driver

Attachment 1: YARM_calibrated_noise_20230213_Hz.pdf
Attachment 2: Screenshot_2023-02-13_12-23-23_TrippingMC1.png
  17462   Mon Feb 13 17:35:20 2023   Anchal   Summary   BHD   60 Hz frequency noise is coming from MC1 coils

[Anchal, Yuta]

We think we have narrowed down the source of the 60 Hz noise to one of the following possibilities:

  • Ground loop present along the MC1 suspension damping loop
  • 60 Hz DAC noise on inputs of MC1 coil driver
  • 60 Hz noise injected at dewhitening board before the dewhitening filter

The second and third cases are unlikely because we see the 60 Hz noise only in the MC1 coils and not in the MC3 coils, even though both share the same connection from the DAC to the SOS dewhitening board (D000316-A). If the noise entered in that shared path, the MC3 channels should show it as well.

This inference was made from the following observations:

Change                                                                    Reduction in noise at C1:LSC-YARM_IN1_DQ (dB)
Turn off damping loops, keep coil outputs enabled                         0
Turn off coil outputs (only fast actuation)                               43
Turn ON analog coil dewhitening filter on one face coil only              30
Turn ON analog coil dewhitening filter on all face coils (Attachment 1)   43

Note: Turning ON the analog dewhitening on an MC1 coil is done by turning off the FM9 switch, which is the simulated digital dewhitening filter. Also note that the analog dewhitening filter has an attenuation of 30 dB at 60 Hz.
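For comparison with the factor-of-100 (40 dB) reduction quoted in the previous entry (40m/17461), the table's amplitude reductions convert to linear factors as

  $10^{43/20} \approx 140, \qquad 10^{30/20} \approx 32$,

and the single-face-coil row matches the ~30 dB analog dewhitening attenuation at 60 Hz noted above.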

MC1 has an unconventional setup where the satellite amplifier is from the new generation while the coil driver and dewhitening boards are from the old generation. The new generation satellite amplifiers send PD signals as differential signals, but the old generation PD whitening interface expects single-ended inputs, so we have been using the PD monitor outputs from the satellite amplifier, which connects the grounds of the two boards to each other. Maybe this is the reason for the ground loop.

Attachment 1: YARM_calibrated_noise_20230213_Hz_MC1SimDWOnOff.pdf
  3752   Thu Oct 21 12:15:02 2010   rana   Update   PEM   6.9 Mag EQ in Gulf of California
Magnitude 6.9
Date-Time
Location 24.843°N, 109.171°W
Depth 10 km (6.2 miles) set by location program
Region GULF OF CALIFORNIA
Distances 105 km (65 miles) S of Los Mochis, Sinaloa, Mexico
125 km (75 miles) SW of Guamuchil, Sinaloa, Mexico
140 km (85 miles) NE of La Paz, Baja California Sur, Mexico
1200 km (740 miles) WNW of MEXICO CITY, D.F., Mexico
Location Uncertainty horizontal +/- 6.1 km (3.8 miles); depth fixed by location program
Parameters NST=187, Nph=187, Dmin=843.1 km, Rmss=1.17 sec, Gp=133°,
M-type=teleseismic moment magnitude (Mw), Version=6
Source
  • USGS NEIC (WDCS-D)
Event ID us2010crbl
  3166   Wed Jul 7 11:35:59 2010   Gopal   Update   WIKI-40M Update   6.30.10 - 7.7.10 Weekly Update

Summary of this Week's Activities:

6/30: 2nd and 3rd drafts of Progress Report

7/1: 4th draft and final drafts of Progress Report; submitted to SFP

7/5: Began working through busbar COMSOL example

7/6: LIGO meeting and lecture; meeting with Koji and Steve to find drawing of stacks; read through Giaime's thesis, Chapter 2 as well as two other relevant papers.

7/7: Continued working on busbar in COMSOL; should finish this as well as get good headway on stack design by the end of the day.

  3142   Wed Jun 30 11:35:06 2010   Gopal   Update   General   6.23.10 - 6.30.10 Weekly Update

Summary of this Week's Activities:

6/23: LIGO Safety Tour; Simulink Controls Tutorial

6/24: Simulink Diagram for Feedback Loop; Constructed Pendulum Transfer Function; Discussion with Dr. Weinstein

6/25: Prepare for pump-down of vacuum chamber; crane broken due to locking failure; worked through COMSOL tutorials

6/28: Ran through Python Tutorials; Began learning about Terminal

6/29: Wrote Progress Report 1 First Draft

6/30: Began editing Progress Report 1

  3103   Wed Jun 23 12:31:36 2010   Gopal   Update   General   6.16.10-6.23.10 Weekly Update

Summary of This Week's Activities:

6/16: LIGO Orientation; First Weekly Meeting; 40m tour with Jenne; Removed WFS Box Upper Panel, Inserted Cable, Reinstalled panel

6/17: Read Chapter 1 of Control Systems Book; LIGO Safety Meeting; Koji's Talk about PDH Techniques, Fabry-Perot Cavities, and Sensing/Control; Meeting w/ Nancy and Koji

6/18: LIGO Talk Part II; Glossed over "LASERS" book; Read Control Systems Book Chapter 2; Literary Discussion Circle

6/21: Modecleaner Matrix Discussion with Nancy; Suggested Strategy: construct row-by-row with perturbations to each d.f. --> Leads to some questions on how to experimentally do this.

6/22: Learned Simulink; Learned some Terminal from Joe and Jenne; LIGO Meeting; Rana's Talk; Christian's Talk; Simulink Intro Tutorial

6/23 (morning): Simulink Controls Tutorial; Successfully got a preliminary feedback loop working (hooray for small accomplishments!)

 

Outlook for the Upcoming Week:

Tutorials (in order of priority): Finish Simulink Tutorials, Work through COMSOL Tutorials

Reading (in order of priority): Jenne's SURF Paper, Controls Book, COMSOL documentation, Lasers by Siegman.

Work: Primarily COMSOL-related and pre-discussed with Rana

  11139   Fri Mar 13 03:10:35 2015   Jenne   Update   LSC   6+ CARM->REFL transitions, 1 DARM->AS transition

Much more success tonight.  I only started my tally after I got the CARM transition to work entirely by script, and I have 6 tally marks, so I probably made the CARM to RF-only transition 7 or maybe 8 times tonight in total.  Unfortunately, I only successfully made the DARM transition to AS55 once.    From the wall striptool, counting the number of times the transmitted power went high, I had about 40 lock trials total. 

The one RF-only lock ended around 1:27am.

I think 2 things were most important in their contributions to tonight's success.  I modified the bounceRoll filters in the CARM and DARM filter banks to eat less phase.  Also, using Q's recipe as inspiration, I started engaging the AO path partway through the CARM transition which makes it much less delicate. 


Bounce roll filter

Koji and I added a ~29Hz resonant gain in the bounce roll filter several months ago, to squish some noise that we were seeing in the CARM and DARM ALS error signals.  This does a lot of the phase-eating.  I'm assuming / hoping that that peak won't be present in the CARM and DARM RF error signals.  But even if it is, we can deal with it later.  For now, that peak is not causing so much motion that I require it.  So, it's gone. 

This allowed me to move the complex zero pair from 30 Hz down to 26 Hz.  Overall I think this gained me about 10 degrees of phase at 100Hz, and moved the low end of the phase bubble down by about 10Hz. 
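To illustrate the phase bookkeeping, here is a minimal sketch (assuming a simple unity-DC-gain zero pair; the Q below is a placeholder I picked, and the real filter also contained the now-removed 29 Hz resonant gain, so this shows the method rather than reproducing the quoted numbers):

import numpy as np
import scipy.signal as sig

def zero_pair_phase_deg(f_z, Q, f_eval):
    """Phase (deg) at f_eval of a unity-DC-gain complex zero pair:
    H(s) = (s^2 + (w_z/Q) s + w_z^2) / w_z^2."""
    w_z = 2 * np.pi * f_z
    num = [1.0, w_z / Q, w_z ** 2]
    den = [w_z ** 2]
    w_eval = 2 * np.pi * np.atleast_1d(f_eval)
    _, h = sig.freqs(num, den, worN=w_eval)
    return np.degrees(np.angle(h))

Q = 3.0  # assumed; the elog does not state the zero pair's Q
for f_z in (30.0, 26.0):
    phase = zero_pair_phase_deg(f_z, Q, 100.0)[0]
    print(f"zeros at {f_z:.0f} Hz: phase lead at 100 Hz = {phase:+.1f} deg")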


Prep for REFL 11 I through the CM board and CM_SLOW

In order to use Q's recipe (elog 11138), I wanted to be able to lock CARM on REFL11 using the CM_SLOW filter bank. 

I did a few sweeps through CARM resonance while holding on ALS, and determined that the REFL1 input to the CM board needed a gain of -20dB in order to match the slope of CM_SLOW_OUT to CARM_IN (ALS), leaving all of Q's other settings alone.  Q had been using a REFL1 gain of 0dB for the PRY earlier today.

I needed to flip the sign in the input matrix relative to what Q had (he was using +1 in the CM_SLOW -> CARM_B, I used -1 there).  To match this in the fast path, I flipped the polarity of the CM board (Q was using minus polarity, I am using positive).

The CM_SLOW filter bank had a gain of 0.000189733.  I assume that Q did this so that the input matrix element could be unity.  I left this number alone.  It is of the same order as the plain REFL11I->CARM input matrix element of 1e-4 from Saturday night, so it seemed fine.

During my sweeps through the CARM resonance, I also saw that I needed an offset to make CM_SLOW's average about 0.  With the crazy gain number, I needed an offset of -475 in the CM_SLOW filter bank.  As I type this though, it occurs to me that I should have put this in the CM board, since the fast path will have an offset that isn't handled.  Ooops. 


Trying Q's recipe for engaging AO path

I am able to get the MC2 AO gain slider up to -10dB (-7 is also okay).  If I increase the digital CARM gain too much, I see gain peaking at about 800Hz, so something good is happening.  (That was with a CARM_B gain of 2.0 and CARM_A gain of 0.  Don't go to 2.0)

I tried once without engaging his 300:80 1/f^2 filter in the CM_SLOW filter banks to start stepping up the CM REFL1 and MC AO gains together, but I only made it 2 steps of 1dB each before I lost lock. 

I tried once or twice turning on that 300:80 filter that Q said over the phone really helped his PRY locking, but it causes loop oscillations in CARM.  Also, I forgot to turn it off for ~45 minutes, and it caused several locklosses.  Ooops.  Anyhow, this isn't the right filter for this situation.


AS55 whitening problem

Twice I tried turning on the AS55 whitening.  Once, I was only partly transitioned from ALSdiff to AS55, the other time was the one time I made the full transition.  It caused the lockloss from the only RF-only lock I had tonight :(

Unfortunately I don't have the time series before the whitening filters (not _DQ-ed), but you can see a giant jump in the _ERR signals when I turn on the whitening, just before the arm power dies:

AS55whitening_lockloss_12March2015.pdf

The AS55 demod phase is -30 degrees; the I signal has an offset of 28.2 and the Q signal has an offset of 6.4.  Both have a gain of 1.  This should give us enough info to back out what the _IN1 signals looked like before I turned on the whitening, if that's useful.
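If it's useful later, here is a minimal sketch of that back-projection. The ordering assumed below (offsets added after a 2x2 rotation by the demod phase, unity gain) is my assumption, not a statement of the actual LSC signal chain, so check the model before trusting signs:

import numpy as np

def undo_demod_rotation(err_i, err_q, phase_deg, off_i, off_q):
    """Recover the pre-rotation (I, Q) from the _ERR outputs,
    assuming ERR = R(phase) @ IN1 + offset, with gain 1."""
    phi = np.radians(phase_deg)
    # Remove the offsets (applied after the rotation in this assumed
    # ordering), then rotate back by -phase.
    di = np.asarray(err_i) - off_i
    dq = np.asarray(err_q) - off_q
    in1_i = np.cos(phi) * di + np.sin(phi) * dq
    in1_q = -np.sin(phi) * di + np.cos(phi) * dq
    return in1_i, in1_q

# Numbers quoted above for AS55; err_i/err_q stand in for the
# recorded _ERR time series (fake data here for illustration).
t = np.linspace(0, 1, 2048)
err_i = 28.2 + 10 * np.sin(2 * np.pi * 5 * t)
err_q = 6.4 + 2 * np.sin(2 * np.pi * 5 * t)
in1_i, in1_q = undo_demod_rotation(err_i, err_q, -30.0, 28.2, 6.4)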


Other random notes

Ramp times for CARM_A, CARM_B, DARM_A and DARM_B are all 5 seconds.  This is set in the carm_cm_up script.

carm_cm_up script freezes the arm ASS before it starts the IR->ALS transition, to make it more convenient to run the ASS each lockloss.

carm_cm_up script no longer has a bunch of stuff at the bottom that we're not using.  It's all archived in the svn, but the remnants from things like variable finesse aren't actively  useful.

carm_cm_down script turns off the CM_SLOW whitening (which gets set in the up script)

carm_cm_down script clears the history of the ETM oplevs, in case they went bad (from some near divide-by-zero action?), but the watchdog isn't tripped. This clears away all the high freq crap and lets them do their job.

FSS Slow has been larger than 0.55 all night, larger than 0.6 most of the night, and larger than 0.7 for the last bit of the night.  MC seems happy.

both carm_cm_up and carm_cm_down are checked into the svn.  The up script is rev 45336 and the down script is 45337.

Some offset (maybe the fact that the fast AO path had an un-compensated offset?) is pulling the arm powers down as I make the transitions:


Recipe overview

  • Lock PRMI with arms held on ALS at 3nm CARM offset.  Bring CARM offset to 0.
  • Turn on CARM_B and DARM_B a little bit, then turn on their integrators
  • Lower the PRCL and MICH gains a little.
  • Increase the CARM_B gain a bit, then turn off FM1 for both CARM and DARM.
  • Increase CARM_B gain, lowering CARM_A gain.
  • Increase DARM_B gain, lowering DARM_A gain.  Now the power should definitely be stable (usually ends up around 80).
  • Partly engage AO path.
    • CM board REFL1 gain = -20dB
    • CM board AO gain = 0dB
    • MC2 board AO gain starts at -32dB, stepped up to -20dB
  • Increase CARM_B gain a bit
  • More AO path:  MC2 board AO gain steps from -20dB to -10dB
  • Increase CARM_B gain to 1.5, turn CARM_A gain to zero
  • CM_SLOW whitening on

After that, I by-hand made the DARM transition on the 6th successful scripted CARM transition, and tried to script what I did, although I was never able to complete the DARM transition again.  So, starting where the recipe left off above,

  • Turn off DARM's FM2 boost to win some more phase margin.
  • Increase DARM_B gain to 0.5, lower DARM_A gain to 0.

Since DARM doesn't have an analog fast path, it is stuck in the delicate filter situation.  I think that I should probably start using the UGF servo once the arm power is stable so that DARM stays in the middle of its phase bubble.

Rather than typing out the details of the recipe, I am attaching the up script.

Attachment 1: AS55whitening_lockloss_12March2015.pdf
Attachment 2: MoreDARMB_powerWentDown_12March2015.png
Attachment 3: carm_cm_up_zip.sh.gz
  11140   Fri Mar 13 14:11:59 2015   rana   Update   LSC   6+ CARM->REFL transitions, 1 DARM->AS transition

Since the DARM_OUT signal is only 500 counts_peak, I don't see why the AS55 whitening needs to be switched on. Maybe in a couple weeks after the lock is robust. In any case, it's much better to do the switching BEFORE you're using AS55, not after.

  12392   Wed Aug 10 15:34:24 2016   Steve   Update   SUS   6 in-lbs torque driver for wire clamp screw

The 7.5 in-lb lower limit of the Wiha seems to be at the upper end of the torque range for a 4-40 SS screw.

Wiha 28502 ordered, with range 5-10 in-lb, for silver-plated 4-40 screws.

Do not trust the Venzo torque wrench under 2 Nm! It misled me.

Recommended torque values for silver-plated fasteners are here. For aLIGO we use the guidelines in T1100066-v6. This doc is also posted on the 40m wiki under Mechanics.

So, we'll use 6 in-lb on silver-plated 18-8 stainless steel socket head cap screws, 4-40 x 3/8, into the SS tower bridge.

Please replace these clamp screws every time if they were tightened without a torque wrench.

Quote:

New Wiha 28504 torque wrench for SOS wire clamping. Its range is 7.5-20 in-lb in 0.5 in-lb steps [0.9-2.2 Nm], with an audible and perceptible click when the pre-set torque has been attained, at ±6% accuracy.

The new ETMX sus wire was torqued to ~11.5 in-lb [1.3 Nm].

Quote:

Gautam and Steve,

The clamp's left side was jammed onto the left guide pin. It was installed with the slit facing left. Gautam had to use force to remove it. The clamp should move freely, seated on the guide rods, until torque is applied. Do not move on with hanging the optic while a clamp is jammed. Fix it.

Never use force as you are hanging/aligning the optic. The clamp is in the shop for resurfacing and slit opening.


  11596   Mon Sep 14 23:12:49 2015   ericq   Update   LSC   55MHz modulation phase effect on PRMI

With the adjustable delay line box installed in the 55MHz modulation path, I've measured the PRMI sensing matrix as a function of delay / relative phase between the 11MHz and 55MHz modulations. The relative frequency difference of 44MHz tells us that this should be cyclical after ~23nsec of delay, but losses in the delay cable change this; see Koji's elogs about the modulation cancellation setup for details. 
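The ~23 nsec figure is just the period of the 44MHz difference frequency between the two modulations:

  $\Delta t = \frac{1}{55\,\mathrm{MHz} - 11\,\mathrm{MHz}} = \frac{1}{44\,\mathrm{MHz}} \approx 22.7\ \mathrm{nsec}$.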

TL;DR: Nothing really changes, other than REFL33 optical gain. MICH/PRCL angles remain degenerate.


The results aren't so surprising. The demod angles for the 55MHz diodes don't even change, since the same 55MHz signal is used for the modulator and demodulators, so delaying it before the split should go unnoticed. Most of these measurements were made during the same lock stretch, PRCL on REFL11 I and MICH on AS55Q.

The only signals we would expect to change much are ones that have significant contributions from field products influenced by both modulations. None of the 1F PDs are like this, nor is REFL165. REFL33 is the odd man out, where the +44MHz field produced as a -11MHz sideband on the +55MHz sideband beats with the +11MHz sideband (and the same with the signs flipped). I made a simulation for the 40m poster at the March 2015 LVC meeting, but I don't think it ever made it to the ELOG. 
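Explicitly, the frequency bookkeeping for that REFL33 contribution is

  $(+55\,\mathrm{MHz}) - 11\,\mathrm{MHz} = +44\,\mathrm{MHz}, \qquad 44\,\mathrm{MHz} - 11\,\mathrm{MHz} = 33\,\mathrm{MHz}$,

so the +44MHz field beating against the +11MHz sideband lands in the 33MHz demodulation band (and likewise with all signs flipped).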

So:

Here are the results for the 0ns and 4ns cases, as an illustration of what changes (REFL33), and what doesn't (everything else). Again, these are calibrated to Volts out of the analog demod boards per meter of DoF motion. 

 

So, since REFL33 is the only one really changing, let's just look at it by itself:

Qualitatively, the change in magnitude looks similar to the simulation result. The demod angles fall by some roughly linear amount. The angle difference is even more stationary than predicted there, though. 

Attachment 1: PRMI_CAR_0ns.pdf
Attachment 2: PRMI_CAR_4ns.pdf
Attachment 3: delaySweep_nominal.pdf
Attachment 4: 55delay_PRMI_REFL33.pdf
  11173   Wed Mar 25 18:48:11 2015   Koji   Summary   LSC   55MHz demodulators inspection

[Koji Den EricG]

We inspected the {REFL, AS, POP}55 demodulators.

In short, we made the following changes:

- The REFL55 PD RF signal is connected to the POP55 demodulator now.
Thus, the POP55 signals should be used at the input matrix of the LSC screens for PRMI tests.

- The POP55 PD RF signal is connected to the REFL55 demodulator now.

- We jiggled the whitening gains and the whitening triggers. Whitening gains for the AS, REFL, POP PDs are set to be 9, 21, 30dB as before.
However, the signal gains may have changed. The optimal gains should be checked through locking with the interferometer.


- Test 1

Injected a 55.3MHz signal into the demodulators. Checked the amplitude of the demodulated signal with DTT.
The peak height in the spectrum was calibrated to counts (i.e. it is not counts/rtHz).
We checked the amplitude at the input of the input filters (e.g. C1:LSC-REFL55_I_IN1). The whitening gains were set to 0dB,
and the whitening filters were turned off.

REFL55
f_inj = 55.32961MHz -10dBm
REFL55I @999Hz  22.14 [cnt]
REFL55Q @999Hz  26.21 [cnt]


f_inj = 55.33051MHz -10dBm
REFL55I @ 99Hz  20.26 [cnt]  ~200mVpk at the analog I monitor
REFL55Q @ 99Hz  24.03 [cnt]


f_inj = 55.33060MHz -10dBm
REFL55I @8.5Hz  22.14 [cnt]
REFL55Q @8.5Hz  26.21 [cnt]


----
f_inj = 55.33051MHz -10dBm
AS55I   @ 99Hz 585.4 [cnt]
AS55Q   @ 99Hz 590.5 [cnt]   ~600mVpk at the analog Q monitor

f_inj = 55.33051MHz -10dBm
POP55I  @ 99Hz 613.9 [cnt]   ~600mVpk at the analog I monitor
POP55Q  @ 99Hz 602.2 [cnt]

We wondered why REFL55 has such a small response. The other demodulators seem to have some daughter board (Sigg amp?).
This may be causing the difference.

-----

- Test 2

We injected a 1kHz 1Vpk AF signal into the whitening board. The peak height at 1kHz was measured.
The whitening filters/gains were set to the same condition as above.

f_inj = 1kHz 1Vpk
REFL55I 2403 cnt
REFL55Q 2374 cnt
AS55I   2374 cnt
AS55Q   2396 cnt
POP55I  2365 cnt
POP55Q  2350 cnt

So, they look identical. => The difference between REFL55 and the others is in the demodulator.

  8057   Mon Feb 11 16:16:27 2013   Steve   Update   VAC   55 days at atmosphere

CP Stat 100  sheet-covers were replaced by clean ones on open chambers BS, ITMX, ITMY and ETMY this morning.

Try to fold the sheets such way that the clean side is facing each other, so they do not accumulate dust.

 

Attachment 1: atm55d.png
  11490   Tue Aug 11 02:40:29 2015   ericq   Update   LSC   50m delay lines - Rough calibrations

Jessica will soon ELOG about some measurements suggesting that the conductive connector-ized ALS delay line enclosure is the way to go, when considering crosstalk between the delay lines. It is currently mounted and hooked up on the LSC rack, though I need to make a bunch of new SMA cables now that I think a semi-permanent arrangement has been reached. 

I did a rough re-calibration of the phase tracker output, since the increased cable delay changes the degree/Hz gain. This was done by fitting a line to a slow sawtooth FM from the SRS DS345 (1Hz rate, 10kHz deviation, 30MHz carrier). This resulted in the following calibration updates:

  • ALSX: 19230 -> 13051 Hz/count, 3.4dB more sensitive

  • ALSY: 19425 -> 12585 Hz/count, 3.8dB more sensitive

Again, this is a rough calibration. Nevertheless, it is not so surprising that we don't get the 50m/30m = 4.4dB increase we would expect just from the lengths; the (I presume) increased cable loss matters. Also, the loss's frequency dependence is an additional reason that the phase tracker calibration is not constant over all frequencies. 
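For reference, the quoted sensitivity changes follow directly from the calibration numbers:

  $20\log_{10}\!\left(\frac{19230}{13051}\right) \approx 3.4\ \mathrm{dB}, \qquad 20\log_{10}\!\left(\frac{19425}{12585}\right) \approx 3.8\ \mathrm{dB}, \qquad 20\log_{10}\!\left(\frac{50\,\mathrm{m}}{30\,\mathrm{m}}\right) \approx 4.4\ \mathrm{dB}$.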

I took spectra with the arms in IR lock, but didn't see any real improvement beyond a possible dip in the floor from 100-200Hz. This doesn't surprise me too much, however, since I don't believe that we are currently dominated by electronic noises that this gain increase would help overcome. 

Last week, Koji mentioned the ALS phase noise added due to the post-cavity table motion the arm-transmitted green beams experience before hitting the beat PD. I should estimate the size of this effect for our situation. 

  574   Thu Jun 26 14:06:00 2008   Masha   Update   General   500mW INNOLIGHT NPRO info
Below is the placement of the 500mW INNOLIGHT NPRO Mephisto laser. It is set up on the Symmetric Port table.
  8557   Thu May 9 02:19:53 2013   Jenne   Update   Locking   50% BS installed in POP path

Koji had the good idea of trying to measure the motion of the POP beam, and feeding that signal to PRM yaw to stabilize the motion.  To facilitate this, I have installed a 50% beam splitter before the POP 110/22 PD (so also before the camera). 

Before touching anything, I locked the PRM-ITMY half-cavity so that I had a constant beam at POP.  I measured the POP DC OUT to be 58.16 counts.  I then installed a 1" 50% BS, making sure (using the 'move card in front of optic while watching camera' technique) that I was not close to clipping on the new BS.  I then remeasured POP DC OUT, and found it to be 30.63.  I closed the PSL shutter to get the dark value, which was -0.30.  This means that I now have a factor of 0.53 less light on the POP110/22 PD.  To compensate for this, I changed the values of the power normalization matrix from 0.01 (MICH) to 0.0189, and 100 (PRCL) to 189.
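The rescaling is just the ratio of dark-corrected POP DC levels:

  $\frac{30.63 - (-0.30)}{58.16 - (-0.30)} \approx 0.53, \qquad \frac{0.01}{0.53} \approx 0.0189, \qquad \frac{100}{0.53} \approx 189$.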

After doing this, I restored the ITMX and am able to get several tens of seconds of PRMI lock (using AS55Q and REFL33I). 

I found several QPDs in the PD cabinet down the Y arm, but no readout electronics.  The QPD I found is D990272.  I don't really want to spend any significant amount of time hacking something for this together, if Valera can provide a QPD with BNC outputs. For now, I have not installed any DC PD or razor blade (which can be a temporary proxy for a QPD, enough to get us yaw information).

 

  754   Tue Jul 29 11:50:01 2008   Jenne   Update   Environment   5.6 Earthquake
Earthquake Details
Magnitude 5.6
Date-Time

* Tuesday, July 29, 2008 at 18:42:15 UTC
* Tuesday, July 29, 2008 at 11:42:15 AM at epicenter

Location 33.959°N, 117.752°W
Depth 12.3 km (7.6 miles)
Region GREATER LOS ANGELES AREA, CALIFORNIA
Distances

* 3 km (2 miles) SW (235°) from Chino Hills, CA
* 8 km (5 miles) SE (127°) from Diamond Bar, CA
* 9 km (5 miles) NNE (23°) from Yorba Linda, CA
* 11 km (7 miles) S (178°) from Pomona, CA
* 47 km (29 miles) ESE (103°) from Los Angeles Civic Center, CA

Location Uncertainty horizontal +/- 0.3 km (0.2 miles); depth +/- 1.3 km (0.8 miles)
Parameters Nph=144, Dmin=8 km, Rmss=0.42 sec, Gp= 18°,
M-type=local magnitude (ML), Version=1
Source

* California Integrated Seismic Net:
* USGS Caltech CGS UCB UCSD UNR

Event ID ci14383980

All the watchdogs tripped. I'll put them back after lunch, after the optics have had time to settle down.
  1653   Thu Jun 4 23:39:23 2009   pete   Update   PEM   5 days, 20 days of accelerometers

Looks like yesterday was particularly noisy.  It's unclear to me why the diurnal variation is much more visible in MC1_Y, and why the floor wanders.

 

The first plot shows 5 days.  The second plot shows 20 days.

Attachment 1: acc_5day.png
Attachment 2: acc_20days.png
  6092   Thu Dec 8 22:44:55 2011   Koji   Update   RF System   4ch demod test result

1) Linearity Test

LO input level was +10dBm. The LO freq was 11MHz and 55MHz for CH1 and CH2 respectively.
The IF frequency was fixed at 10kHz.

The amplitude of the RF input was swept from -50dBm to +15dBm.
Basically, the I and Q outputs of CH1 and CH2 were quite linear over this amplitude range.

2) Frequency Response

RF input was fixed at -20dBm and the IF frequency was swept from 1kHz to 1MHz.

The response was flat up to 100kHz, and has sensitivity up to 300kHz.

3) Output noise

The noise floor of the output is ~20nV/rtHz. All of the channels behave in the same way.
The 1/f rise starts from 100Hz.

Attachment 1: RF_DEMOD_TEST_111208.pdf
  6086   Thu Dec 8 00:45:13 2011   Koji   Update   RF System   4ch demod is ready

I have tested the left 2 channels of the 4ch demod board.

The leftmost is for 11MHz, and the next one is for 55MHz.

  1414   Fri Mar 20 15:54:29 2009   steve   Omnistructure   General   480V crane power switch on MEZ

The CES mezzanine is being rebuilt to accommodate our new neighbor: the 20ft high water slide... & jacuzzi.

All our AC power transformers are up there. Yesterday we labelled the 480VAC power switch on the mezzanine that we need to keep in order to run the 3 cranes in the lab.

  2612   Thu Feb 18 10:10:43 2010   steve   Configuration   General   480 V AC power turned off

Only the 40m cranes run on 480VAC. The electricians are rewiring this transformer on the mezzanine, so it was shut down.

I tested all three cranes before the 480V power was turned off. The last thing to do with the cranes is to wipe them down before use.

It will happen next Tuesday morning.

  2966   Fri May 21 11:56:34 2010   Alberto   Update   40m Upgrading   40mUpgrade Field Power and RF Power Spectrum at the ports. 38m/38.55m arm length issue.

I updated my old 40mUpgrade Optickle model by adding the latest updates to the optical layout (mirror distances, main optics transmissivities, folding mirror transmissivities, etc). I also cleaned it of a lot of useless Advanced LIGO features.

I calculated the expected power in the fields present at the main ports of the interferometer.

I repeated the calculations for both the arms-locked/arms-unlocked configurations. I used a new set of functions that I wrote which let me evaluate the field power and RF power anywhere in the IFO. (all in my SVN directory)

As in Koji's optical layout, I set the arm length to 38m, and I found that at the SP port there was much more power than I would expect at 44MHz and 110MHz.

It's not straightforward to identify unequivocally what is causing it (I have about 100 frequencies going around in the IFO), but presumably the measured power at 44MHz was from the beat between f1 and f2 (55-11=44MHz), and that at 110MHz was from the f2 first sidebands.

Here's what i found:

RFPower_locked_38m.png

RFPower_unlocked_38m.png

FieldPower_locked_38m.png

FieldPower_unlocked_38m.png

 

I found that when I set the arm length to 38.55m (the old 40m average arm length), the power at 44 and 110 MHz went significantly down. See here:

RFPower_3855m.png

 FieldPower_3855m.png

I checked the distance of each frequency circulating in the IFO from the arm resonance closest to it.

I found that f2 and 2*f2 are two of the closest frequencies to an arm resonance (~80 kHz away). With an arm cavity finesse of 450, that shouldn't be a problem, though; see the estimate below.
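A rough estimate supports this: with L ≈ 38 m and finesse 450, the arm cavity FWHM linewidth is

  $\delta f = \frac{\mathrm{FSR}}{\mathcal{F}} = \frac{c}{2L\mathcal{F}} = \frac{3\times10^{8}\ \mathrm{m/s}}{2 \times 38\ \mathrm{m} \times 450} \approx 8.8\ \mathrm{kHz}$,

so a sideband sitting ~80 kHz from resonance is roughly nine linewidths away.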

40mUpgrade_distanceFromResonance_38m.png

 I'll keep using the numbers I got to nail down the culprit.

Anyways, now the question is: what is the design length of the arms? Because if it is really 38m rather than 38.55m, then maybe we should change it back to the old values.

  2968   Fri May 21 16:24:11 2010   Koji   Update   LSC   40mUpgrade Field Power and RF Power Spectrum at the ports. 38m/38.55m arm length issue.

1. Give us the designed arm length. What are the criteria?

2. The arm lengths got shorter as the ITMs had to shift toward the ends. Making them longer is difficult. Try possible shorter lengths.

  2973   Mon May 24 10:03:14 2010   rana   Update   LSC   40mUpgrade Field Power and RF Power Spectrum at the ports. 38m/38.55m arm length issue.

 

 If you have a working 40m Optickle model, put it in a common place in the SVN, not in your own folder.

I can't figure out why changing the arm length would affect the RF sideband levels. If you are getting RF sidebands resonating in the arms, then some parameter is not set correctly.

As the RF sideband frequency gets closer to resonating in the arm, the CARM/DARM cross-coupling to the short DOFs probably gets bigger.

  2974   Mon May 24 11:32:05 2010   rana   Update   LSC   40mUpgrade Field Power and RF Power Spectrum at the ports. 38m/38.55m arm length issue.

Quote:

 

 If you have a working 40m Optickle model, put it in a common place in the SVN, not in your own folder.

I can't figure out why changing the arm length would affect the RF sideband levels. If you are getting RF sidebands resonating in the arms, then some parameter is not set correctly.

As the RF sideband frequency gets closer to resonating in the arm, the CARM/DARM cross-coupling to the short DOFs probably gets bigger.

I uploaded the latest iscmodeling package to the SVN under /trunk. It includes my addition of the 40m Upgrade model: /trunk/iscmodeling/looptickle/config40m/opt40mUpgrade2010.m.

I don't know the cause of these supposed resonances yet. I'm working to understand that. It would also be interesting to evaluate the results of absolute length measurements.

Here is what I also found:

reflRFpowerVsArmLength.png

It seems that 44, 66 and 110 are resonating.

If that is real, then 37.5m could be a better place, although I don't have a definition of "better" yet. All I can say is that these resonances are smaller there.

  15879   Mon Mar 8 12:54:54 2021   gautam   Update   Equipment loan   40m-->Cryo
  1. Busby box
  2. SR554 transformer preamplifier
  549   Fri Jun 20 08:30:27 2008   stiv   Update   Photos   40m summer line up 2008
atm1: John, Alberto, Yoichi, Koji, Masha, and Sharon

atm2: surf students Max of CIT, Sharon of MIT, and Masha of Harvard; Eric of CIT not shown
Attachment 1: P1020559.png
Attachment 2: P1020560.png
  6883   Wed Jun 27 15:10:34 2012   Jamie   Update   Computer Scripts / Programs   40m summary webpages move

I have moved the summary pages stuff that Duncan set up to a new directory that is accessible to the nodus web server and is therefore available from the outside world:

/users/public_html/40m-summary

which is available at:

https://nodus.ligo.caltech.edu:30889/40m-summary/

I updated the scripts, configurations, and crontab appropriately:

/users/public_html/40m-summary/bin/c1_summary_page.sh
/users/public_html/40m-summary/share/c1_summary_page.ini

 

  6686   Fri May 25 19:13:10 2012   Duncan Macleod   Summary   Computer Scripts / Programs   40m summary webpages

40m summary webpages

 The aLIGO-style summary webpages are now running on 40m data! They are running on megatron so can be viewed from within the martian network at:

http://192.168.113.209/~controls/summary

At the moment I have configured the 5 seismic BLRMS bands, and a random set of PSL channels taken from a strip tool.

Technical notes

  • The code is in Python, depending heavily on the LSCSoft PyLAL and GLUE modules.
    • /home/controls/public_html/summary/bin/summary_page.py
  • The HTML is supported by a CSS script and a JS script which are held locally in the run directory, and JQuery linked from the google repo.
    • /home/controls/public_html/summary/summary_page.css
    • /home/controls/public_html/summary/pylaldq.js
  • The configuration is controlled via a single INI format file
    • /home/controls/public_html/summary/share/c1_summary_page.ini

Getting frames

Since there are no segments or triggers for C1, the only data sources are GWF frames. These are mounted from the framebuilder under /frames on megatron. There is a python script that takes in a pair of GPS times and a frame type that will locate the frames for you. This is how you use it to find T type frames (second trends) for May 25 2012:

python /home/controls/public_html/summary/bin/framecache.py --ifo C1 --gps-start-time 1021939215 --gps-end-time 1022025615 --type T -o framecache.lcf

If you don't have GPS times, you can use the tconvert tool to generate them

$ tconvert May 25
1021939215

The available frame types, as far as I'm aware, are R (raw), T (second trends), and M (minute trends).

Running the code

The code is designed to be fairly easy to use, with most of the options set in the ini file. The code has three modes - day, month, or GPS start-stop pair. The month mode is a little sketchy so don't expect too much from it. To run in day mode:

python /home/controls/public_html/summary/bin/summary_page.py --ifo C1 --config-file /home/controls/public_html/summary/share/c1_summary_page.ini --output-dir . --verbose --data-cache framecache.lcf -SRQDUTAZBVCXH --day 20120525

Please forgive the large, apparently arbitrary collection of letters: since the 40m doesn't use segments or triggers, these options disable processing of those elements, and there are quite a few of them. They correspond to --skip-something options in long form. To see all the options, run

python /home/controls/public_html/summary/bin/summary_page.py --help

There is also a convenient shell script that will run over today's data in day mode, doing everything for you. This will run framecache.py to find the frames, then run summary_page.py to generate the results in the correct output directory. To use this, run

bash /home/controls/public_html/summary/bin/c1_summary_page.sh

Configuration

Different data tabs are disabled via command line --skip-this-tab style options, but the content of tabs is controlled via the ini file. I'll try to give an overview of how to use these. The only configuration required for the Seismic BLRMS 0.1-0.3 Hz tab is the following section:

 

[data-Seismic 0.1-0.3 Hz]
channels = C1:PEM-RMS_STS1X_0p1_0p3,C1:PEM-RMS_STS1Y_0p1_0p3,C1:PEM-RMS_STS1Z_0p1_0p3
labels = STS1X,STS1Y,STS1Z
frame-type = R
plot-dataplot1 =
plot-dataplot3 =
amplitude-log = True
amplitude-lim = 1,500
amplitude-label = BLRMS motion ($\mu$m/s)

The entries can be explained as follows:

  1. '[data-Seismic 0.1-0.3 Hz] - This is the section heading. The 'data-' mark identifies this as data, and is a relic of how the code is written, the 'Seismic 0.1-0.3 Hz' part is the name of the tab to be displayed in the output.
  2. 'channels = ...' - This is a comma-separated list of channels as they are named in the frames. These must be exact so the code knows how to find them.
  3. 'labels = STS1X,STS1Y,STS1Z' - This is a comma-separated list of labels mapping channel names to something more readable for the plots, this is optional.
  4. 'frame-type = R' - This tells the code what frame type the channels are, so it can determine from which frames to read them; this is not optional, I think.
  5. 'plot-dataplotX' - This tells the code I want to run dataplotX for this tab. Each 'dataplot' is defined in its own section, and if none of these options are given, the code tries to use all of them. In this configuration 'plot-dataplot1' tells the code I want to display the time-series of data for this tab.
  6. 'amplitude-XXX = YYY' - This gives the plotter specific information about this tab that overrides the defaults defined in the dataplotX section. The options in this example tell the plotter that when plotting amplitude on any plot, that axis should be log-scale, with a limit of 1-500 and with a specific label. The possible plotting configurations for this style of option are: 'lim', 'log', 'label', I think.

Other compatible options not used in this example are:

 

  • scale = X,Y,Z - a comma-separated list of scale factors to apply to the data. This can either be a single entry for all channels, or one per channel, nothing in between.
  • offset = X,Y,Z - another comma-separated list of DC offsets to apply to the data (before scaling, by default). DAQ noise may mean a channel that should read zero during quiet times is offset by some fixed amount, so you can correct that here. Again either one for all channels, or one per channel.
  • transform = lambda x: f(x) - a python format lambda function. This is basically any mathematical function that can be applied to each data sample. By default the code constructs the function 'lambda d: scale * (d-offset)', i.e. it calibrates the data by removing the offset and applying the scale (see the sketch after this list).
  • band = fmin, fmax - a low,high pair of frequencies within which to bandpass the data. Sketchy at best...
  • ripple_db = X - the ripple in the stopband of the bandpass filter
  • width = X - the width in the passband of the bandpass filter
  • rms_average = X - number of seconds in a single RMS average (combine with band to make BLRMS)
  • spectrum-segment-length = X - the length of FFT to use when calculating the spectrum, as a number of samples
  • spectrum-overlap = X - the overlap (samples) between neighbouring FFTs when calculating the spectrum
  • spectrum-time-step = X - the length (seconds) of a single median-mean average for the spectrogram

At the moment a package version issue means the spectrogram doesn't work, but the spectrum should. At the time of writing, to use the spectrum simply add 'plot-dataplot2'.

You can view the configuration file within the webpage via the 'About' link off any page.

Please e-mail any suggestions/complaints/praise to duncan.macleod@ligo.org.

  6687   Fri May 25 20:45:25 2012   Duncan Macleod   Summary   Computer Scripts / Programs   40m summary webpages

There is now a job in the crontab that will run the shell wrapper every hour, so the pages _should_ take care of themselves. If you make adjustments to the configuration file they will get picked up on the hour, or you can just run the script by hand at any time.

$ crontab -l
# m h  dom mon dow   command
0 */1 * * * bash /home/controls/public_html/summary/bin/c1_summary_page.sh > /dev/null 2>&1

  5254   Wed Aug 17 12:14:27 2011   Josh Smith   Omnistructure   Computer Scripts / Programs   40m summary page plans

Josh Smith, Fabian Magana-Sandoval, Jackie Lee (Fullerton)

Thanks to Jamie and Jenne for the tour and the input on the pages.

We had a look at the GEO summary pages and thought about how best to make a 40m summary page that would eventually become an aLIGO summary page. Here's a rough plan:

- First we'll check that we can access the 40m NDS2 server to get data from the 40m lab in Fullerton.

- We'll make a first draft of a 40m summary page in python, using pynds, and base the layout on the current geo summary pages.

- When this takes shape we'll iterate with Jamie, Jenne, Rana to get more ideas for measurements, layout.

Other suggestions: Jenne is working on an automated noise budget and suggests having a placeholder for it on the page. We can also incorporate some of the features of Aidan's 40m overview MEDM screen that's in progress, possibly with different plots corresponding to different parts of the drawing, etc. Jenne also will email us the link to the once-per-hour MEDM screenshots.

 

  2115   Mon Oct 19 11:00:52 2009   steve   HowTo   SAFETY   40m safety training

Kiwamu, Alex and Zach are practicing the mandatory IR-safety scan at the 40m-PSL.

40m-specific safety indoctrination was completed.

Attachment 1: safety_10_2009.JPG
  15303   Tue Apr 14 23:50:06 2020   Koji   Update   General   40m power glitch recovery

[Koji / Gautam (Remote)]

Lab status

  • Gray Panel: The lab AC was off. Turned on all three (N/S, CTRL RM, E/W)
  • The control room AC was running.

Work stations

  • Control Room: All the control machines were running. We knew that nodus/chiara/fb were running
  • 1X6/7:
    • JETSTOR was making a beeping sound: “Power #1 failed” / “Power #2 failed”
    • Optimus & megatron were off -> turned on -> up and running now
  • 1X1/2:
    • Power cycled the netgear at the top of the IOO rack (maybe not necessary)
    • Turned on c1ioo -> up and running now
  • 1X4/5: Rebooted c1sus / c1lsc -> up and running now
  • 1X9: Rebooted c1iscex -> up and running now
  • 1Y4: Rebooted c1iscey -> up and running now

Vacuum status

  • Looked like everything was running as if it did not see the power glitch
  • TP1 normal: Set speed 33.6k rpm / Actual speed 33.6k rpm 
  • TP2 normal: 66k rpm / PTP2 16.0 mtorr
  • TP3 normal: 31k rpm / PTP3 45.4mtorr
  • P1 LOW / P2 1.7mtorr / CC2 1.1e-6 / P3 7.6e-2 / P4 LO
  • Annuli: 2.7~3torr
  • CC1 9.6e-6 / SUPER BEE 0.9mtorr

C1VAC recovery

  • c1vac was alive, but was isolated from the martian network
  • Checked the network I/F status with /sbin/ifconfig -a
    • eth0 had no IP
    • eth1 had the vac subnet IP (192.168.114.9)
  • Ran sudo /sbin/ifdown eth0 then  sudo /sbin/ifup eth0
  • The I/F eth0 started running and c1vac became visible from martian
  • Later checked the vacuum screen: The pressure values and valve statuses looked normal.
    The interlock state was “running”. The system state was “unrecognized”.

End RTS recovery 

  • The end slow machines (auxex and auxey) were already running
  • Restarting end RT models:
    • c1iscey -> rtcds start --all
    • c1iscex -> rtcds start --all
  • Confirmed that the models can damp the SUSs

Vertex RTS recovery

  • We wanted to use the reboot script. (/opt/rtcds/caltech/c1/scripts/cds/rebootC1LSC.sh)
  • c1susaux​​
    • To be safe, we wanted to bring up c1susaux first.
    • c1susaux does not bring the network I/Fs up automatically upon reboot.
      -> Connect an LCD display / keyboard / mouse to c1susaux
      -> Ran sudo /sbin/ifup eth0 and sudo /sbin/ifup eth1
    • Now c1susaux is visible from martian.
    • Login c1susaux and ran:  
      sudo systemctl start modbusIOC.service 
      -> c1susaux epics is up and running now
    • ...Meanwhile c1susaux lost its eth1 somehow. This made the slow values of 8 vertex sus all zero
      -> Ran sudo /sbin/ifdown eth1 and sudo /sbin/ifup eth1 again on c1susaux ->  this resolved the issue
  • c1psl
    • Login c1psl and ran:  
      sudo systemctl start modbusIOC.service 
      -> c1psl epics is up and running now
  • Prepared for the rebooting script
    • Ran /opt/rtcds/caltech/c1/scripts/cds/rebootC1LSC.sh
    • Rebooting was done successfully. All the suspensions looked free and healthy.
    • Burtrestored c1susaux (used Apr 12 21:19 snapshot)

Hardware

  • PSL laser / Xend AUX laser / Yend AUX laser were off -> turned on
  • The PMC was immediately automatically locked.
  • The main marconi was off -> forgot to turn on
  • The end temp controllers for the SHG crystals were on but not enabled -> now enabled

RTS recovery ~ part 2

  • FB: FB status of all the RTS models were still red
  • Timing: c1x01/2/3/5 were 1 sec behind FB and c1x04 was 2 sec behind
  • -> Remedy:  https://nodus.ligo.caltech.edu:8081/40m/14349
    • Software rebooting of FB
    • Manually start the open-mx and mx services using
    • sudo systemctl start open-mx.service 
    • sudo systemctl start mx.service
    • Check that the system time returned by gpstime matches the gpstime reported by internet sources. e.g. http://leapsecond.com/java/gpsclock.htm
    • Manually start the daqd processes using
      sudo systemctl start daqd_*
  • This made all the FB(FE) indicators green!
  • Ran the reboot script again -> All green!

IMC recovery

  • The IMC status was checked
  • No autolocker, but it could be manually locked, i.e. MC1/2/3 were not very misaligned
  • Autolocker/Slow FSS recovery along with https://nodus.ligo.caltech.edu:8081/40m/15121
    • sudo systemctl start MCautolocker.service
    • sudo systemctl start FSSSlow.service
  • Both of them failed to run
  • Note by Gautam: The problem with the systemctl commands failing was that the NFS mount points weren’t mounted. Which in turn was because of the familiar /etc/resolv.conf problem. I added chiara to the namespace in this file, and then manually mounted the NFS mount points. This fixed the problem.
    Now the IMC is locked and the autolocker is left running.

Burt restore

  • Used Apr 12 21:19 snapshot
  • c1psl
  • c1alsepics/c1assepics/c1asxepics/c1asyepics
  • c1aux/c1auxex/c1auxey/
  • c1iscaux/c1susaux
  • This brought the REFL and AS beams back onto the CCDs. AS has small fringes.
  • Y arm has small IR flashes as well as green flashes.

JETSTOR recovery

  • JETSTOR was beeping. 
  • Shutdown megatron
  • Followed the instruction https://nodus.ligo.caltech.edu:8081/40m/13107
  • This stopped the beeping. We waited for JETSTOR to come up -> in a minute, the JETSTOR display became normal and all disks showed green.
  • Bring megatron back up again

N2 bottle

  • The left N2 bottle was empty. The right one had 1500PSI.
  • Replaced the left bottle with the spare one in the room.
  • Now the left one 2680PSI and the right one 1400PSI.

Closing

  • Closed PSL/AUX laser shutters
  • Turned off the lights in the lab, CTRL room, and the office.

Remaining Issues

  • [done] MCAutoLocker / FSSSlow scripts are not running
  • The PRM alignment slider has no effect (although the PRM is aligned…) -> SLOW DAQ frozen???
  • JETSTOR is not mounted on megatron [gautam mounted Jetstor on megatron on 4/18 at 2pm]
  15300   Tue Apr 7 15:30:40 2020   Jon   Summary   NoiseBudget   40m noise budget migrated to pygwinc

In the past year, pygwinc has expanded to support not just fundamental noise calculations (e.g., quantum, thermal) but also any number of user-defined noises. These custom noise definitions can do anything, from evaluating an empirical model (e.g., electronics, suspension) to loading real noise measurements (e.g., laser AM/PM noise). Here is an example of the framework applied to H1.

Starting with the BHD review-era noises, I have set up the 40m pygwinc fork with a working noise budget which we can easily expand. Specific actions:

  • Updated the 40m fork to the latest pygwinc version (while preserving the commit history).
  • Added a directory ./CIT40m containing the 40m-specific noise budget files (created by GV).
  • Added an ipython notebook CIT40m.ipynb at the root level showing how to generate a noise budget.
  • Integrated our DAC and seismic noise estimators into pygwinc.
  • Marked the old 40m NB repo as obsolete (last commit > 2 yrs ago). Many of these noise estimates are probably stale, but I will work with GV to identify which ones can be migrated.

I set up our fork in this way to keep the 40m separate from the main pygwinc code (i.e., not added to as a built-in IFO type). With the 40m code all contained within one root-level directory (with a 40m-specific name), we should now always be able to upgrade to the latest pygwinc without creating intractable merge conflicts.
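For orientation, here is a minimal sketch of how such a budget is typically loaded and run with pygwinc. The './CIT40m' path is the directory added above; the exact load_budget/run/plot API is from upstream pygwinc and may differ between versions, so treat this as illustrative rather than authoritative:

import numpy as np
import gwinc

# Frequency array for the budget evaluation: 10 Hz to 10 kHz.
freq = np.logspace(1, 4, 1000)

# Load the 40m budget description from its directory and evaluate
# every noise term (fundamental + user-defined) on that array.
budget = gwinc.load_budget('./CIT40m', freq=freq)
trace = budget.run()

# Plot the total noise and all sub-budget traces.
fig = trace.plot()
fig.savefig('CIT40m_noisebudget.pdf')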

  5650   Tue Oct 11 15:19:17 2011   rana   HowTo   Environment   40m map

The Kinemetrics dudes are going to visit us @ 1:45 tomorrow (Wednesday) to check out our stacks, seismos, etc.

40mLabMap.png 40mLabMap.jpg

I put these maps here on the elog since people are always getting lost trying to find the lab.

  6058   Thu Dec 1 11:25:10 2011   steve   Update   PEM   40m infrastructure holds up well in strong wind condition

Santa Ana wind speeds held around 60 km/h on campus last night, the strongest in 30 years. The lab held up well. We did not lose AC power either.

Trees and windows were blown out and over on campus.

We have 4 sliding glass windows without "heavy-laser-proof" inside protection.

We should plan to upgrade ALL sliding glass windows with metal protection from the inside.

Attachment 1: santaannawind.png
Attachment 2: wind.png
  340   Sun Feb 24 10:51:58 2008   tf   Frogs   Environment   40m in phdcomics?
  5651   Tue Oct 11 17:32:05 2011   jamie   HowTo   Environment   40m google maps link

Here's another useful link:

http://maps.google.com/maps?q=34.13928,-118.123756

  10507   Mon Sep 15 18:55:51 2014   rana   Update   DAQ   40m frames onto the cluster

 Dan Kozak is rsync-transferring /frames from NODUS over to the LDAS grid. He's doing this without a BW limit, but even so it's going to take a couple weeks. If nodus seems pokey or the net connection to the outside world is too tight, then please let me and him know so that he can throttle the pipe a little.

  10632   Wed Oct 22 21:06:33 2014   Chris   Update   DAQ   40m frames onto the cluster

Quote:

 Dan Kozak is rsync-transferring /frames from NODUS over to the LDAS grid. He's doing this without a BW limit, but even so it's going to take a couple weeks. If nodus seems pokey or the net connection to the outside world is too tight, then please let me and him know so that he can throttle the pipe a little.

The recently observed daqd flakiness looks related to this transfer. It appears to still be ongoing:

nodus:~>ps -ef | grep rsync
controls 29089   382  5 13:39:20 pts/1   13:55 rsync -a --inplace --delete --exclude lost+found --exclude .*.gwf /frames/trend
controls 29100   382  2 13:39:43 pts/1    9:15 rsync -a --delete --exclude lost+found --exclude .*.gwf /frames/full/10975 131.
controls 29109   382  3 13:39:43 pts/1    9:10 rsync -a --delete --exclude lost+found --exclude .*.gwf /frames/full/10978 131.
controls 29103   382  3 13:39:43 pts/1    9:14 rsync -a --delete --exclude lost+found --exclude .*.gwf /frames/full/10976 131.
controls 29112   382  3 13:39:43 pts/1    9:18 rsync -a --delete --exclude lost+found --exclude .*.gwf /frames/full/10979 131.
controls 29099   382  2 13:39:43 pts/1    9:14 rsync -a --delete --exclude lost+found --exclude .*.gwf /frames/full/10974 131.
controls 29106   382  3 13:39:43 pts/1    9:13 rsync -a --delete --exclude lost+found --exclude .*.gwf /frames/full/10977 131.
controls 29620 29603  0 20:40:48 pts/3    0:00 grep rsync

Diagnosing the problem:

I logged into fb and ran "top". It said that fb was waiting for disk I/O ~60% of the time (according to the "%wa" number in the header). There were 8 nfsd (network file server) processes running, with several of them listed in status "D" (waiting for disk). The daqd logs were ending with errors like the following, suggesting that it couldn't keep up with the flow of data:

[Wed Oct 22 18:58:35 2014] main profiler warning: 1 empty blocks in the buffer
[Wed Oct 22 18:58:36 2014] main profiler warning: 0 empty blocks in the buffer
GPS time jumped from 1098064730 to 1098064731

This all pointed to the possibility that the file transfer load was too heavy.

Reducing the load:

The following configuration changes were applied on fb.

Edited /etc/conf.d/nfs to reduce the number of nfsd processes from 8 to 1:

OPTS_RPC_NFSD="1"

(was "8")

Ran "ionice" to raise the priority of the framebuilder process (daqd):

controls@fb /opt/rtcds/rtscore/trunk/src/daqd 0$ sudo ionice -c 1 -p 10964

And to reduce the priority of the nfsd process to the best-effort class:

controls@fb /opt/rtcds/rtscore/trunk/src/daqd 0$ sudo ionice -c 2 -p 11198

I also tried punishing nfsd with an even lower priority ("-c 3", the idle class), but that was causing the workstations to lag noticeably.

After these changes the %wa value went from ~60% to ~20%, and daqd seems to die less often, but some further throttling may still be in order.

  2315   Mon Nov 23 17:53:08 2009   Jenne   Update   Computers   40m frame builder backup acting funny

As part of the fb40m restart procedure (Sanjit and I were restarting it to add some new channels so they can be read by the OAF model), I checked up on how the backup has been going.  Unfortunately the answer is: not well.

Alan imparted to me all the wisdom of frame builder backups on September 28th of this year.  Except for the first 2 days of something having gone wrong (which was fixed at that time), the backup script hasn't thrown any errors, and thus hasn't sent any whiny emails to me.  This is seen by opening up /caltech/scripts/backup/rsync.backup.cumlog, and noticing that after October 1, 2009, all of the 'errorcodes' have been zero, i.e. no error (as opposed to 'errorcode 2' when the backup fails).

However, when you ssh to the backup server to see what .gwf files exist, the last one is at gps time 941803200, which is Nov 9 2009, 11:59:45 UTC.  So, I'm not sure why no errors have been thrown, but also no backups have happened. Looking at the rsync.backup.log file, it says 'Host Key Verification Failed'.  This seems like something which isn't changing the errcode, but should be, so that it can send me an email when things aren't up to snuff.  On Nov 10th (the first day the backup didn't do any backing-up), there was a lot of Megatron action, and some adding of StochMon channels.  If the fb was restarted for either of these things, and the backup script wasn't started, then it should have had an error, and sent me an email.  Since any time the frame builder's backup script hasn't been started properly it should send an email, I'm going to go ahead and blame whoever wrote the scripts, rather than the Joe/Pete/Alberto team.

Since our new RAID disk holds ~28 days of local storage, we won't have lost anything on the backup server as long as the backup works tonight (or sometime in the next few days): the backup is an rsync, so it copies anything it hasn't already copied.  Since the fb got restarted just now, hopefully whatever funny business was going on (maybe with the .agent files???) will be gone, and the backup will work properly.
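One way to sanity-check that tonight's run will pick up the backlog, without actually transferring anything, would be an rsync dry run along these lines (the source path and destination are placeholders, not our real ones):

rsync -an --itemize-changes /frames/full/ backupserver:/path/to/frames/ | head

The -n makes it a dry run; the itemized output lists exactly which files a real run would copy.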

I'll check in with the frame builder again tomorrow, to make sure that it's all good.

  2322   Tue Nov 24 16:06:45 2009 JenneUpdateComputers40m frame builder backup acting funny

Quote:

As part of the fb40m restart procedure (Sanjit and I were restarting it to add some new channels so they can be read by the OAF model), I checked up on how the backup has been going.  Unfortunately the answer is: not well.

I'll check in with the frame builder again tomorrow, to make sure that it's all good.

 All is well again in the world of backups.  We are now up to date as of ~midnight last night. 

  2330   Wed Nov 25 11:10:05 2009 JenneUpdateComputers40m frame builder backup acting funny

Quote:

Quote:

As part of the fb40m restart procedure (Sanjit and I were restarting it to add some new channels so they can be read by the OAF model), I checked up on how the backup has been going.  Unfortunately the answer is: not well.

I'll check in with the frame builder again tomorrow, to make sure that it's all good.

 All is well again in the world of backups.  We are now up to date as of ~midnight last night. 

 Backup Fail.  At least this time, it threw the appropriate error code and sent me an email saying that it was unhappy.  Alan said he was going to check in with Stuart regarding the confusion with the ssh-agent.  (The other day, when I did a ps -ef | grep agent, there were ~5 ssh-agents running, which could have been the cause of the unsuccessful backups that didn't tell me they failed.  The main symptom is that when I first restart all of the ssh-agent stuff, according to the directions in the Restart fb40m Procedures, I can do a test ssh over to ldas-cit to see what frames are there.  But if I log out of the frame builder and log back in, I can no longer ssh to ldas-cit without a password.  This shouldn't happen; the ssh-agent is supposed to authenticate the connection so that no passwords are necessary.)
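For what it's worth, a minimal sketch of the reuse-one-agent pattern that avoids spawning a fresh agent on every login (the env-file location and key path are arbitrary choices, not what our scripts actually use):

ssh-agent > ~/.ssh/agent.env     # run once after an fb restart; saves the agent's environment
source ~/.ssh/agent.env
ssh-add ~/.ssh/id_rsa            # load the key into the agent

# on later logins, re-attach to the same agent instead of starting a new one:
source ~/.ssh/agent.env
ssh ldas-cit                     # should now work without a password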

I'm going to restart the backup script again, and we'll see how it goes over the long weekend. 

  13404   Sat Oct 28 00:36:26 2017 gautamUpdateCDS40m files backup situation - ddrescue

None of the 3 dd backups I made were bootable - at boot, selecting the drive dropped me into grub rescue mode, which seemed to suggest that the /boot partition did not exist on the backed-up disk, despite the fact that I was able to mount this partition on a booted computer. Perhaps all three failed for the same reason, but maybe not.
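A couple of checks that could help distinguish a missing /boot from a missing bootloader (the device name below is illustrative):

sudo fdisk -l /dev/sdb    # does the clone have the expected partition table, with the boot flag set?
sudo file -s /dev/sdb     # is there an MBR/boot-sector signature at the very start of the disk?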

After going through various StackOverflow posts / blogs / other googling, I decided to try cloning the drives using ddrescue instead of dd.
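A minimal sketch of the kind of invocation this means, assuming the same source/destination devices as the dd attempts (the map-file name is arbitrary):

sudo ddrescue -f -n /dev/sda /dev/sdb rescue.map    # first pass: copy everything readable, skip the slow scraping phase
sudo ddrescue -f -r3 /dev/sda /dev/sdb rescue.map   # second pass: retry the bad areas up to 3 times

The map file is what lets the second pass resume where the first left off.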

This seems to have worked for nodus - I was able to boot to console on the machine called rosalba, which was lying around under my desk. I deliberately kept this machine off the martian network during the boot process, for fear of conflicts from having multiple "nodus"-es on the network, so it complained a bit about starting the elog and other network-related issues, but it seems we now have a plug-and-play copy of the nodus root filesystem.

chiara and fb1 rootfs backups (made using ddrescue) are still not bootable - I'm working on it.

Nov 6 2017: I am now able to boot the chiara backup as well - although mysteriously, I cannot boot it from the machine called rosalba, but can boot it from ottavia. Anyways, seems like we have usable backups of the rootfs of nodus and chiara now. FB1 is still a no-go, working on it.

Quote:

Looks to have worked this time around.

controls@fb1:~ 0$ sudo dd if=/dev/sda of=/dev/sdc bs=64K conv=noerror,sync
33554416+0 records in
33554416+0 records out
2199022206976 bytes (2.2 TB) copied, 55910.3 s, 39.3 MB/s
You have new mail in /var/mail/controls

I was able to mount all the partitions on the cloned disk. Will now try booting from this disk on the spare machine I am testing in the office area. That'd be a "real" test of whether this backup is useful in the event of a disk failure.

Attachment 1: 415E2F09-3962-432C-B901-DBCB5CE1F6B6.jpeg
415E2F09-3962-432C-B901-DBCB5CE1F6B6.jpeg
Attachment 2: BFF8F8B5-1836-4188-BDF1-DDC0F5B45B41.jpeg
BFF8F8B5-1836-4188-BDF1-DDC0F5B45B41.jpeg
  13262   Mon Aug 28 16:20:00 2017 gautamUpdateCDS40m files backup situation

This elog is meant to summarize the current backup situation of critical 40m files.

What are the critical filesystems? I've also indicated the size of these disks and the volume currently used, and the current backup situation. 

FB1 root filesystem (1.7TB / 2TB used)
  • FB1 is the machine that hosts the diskless root for the front end machines
  • Additionally, it runs the daqd processes which write data from realtime models into frame files
  Backup status: Not backed up

/frames (up to 24TB)
  • This is where the frame files are written to
  • Need to set up a wiper script that periodically clears older data so that the disk doesn't overflow
  Backup status: Not backed up. LDAS pulls files from nodus daily via rsync, so there's no cron job for us to manage; we just allow incoming rsync.

Shared user area (1.6TB / 2TB used)
  • /home/cds on chiara
  • This is exported over NFS to 40m workstations, FB1 etc.
  • Contains user directories, scripts, realtime models etc.
  Backup status: Local backup on /media/40mBackup on chiara via daily cronjob; remote backup to ldas-cit.ligo.caltech.edu::40m/cvs via daily cronjob on nodus

Chiara root filesystem (11GB / 440GB used)
  • This is the root filesystem for chiara
  • Contains nameserver stuff for the martian network; responsible for rsyncing /home/cds
  Backup status: Not backed up

Megatron root filesystem (39GB / 130GB used)
  • Boot disk for megatron, which is our scripts machine
  • Runs MC autolocker, FSS loops etc.
  • Also the nds server for facilitating data access from outside the martian network
  Backup status: Not backed up

Nodus root filesystem (77GB / 355GB used)
  • This is the boot disk for our gateway machine
  • Hosts Elog, svn, wikis
  • Supposed to be responsible for sending email alerts for NFS disk usage and vacuum system N2 pressure
  Backup status: Not backed up

JETSTOR RAID Array (12TB / 13TB used)
  • Old /frames
  • Archived frames from DRFPMI locks
  • Long term trends
  Backup status: Currently mounted on Megatron, not backed up

Then there is Optimus, but I don't think there is anything critical on it. 

So, based on my understanding, we need to back up a whole bunch of stuff, particularly the boot disks and root filesystems for Chiara, Megatron and Nodus. We should also test that the backups we make are useful (i.e. we can recover current operating state in the event of a disk failure).

Please edit this elog if I have made a mistake. I also don't know whether there is any sort of backup of the slow computing system code.

  13263   Mon Aug 28 17:13:57 2017 ericqUpdateCDS40m files backup situation

In addition to bootable full disk backups, it would be wise to make sure the important service configuration files from each machine are version controlled in the 40m SVN. Things like apache files on nodus, martian hosts and DHCP files on chiara, nds2 configuration and init scripts on megatron, etc. This can make future OS/hardware upgrades easier too.
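As a concrete (and entirely hypothetical - the repository layout below is made up) sketch of what that could look like:

cd ~/40m-svn/hostconfigs                          # assumed working-copy location
mkdir -p nodus/apache chiara/dhcp megatron/nds2
cp /etc/apache2/sites-available/* nodus/apache/   # paths illustrative
svn add nodus chiara megatron
svn commit -m "snapshot of service configs for nodus/chiara/megatron"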

  13332   Tue Sep 26 15:55:20 2017 gautamUpdateCDS40m files backup situation

Backups of the root filesystems of chiara and nodus are underway right now. I am backing them up to the 1 TB LaCie external hard drives we recently acquired.

I first initialized the drives by hooking them up to my computer and running the setup.app file. After this, plugging the drive into the respective machine and running lsblk, I was able to see the device name of the external drive. To actually start the backup, I ran the following command from a tmux session called ddBackupLaCie:

sudo dd if=/dev/sda of=/dev/sdb bs=64K conv=noerror,sync

Here, /dev/sda is the disk with the root filesystem, and /dev/sdb is the external hard drive. The installed version of dd (coreutils) is 8.13; a progress flag is only available from version 8.21 onwards, and I didn't want to go through the exercise of upgrading coreutils on multiple machines, so we just have to wait till the backup finishes.
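That said, GNU dd has long reported its statistics on SIGUSR1, so progress can be coaxed out of a running copy without upgrading anything (the PID is whatever ps shows for the dd process; placeholder below):

sudo kill -USR1 <dd_pid>                  # dd prints bytes copied so far to its stderr (i.e. the tmux session)
watch -n 60 'sudo kill -USR1 <dd_pid>'    # or poll it once a minute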

We also wanted to do a backup of the root of FB1 - but I'm not sure the LaCie drive will work, because I think dd requires the destination disk (for us, 1TB) to be at least as large as the source (which on FB1, according to df -h, is 2TB). Since I'm unsure why the root filesystem of FB1 is so big, I'm checking with Jamie what we expect it to be. Anyways, we have also acquired 2TB HGST SATA drives, which I will use if the LaCie disks aren't an option.

  13339   Thu Sep 28 10:33:46 2017 gautamUpdateCDS40m files backup situation

After consulting with Jamie, we reached the conclusion that the root of FB1 is so huge because of the way the RAID for /frames is set up. Based on my googling, I couldn't find a way to exclude the nfs stuff while doing a backup using dd, which isn't all that surprising, because dd is supposed to make an exact replica of the disk being cloned, including any empty space. So we don't have that flexibility with dd. The advantage of using dd is that, if it works, we have a plug-and-play clone of the boot disk and root filesystem which we can use in the event of a hard-disk failure.

  1. One option would be to stop all the daqd processes, unmount /frames, and then do a dd backup of the true boot disk and root filesystem.
  2. Another option would be to use rsync to do the backup - this way we can selectively copy the files we want and ignore the nfs stuff (see the sketch after this list). I suspect this is what we will have to do for the second layer of backup we have planned, which will be run as a daily cron job. But I don't think this approach will give us a plug-and-play replacement disk in the event of a disk failure.
  3. Third option is to use one of the 2TB HGST drives, and just do a dd backup - some of this will be /frames, but that's okay I guess.
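For option 2, a minimal sketch of what such an rsync might look like (the exclude list is a guess at what "the nfs stuff" would be, and the destination is a placeholder):

sudo rsync -aHAX --delete \
    --exclude=/frames --exclude=/proc --exclude=/sys --exclude=/dev \
    / /mnt/backup/

-aHAX preserves permissions, hard links, ACLs and extended attributes, which matters if the copy is ever to stand in for the real root filesystem.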

I am trying option 3 now. dd does, however, require that the destination drive be at least as large as the source - and I'm not sure that holds for the HGST drives. lsblk suggests that the drive size is 1.8TB, while the boot disk, /dev/sda, is 2TB. Let's see if it works.

Backup of chiara is done. I checked that I could mount the external drive at /mnt and access the files. We should still check that we can actually boot from the LaCie backup disk; that needs another computer.

nodus backup is still not complete according to the console - there is no progress indicator so we just have to wait I guess.

Quote:

Backups of the root filesystems of chiara and nodus are underway right now. I am backing them up to the 1 TB LaCie external hard drives we recently acquired.

We also wanted to do a backup of the root of FB1 - but I'm not sure the LaCie drive will work, because I think dd requires the destination disk (for us, 1TB) to be at least as large as the source (which on FB1, according to df -h, is 2TB). Since I'm unsure why the root filesystem of FB1 is so big, I'm checking with Jamie what we expect it to be. Anyways, we have also acquired 2TB HGST SATA drives, which I will use if the LaCie disks aren't an option.
