ID | Date | Author | Type | Category | Subject
17091 | Thu Aug 18 18:10:49 2022 | Koji | Summary | LSC | FPMI Sensitivity
The overlapping plot of the calibrated error and control signals gives a reasonably good estimate of the free-running fluctuation, particularly where the open-loop gain G is much larger or much smaller than unity.
However, where G is close to unity, both signals are affected by the servo bump, and neither represents the free-running fluctuation around that frequency.
To avoid this, the open-loop gain needs to be measured every time the noise budget is calculated. In the beginning, it is necessary to measure the open-loop gain over a large frequency range so that you can refine your model. Once you gain sufficient confidence in the shape of the open-loop gain, you can measure at just one frequency and adjust only the overall gain variation (in most cases it comes from the optical gain).
I am saying this because I once had a significant (project-wide) issue of incorrect sensitivity estimation caused by omitting this process. |
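A minimal numpy sketch of the correction this implies (placeholder arrays; assumes the error signal is already calibrated into displacement and G is the measured-and-refit open-loop gain on the same frequency vector):
import numpy as np

# Placeholder inputs: frequency vector, calibrated in-loop error-signal ASD
# [m/rtHz], and the complex open-loop gain G at the same frequencies.
f = np.logspace(1, 4, 1000)
err_asd = 1e-12 * np.ones_like(f)   # calibrated error signal (placeholder)
G = 1000.0 / (1j * f)               # placeholder 1/f loop with a 1 kHz UGF

# The in-loop error signal is suppressed by 1/(1+G); undoing that factor
# recovers the free-running fluctuation, including around the servo bump
# where neither the error nor the control signal alone is representative.
freerun_asd = err_asd * np.abs(1.0 + G)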
4180 | Thu Jan 20 22:17:12 2011 | rana | Summary | LSC | FPMI Displacement Noise
I found this old plot in an old elog entry of Osamu's (original link).
It gives us the differential displacement noise of the arms. This was made several months after we discovered how the STACIS made the low-frequency noise bad, so I believe we can use it to estimate the displacement noise of the arm cavity today. There have been no significant seismic changes. The change of the suspension and the damping electronics may produce some changes around 1 Hz, but these will be dwarfed by the non-stationarity of the seismic noise. |
Attachment 1: osamu-1140657006.pdf
17499 | Wed Mar 8 18:32:22 2023 | Anchal | Configuration | Calibration | FPMI DARM calibration run set to happen at 1 am
On rossa, in a tmux session named FPMI_DARM_Cal, a script is running to take FPMI DARM calibration data at 1:00 am on March 9th. Please do not disturb the experiment until 6 am. To stop the script, do the following on rossa:
tmux a -t FPMI_DARM_Cal
ctrl-C
The script will lock both arms, run ASS, then lock FPMI, then tune the beatnote frequency with the Y AUX laser to around 40 MHz, set the phase tracker UGF to 2 kHz, clear the phase history, take the OLTF of DARM from 2 kHz to 10 Hz, take the OLTFs of the CARM and AUX loops at the calibration line frequencies, turn on the calibration lines, and wait for FPMI to unlock or for 5 hours to pass, whichever happens first. At the end it will turn off the calibration lines. |
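A minimal sketch of that scheduling logic (not the actual script; the steps here are print-stubs standing in for the real actions):
import time
from datetime import datetime, timedelta

def wait_until(hour=1, minute=0):
    # sleep until the next occurrence of hour:minute local time
    now = datetime.now()
    target = now.replace(hour=hour, minute=minute, second=0, microsecond=0)
    if target <= now:
        target += timedelta(days=1)
    time.sleep((target - now).total_seconds())

def run_step(name):
    # stub standing in for the real locking/measurement actions
    print(datetime.now().isoformat(), "running:", name)

wait_until(1, 0)
for step in ["lock arms", "run ASS", "lock FPMI", "tune beatnote to ~40 MHz",
             "set phase tracker UGF to 2 kHz", "clear phase history",
             "take DARM / CARM / AUX OLTFs", "turn on cal lines"]:
    run_step(step)
t0 = time.time()
while time.time() - t0 < 5 * 3600:   # the real script also breaks on FPMI unlock
    time.sleep(60)
run_step("turn off cal lines")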
17502 | Thu Mar 9 19:20:44 2023 | Anchal | Configuration | Calibration | FPMI DARM calibration run set to happen at 1 am
Running this test again tonight. Will probably run it every night now.
|
17492 | Sat Mar 4 18:57:18 2023 | Paco | Configuration | Calibration | FPMI DARM calibration run
Locked FPMI, measured the DARM and CARM OLTFs, locked YAUX, and measured the analog loop TF at the cal line frequencies. Turned the cal lines on with the new filters Anchal added on MC2 (resonant gains within, and notches outside, the CARM bandwidth, which is set to 200 Hz), and hope to get 3600 seconds of data this evening. Log and measurement data are saved under /opt/rtcds/caltech/c1/Git/40m/scripts/CAL/FPMI |
17916 | Mon Oct 23 22:35:53 2023 | Paco | Update | LSC | FPMI CARM with CM board
[paco, yuta]
We achieved a ~23 kHz CARM bandwidth this evening when locking FPMI using the CM Board.
Configuration:
- REFL55_Q_MON to IN1 of CM Board.
- The REFL55 RFPD demod angle is conveniently 92 deg, so REFL55_I (CARM error point) = REFL55_Q_MON
- AO to MC Servo Board
- Moku:Pro Freq response analyzer used to measure the CM Board loop
- TP1 (CMB) to IN1 (Moku) and TP2 (CMB) to IN2 (Moku), measured around the excitation injected from OUT2 (Moku) into EXC (CMB).
- The gain sliders we used were REFL1 Gain (+23 dB) and AO Gain (max +18 dB before we see loop oscillations).
- No Boost could be enabled, regardless of the polarity.
- Apparently both polarities were good for this lock (why?)
See Attachment #1 for the MEDM screenshot.
Results:
The inferred CARM UGF is ~23 kHz, as can be seen from Attachment #2 using the loop algebra described in 40m/17628. Instead of the definition in 40m/17628, we plotted the following for the inferred CARM HBW loop OLTF (with all the other loops off):
G_CARM = (G_IMC / O_IMC) * O_YARM * C_YARM * F_CMB = r * G_IMC * C_YARM * F_CMB, where r = O_YARM / O_IMC
Now the OLTF of the CARM loop, measured at the CARM Common Mode Board with all the loops on, can be calculated as
G_meas = G_CARM / (1 + G_IMC) / (1 + G_LSC/(1 + G_IMC))
where G_LSC/(1 + G_IMC) is the OLTF of the CARM digital LSC loop with the IMC loop on, which is the usual CARM LSC OLTF we measure digitally.
(ORANGE texts added by YM on Oct 25 at 17:40 to update G_CARM definition to clarify; see 40m/17920 for loop algebra)
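A small numpy sketch of unwrapping a measured TF into the inferred G_CARM using the expression above, and then reading off the UGF; G_meas, G_IMC, and G_LSC are hypothetical complex arrays on a common frequency vector f:
import numpy as np

def inferred_carm_oltf(G_meas, G_IMC, G_LSC):
    # invert G_meas = G_CARM / (1 + G_IMC) / (1 + G_LSC / (1 + G_IMC))
    return G_meas * (1.0 + G_IMC) * (1.0 + G_LSC / (1.0 + G_IMC))

def ugf(f, G):
    # frequency of the first |G| = 1 crossing, interpolated in log-log
    mag = np.abs(G)
    i = np.where(np.diff(np.sign(mag - 1.0)) != 0)[0][0]
    x0, x1 = np.log(f[i]), np.log(f[i + 1])
    y0, y1 = np.log(mag[i]), np.log(mag[i + 1])
    return np.exp(x0 - y0 * (x1 - x0) / (y1 - y0))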
Attachment #3 shows the calibrated DARM readout with and without the CM Board feedback enabled. We can only assert that the noise dropped slightly around 50-100 Hz, but not by a lot (which is good in a sense).
Next:
- Check BOOST and Polarity, maybe incorporate an independently measured BOOST transfer function into the model to understand better.
|
Attachment 1: CMB_medm_CARM_FPMI_20kHz_Screenshot_2023-10-23_22-40-59.png
Attachment 2: HighBWCARMmodel.pdf
Attachment 3: FPMI_calibrated_noise_20231023_HBW.pdf
17920 | Wed Oct 25 14:03:17 2023 | Radhika | Update | LSC | FPMI CARM with CM board
I drew a block diagram to work through the high-bandwidth CARM feedback using the CM board. The goal was to obtain a derivation of the loop algebra used in CM board YARM locking here, applied to CARM locking. Attachment 1 is the diagram, with the following blocks:
C_IMC: IMC cavity TF (incl. pole, optical gain)
C_CARM: CARM cavity TF (incl. pole, optical gain)
F_IMC: MC servo board TF
F_CMB: CM servo board TF
F_LSC: digital CARM controller TF
Below are the independent, decoupled OLTF expressions for each loop (other loops off):
1. G_IMC is the OLTF of the IMC-->PSL loop (middle loop in the diagram).
2. G_LSC is the OLTF of the slow LSC loop (bottom loop in the diagram).
3. G_CMB is the OLTF of the fast CM board loop (top loop in the diagram).
Now that the independent loop OLTFs are defined, we can interpret the loops that have been measured:
1. The LSC slow CARM loop is always measured with the IMC locked, so the measured TF will include the IMC loop suppression:
G_LSC,meas = G_LSC / (1 + G_IMC)
2. The CMB TF measurement is taken with the IMC locked and the slow LSC loop enabled (all 3 loops on). The measured TF will include the IMC loop suppression and suppression from the LSC loop, which is itself further suppressed by the IMC loop. Calculating i1/i2 in Attachment 1 (see the derivation in Attachment 2) gives
G_CMB,meas = G_CMB / (1 + G_IMC) / (1 + G_LSC / (1 + G_IMC))
Note that the final expression is in terms of known or measured quantities. |
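A quick symbolic check, with sympy, that the nested suppression above collapses to a single factor:
import sympy as sp

# verify G_CMB/(1+G_IMC)/(1 + G_LSC/(1+G_IMC)) == G_CMB/(1 + G_IMC + G_LSC)
G_IMC, G_LSC, G_CMB = sp.symbols('G_IMC G_LSC G_CMB')
G_meas = G_CMB / (1 + G_IMC) / (1 + G_LSC / (1 + G_IMC))
assert sp.simplify(G_meas - G_CMB / (1 + G_IMC + G_LSC)) == 0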
Attachment 1: IMG_5845.JPG
Attachment 2: IMG_5847.JPG
17468 | Thu Feb 16 14:44:06 2023 | yuta | Update | BHD | FPMI BHD with BH55 recovered
FPMI BHD, with the LO phase locked using BH55, is recovered after the 60 Hz frequency noise saga.
Attachment #1 shows the calibrated FPMI spectrum with RF(AS55_Q) readout and BHD, compared with those taken on January 13 (40m/17400, before BH44 installation).
There is unknown excess noise at around 30-40 Hz. This is not from MC2 DAC noise, as turning the dewhitening filters on/off didn't make a difference.
Attachment #2 shows the same thing, but zoomed in at 60 Hz. The 60 Hz noise is actually reduced by an order of magnitude compared with what we had before the BH44 installation.
Note that the RF amp for BH55, which was there on January 13, has now been removed (40m/17413).
LO_PHASE is locked with BH55_Q, with a whitening gain of 45 dB, whitening filter on, C1:LSC-BH55_PHASE_R = -110 deg, C1:HPC-LO_PHASE_GAIN = -10, FM5 and FM8. This gives a UGF of ~20 Hz (we used to be able to get ~100 Hz, but that is not possible now).
Locking LO_PHASE with BH44 is not stable now, probably due to small optical gain. We might have to install an RF amp for BH44.
Next:
- Check the beam alignment to BH44 RF PD
- Install RF amp for BH44
- Re-install RF amp for BH55 |
Attachment 1: FPMI_calibrated_noise_20230216.pdf
Attachment 2: FPMI_calibrated_noise_20230216_60Hz.pdf
17395 | Thu Jan 12 12:00:09 2023 | Paco | Summary | LSC | FPMI BHD sensitivity curve with higher resolution up to 6.9 kHz
Here's the same sensitivity plot from yesterday (40m/17392), but with higher frequency resolution, up to 6.9 kHz, using GPS times from yesterday.
The curves from FPMI with AS55_Q are also from yesterday, taken just before switching to BHD, so the comparison is more direct. |
Attachment 1: FPMI_calibrated_noise_20230112.pdf
Attachment 2: FPMI_calibrated_noise_20230112_HF.pdf
17506 | Mon Mar 13 19:53:36 2023 | yuta | Update | BHD | FPMI BHD sensing matrix measurement with individual lines
The FPMI BHD sensing matrix was measured with an updated method and with updated RF demodulation phases for REFL55 and AS55.
The audio demodulation phase for the CARM components is now 90 deg to make the sign correct.
Also, the oscillators are turned on one by one to reduce contamination between DoFs (especially between MICH and CARM).
These helped a lot in reducing errors.
Sensing matrix with FPMI locked in RF, LO_PHASE locked with BH55_Q using LO1
Sensing matrix with the following demodulation phases (counts/m)
{'AS55': -177.9, 'REFL55': 77.06, 'BH55': -110.0, 'BH44': -8.9}
Sensors DARM @307.88 Hz CARM @309.21 Hz MICH @311.1 Hz LO1 @315.17 Hz
AS55_I (+3.25+/-0.67)e+11 [90] (-8.63+/-0.41)e+11 [90] (-1.02+/-1.49)e+09 [0] (+0.44+/-1.39)e+07 [0]
AS55_Q (-6.04+/-0.05)e+11 [90] (+0.92+/-3.10)e+10 [90] (+9.10+/-6.78)e+08 [0] (+0.12+/-2.08)e+07 [0]
REFL55_I (+1.18+/-0.03)e+11 [90] (+2.78+/-0.12)e+12 [90] (-0.35+/-2.34)e+09 [0] (-0.94+/-2.38)e+07 [0]
REFL55_Q (+5.85+/-0.43)e+09 [90] (-2.34+/-0.13)e+10 [90] (+2.39+/-0.38)e+08 [0] (+3.56+/-7.44)e+06 [0]
BH55_I (-3.51+/-3.45)e+10 [90] (-6.65+/-0.82)e+10 [90] (-4.91+/-3.03)e+08 [0] (-1.82+/-0.09)e+09 [0]
BH55_Q (+7.86+/-0.29)e+11 [90] (+2.99+/-0.42)e+11 [90] (-2.87+/-7.76)e+08 [0] (+2.81+/-0.15)e+09 [0]
BH44_I (-0.34+/-1.99)e+12 [90] (+0.02+/-1.49)e+12 [90] (-0.42+/-8.53)e+10 [0] (-0.01+/-3.08)e+10 [0]
BH44_Q (-0.60+/-3.95)e+13 [90] (-0.01+/-3.00)e+13 [90] (+0.00+/-1.68)e+12 [0] (-0.15+/-5.77)e+11 [0]
BHDC_DIFF (-9.18+/-0.29)e+11 [90] (-4.11+/-4.66)e+10 [90] (+1.46+/-0.10)e+09 [0] (-1.70+/-0.41)e+08 [0]
BHDC_SUM (+2.97+/-0.21)e+11 [90] (+0.44+/-1.57)e+10 [90] (-1.01+/-0.06)e+09 [0] (+2.68+/-0.84)e+07 [0]
- AS55_Q now has 70% more gain to DARM for some reason (see 40m/17478). The whitening gain hasn't changed from 24 dB.
- There's still some room to tune AS55 RF demodulation phase to maximize DARM response.
- CARM to REFL55_Q is 100 times smaller than that to REFL55_I; this is good.
- There's still some room to tune BH55 RF demodulation phase to maximize LO1 response.
- BH44 doesn't have much response to LO1, probably because LO_PHASE is locked with orthogonal BH55.
Sensing matrix with FPMI locked in RF, LO_PHASE locked with BH44_Q using LO1
Sensing matrix with the following demodulation phases (counts/m)
{'AS55': -177.9, 'REFL55': 77.06, 'BH55': -110.0, 'BH44': -8.9}
Sensors DARM @307.88 Hz CARM @309.21 Hz MICH @311.1 Hz LO1 @315.17 Hz
AS55_I (+3.94+/-0.52)e+11 [90] (-1.00+/-0.05)e+12 [90] (-1.61+/-1.17)e+09 [0] (+0.45+/-1.52)e+07 [0]
AS55_Q (-5.52+/-0.24)e+11 [90] (+1.19+/-2.99)e+10 [90] (+1.10+/-0.43)e+09 [0] (-1.06+/-2.30)e+07 [0]
REFL55_I (+8.97+/-0.49)e+10 [90] (+2.71+/-0.11)e+12 [90] (-0.38+/-2.28)e+09 [0] (-0.97+/-2.10)e+07 [0]
REFL55_Q (+6.30+/-0.65)e+09 [90] (-2.01+/-0.12)e+10 [90] (+2.26+/-0.69)e+08 [0] (-2.61+/-6.97)e+06 [0]
BH55_I (+4.46+/-0.52)e+11 [90] (-1.52+/-0.27)e+11 [90] (-1.82+/-0.56)e+09 [0] (+0.68+/-1.24)e+08 [0]
BH55_Q (+9.59+/-0.44)e+11 [90] (+2.79+/-0.52)e+11 [90] (+2.75+/-2.49)e+08 [0] (+2.45+/-1.06)e+08 [0]
BH44_I (-0.40+/-2.42)e+12 [90] (-0.03+/-1.88)e+12 [90] (-0.03+/-1.13)e+11 [0] (+0.12+/-4.18)e+10 [0]
BH44_Q (-0.19+/-1.09)e+13 [90] (+0.70+/-7.91)e+12 [90] (-0.09+/-4.65)e+11 [0] (+0.11+/-1.34)e+11 [0]
BHDC_DIFF (+3.90+/-0.46)e+11 [90] (+1.06+/-0.18)e+11 [90] (-4.62+/-1.89)e+08 [0] (+3.60+/-0.40)e+08 [0]
BHDC_SUM (+1.96+/-0.18)e+11 [90] (-1.08+/-1.29)e+10 [90] (-8.93+/-1.41)e+08 [0] (-8.67+/-0.81)e+07 [0]
- BHDC_DIFF sensitivity to DARM is less than that with LO_PHASE locked with BH55.
- BH44 sensing matrix has too much error. Requires more averaging time and oscillator amplitude.
Jupyter notebook: /opt/rtcds/caltech/c1/Git/40m/scripts/CAL/SensingMatrix/ReadSensMat.ipynb
Next:
- Tune AS55, BH55, BH44 RF demodulation phases
- Try measuring sensing matrix for BH44 with more averaging time, oscillator amplitude, and PD whitening gain
- Repeat measurement in 40m/17351 with BH44 under MICH configuration.
- Compare LO phase noise in MICH configuration when LO_PHASE is locked with BH44 and BH55.
- Make a noise budget in MICH BHD.
- Investigate 28 Hz noise in FPMI
- Tune BS local damping loops |
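For reference, a minimal numpy sketch of the single-oscillator demodulation behind these sensing matrix numbers (synthetic data here; the real analysis is in the ReadSensMat.ipynb notebook listed above, and the conversion to counts/m via the actuator calibration is omitted):
import numpy as np

def demod_line(x, fs, f_line, t_avg=10.0):
    # complex amplitude of x at f_line, averaged in chunks of t_avg;
    # returns the mean and the standard error across chunks
    n = int(t_avg * fs)
    chunks = [x[i:i + n] for i in range(0, len(x) - n + 1, n)]
    t = np.arange(n) / fs
    lo = np.exp(-2j * np.pi * f_line * t)
    amps = np.array([2.0 * np.mean(c * lo) for c in chunks])
    return amps.mean(), amps.std() / np.sqrt(len(amps))

fs = 16384.0
t = np.arange(int(60 * fs)) / fs
x = 1e-3 * np.sin(2 * np.pi * 311.1 * t)   # fake MICH line in a sensor
amp, err = demod_line(x, fs, 311.1)
print(abs(amp), err)                        # amplitude ~1e-3, small error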
17392 | Wed Jan 11 16:56:57 2023 | yuta | Summary | LSC | FPMI BHD recovered, LO phase noise not limiting the sensitivity
[Paco, Yuta]
We recovered FPMI BHD, and the sensitivity was estimated. The high-frequency sensitivity is improved by an order of magnitude compared with AS55 FPMI.
We also estimated the contribution from LO phase noise, and found that it is not limiting the sensitivity.
Locking sequence:
1. Lock electronic FPMI
DARM:
- 0.5 * POX11_I - 0.5 * POY11_I
- DARM filter module, FM4,5 for acquisition, FM1,2,3,6,8 triggered, C1:LSC-DARM_GAIN = 0.015 (gain lowered from BHD FPMI in December (was 0.02) to have more gain margin)
- Actuation on 0.5 * ETMX - 0.5 * ETMY
- UGF ~ 150 Hz
CARM:
- 0.5 * POX11_I + 0.5 * POY11_I
- CARM filter module, FM4,5 for acquisition, FM1,2,3,6,8 triggered, C1:LSC-CARM_GAIN = 0.012
- Actuation on -0.734 * MC2
2. Lock MICH
MICH:
- 1.1 * REFL55_Q
- MICH filter module, FM4,5,8 for acquisition, FM2,3,6 triggered, C1:LSC-MICH_GAIN = +10
- Actuation on 0.5 * BS
- UGF ~35 Hz (Attachment #1)
3. Hand over to real DARM/CARM
DARM:
- 2.617 * AS55_Q
CARM:
- 0.496 * REFL55_I
- UGF ~200 Hz (Attachment #2)
4. Lock LO_PHASE
LO_PHASE:
- 1 * BH55_Q (demod phase: -110 deg; this was chosen by hand to maximize the fringe in BH55_Q when FPMI is locked. This seemed to make the LO_PHASE lock more robust, compared with December)
- FM5,8, C1:HPC-LO_PHASE_GAIN=-1
- Actuation on 1 * LO1
- UGF ~110 Hz (Attachment #3)
5. Hand over to BHD_DIFF
DARM:
- -1.91 * BHD_DIFF (the ratio between AS55_Q and BHD_DIFF was measured with a DARM line at 575.125 Hz and found to be -0.455; BHD_DIFF was zeroed by balancing A and B before the measurement)
- UGF ~150Hz (Attachment #4)
Measured sensing matrix:
/opt/rtcds/caltech/c1/Git/40m/scripts/CAL/SensingMatrix/ReadSensMat.ipynb
Sensing matrix with the following demodulation phases (counts/counts)
{'AS55': -168.5, 'REFL55': 92.32, 'BH55': -110.0}
Sensors DARM @307.88 Hz CARM @309.21 Hz MICH @311.1 Hz LO1 @315.17 Hz
AS55_I (-0.35+/-1.44)e-02 (+3.23+/-1.09)e-02 (-0.18+/-9.84)e-03 (+0.02+/-1.33)e-03
AS55_Q (-0.45+/-3.48)e-02 (-0.19+/-1.23)e-02 (+0.05+/-1.43)e-03 (-0.02+/-2.03)e-04
REFL55_I (+0.05+/-1.31)e-01 (+4.51+/-0.07)e-01 (-0.03+/-3.44)e-02 (+0.11+/-1.01)e-03
REFL55_Q (-0.39+/-4.74)e-04 (-1.36+/-0.50)e-03 (+0.16+/-2.80)e-04 (+0.10+/-4.92)e-05
BH55_I (-1.90+/-5.00)e-03 (+0.80+/-8.82)e-03 (-0.50+/-2.29)e-03 (-4.61+/-9.52)e-04
BH55_Q (-0.31+/-4.86)e-02 (-3.13+/-3.04)e-02 (-0.06+/-1.07)e-02 (-1.42+/-2.11)e-03
BHDC_DIFF (-5.56+/-0.21)e-02 (+0.10+/-1.64)e-02 (+0.10+/-1.40)e-03 (+0.77+/-2.27)e-04
BHDC_SUM (+1.75+/-3.90)e-03 (-1.22+/-3.22)e-03 (-0.35+/-1.52)e-03 (-0.36+/-6.42)e-04
Sensing matrix with the following demodulation phases (counts/m)
{'AS55': -168.5, 'REFL55': 92.32, 'BH55': -110.0}
Sensors DARM @307.88 Hz CARM @309.21 Hz MICH @311.1 Hz LO1 @315.17 Hz
AS55_I (-0.30+/-1.25)e+11 (+2.18+/-0.73)e+11 (-0.07+/-3.59)e+10 (+0.08+/-5.02)e+09
AS55_Q (-0.39+/-3.03)e+11 (-1.27+/-8.32)e+10 (+0.20+/-5.23)e+09 (-0.06+/-7.66)e+08
REFL55_I (+0.04+/-1.14)e+12 (+3.05+/-0.05)e+12 (-0.01+/-1.26)e+11 (+0.42+/-3.82)e+09
REFL55_Q (-0.34+/-4.12)e+09 (-9.20+/-3.37)e+09 (+0.06+/-1.02)e+09 (+0.04+/-1.86)e+08
BH55_I (-1.65+/-4.34)e+10 (+0.54+/-5.95)e+10 (-1.83+/-8.36)e+09 (-1.74+/-3.59)e+09
BH55_Q (-0.27+/-4.23)e+11 (-2.11+/-2.05)e+11 (-0.21+/-3.91)e+10 (-5.36+/-7.98)e+09
BHDC_DIFF (-4.83+/-0.18)e+11 (+0.07+/-1.11)e+11 (+0.35+/-5.09)e+09 (+2.90+/-8.56)e+08
BHDC_SUM (+1.52+/-3.39)e+10 (-0.82+/-2.17)e+10 (-1.29+/-5.56)e+09 (-0.14+/-2.42)e+09
NOTE that some of the sensing matrix elements (e.g. DARM to AS55_Q) are wrong, because of an ill-defined sign in the C1:CAL-SENSMAT_XXXX_XXXX_AMPMON channels.
Locked GPS times:
- 1357511737 to 1357513050
- 1357513448 to 1357518188 (intentionally unlocked)
Sensitivity estimate:
- See Attachment #5 and #6 (high frequency zoomed), dashed traces with darker colors are of AS55_Q FPMI from 40m/17369.
- DARM_IN1 was calibrated using DARM content in BHD_DIFF, with 1 / (1.91 * 4.83e11 counts/m) = 1.086e-12 m/counts (which should be similar to DARM_IN1 calibration for AS55_Q, because we are balancing the error signals going to DARM_IN1, and it is as expected; see 40m/17369).
- DARM_OUT calibration is the same as 40m/17369; ETM plant = 10.91e-9 / f^2 m/counts.
- LO phase noise was estimated using BH55_Q, with the following calibration factor (BH55_Q calibrated into LO1 motion, then into BHD_DIFF, and then into DARM; a quick numerical re-check follows this list).
1/5.36e9 * 2.9e8 / 4.83e11 = 1.15e-13 m/counts
- Seismic noise was estimated using C1:IOO-MC_F_DQ, with the same calibration factor found in 40m/16975.
- Dark noise was estimated using DARM_IN1 when the PSL shutter is closed.
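The quick numerical re-check of the two calibration factors quoted above:
darm_in1_cal = 1.0 / (1.91 * 4.83e11)   # m/counts, from the BHD_DIFF line ratio
print(darm_in1_cal)                      # ~1.08e-12 (1.086e-12 quoted, from less-rounded inputs)

lo_phase_cal = (1.0 / 5.36e9) * 2.9e8 / 4.83e11   # BH55_Q -> LO1 -> BHD_DIFF -> DARM
print(lo_phase_cal)                      # ~1.12e-13 (1.15e-13 quoted, presumably from unrounded values)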
Discussion:
- Sensitivity below ~10 Hz is probably limited by seismic noise
- Noise above 1 kHz might be limited by dark noise of BHD_DIFF, but not below 1 kHz. For AS55_Q FPMI, sensitivity above ~300 Hz was limited by dark noise.
- LO phase noise is not limiting the sensitivity, from the estimated noise using BH55_Q.
- Both BH55_Q and BHD_DIFF have a funny, forest-like structure above ~100 Hz. This might be from suspensions in the AS path and the LO path to the BHD. It could also be that the calibration lines for measuring the sensing matrix were too strong (the BHD FPMI sensitivity was measured with the calibration lines around ~310 Hz turned on).
Next:
- Update the c1cal model to put correct signs to C1:CAL-SENSMAT_XXXX_XXXX_AMPMON channels.
- Measure BHD FPMI sensitivity with calibration lines off.
- Find better LO phase to improve the sensitivity.
- Lock LO phase with RF+audio and RF44 and compare the sensitivity.
- Move on to PRFPMI BHD. |
Attachment 1: Screenshot_2023-01-11_16-17-16_MICH.png
Attachment 2: Screenshot_2023-01-11_16-12-40_CARM.png
Attachment 3: Screenshot_2023-01-11_16-21-40_LOPHASE.png
Attachment 4: Screenshot_2023-01-11_16-10-54_DARM.png
Attachment 5: FPMI_calibrated_noise_20230111.pdf
Attachment 6: FPMI_calibrated_noise_20230111_HF.pdf
835 | Thu Aug 14 15:51:35 2008 | josephb | Summary | Cameras | FOUND! The Missing Standoff!
We used a zoom lens on the GC750 to take this picture of the standoff while inside a plastic rubber-glove bag. The standoff with bag is currently scotch-taped to the periodic table of the elements. |
Attachment 1: standoff.png
11110 | Fri Mar 6 14:29:59 2015 | manasa | Update | General | FOL troubleshooting
[EricQ, Manasa]
Domenica:
Since Domenica was not picking up an IP address, and hence not responding to pings or ssh even after power cycling, I pulled it out of the IOO rack and connected it to a monitor. After a bunch of trial and error, we figured out that the problem was related to the power adapter of the RPi, discussed here: http://www.raspberrypi.org/forums/viewtopic.php?f=28&t=39055
The power adapter has been swapped and this issue has been resolved. Domenica has been remounted on the IOO rack, but left with the top lid off for the time being.
RF amplifier box:
The frequency counter was set up to read the beat frequency after amplification of the RFPD signals. The output, as seen on the frequency counter screen as well as in the EPICS values, was intermittent. Also, locking the arms and changing the end laser frequencies did not change the frequency counter outputs. It looks like the RF amplifier is oscillating (oscillation frequencies were ~107 MHz and 218 MHz); the input circuit needs to be modified and the connections need better insulation.
As of now, I have connected the RFPD outputs to the frequency counter without any amplification, and we are able to get a readout of the beat frequency at > 40 MHz (the frequency counter requires much higher amplitude signals for frequencies lower than that). |
11123 | Mon Mar 9 14:43:31 2015 | manasa | Update | General | FOL troubleshooting
Figuring out the problem with frequency counter readouts:
RF amplifier box:
The frequency counter was set up to read the beat frequency after amplification of the RFPD signals. The output, as seen on the frequency counter screen as well as in the EPICS values, was intermittent. Also, locking the arms and changing the end laser frequencies did not change the frequency counter outputs. It looks like the RF amplifier is oscillating (oscillation frequencies were ~107 MHz and 218 MHz); the input circuit needs to be modified and the connections need better insulation.
As of now, I have connected the RFPD outputs to the frequency counter without any amplification, and we are able to get a readout of the beat frequency at > 40 MHz (the frequency counter requires much higher amplitude signals for frequencies lower than that).
|
The frequency counter has 4 ranges of operation:
Range 1 : 1 - 40MHz
Range 2: 40 - 190MHz
Range 3: 190 - 1400MHz
Range 4: 1400 - 6000MHz
I set up the beat frequency readout in various configurations to troubleshoot, and have recorded my observations (attachments). The amplifier in all cases is just a single ZFL-500LN.
There seems to be a problem with the RF amplifier and the frequency counter when they are put together. I am not able to figure it out and am stuck right here.
Attachment 1: schematic for readout and corresponding observations
Attachment 2: Oscilloscope screen snapshot for schematic 3
Attachment 3: Spectrum analyzer screen snapshot for schematic 3
More info: the RFPD used is a Thorlabs FPD310 and the frequency counter is a Mini-Circuits UFC-6000. The RF amplifier has decoupling capacitors soldered across the power supply. |
Attachment 1: FOL_trouble.pdf
Attachment 2: IMAG0341.jpg
Attachment 3: IMAG0342.jpg
11128 | Tue Mar 10 17:48:07 2015 | manasa | Update | General | FOL troubleshooting
[Koji, Manasa]
This problem was solved by changing the frequency counter range settings.
The frequency counter's automatic range setting has been modified, and the range can now be set manually depending on the input frequency (a sketch of the mapping is below). New code has been written to do this. The scripts will be polished and wrapped up shortly.
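For reference, a tiny sketch of the frequency-to-range mapping such a script needs (range boundaries per the table in the quote below):
def fc_range(f_hz):
    # UFC-6000 range values for a given input frequency
    if 1e6 <= f_hz < 40e6:
        return 1
    if 40e6 <= f_hz < 190e6:
        return 2
    if 190e6 <= f_hz < 1400e6:
        return 3
    if 1400e6 <= f_hz <= 6000e6:
        return 4
    raise ValueError("frequency outside UFC-6000 range")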
Quote: |
Figuring out the problem with frequency counter readouts:
RF amplifier box:
The frequency counter was set up to read the beat frequency after amplification of the RFPD signals. The output, as seen on the frequency counter screen as well as in the EPICS values, was intermittent. Also, locking the arms and changing the end laser frequencies did not change the frequency counter outputs. It looks like the RF amplifier is oscillating (oscillation frequencies were ~107 MHz and 218 MHz); the input circuit needs to be modified and the connections need better insulation.
As of now, I have connected the RFPD outputs to the frequency counter without any amplification, and we are able to get a readout of the beat frequency at > 40 MHz (the frequency counter requires much higher amplitude signals for frequencies lower than that).
|
The frequency counter has 4 ranges of operation:
Range 1 : 1 - 40MHz
Range 2: 40 - 190MHz
Range 3: 190 - 1400MHz
Range 4: 1400 - 6000MHz
I set up the beat frequency readout in various configurations to troubleshoot, and have recorded my observations (attachments). The amplifier in all cases is just a single ZFL-500LN.
There seems to be a problem with the RF amplifier and the frequency counter when they are put together. I am not able to figure it out and am stuck right here.
Attachment 1: schematic for readout and corresponding observations
Attachment 2: Oscilloscope screen snapshot for schematic 3
Attachment 3: Spectrum analyzer screen snapshot for schematic 3
More info: the RFPD used is a Thorlabs FPD310 and the frequency counter is a Mini-Circuits UFC-6000. The RF amplifier has decoupling capacitors soldered across the power supply.
|
|
11135 | Wed Mar 11 19:48:25 2015 | Koji | Update | General | FOL troubleshooting
There is frequency counter code written by the summer student.
The code needed some cleaning up. It's still there in /opt/rtcds/caltech/c1/scripts/FOL as armFC.c.
This code did not provide a unified way to send commands to the FCs.
Therefore I made a code to change the frequency range of the FCs, removing unused variables and instructions, adding more comments, and adding reasonable help messages and troubleshooting feedback.
Obviously, these codes only run on domenica (the Raspberry Pi host).
/opt/rtcds/caltech/c1/scripts/FOL/change_frange
change_frange : change the freq range of the frequency counter UFC-6000
Usage: ./change_frange DEVICE VALUE
DEVICE: '/dev/hidraw0' for Xarm, '/dev/hidraw1' for Yarm
VALUE:
0 - automatic
1 - 1MHz to 40MHz
2 - 40MHz to 190MHz
3 - 190MHz to 1400MHz
4 - 1400MHz to 6000MHz
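A hypothetical Python wrapper around this tool, using only the documented usage above (to be run on domenica):
import subprocess

def change_frange(arm, value):
    # device mapping and values per the usage text above
    device = {"X": "/dev/hidraw0", "Y": "/dev/hidraw1"}[arm]
    tool = "/opt/rtcds/caltech/c1/scripts/FOL/change_frange"
    subprocess.run([tool, device, str(value)], check=True)

change_frange("X", 2)   # put the X-arm counter in the 40 MHz - 190 MHz range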
|
11137 | Thu Mar 12 11:57:38 2015 | Koji | Update | General | FOL troubleshooting
BTW, during this troubleshooting, we looked at the IR beatnote spectrum between the Xend and the PSL.
It showed a set of sidebands at ~200kHz, which is the modulation frequency.
There was another prominent component present at ~30 kHz.
I'm afraid that there is some feature, like a large servo bump, a mechanical resonance, or something else, at 30 kHz.
We should check it. Probably it is my job. |
11650 | Tue Sep 29 19:38:09 2015 | gautam | Update | General | FOL fiber box revamp
The new 2x2 fiber couplers arrived today, so Eric gave me an overview of the changes to be made to the existing configuration of the FOL fiber box. I removed the box from the table after ensuring that the PDs were powered OFF and removing and capping all fiber leads on the front panel. Here is a summary of the changes made.
- On-Off positions for the rocker switches corrected - these switches for the power to the PDs were installed such that the "1" position was OFF. I flipped both the switches such that the "1" position now corresponds to ON (see Attachment #1).
- All the couplers/beam combiners/splitters were initially removed.
- I then re-configured the layout as per the schematic (Attachment #2). I only needed to use one of the 4 new 2x2 couplers ordered. I think the 1x2 couplers are appropriate for mixing the PSL and AUX beams, since with a 2x2 coupler half of the mixed light goes nowhere. Indeed, if we had one more such coupler, we could do away with the 2x2 coupler I am now using to divide the PSL light into two.
- The spec-sheets on the inside of the top cover were updated to reflect the new hardware (Attachment #3).
- The old hardware from the box that was not used, along with their spec-sheets, are stored temporarily in a Thorlabs lab snacks box (all the fibers have been capped).
- The finished layout is shown in Attachment #4.
I then ran a quick check of the power levels at the input to the PDs, using the fiber-coupled power meter. However, I found that there was no light in the fiber marked "PSL light in" (the power meter read "Sig. Low"). The X arm AUX light had an input power of 1.12 mW, which after the various coupling losses etc. went down to 63 uW just before the PD. The corresponding figures for the Y arm are 200 uW and 2.2 uW. I am not too sure how the AUX light is coupled into the fibers, so I am not trying to tweak the alignment to see if I can get more power. |
Attachment 1: IMG_0017.JPG
Attachment 2: FOL_schematic.pdf
Attachment 3: IMG_0018.JPG
Attachment 4: IMG_0016.JPG
11652 | Wed Sep 30 13:07:13 2015 | gautam | Update | General | FOL fiber box revamp
Eric pointed out that the 1x2 couplers that were used in the previous arrangement, and which I recycled, were in fact NOT appropriate - they are not 50-50 couplers but 90-10 couplers, which explains the measured power levels I quoted here.
I switched these out for a pair of the newly arrived 2x2 couplers, and have also replaced the datasheets on the inside of the top cover. I then redid the power level measurements, and got some sensible values this time (see Attachment #1 for the revised layout and measured power levels; numbers in red are powers for PSL light, numbers in green are for AUX laser light, and all numbers are in mW). I did find that the 90-10 splitter in the PSL+Y path was not working (though the one in the PSL+X path seems to be working fine), and hence have not quoted power levels at the output of these splitters. For now, I guess we can bypass the splitters and take the PSL+AUX light from the 2x2 couplers directly to the PDs. |
Attachment 1: FOL_schematic.pdf
11670 | Tue Oct 6 16:56:40 2015 | gautam | Update | General | FOL fiber box revamp
[gautam, ericq]
We had a look at the IR beat (PSL+Xarm) today using the new FOL fiber box, and compared it to the green beat signal for the same combination. We first switched out the green Y beat input into the RF amplifiers on the PSL table with the PSL+Xarm IR beat input (so in all the plots, the BEATY channels really correspond to the IR beat for PSL+X). The IR and green beat notes were found without much difficulty, and we compared the beat signal PSDs for the green and IR signals (see Attachment #1 - the arms were locked to green and the X slow control was turned on). The pink trace (labeled REF1) corresponds to the green beat signal, and was in good agreement with an earlier reference trace Eric had saved for the same signal. The teal trace (labeled REF0) corresponds to the IR beat signal monitored simultaneously.
We then went back to the PSL table to check the amplitude of the signal from the broadband fiber PDs using the Agilent network analyzer. An initial measurement yielded a beat note (@ ~50 MHz) at ~-22 dBm (17 mV rms). We figured that by bypassing the 90-10 splitter in this path, we could get a stronger signal. But after switching out the fiber connections, we found that the signal amplitude had fallen to ~-27 dBm (10 mV rms). As per my earlier measurements here, we expect ~600 uW of light on the PD, and a quick calculation suggested the signal should be more like 60 mV, so we used the fiber power meter to check the power levels after each of the couplers again. We then found that the fiber connector on the front panel of the box for the PSL input wasn't ideal (the laser power after the first 50-50 coupler was only ~250 uW, though the input was ~1.2 mW). The power after the first coupler also fluctuated unpredictably (<100 uW to 350 uW) in response to slightly tightening/loosening the fiber connections on the front panel. I then switched the PSL input to one of the two unused fiber connectors on the front panel (meant for the 10% of the beat signal for the DC readout), and found that this input behaved much better, with ~450 uW of power available after the first 50-50 coupler. The power going into the beat PD was also measured to be ~550 uW, closer to what was expected. The beat signal peak was now ~-14 dBm (~30 mV rms).
We then once again repeated the comparison between the green and IR beat signals - but while in the control room, I noticed that the beat signal amplitude on the network analyzer was fluctuating by nearly 1.5 divisions on the vertical scale - not sure what the reason for this is. A look at the PSD of the IR beat with higher power incident on the PD was also not encouraging (see the blue trace in Attachment #1); it seems to have gotten worse in the 10-30 Hz range. We also looked at the coherence between the beat spectrum and the beat note amplitude in order to look for any linear coupling between the two, but from Attachment #2, we cannot explain the disparity between the green and IR beat spectra. This warrants further investigation.
Everything on the PSL table has now been restored to the configuration before these investigations (i.e. the Y+PSL green beat cable has been reconnected to the RF amplifier, and both green beat PDs have been powered back ON; the fiber PDs are powered OFF). |
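A rough version of the "quick calculation" mentioned above; every number here is an assumption for illustration:
import numpy as np

def beat_volts(p_psl, p_aux, conv_gain=100.0):
    # expected beat amplitude [V]; powers in W. conv_gain in V/W is a
    # placeholder inferred from the ~60 mV / ~600 uW expectation above,
    # not an FPD310 datasheet number.
    return conv_gain * 2.0 * np.sqrt(p_psl * p_aux)

# the beat scales as sqrt(P_psl * P_aux), so doubling one power only
# buys ~sqrt(2) (~3 dB) of signal
print(beat_volts(300e-6, 300e-6))   # ~0.06 V with the placeholder gain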
Attachment 1: 20151006_Xbeat_psd.pdf
Attachment 2: 20151006_Xbeat_coherence.pdf
11102 | Thu Mar 5 12:27:27 2015 | manasa | Update | General | FOL fiber box on the PSL table
Working around the PSL table
I have put the FOL fiber box on the PSL table. The fibers carrying AUX and PSL light are connected and the RFPDs have been powered up. I can see the beat frequency on the frequency counters; but for some reason Domenica (which brings the frequency counter values onto the medm screens) is not visible on the network, even after a hard reboot of the Raspberry Pi. I am neither able to ping nor ssh into the machine. I have to pull the module out and add a monitor cable to troubleshoot (my bad, I should have left the monitor cable with the Raspberry Pi in the first place). Anyway, I have handed over the IFO to Q and will play with things again sometime later.
The fiber box on the PSL table is only left there temporarily till I have things working. It will be stowed on the rack properly later.
In case someone wants the fiber box out of the PSL table, please make sure to power down the RFPDs using the black rocker switches on the side of the box and then disconnect the cables and fibers. |
10352 | Fri Aug 8 14:27:18 2014 | Akhil | Update | Computer Scripts / Programs | FOL Scripts
The scripts written for interfacing the FC with the RPi, building the EPICS database, piping data into EPICS channels, and the PID loop for FOL are contained in:
/opt/rtcds/caltech/c1/scripts/FOL
The instructions to run these codes on the RPi (controls@domenica) will be available on the FOL 40m wiki page.
Also, instructions regarding EPICS installation on the RPi and building an EPICS softIoc to streamline data from hardware devices into channels will be updated shortly.
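A minimal sketch of such piping with pyepics (the channel name is hypothetical, and the readout function is a placeholder for the actual UFC-6000 interface):
import time
import epics  # pyepics

def read_fc(device="/dev/hidraw0"):
    # placeholder for the actual UFC-6000 readout over USB HID
    return 40e6  # Hz

while True:
    epics.caput("C1:ALS-X_FC_FREQUENCY", read_fc("/dev/hidraw0"))  # hypothetical channel
    time.sleep(0.1)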
|
10376 | Wed Aug 13 16:12:55 2014 | Harry | Update | General | FOL Layout Diagram
Per Q's request, I've made up a diagram of the complete FOL layout for general reference. |
10273 | Fri Jul 25 17:28:31 2014 | Harry | Update | General | FOL Box and PER Update
Purpose
We're putting together a box to go into the 1X2 rack, to house the frequency counters and the Raspberry Pi that will be used in FOL.
Separately, I am working on characterizing the polarization extinction ratio (PER) of the PM980 fibers, for further use in FOL.
What's Been Done
The frequency counters have been mounted on the face of the box, and nylon spacers installed in the bottom, which will insulate the RPi in the future, once it's finally installed.
In regard to the PER setup, there is an issue: the mounts which hold the collimators rotate, so as to align the axes of the fibers with the polarization of the incoming light.
This rotational degree of freedom, however, isn't "sticky" enough, and rotates under the influence of the stress in the fiber (it's not much, but enough).
This causes wild fluctuations in the coupled power, making it impossible to make accurate measurements of the PER.
What's Next
In the FOL box's case, we've ordered a longer power cable for the raspberry pi (the current one is ~9 inches long).
Once it arrives, we will install the RPi, and move the box into its place in the rack.
In the case of the PER measurement, we've ordered more collimator mounts//adapters, which will hopefully give better control over rotation.
|
1498 | Mon Apr 20 05:18:42 2009 | Yoichi | Configuration | Locking | FM6 and FM10 of LSC-MC were restored
During tonight's locking work, I realized that FM6 and FM10 (both resonant gains around 20 Hz), which are activated by cm_step, had been lost.
So I restored those filters from the svn history.
Instead, I removed a bunch of unused filters from LSC-DEMOD and LSC-DEMOD_A (moving-zero filters) to offload c1lsc.
As for the locking itself, the DARM loop becomes unstable at around arm power = 30. I may have to add a filter to give a broader phase bubble. |
10226 | Thu Jul 17 02:57:32 2014 | Andres | Update | 40m Xend Table upgrade | Finish Calculation on Current X-arm Mode Matching
Data and Calculation for the Xarm Current Mode Matching
Two days ago, Nick, Jenne, and I took a measurement of the green transmission for the X-arm. I took the data and analyzed it. The first figure attached below is the raw data. I used the function findpeaks in Matlab to find all the peaks. Then, by taking a close look at the plot, I chose two peaks as shown in the second figure attached below. I took the ratio of the TEM00 and the higher-order mode, and averaged them. This gave me a mode matching of 0.9215, which is pretty close to the value of 0.9343 that I predicted using a la mode in http://nodus.ligo.caltech.edu:8080/40m/10191. Nick and I measured the reflected power when the cavity is unlocked and when the cavity is locked: PreflUnlocked = 52±1 µW, PreflLocked = 16±2 µW, and background noise = 0.761 µW. Using this information we calculated Prefl/Pin = 0.297. Now, since Prefl/Pin = |Eref/Ein|^2, we looked at the electric field components; using the reflectivities of the mirrors, we calculated an expected value of 0.67. The numbers don't agree, but this is because we didn't take into account the losses in this calculation. I'm working on a calculation that will include the losses.
Today, Nick and I ordered the lenses and the mirrors. I'm working on putting together a representation of how much improvement the new design will give us in comparison to the current setup.
|
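One common way to redo this peak-ratio analysis in Python (the original used Matlab findpeaks; the data file here is a placeholder):
import numpy as np
from scipy.signal import find_peaks

trans = np.loadtxt("green_scan.txt")        # placeholder transmission trace over one FSR
peaks, props = find_peaks(trans, height=0.01 * trans.max())
heights = props["peak_heights"]

p00 = heights.max()                          # TEM00 peak
hom = heights.sum() - p00                    # all higher-order modes in the FSR
mode_matching = p00 / (p00 + hom)
print(mode_matching)                         # ~0.92 reported above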
Attachment 1: RawDataForTheModeGreenScan.png
Attachment 2: ResultForModeMatching.png
Attachment 3: DataAndCalculationOfModeMismatch.zip
10237 | Fri Jul 18 16:52:56 2014 | Andres | Update | 40m Xend Table upgrade | Finish Calculation on Current X-arm Mode Matching
Quote: |
Data and Calculation for the Xarm Current Mode Matching
Two days ago, Nick, Jenne, and I took a measurement of the green transmission for the X-arm. I took the data and analyzed it. The first figure attached below is the raw data. I used the function findpeaks in Matlab to find all the peaks. Then, by taking a close look at the plot, I chose two peaks as shown in the second figure attached below. I took the ratio of the TEM00 and the higher-order mode, and averaged them. This gave me a mode matching of 0.9215, which is pretty close to the value of 0.9343 that I predicted using a la mode in http://nodus.ligo.caltech.edu:8080/40m/10191. Nick and I measured the reflected power when the cavity is unlocked and when the cavity is locked: PreflUnlocked = 52±1 µW, PreflLocked = 16±2 µW, and background noise = 0.761 µW. Using this information we calculated Prefl/Pin = 0.297. Now, since Prefl/Pin = |Eref/Ein|^2, we looked at the electric field components; using the reflectivities of the mirrors, we calculated an expected value of 0.67. The numbers don't agree, but this is because we didn't take into account the losses in this calculation. I'm working on a calculation that will include the losses.
Today, Nick and I ordered the lenses and the mirrors. I'm working on putting together a representation of how much improvement the new design will give us in comparison to the current setup.
|
We want to be able to see graphically how much better the new optical table setup is in comparison to the current one. In other words, we want to see how much displacement of the beam and how much angle change can be obtained at the ETM by changing the mirror angles independently. From the spread of the mirrors' vectors we can judge whether the Gouy phase is good. In the plot below, the dotted lines correspond to the current setup; we can see that the lines are not spread apart, which essentially means that changing the angles of the two mirrors contributes only a small change in the angle and the displacement of the beam at the ETM, and therefore the Gouy phase is not good. The solid lines, on the other hand, correspond to the new setup mirrors. We can observe that the spread between the lines of mirror 1 and mirror 4 is almost 90 degrees, which implies a good Gouy phase difference between these two mirrors. For the angles chosen in the plot, I looked at how much the PZTs yaw the mirrors in elog http://nodus.ligo.caltech.edu:8080/40m/8912. In that elog, they give a plot in mrad/V for pitch and yaw, so I took the range over which the PZT can yaw the mirrors, converted it into mdeg/V, and plotted it as shown below. I plotted the current setup and the new setup in the same figure. The Matlab code is also attached below. |
Attachment 1: OldAndNewSetupPlotsOfDisplacementAndAngleAtTheETM.png
Attachment 2: OldSetUpDisplacementAndNewSetup.m.zip
17711 | Mon Jul 24 23:01:10 2023 | Koji | Summary | CDS | FIXED: rossa can't boot
Compared the network settings between some of the machines.
It seemed that we can write network settings into /etc/network/interfaces. (Comment lines omitted)
source /etc/network/interfaces.d/*
auto lo
iface lo inet loopback
pianosa has the same content, but I found chiara has much more elaborate lines. So I decided to put the details there.
source /etc/network/interfaces.d/*
auto lo
iface lo inet loopback
auto eno1
iface eno1 inet static
address 192.168.113.215
netmask 255.255.255.0
gateway 192.168.113.2
dns-nameservers 192.168.113.104
dns-search martian
dns-domain martian
And just in case, ifup was run:
$ sudo /sbin/ifup eno1
This got rossa connected to the wired network once rebooted.
So I reactivated the NFS line in /etc/fstab.
Rebooting rossa brought it back to nominal operation. Victory. |
17738 | Mon Jul 31 22:25:07 2023 | Koji | Update | PEM | FIXED: Strange behavior of ITMX and ITMY probably due to DAC issue
[Hiroki, Yehonathan, Yuta, Koji]
Summary
- The c1sus DAC0 issue seemed to be the driving capability of the DAC card.
- The DAC issue was solved by replacing 2x Anti-Imaging modules (D000186 RevB) with 2x Rev C equipped with input amps.
- This may mean that the previous DAC card failure was also the same issue. (i.e. The card could still be OK)
- This change may have increased the coil driver responses of the vertex suspensions by a factor of 2, as the conventional Rev B had single-ended inputs, but Rev C has differential ones.
- Enabling the full actuation range caused significant misalignment of the suspensions. But the IFO was realigned and now looks much more normal than before.
- The ITMX OPLEV servos could be engaged with the conventional gain. No instability is observed anymore.
- The BS OPLEV gains were too high. |G|=0.4 -> 0.2 to maintain stability.
- Confirmed Yarm ASS functioning well as before
- Xarm ASS is partially functioning as before. Some gains were reduced to keep the stability, but still, careful inspection is necessary.
Details
- The monitor outputs of the Anti-Imaging modules (D000186 Rev B) were checked while the corresponding coil EXC channels were excited via awggui.
- When the DAC was fully driven, we observed that the output was limited to the negative side of the full scale, clipped at -5 V (see Attachment 1). This was the case for all 16 channels of DAC0. We found no issue with DAC1 and DAC2.
- Disconnected the DAC0 SCSI cable from the SCSI-IDC40 adapter unit to isolate the DAC from the analog chain. Could we observe a bipolar signal? Indeed yes, we saw the bipolar signal (Attachment 2). (We use a SCSI-DSUB9 adapter board.)
- We wondered if this was the failure of the AI modules. Replacing the left D000186 with the same Rev B spare. No change.
- Checked if power cycling the AI module reset it. As this was done with the DAC connected, we lost positive and negative outputs of the DAC (OMG!)
- Rebooting c1sus solved the issue, and indeed the DAC exhibited a bipolar signal even with the Rev B cards.
- We thought we solved the issue! Came back to the control room and worked on a boot fest. After about an hour of struggle, all the RTS are back online. (GJ!)
- But we found that we are back to the original state. (same as Attachment 1)
Some break ....
- Went back to the rack. My thought was that the DAC could work when it was not connected to the AI modules; once connected to the AI modules, the positive side disappears. Probably the DAC can't drive all the load?
- Disconnected the 2nd IDC40 cable from the adapter (i.e., 2nd AI isolated). This brings the bipolar signals back! (Attachment 3)
- If the 1st IDC40 cable was disconnected while the 2nd cable was connected, this brought the bipolar signals back on the latter half 8 chs (Attachment 4).
- We have a couple of D000186 Rev C spares. I checked the circuit diagram of Rev C (it is not on DCC but in Jay's circuit diagram archive on 40m BOX). I found that each channel of Rev C has an INA134. This is not a true instrumentation amplifier but a differential amplifier with R=25kOhm. So it was promising.
- Indeed, replacing a RevB with a RevC already solved the issue. Just in case, I've replaced both RevBs for DAC0.
- I wondered what the input impedance of Rev B was. The Frequency Devices filter module has a minimum input impedance of 10 kOhm. However, the Rev B input is single-ended; I believe the negative side is shorted to GND. This probably means that the gain with Rev C is x2 of the conventional one.
- IFO alignment / locking / oplev test: as described at the summary.
|
Attachment 1: PXL_20230731_233855623.MP.jpg
Attachment 2: PXL_20230731_235323818.jpg
Attachment 3: PXL_20230801_033401354.MP.jpg
Attachment 4: PXL_20230801_033332906.MP.jpg
17746 | Wed Aug 2 14:50:22 2023 | Koji | Update | PSL | FIXED: Remote switching of PSL shutter is not working
[JC, Koji]
c1vac was hard rebooted. This fixed the PSL shutter issue.
- Checked the Uniblitz shutter controller for the PSL shutter. It seems that it responds to the NO/NC switch no matter how the REMOTE/LOCAL switch is set (normal action).
- Checked the BNC input to the shutter controller. There was no trigger/gate or whatever signal from Acromag.
- c1psl reboot did not help
- Upon this action, a snapshot was taken.
- Stopped/restarted modbusIOC.services. Did not work.
- Rebooted c1psl. Didn't work.
- c1psl was shutdown and c1psl acromag chassis was power-cycled. This didn't help. c1psl was burtrestored.
- Stopped modbusIOC.services. Shutdown c1psl.
- Power cycled the Acromag chassis.
- After a while, c1psl was turned on. The epics channels came back.
- Confirmed modbusIOC.services was running.
- Burtrestored the snapshot.
- I remembered that the vacuum pressure could trigger an interlock action on the PSL shutter.
Found that the c1vac values were all whited out; c1vac was not reachable from the Martian network.
- Went to c1vac and found it was frozen. Pushed hard reset.
- c1vac came back, and it closed the gate valve V1 (main GV for the main vacuum manifold). We opened V1 from the vac control screen.
- Confirmed the PSL shutter is now operational.
|
17786 | Tue Aug 15 17:11:17 2023 | Koji | Update | IOO | FIXED: PMC issue
Summary
The problem in the PMC Frequency Reference Card (35.5MHz) was fixed.
The last opamp stage of the attenuator control (LT1125) was stuck at the negative rail, and the EOM drive level was maximally attenuated.
The chip was replaced, and the card was back in the euro crate. The PMC is locking as before.
Diagnosis
Yesterday, I already found that the modulation level was basically zero and had no response to the level slider. Today, the card was extracted from the rack and tested on the workbench. It required +/-24V supplies for the main power and +10V supply for high-power RF amps (SMA-202). Additionally, a function generator to supply a DC voltage is needed to tune the attenuation level manually. (Attachment 1)
Some notes:
- This circuit was previously fixed in 2015 by Yutaro and me: [ELOG 11763]
- Even before that, Jenne and Rana looked at the board for LO fix in 2014 [ELOG 10160]
- The board's DCC number is D980353; however, the 35.5 MHz version is D000419, and the 40m version has a dedicated DCC number, D1400221.
The initial look was horrible because some greasy material was covering some parts of the board (I had the same thing in 2015 too). It turned out to be thermal paste leaked from the heat sink on the bottom side. (Attachment 2)
Firstly, the suspicious ERA-5SM was checked. The attenuator control path was suspiciously dead (-5 V), but when the function generator was attached to the attenuator control pin, the ERA-5SM in the EOM path received the signal and was working fine(!). The ERA-5SM received 20 mVpp and spit out 270 mVpp, observed with a 1/10 probe. The LO path one was also alive, receiving 173 mVpp max and yielding 200 mVpp.
Then, the high-power amps (SMA-202) were checked. This is an old chip which is not commercially available anymore, so I was afraid of having to replace it if it was broken. But the amps were just fine. With the above condition, the LO and PC paths' outputs were ~9 Vpp and 6.1 Vpp.
Next I went into the attenuator control part, and it turned out that the last stage (the low-pass filter part) was broken. Replacing the LT1125 restored the function of the attenuator control chain. (Attachment 3)
Calibration
The attenuator control voltage was applied to the board, then this control voltage was measured together with the RF output powers (LO and PC). (Attachment 4)
The LO power is about 16-17 dBm and almost independent of the setting (as expected).
The PC power changes from -24 dBm to +25 dBm.
Attachment 5 shows the relationship between the PC RF power and the MODET voltage measured in analog.
Restore the setting:
The module was returned to the rack. One thing we should take care of is that the external 10V should be disconnected when the output is not terminated with 50Ohm loads. This is to protect SMA-202 from reflection damage.
Once the board was secured and the 10V supply was connected, the MODET voltage was dependent on the slider setting. The PC output RF power was calibrated against the RFADJ setting. (Attachment 6) It is consistent with the above analog measurements.
The RF level (C1:PSL-PMC_RFADJ ) was set to 6.0. This imposes the PC output of +14dBm. C1:PSL-PMC_MODET came back to ~-0.34, which is consistent with the number before the trouble.
I also checked the LO power in situ. The direct output was ~17dBm and the 9dB attenuator made the LO level down to 8dBm. The mixer is ZAD-6, which is 7dBm LO level. So it looks fine.
PDH error signal
The amplitude of the PDH error signal was observed at the IF output of the frequency mixer. The cavity was swept around one of the resonances. It showed a clear PDH error shape with the P-P amplitude of ~130mV. (Attachment 7)
By the way, the PMC error offset slider was swept while the cavity was locked. The error signal indicated:
The error signal / Offset slider value
-5.23 mV / +10 V
-6.11 mV / 0 V
-7.03 mV / -10 V
It seems that there is a 1/10^4 reduction factor between the slider value and the actual applied voltage. |
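A quick numerical check of that reduction factor from the three points above:
import numpy as np

slider = np.array([10.0, 0.0, -10.0])              # V
err = np.array([-5.23e-3, -6.11e-3, -7.03e-3])     # V

slope = np.polyfit(slider, err, 1)[0]
print(slope)   # ~9.0e-5 V/V, i.e. of order 1/10^4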
Attachment 1: IMG_6630.JPG
Attachment 2: IMG_6627.JPG
Attachment 3: IMG_6628.JPG
Attachment 4: RF_pow.pdf
Attachment 5: RF_cal.pdf
Attachment 6: EPICS_RF_ADJ.pdf
Attachment 7: PMC_PDH_Error.jpg
13124 | Wed Jul 19 00:59:47 2017 | gautam | Update | General | FINESSE model of DRMI (no arms)
Summary:
I've been working on improving the 40m FINESSE model I set up sometime last year (where the goal was to model various RC folding mirror scenarios). Specifically, I wanted to get the locking feature of FINESSE working, and also simulate the DRMI (no arms) configuration, which is what I have been working on locking the real IFO to. This elog is a summary of what I have from the last few days of working on this.
Model details:
- No IMC included for now.
- Core optics R and T from the 40m wiki page.
- Cavity lengths are the "ideal" ones - see the attached ipynb for the values used.
- RF modulation depths from here. But for now, the relative phase between f1 and f2 at the EOM is set to 0.
- I've not included flipped folding mirrors - instead, I put a loss of 0.5% on PR3 and SR3 in the model to account for the AR surface of these optics being inside the RCs.
- I've made the AR surfaces of all optics FINESSE "beamsplitters" - there was some discussion on the FINESSE mattermost channel about how not doing this can lead to slightly inaccurate results, so I've tried to be more careful in this respect.
- I'm using "maxtem 1" in my FINESSE file, which means TEM_mn modes up to (m+n=1) are taken into account - setting this to 0 makes it a plane wave model. This parameter can significantly increase the computational time.
Model validation:
- As a first check, I made the PRM and SRM transparent, and used the in-built routines in FINESSE to mode-match the input beam to the arm cavities.
- I then scanned one arm cavity about a resonance, and compared the transmisison profile to the analytical FP cavity expression - agreement was good.
- Next, I wanted to get a sensing matrix for the DRMI (no arms) configuration (see attached ipynb notebook).
- First, I make the ETMs in the model transparent
- I started with the phases for the BS, PRM and SRM set to their "naive" values of 0, 0 and 90 (for the standard DRMI configuration)
- I then scanned these optics around, used various PDs to look at the points where appropriate circulating fields reached their maximum values, and updated the phase of the optic with these values.
- Next, I set the demod phase of various RFPDs such that the PDH error signal is entirely in one quadrature. I use the RFPDs in pairs, with demod phases separated by 90 degrees. I arbitrarily set the demod phase of the Q phase PD as 90 + phase of I phase PD. I also tried to mimic the RFPD-IFO DoF pairing that we use for the actual IFO - so for example, PRCL is controlled by REFL11_I.
- Confident that I was close enough to the ideal operating point, I then fed the error signals from these RFPDs to the "lock" routine in FINESSE. The manual recommends setting the locking loop gain to 1/optical gain, which is what I did.
- The tunings for the BS and RMs in the attached kat file are the result of this tuning.
- For the actual sensing matrix, I moved each of PRM, BS and SRM +/-5 degrees (~15nm) around each resonance. I then computed the numerical derivative around the zero crossing of each RFPD signal, and then plotted all of this in some RADAR plots - see Attachment #1.
Explanation of Attachments and Discussion:
- Attachment #1 - The computed sensing matrix from this model. Compare to an actual measurement, for example here - the relative angles between the sensing matrix elements don't exactly line up with what is measured. EQ suggested today that I should look into tuning the relative phase between the RF frequencies at the EOM. Nevertheless, I tried comparing the magnitudes of the MICH sensing element in AS55_Q - the model tells me that it should be ~7.8*10^5 W/m. In this elog, I measured it to be 2.37*10^5 W/m. On the AS table, there is a 50-50 BS splitting the light between the AS55 and AS110 photodiodes which is not accounted for in the model. Factoring this in, along with the fact that there are 6 in-vacuum steering mirrors (assume 98% reflectivity for these), 3 in-air steering mirrors, and the window, the sensing matrix element from the model starts to be in the same ballpark as the measurement, at ~3*10^5 W/m. So the model isn't giving completely crazy results.
- Attachment #2 - Example of the signals at various RFPDs in response to sweeping the PRM around its resonance. To be compared with actual IFO data. Teal lines are the "I" phase, and orange lines are "Q" phase.
- Attachment #3 - FINESSE kat file and the IPython notebook I used to make these plots.
- Next steps
- More validation against measurements from the actual IFO.
- Try and resolve differences between modeled and measured sensing matrices.
- Get locking working with full IFO - there was a discussion on the mattermost thread about sequential/parallel locking some time ago, I need to dig that up to see what is the right way to get this going. Probably the DRMI operating point will also change, because of the complex reflectivities of the arm cavities seen by the RF sidebands (this effect is not present in the current configuration where I've made the ETMs transparent).
GV Edit: EQ pointed out that my method of taking the slope of the error signal to compute the sensing element isn't the most robust - it relies on choosing points to compute the slope that are close enough to the zero crossing and also well within the linear region of the error signal. Instead, FINESSE allows this computation to be done as we do in the real IFO - apply an excitation at a given frequency to an optic and look at the twice-demodulated output of the relevant RFPD (e.g. for the PRCL sensing element in the 1f DRMI configuration, drive PRM and demodulate REFL11 at 11 MHz and at the drive frequency). Attachment #4 is the sensing matrix recomputed in this way - in this case, it produces almost identical results to the slope method, but I think the double-demod technique is better in that you don't have to worry about selecting points for computing the slope etc.
|
Attachment 1: DRMI_sensingMat.pdf
|
|
Attachment 2: DRMI_errSigs.pdf
|
|
Attachment 3: 40m_DRMI_FINESSE.zip
|
Attachment 4: DRMI_sensingMat_19Jul.pdf
|
|
13385
|
Tue Oct 17 23:07:52 2017 |
gautam | Update | CDS | FEs unresponsive | While working on the IFO tonight, I noticed that the blinky status lights on c1iscex and c1iscey were frozen (but those on the other 3 FEs seemed fine). All other lights on the CDS overview screen were green, but I couldn't access testpoints from these machines, and the EPICS readbacks for models on these FEs (e.g. Oplev servo inputs, outputs, etc.) were frozen at some fixed value. This lasted for a good 5 minutes at least, but the blinky lights started blinking again without me doing anything. Not sure what to make of this. I am also not sure how to diagnose this problem, as trending the slow EPICS records of the CPU execution cycle time (for example) doesn't show any irregularity. |
13386
|
Wed Oct 18 01:41:32 2017 |
jamie | Update | CDS | FEs unresponsive |
Quote: |
While working on the IFO tonight, I noticed that the blinky status lights on c1iscex and c1iscey were frozen (but those on the other 3 FEs seemed fine). All other lights on the CDS overview screen were green, but I couldn't access testpoints from these machines, and the EPICS readbacks for models on these FEs (e.g. Oplev servo inputs, outputs, etc.) were frozen at some fixed value. This lasted for a good 5 minutes at least, but the blinky lights started blinking again without me doing anything. Not sure what to make of this. I am also not sure how to diagnose this problem, as trending the slow EPICS records of the CPU execution cycle time (for example) doesn't show any irregularity.
|
So this wasn't just an EPICS freeze? I don't see how this had anything to do with any of the work I did earlier today. I didn't modify any of the running front ends, didn't touch either of the end station machines or the DAQ, and didn't modify the network in any way. I didn't leave anything running.
If you couldn't access test points then it sounds like it was more than just EPICS. It sounds like maybe the end machines somehow fell off the network momentarily. Was there anything else going on at the time? |
13387
|
Wed Oct 18 02:09:32 2017 |
gautam | Update | CDS | FEs unresponsive | I was looking at the ASDC channel on dataviewer, and toggling various settings like whitening gain. At some point, the signal just froze. So I quit dataviewer and tried restarting it, at which point it complained about not being able to connect to FB. This is when I brought up the CDS_OVERVIEW medm screen, and noticed the frozen 1pps indicator lights. There was certainly something going on with the end FEs, because I was able to ping the machine, but not ssh into it. Once the 1pps lights came back, I was able to ssh into c1iscex and c1iscey, no problems.
Could it be that some of the mx processes stalled, but the systemctl routine automatically restarted them after some time?
Quote: |
So this wasn't just an EPICS freeze? I don't see how this had anything to do with any of the work I did earlier today. I didn't modify any of the running front ends, didn't touch either of the end station machines or the DAQ, and didn't modify the network in any way. I didn't leave anything running.
If you couldn't access test points then it sounds like it was more than just EPICS. It sounds like maybe the end machines somehow fell off the network momentarily. Was there anything else going on at the time?
|
|
13388
|
Wed Oct 18 09:21:22 2017 |
jamie | Update | CDS | FEs unresponsive |
Quote: |
I was looking at the ASDC channel on dataviewer, and toggling various settings like whitening gain. At some point, the signal just froze. So I quit dataviewer and tried restarting it, at which point it complained about not being able to connect to FB. This is when I brought up the CDS_OVERVIEW medm screen, and noticed the frozen 1pps indicator lights. There was certainly something going on with the end FEs, because I was able to ping the machine, but not ssh into it. Once the 1pps lights came back, I was able to ssh into c1iscex and c1iscey, no problems.
Could it be that some of the mx processes stalled, but the systemctl routine automatically restarted them after some time?
|
An mx_stream glitch would have interrupted data flowing from the front end to the DAQ, but it wouldn't have affected the heartbeat. A stopped heartbeat could mean either that the front-end process froze, or that the EPICS communication stopped. The fact that everything came back fine after a couple of minutes indicates to me that the front-end processes all kept running fine; if they hadn't, I'm sure the machines would have locked up. The fact that you couldn't connect to the FE machine is also suspicious.
My best guess is that there was a network glitch on the martian network. I don't know how to account for the fact that pings still worked, though. |
13394
|
Wed Oct 18 23:11:53 2017 |
gautam | Update | CDS | FEs unresponsive | This happened again just now - it was at roughly the same time as last night.
There was certainly an EPICS freeze of the kind we were used to seeing prior to replacing the martian wireless router sometime in late 2015 (or early 2016?). I was trying to run the dither alignment servos on the Y-arm at this time, and all the StripTool traces flatlined.
I took the opportunity to try accessing testpoints from the iscey ADCs - specifically C1:SUS-TRY_OUT, and it seemed to work just fine. However, I couldn't ssh into c1iscey.
Looking at dmesg once I was eventually able to ssh in (~2 minutes of deadtime tonight; I feel like it was longer yesterday, but can't quantify it), I see the following. Not sure if there are any clues in here, or whether this is even the correct log to check, but there are many instances of the nfs-server message. Note that the system timestamps correspond to when this freeze happened.
[5461308.784018] nfs: server 192.168.113.201 not responding, still trying
[5461412.936284] nfs: server 192.168.113.201 OK
[5461412.937130] systemd[1]: Starting Journal Service...
[5461412.947947] systemd-journald[20281]: Received SIGTERM from PID 1 (systemd).
[5461412.996063] systemd[1]: Unit systemd-journald.service entered failed state.
[5461413.002627] systemd[1]: systemd-journald.service has no holdoff time, scheduling restart.
[5461413.008983] systemd[1]: Stopping Journal Service...
[5461413.014664] systemd[1]: Starting Journal Service...
[5461413.044262] systemd[1]: Started Journal Service.
[5461413.694838] systemd-journald[400]: Received request to flush runtime journal from PID 1
|
1038
|
Fri Oct 10 00:34:52 2008 |
rob | Omnistructure | Computers | FEs are down |
The front-end machines are all down. Another cosmic-ray in the RFM, I suppose. Whoever comes in first in the morning should do the all-boot described in the wiki. |
1039
|
Fri Oct 10 10:20:42 2008 |
Alberto | Omnistructure | Computers | FEs are down |
Quote: |
The front-end machines are all down. Another cosmic-ray in the RFM, I suppose. Whoever comes in first in the morning should do the all-boot described in the wiki. |
Yoichi and I went along the arms turning off and on all the FE machines. Then, from the control room we rebooted them all following the procedures in the wiki. Everything is now up again.
I restored the full IFO, re-locked the mode cleaner. |
15998
|
Tue Apr 6 11:13:01 2021 |
Jon | Update | CDS | FE testing | I/O chassis assembly
Yesterday I installed all the available ADC/DAC/BIO modules and adapter boards into the new I/O chassis (c1bhd, c1sus2). We are still missing three ADC adapter boards and six 18-bit DACs. A thorough search of the FE cabinet turned up several 16-bit DACs, but only one adapter board. Since one 16-bit DAC is required anyway for c1sus2, I installed the one complete set in that chassis.
Below is the current state of each chassis; the missing components are the rows where the installed quantity falls short of the required quantity. We cannot proceed to loopback testing until at least some of the missing hardware is in hand.
C1BHD
Component          | Qty Required | Qty Installed
16-bit ADC         | 1 | 1
16-bit ADC adapter | 1 | 0
18-bit DAC         | 1 | 0
18-bit DAC adapter | 1 | 1
16-ch DIO          | 1 | 1
C1SUS2
Component          | Qty Required | Qty Installed
16-bit ADC         | 2 | 2
16-bit ADC adapter | 2 | 0
16-bit DAC         | 1 | 1
16-bit DAC adapter | 1 | 1
18-bit DAC         | 5 | 0
18-bit DAC adapter | 5 | 5
32-ch DO           | 6 | 6
16-ch DIO          | 1 | 1
Gateway for remote access
To enable remote access to the machines on the test stand subnet, one machine must function as a gateway server. Initially, I tried to set this up using the second network interface of the chiara clone. However, having two active interfaces caused problems for the DHCP and FTS servers and broke the diskless FE booting. Debugging this would have required making changes to the network configuration that would have to be remembered and reverted, were the chiara disk ever to be used in the original machine.
So instead, I simply grabbed another of the (unused) 1U Supermicro servers from the 1Y1 rack and set it up on the subnet as a standalone gateway server. The machine is named c1teststand. Its first network interface is connected to the general computing network (ligo.caltech.edu) and the second to the test-stand subnet. It has no connection to the Martian subnet. I installed Debian 10.9, anticipating that, when the machine is no longer needed in the test stand, it can be converted into another docker-cymac to run additional sim models.
Currently, the outside-facing IP address is assigned via DHCP and so periodically changes. I've asked Larry to assign it a static IP on the ligo.caltech.edu domain, so that it can be accessed analogously to nodus. |
1308
|
Mon Feb 16 10:18:13 2009 |
Alberto | Update | LSC | FE system rebooted |
Quote: |
Quote: |
I didn't get a chance to do much testing since the sus controller (susvme1) went nuts. In retrospect, this could be due to something in the script, so maybe we should try a burt restore to Friday afternoon next time someone wants to look at it. |
I tried the burtrestore today, it didn't work. Also tried some switching of timing cables, and multiple reboots, to no avail. This will require some more debugging. We might try diagnosing the clock driver and fanout modules, the penteks, and we can also try rebooting the whole FE system. |
I rebooted the whole FE system and now c1susvme1 and c1susvme2 are back on.
I can't restart the MC autolocker on c1susvme2 because it doesn't let me ssh in. I tried to reboot it a few times, but it didn't work. Once you restart it, it becomes inaccessible and doesn't even respond to ping, although the controls for the MC mirrors are on.
The mode cleaner stays unlocked. |
1309
|
Mon Feb 16 14:12:21 2009 |
Yoichi | Update | LSC | FE system rebooted |
Quote: |
I can't restart the MC autolocker on c1susvme2 because it doesn't let me ssh in. I tried to reboot it a few times, but it didn't work. Once you restart it, it becomes inaccessible and doesn't even respond to ping, although the controls for the MC mirrors are on.
The mode cleaner stays unlocked. |
MC autolocker runs on op340m, not on c1susvme2.
I restarted it and now MC locks fine.
Before that, I had to reboot c1iool0 and restore the alignment of the MC mirrors (for some reason, burt did not restore the alignment properly, so I used conlog). |
4249
|
Fri Feb 4 13:31:16 2011 |
josephb | Update | CDS | FE start scripts moved to scripts/FE/ from scripts/ | All start and kill scripts for the front end models have been moved into the FE directory under scripts: /opt/rtcds/caltech/c1/scripts/FE/. I modified the Makefile in /opt/rtcds/caltech/c1/core/advLigoRTS/ to update and place new scripts in that directory.
This was done by using
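# '[' serves as the sed delimiter here (avoids escaping the '/' in the paths),
# and '$$' matches the make-escaped literal '$' of ${system} in the Makefile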
sed -i 's[scripts/start$${system}[scripts/FE/start$${system}[g' Makefile
sed -i 's[scripts/kill$${system}[scripts/FE/kill$${system}[g' Makefile
|
15695
|
Wed Dec 2 17:54:03 2020 |
gautam | Update | CDS | FE reboot | As discussed at the meeting, I commenced the recovery of the CDS status at 1750 local time.
- Started by attempting to just soft-restart the c1rfm model and see if that fixed the issue. It didn't, and what's more, it took down the c1sus machine.
- So hard reboots of the vertex machines were required. c1iscey also crashed. I was able to keep the EX machine up, but I soft-stopped all the RT models on it.
- All systems were recovered by 1815. For anyone checking, the DC light on the c1oaf model is red - this is a "known" issue and requires a model restart, but I don't want to get into that now and it doesn't disrupt normal operation.
Single arm POX/POY locking was checked, but not much more. Our IMC WFS are still out of service so I hand aligned the IMC a bit, IMC REFL DC went from ~0.3 to ~0.12, which is the usual nominal level. |
2627
|
Mon Feb 22 12:48:31 2010 |
josephb, alex, koji | Update | Computers | FE machines now coming up | Even after bringing up the fb40m, I was unable to get the front ends to come up, as they would error out with an RFM problem.
We proceeded to reboot everything I could get my hands on, although it's likely that daqawg and daqctrl were the issue, as their status on the C0DAQ_DETAIL screen had been showing as 0xbad, but read 0x0 after the reboot. They had originally come up before the frame builder had been fixed, so this might have been the culprit. In the course of rebooting, I also found that c1omc and c1lsc had been turned off, and turned them on.
After this set of reboots, we're now able to bring the front ends up one by one. |
1916
|
Mon Aug 17 02:12:53 2009 |
Yoichi | Summary | Computers | FE bootfest | Rana, Yoichi
All the FE computers went red this evening.
We power cycled all of them.
They are all green now.
Not related to this, the CRT display of op540m has not been working since Friday night.
We are not sure if it is a failure of the display or of the graphics card.
Rana started alarm handler on the LCD display as a temporary measure. |
8895
|
Mon Jul 22 22:06:18 2013 |
Koji | Update | CDS | FE Web view was fixed | FE Web view was broken for a long time. It is fixed now.
The problem was that path names were not fixed when we moved the models from the old local place to the SVN structure.
The auto-updating script (/cvs/cds/rtcds/caltech/c1/scripts/AutoUpdate/update_webview.cron) is running on Mafalda.
Link to the web view: https://nodus.ligo.caltech.edu:30889/FE/ |
9364
|
Mon Nov 11 12:19:36 2013 |
rana | Update | CDS | FE Web view was fixed |
Quote: |
FE Web view was broken for a long time. It is fixed now.
The problem was that path names were not fixed when we moved the models from the old local place to the SVN structure.
The auto-updating script (/cvs/cds/rtcds/caltech/c1/scripts/AutoUpdate/update_webview.cron) is running on Mafalda.
Link to the web view: https://nodus.ligo.caltech.edu:30889/FE/
|
Seems partially broken again - not updating for most of the FEs. I've commented out the cron lines for this, as well as the mostly-broken MEDM Snapshots job. I'm in the process of adding them to the megatron cron (since that machine is at least running 64-bit Ubuntu 12, instead of 32-bit CentOS). |
9366
|
Tue Nov 12 15:04:35 2013 |
rana | Update | CDS | FE Web view was fixed |
Quote: |
Seems partially broken again - not updating for most of the FEs. I've commented out the cron lines for this, as well as the mostly-broken MEDM Snapshots job. I'm in the process of adding them to the megatron cron (since that machine is at least running 64-bit Ubuntu 12, instead of 32-bit CentOS).
|
https://nodus.ligo.caltech.edu:30889/medm/screenshot.html
Seems to now be working. I made several fixes to the scripts to get it working again:
- changed TCSH scripts to BASH. Used /usr/bin/env to find bash.
- fixed stdout and stderr redirection so that we could see all error messages.
- made the PERL scripts executable. Most of the PERL errors are not being logged yet.
- fixed paths for the MEDM screens to point to the right directories.
- the screen cap only works on screens which pop open on the left monitor, so I edited the screens so that they open up there by default.
- moved the CRON jobs from mafalda over to megatron. Mafalda is no longer running any crons.
- op540m used to run the 3 projector StripTool displays and have its screen dumped for this web page. Now zita is doing it, but I don't know how to make zita dump her screen.
|
8483
|
Wed Apr 24 14:20:49 2013 |
Koji | Update | CDS | FE Web view not updated? | The FE web view doesn't seem to be up to date, does it? (maybe for a year)
https://nodus.ligo.caltech.edu:30889/FE/c1mcs_slwebview_files/index.html |
|