ID | Date | Author | Type | Category | Subject
12737 | Thu Jan 19 08:25:12 2017 | Steve | Update | SUS | MC1 SUS electronics investigation
Quote: |
Quote: |
As part of the ongoing debugging, I've switched the MC1 and MC3 satellite boxes. Both MC1 and MC3 have their watchdogs shut down for the moment.
|
In the last 3.5 hours, there has been nothing conclusive - no evidence of any glitching in either MC1 or MC3 sensor channels. I am going to hold off on doing the LEMO cable swap test for a few more hours, to see if we can rule out the satellite box.

|
No change. |
Attachment 1: MC1_MC3_ITMY_ETMX_sensors.png
Attachment 2: sensors_UL.png
12739 | Thu Jan 19 12:00:10 2017 | gautam | Update | SUS | MC1 SUS electronics investigation
Going through the last ~20 hours of data, the MC1 sensor channels look glitch-free for the entire period. However, there is a ~10 min period around 1 PM UTC today when there were a couple of glitches, ~80 counts in size, in all the MC3 sensor channels. The attached shows the full 2k data from all 10 channels (MC1 and MC3 sensors) around this time.

Is this sufficient evidence to conclude that the Satellite boxes are to blame? It's hard to explain why the glitches come and go in this fashion, and also the apparent difference in how long the glitching persists. Here, in almost 24 hours, there is one instance of glitching, but in yesterday's trend plot, the glitching remains present over several hours... The amplitude of the glitches, and their coincidence in all 5 channels, seems consistent with what we have been seeing though...
|
12741 | Thu Jan 19 19:56:09 2017 | rana | Update | SUS | MC1 SUS electronics investigation
Might be. Or it might be in the satellite box cabling. Hard to tell without a tester. I recommend you squish the cables on there and lock the MC and get back to the usual business. I'll check on sat. box with Ben.
Quote: |
Is this sufficient evidence to conclude that the Satellite boxes are to blame? It's hard to explain why the glitches come and go in this fashion, and also the apparent difference in how long the glitching persists. Here, in almost 24 hours, there is one instance of glitching, but in yesterday's trend plot, the glitching remains present over several hours... The amplitude of the glitches, and their coincidence in all 5 channels, seems consistent with what we have been seeing though...
|
|
12742 | Fri Jan 20 11:16:30 2017 | gautam | Update | SUS | MC1 SUS electronics investigation
Both suspensions have been relatively well behaved for the best part of the last two days, since I effected the Satellite Box swap. This morning, I set about re-enabling the damping and locking the MC. Judging by the wall StripTool, it stayed locked for about 30 mins or so, after which the glitching returned.
Attached is a screenshot of the sensor signals from MC1 and MC3 (second trend), and also the highest band (>30Hz) BLRMS output for the same 10 channels (full data sampled at 16Hz). Note that MC1 and MC3 satellite boxes remain swapped. So the glitches now have migrated to the MC3 channels.

I need to think about whether this is just coincidence, or if me re-enabling the damping has something to do with the re-occurrence of the glitching...
Addendum 4.30pm: I've also re-aligned the Y arm. Its alignment has been stable over the last few hours; despite several mode cleaner lock losses in between, it recovers good IR transmission. The X arm has been re-aligned to green, but I can't get it locked to the IR - every time I turn the LSC output to ETMX on, there seems to be some large misalignment applied to it. c1iscaux was dead; I restarted it by keying the crate. I haven't had time to investigate the X arm locking in detail; I will continue to debug this. |
12745 | Mon Jan 23 10:24:01 2017 | Steve | Update | SUS | MC1 SUS electronics investigation
Two-day plot of glitching suspensions: MC3, ITMY and ETMX |
Attachment 1: 3glitchingSUS.png
16139 | Thu May 13 19:38:54 2021 | Anchal | Update | SUS | MC1 Satellite Amplifier Debugged
[Anchal, Koji]
Koji and I did a few tests with an OSEM emulator on the satellite amplifier box used for MC1, which is housed on 1X4. This sat box unit is S2100029 (D1002812), which I recently characterized in 15803. We found that the differential output driver chip AD8672ARZ (U2A section) for the UL PD was not working properly and had a fluctuating offset with no input current from the PD. This was the cause of the morning's ordeal. The chip was replaced with a new one from our stock. A preliminary test with the OSEM emulator showed that the channel has the correct DC value.
In further testing of the board, we found that the channel 8 LED driver was not working properly. Although this channel is never used in our current cable convention, it might be used in the future. In the quest to debug the issue there, we replaced the AD8672ARZ at U1 on channel 8. This did not solve the issue. So we opened the front panel, and as we flipped the board, we found that a solder blob had shorted the legs of the transistor Q1 (2N3904). This was replaced, and a test with the LED output and GND shorted indicated that the channel now properly provides a constant current of 35 mA (5 V at the monitor out).
After the debugging, the UL channel became the least noisy among the OSEM channels! The mode cleaner was able to lock and stay locked.
We should redo the MC1 input matrix optimization and the coil balancing afterward as we did everything based on the noisy UL OSEM values. |
Attachment 1: MC1_UL_Channel_Fixed.png
15376 | Thu Jun 4 20:54:40 2020 | gautam | Update | SUS | MC1 Slow Bias issues
Summary:
I found that there is an issue with the MC1 slow bias voltages.
Details:
I usually offload the DC part of the output voltage from the WFS servos to the slow bias voltage sliders, so as to preserve maximum actuation range from the fast system. However, today, I found that this servo wasn't working well at all. So I dug a little deeper. Looking at the EPICS database records:
- The user-facing channels are "PIT" and "YAW" bias voltages.
- These are converted to voltages to be sent to individual coils by some calc channels in the EPICS database record. So, for example, the voltage to be sent to the "UL" coil (Upper Left, as viewed from the AR side of the optic), is A+B, where A is the "PIT" voltage and B is the "YAW" voltage. Similar combinations of A and B are used for the other 3 face coils.
- The problem is obvious - if either A or B > 5 V, then the requested voltage to be sent to the UL coil is > 10 V, while the Acromag DACs can put out a maximum of 10 V (see the sketch after this list).
- As it happens, with the IFO currently aligned, MC1 is the only optic which faces this problem.
- Why has this not been an issue before? In fact, looking at some old data, the "PIT" and "YAW" bias voltages to MC1 were both ~1-2 V in 2018. But I confirmed that something in the region of ~5 V is required from each of the "PIT" and "YAW" channels to bring the MCREFL spot back to the center of the camera, so something has changed the DC alignment of MC1, maybe an earthquake or something? Anyway, with these settings, 2/4 coils are basically saturated, and so we can only move the optic diagonally. 😢
- Other coils that have requested output voltages > 5V (so more than half the range of the DAC) include MC2 LL (5.2V), and ETMX LL and LR (5.5 and 5.8 V respectively).
- Either a factor of 0.5 should be included in all the EPICS database records, or else, we should make the "PIT" and "YAW" sliders range only from -5 to +5 V, so that this kind of misleading info isn't wasting time.
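To make the failure mode concrete, here is a minimal Python sketch of the bias-to-coil mapping and the saturation check described above. The per-coil sign convention is an assumption for illustration; the real combinations live in the EPICS database calc records.

ACROMAG_DAC_LIMIT = 10.0  # volts; the maximum the Acromag DAC can put out

# (pit_sign, yaw_sign) per face coil -- hypothetical convention for illustration
COIL_SIGNS = {"UL": (+1, +1), "UR": (+1, -1), "LL": (-1, +1), "LR": (-1, -1)}

def coil_voltages(pit_bias, yaw_bias):
    """Requested coil voltages A*pit + B*yaw, flagging DAC saturation."""
    out = {}
    for coil, (a, b) in COIL_SIGNS.items():
        v = a * pit_bias + b * yaw_bias
        out[coil] = (v, abs(v) >= ACROMAG_DAC_LIMIT)
    return out

# With PIT ~ YAW ~ 5 V (the current MC1 situation), UL requests 10 V -- right
# at the DAC rail, so the optic can effectively only be steered diagonally.
for coil, (v, sat) in coil_voltages(5.0, 5.0).items():
    print(f"{coil}: {v:+.1f} V{'  <-- at/beyond DAC rail' if sat else ''}")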
|
15377 | Thu Jun 4 21:32:00 2020 | Koji | Update | SUS | MC1 Slow Bias issues
We can limit the EPICS values by giving some parameters to the channels, cf. https://epics.anl.gov/tech-talk/2012/msg00147.php (a hypothetical sketch follows).
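The tech-talk suggestion amounts to setting drive limits on the bias records. A hypothetical sketch of what the database change could look like, assuming the bias channels are ao records (the record name and values here are illustrative, not the actual 40m database):

# Hypothetical ao record: DRVH/DRVL clamp the driven value, so a slider
# request beyond +/-5 V is limited in EPICS before it reaches the calc
# records that form the per-coil combinations.
record(ao, "C1:SUS-MC1_PIT_BIAS") {
    field(DRVH, "5")     # drive high limit [V]
    field(DRVL, "-5")    # drive low limit [V]
}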
But this does not solve the MC1 issue. The only thing we can do right now is to halve the output resistor, for example. |
11446 | Fri Jul 24 23:08:53 2015 | Ignacio | Update | PEM | MC1 accelerometers moved for future huddle test
I have moved the MC1 accelerometers and cable to the huddle test setup, in order to see how a six-witness huddle test with the improved setup will do.
Here is a picture of the accelerometer set up,

Our motivation for doing this is to see whether more witness signals used in the Wiener filter really do improve subtraction, as was seen in previous huddle results, especially in the region above 10 Hz. |
15445 | Wed Jul 1 12:50:40 2020 | gautam | Update | PEM | MC1 accelerometers plugged in
I re-connected the 3 accelerometers located near the MC1/MC3 chamber. It was a bit tedious to get the cabling sorted - I estimate the cable is ~80 m long, and the excess length had to be wound around a spool (see Attachment #1), which wasn't really a 1-person job. It's neat-ish for now, but I'm not entirely satisfied. I think we should get shorter cables (~20 m), and also mount the pre-amp/power units in a rack instead of leaving them on the floor. The pre-amp settings are x100 for all three channels. The MC2 channels are powered, but are unconnected to the accelerometers - it was too tedious to unroll the other spool yesterday. Apart from this, the cable for the "Z" channel had to be re-seated in the strain relief clamp.
I did not enable any of the CDS filters that convert the raw signal into physical units, so for now, these channels are just recording raw counts.
Update 7pm: the spectra in the current config are here - not sure what to make of the MC2_Z channel appearing to show lower noise?
Update July 13 2020 430pm: This afternoon, I hooked up the MC2 accelerometer channels too... |
Attachment 1: IMG_8617.JPG
Attachment 2: IMG_8616.JPG
16087 | Tue Apr 27 10:05:28 2021 | Anchal, Paco | Update | SUS | MC1 and MC3 F2A Filters Tested
We extended the f2a filter implementation and diagnostics as summarized in 16086 to MC1 and MC3.
MC1
Attachment 1 shows the filters with Q=3, 7, 10. We diagnosed using Q=3.
Attachment 2 shows the test summary, exciting with broadband noise on the LSC_EXC and measuring the CSD to estimate the transfer functions.
MC3
Attachment 3 shows the filters with Q=3, 7, 10. We diagnosed using Q=3.
Attachment 4 shows the test summary, exciting with broadband noise on the LSC_EXC and measuring the CSD to estimate the transfer functions (a sketch of this estimate is below).
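For reference, a minimal sketch of the CSD-based transfer function estimate used here. The synthetic data, sample rate, and segment length are stand-ins; in practice x and y come from the LSC_EXC drive and the angular witness DQ channels.

import numpy as np
from scipy import signal

fs = 2048.0                                  # assumed DQ sample rate [Hz]
rng = np.random.default_rng(0)
x = rng.standard_normal(int(600 * fs))       # stand-in for the broadband LSC_EXC drive
y = signal.lfilter([0.1], [1.0, -0.5], x) + 0.01 * rng.standard_normal(x.size)  # stand-in witness

f, Pxx = signal.welch(x, fs=fs, nperseg=4096)
_, Pxy = signal.csd(x, y, fs=fs, nperseg=4096)
H = Pxy / Pxx                                # H1 estimate of the POS -> angle transfer function
_, Cxy = signal.coherence(x, y, fs=fs, nperseg=4096)  # trust |H| only where Cxy ~ 1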
Our main observation (and difference) with respect to MC2 is that the filters have relative success for the PIT cross-coupling and not so much for YAW. We already observed this when we tuned the DC output gains to compute the filters. |
Attachment 1: IMC_F2A_Params_MC1.pdf
Attachment 2: MC1_POStoAng_CrossCoupling.pdf
Attachment 3: IMC_F2A_Params_MC3.pdf
Attachment 4: MC3_POStoAng_CrossCoupling.pdf
17286 | Fri Nov 18 17:00:15 2022 | Anchal | Update | SUS | MC1 and MC3 OSEMs calibrated using MC_F
After the MC1 OSEM dewhitening was fixed, I calibrated the MC1 OSEM signals against MC_F using this notebook. A 0.1 Hz oscillation with an amplitude of 1000 cts was sent to MC1 lockin2 and was kept on between 1352851381 and 1352851881. I then read back the data from the DQ channels and performed a Welch estimate, with the standard deviation calculated from the different segments used (sketched below). From this measurement, I arrive at the following cts2um gain values, which were changed in the MC1 filter file. The damping remained stable after the changes:
MC1:
UL: 0.09 -> 0.105(12)
UR: 0.09 -> 0.078(9)
LR: 0.09 -> 0.065(7)
LL: 0.09 -> 0.087(10)
I followed the same method for MC3 as well, to get more meaningful error bars. This measurement was done between 1352856980 and 1352857480 using this notebook. Here are the changes made:
MC3
UL: 0.39827 -> 0.509(57)
UR: 0.33716 -> 0.424(48)
LR: 0.335 -> 0.365(40)
LL: 0.34469 -> 0.376(43)
The larger error bars could be due to noisier MC3 OSEM outputs, as the satellite amplifier gain is lower there.
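A minimal sketch of the line-height estimate described above, assuming a 0.1 Hz drive line; the mean/std over per-segment periodograms stands in for "Welch with standard deviation". Channel access and the MC_F-to-microns conversion are omitted, and the data here are synthetic.

import numpy as np
from scipy import signal

fs, fdrive = 256.0, 0.1                  # assumed DQ rate [Hz] and LOCKIN2 drive frequency
nseg = int(100 * fs)                     # 100 s segments -> 0.01 Hz resolution

def line_asd(x):
    """Mean and std of the ASD at the drive line, from per-segment periodograms."""
    segs = [x[i:i + nseg] for i in range(0, len(x) - nseg + 1, nseg)]
    vals = []
    for s in segs:
        f, p = signal.periodogram(s, fs=fs, window="hann")
        vals.append(np.sqrt(p[np.argmin(np.abs(f - fdrive))]))
    return np.mean(vals), np.std(vals)

t = np.arange(0, 500, 1 / fs)            # synthetic stand-in for an OSEM DQ channel
osem = 20 * np.sin(2 * np.pi * fdrive * t) + np.random.default_rng(1).standard_normal(t.size)
osem_line, osem_err = line_asd(osem)
# cts2um = (MC_F line height, converted to um) / (OSEM line height, in cts),
# with the uncertainty propagated from the segment-to-segment scatter.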
|
16072 | Thu Apr 22 12:17:23 2021 | Anchal, Paco | Update | SUS | MC1 and MC3 Suspension Optimization Summary
MC1 Coil Balancing DC and AC Gains
   | POS (DC coil gain) | PIT (DC coil gain) | YAW (DC coil gain) | Coil output gains (AC)
UL | 0.6613 | 1  | 1  | 0.5885
UR | 0.7557 | 1  | -1 | 0.1636
LL | 1.3354 | -1 | 1  | 1.8348
LR | 1.0992 | -1 | -1 | 0.5101
Note: the AC gains were measured with the output matrix kept at the ideal values of ±1. When optimizing the DC gains, the AC gains were loaded into the coil output gains.
MC1 Diagonalized input matrix
     | UL     | UR      | LR      | LL      | SIDE
POS  | 0.1700 | 0.1125  | 0.0725  | 0.1300  | 0.4416
PIT  | 0.1229 | 0.1671  | -0.1021 | -0.1463 | 0.1567
YAW  | 0.2438 | -0.1671 | -0.2543 | 0.1566  | -0.0216
SIDE | 0.0023 | 0.0010  | 0.0002  | 0.0015  | 0.0360
MC1 Suspension Damping Gains
       | Old gains | New gains
SUSPOS | 120 | 270
SUSPIT | 60  | 180
SUSYAW | 60  | 180
MC3 Coil Balancing DC and AC Gains
   | POS (DC coil gain) | PIT (DC coil gain) | YAW (DC coil gain) | Coil output gains (AC)
UL | 1.1034 | 1  | 1  | 0.8554
UR | 1.1034 | 1  | -1 | -0.9994
LL | 0.8845 | -1 | 1  | -0.9809
LR | 0.8845 | -1 | -1 | 1.1434
Note: the AC gains were measured with the output matrix kept at the ideal values of ±1. When optimizing the DC gains, the AC gains were loaded into the coil output gains.
MC3 Input matrix (unchanged from previous values)
     | UL      | UR      | LR      | LL       | SIDE
POS  | 0.28799 | 0.28374 | 0.21201 | 0.21626  | -0.40599
PIT  | 2.65780 | 0.04096 | -3.2910 | -0.67420 | -0.72122
YAW  | 0.60461 | -2.7138 | 0.01363 | 3.33200  | 0.66647
SIDE | 0.16601 | 0.19725 | 0.10520 | 0.07397  | 1.00000
MC3 Suspension Damping Gains
       | Old gains | New gains
SUSPOS | 200 | 500
SUSPIT | 12  | 35
SUSYAW | 8   | 12
|
17361 | Fri Dec 16 14:52:42 2022 | Paco | Summary | SUS | MC1 and MC3 coil dewhitening filters added, location corrected
I corrected the filter module location for the 28 Hz ELP filter on the MC1 and MC3 coil output filter banks to FM8 (from FM9). FM9 is always reserved for SimDW (the simulated dewhitening filter, which is supposed to be a copy of the dewhitening filter on the analog side). FM10 is likewise reserved for the InvDW filter, which performs the anti-dewhitening before the DAC. This filter module, FM10, should remain on always. When FM9 (SimDW) is on, the analog dewhitening turns off and we get a flat digital response as well. When FM9 is off, the analog dewhitening is turned on. The nominal operating configuration right now is to not use the coil dewhitening and keep FM9 and FM10 on always.
|
15431 | Thu Jun 25 15:11:00 2020 | gautam | Update | SUS | MC1 coil driver resistance quartered
I implemented this change today. We only had 100 ohm, 3W resistors in stock (no 200 ohm with adequate power rating). Assuming 10 V is dropped across this resistor, the power dissipation is V^2/R ~ 1 W, so we should have sufficient margin. DCC entry has been updated with new schematic and photo of the component side of the board. Note that the series resistance of the fast actuation path was untouched.
As expected, the requested voltage no longer exceeds the Acromag DAC range, it is now more like 2.5 V. However, I still notice that the MC REFL spot moves somewhat diagonally on the camera image - so maybe the coil gains are seriously imbalanced? Anyway, the WFS control signals can once again be safely offloaded to the slow bias voltages once again, preserving the fast ADC range for other actuation.
The Johnson noise of the series resistor has now increased by a factor of 2, from ~6.4 pA/rtHz to 12.8 pA/rtHz. Assuming a current to force coefficient of 1.6 mN/A per coil, the length noise of the cavity is expected to be 12.8e-12 * 0.064/0.25/(2*pi*100)^2 ~ 8e-18 m/rtHz at 100 Hz. In frequency units, this is 80 uHz/rtHz. I think our IMC noise is at least 10 times higher than this at 100 Hz (in any case, the noise of the coil driver is NOT dominated by the series resistance). Attachment #1 confirms that there isn't any significant MCF noise increase, and I will check with the arm cavity too. Nevertheless, we should, if possible, align the optic better and use as high a series resistance as possible.
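Spelling out the arithmetic above in a short Python check (all numbers as quoted in this entry; the IMC length used for the m-to-Hz conversion is my assumption, chosen to reproduce the quoted ~80 uHz/rtHz):

import numpy as np

kB, T, R = 1.38e-23, 298.0, 100.0           # Boltzmann const, room temp [K], series R [ohm]
i_n = np.sqrt(4 * kB * T / R)               # Johnson current noise, ~12.8 pA/rtHz

alpha = 0.064                               # total current-to-force coefficient used above [N/A]
m, f = 0.25, 100.0                          # optic mass [kg], frequency of interest [Hz]
x_n = i_n * alpha / m / (2 * np.pi * f)**2  # ~8e-18 m/rtHz at 100 Hz

nu = 299792458.0 / 1064e-9                  # laser frequency [Hz]
L = 27.0                                    # effective IMC length [m] -- assumed
print(f"i_n = {i_n:.3g} A/rtHz, x_n = {x_n:.3g} m/rtHz, f_n = {x_n * nu / L:.3g} Hz/rtHz")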
The watchdog for MC1 was disabled and the board was pulled out for this work. After it was replaced, the IMC re-locks readily.
Quote: |
But this does not solve the MC1 issue. The only thing we can do right now is to halve the output resistor, for example.
|
|
Attachment 1: MCF.pdf
17222 | Thu Nov 3 14:00:29 2022 | Anchal | Update | SUS | MC1 coil strengths balanced
I balanced the face coil strengths of MC1 using the following steps (a sketch of the iteration follows the list):
- At all points, keep sum(abs(coil_gains)) = 4.
- After reading the coil gains, remove the signs. Do the operations as below, and put back the signs before writing.
- Butterfly to POS decoupling:
  - Drive the butterfly mode at 13 Hz using LOCKIN2 on MC1 and look at C1:IOO-MC_F_DQ for position fluctuations.
  - Subtract 0.05 times the BUT vector from the coil strengths to see the effect on C1:IOO-MC_F_DQ, using diaggui exponential averaging of 5, BW=1.
  - Use Newton-Raphson from here to reach no POS actuation when driving the butterfly mode.
- POS to PIT decoupling:
  - Drive LOCKIN2 in POS mode at 13 Hz and look for a PIT signal at C1:IOO-MC_TRANS_P_DQ, using diaggui exponential averaging of 5, BW=1.
  - Subtract 0.05 times the PIT vector from the coil strengths.
  - Use Newton-Raphson from here to reach no PIT actuation when driving POS.
- POS to YAW decoupling:
  - Drive LOCKIN2 in POS mode at 13 Hz and look for a YAW signal at C1:IOO-MC_TRANS_Y_DQ, using diaggui exponential averaging of 5, BW=1.
  - Subtract 0.05 times the YAW vector from the coil strengths.
  - Use Newton-Raphson from here to reach no YAW actuation when driving POS.
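A sketch of that iteration in Python, under the procedure above. Here measure_coupling() is a hypothetical stand-in for reading the demodulated 13 Hz line height off diaggui (in practice each call means writing the trial gains, waiting out the exponential averaging, and reading back the result), and a secant form of Newton-Raphson on the mixing coefficient is assumed:

import numpy as np

def balance(gains, mix_vec, measure_coupling, step=0.05, tol=1e-3, nmax=10):
    """Zero one cross-coupling by mixing 'mix_vec' (the BUT/PIT/YAW vector)
    into the coil gains; secant-method Newton-Raphson on the mix coefficient."""
    signs = np.sign(gains)
    g = np.abs(gains)                      # strip the signs; restore them before writing

    def trial(alpha):
        gt = g - alpha * mix_vec
        gt *= 4.0 / np.sum(np.abs(gt))     # keep sum(abs(coil_gains)) = 4
        return measure_coupling(signs * gt), signs * gt

    a0, a1 = 0.0, step
    c0, best = trial(a0)
    for _ in range(nmax):
        c1, best = trial(a1)
        if abs(c1) < tol or c1 == c0:
            break
        a0, a1, c0 = a1, a1 - c1 * (a1 - a0) / (c1 - c0), c1   # secant update
    return best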
By the end, I was able to see no actuation on POS when the butterfly mode was driven with 30000 counts amplitude at 13 Hz, and no PIT or YAW actuation when POS was driven with 10000 counts at 13 Hz.
Final coil strengths found:
C1:SUS-MC1_ULCOIL_GAIN: -1.008
C1:SUS-MC1_URCOIL_GAIN: -0.98
C1:SUS-MC1_LRCOIL_GAIN: -1.06
C1:SUS-MC1_LLCOIL_GAIN: -0.952
I used this notebook while doing the above work. It has a couple of functions that could be useful in the future when doing similar balancing.
|
1406 | Mon Mar 16 12:26:59 2009 | Yoichi | Configuration | IOO | MC1 drift
There seems to be a large drift of MC1 even when there is no WFS feedback.
The attached plot is an example of a 20 min trend. You can see that the MC1 OSEM signals drift significantly more than those of MC2/MC3.
You can also be sure that there is no drifting voltage applied to the coils on the MC1 during this period.
If no one is working on the IFO today during the LV meeting, I'd like to leave the MC unlocked and see the trend of the MC1 OSEM signals.
Please do not turn on the MC auto locker unless you want to use the IFO.
If you want to do some measurements, please go ahead and lock the MC, but please write it down in the elog.
Thanks. |
Attachment 1: MC1_Drift1.pdf
1408 | Tue Mar 17 08:44:37 2009 | Yoichi | Configuration | IOO | MC1 drift
I'm done with the MC1 drift measurement.
The result is attached. It is clear that MC1 is in trouble. The small drifts in the MC2/MC3 are insignificant compared to the crazy MC1 behavior.
Since there is no drift in the coil feedback voltage monitors, it is probably not a problem of the DACs.
We may be able to fix this by pushing the cables for the MC1 satellite amplifier. But it may require replacement of the coil driver.
Quote: | There seems to be a large drift of MC1 even when there is no WFS feedback.
The attached plot is an example of a 20 min trend. You can see that the MC1 OSEM signals drift significantly more than those of MC2/MC3.
You can also be sure that there is no drifting voltage applied to the coils on the MC1 during this period.
If no one is working on the IFO today during the LV meeting, I'd like to leave the MC unlocked and see the trend of the MC1 OSEM signals.
Please do not turn on the MC auto locker unless you want to use the IFO.
If you want to do some measurements, please go ahead and lock the MC, but please write it down in the elog.
Thanks. |
|
Attachment 1: MC1_Drift3.pdf
1440 | Sun Mar 29 17:54:41 2009 | Yoichi | Update | SUS | MC1 drift investigation continued
The attached plots show the trend of the MC OSEM signals along with the voltages across the output resistors of the bias current buffers.
The channel assignments are:
MC_TMP1 = LL coil
MC_DRUM1 = UL coil
OSA_APTEMP = UR coil
OSA_SPTEMP = LR coil
Although the amplitude of the drift of MC1 is much larger than that of MC2 and MC3, the shape of the drift looks like a daily cycle (temperature ?).
This time, I reduced the MC1 bias currents to avoid saturation of the ADCs for the channels measuring the voltages across the output resistors.
This may be the reason the MC1 has been non-glitchy for the last day.
OSA_APTEMP (UR Coil) shows a step function like behavior, although it did not show up in the OSEM signals.
This, of course, should not happen.
Today, I went to the MC1 satellite box and found that the 64-pin IDE-like connector was broken.
The connector is supposed to sandwich the ribbon cable, but the top piece was loose.
The connector is on the cable connecting the satellite box and the SUS rack.
I replaced the broken connector with a new one. I also swapped the MC1 and MC3 satellite boxes to see if the glitches show up in the MC3.
I restored the bias currents of the MC1 to the original values.
The probes to monitor the voltages across the output resistors are still there. For OSA_SPTEMP, which was saturating the ADC, I put a voltage divider before the ADC. Other channels were very close to saturation but still within the ADC range.
Please leave the MC unlocked at least until the Monday morning.
Also please do not touch the Pomona box hanging in front of the IOO rack. It is the voltage divider. The case is connected to the coil side of the output resistor. If you touch it, the MC1 bias current will change.
|
Attachment 1: Drift1.pdf
1441 | Mon Mar 30 09:07:22 2009 | rana | Update | SUS | MC1 drift investigation continued
Maybe we can temporarily just disconnect the bias and just use the SUS sliders for bias if there's enough range? |
1444 | Mon Mar 30 13:29:40 2009 | Yoichi | Update | SUS | MC1 drift investigation continued
Quote: | Maybe we can temporarily just disconnect the bias and just use the SUS sliders for bias if there's enough range? |
We could do this, but I'm suspicious of the cables between the coil driver and the coils (including the satellite box). In this case, disabling the bias won't help.
Since MC1 has been quiet recently, I will just lock the MC and resume the locking work. |
431 | Sun Apr 20 23:39:57 2008 | rana | Summary | SUS | MC1 electronics busted
I spent some time trying to fix the utter programming fiasco which was our MCWFS diagonalization script. However, it still didn't work. Loops unstable. Using the matrix in the screen snapshot is OK, however.
Finally, I realized from looking at the imaginary part of the output matrix that there was something wrong with the MC1 drive. The attached JPG shows TFs from pit-drives of the MC mirrors to WFS1. MC1 & MC3 are supposed to have 28 Hz elliptic low-pass filters in hardware for dewhitening. The MC2 hardware is different, and so we have given it a software 28 Hz ELP to compensate. But it looks like MC1 doesn't have the low pass (no phase lag). I tried switching its COIL FM10 filters to make it switch, but no luck.
We'll have to engage the filters to make the MC WFS work right and to get the MC noise down. This needs someone to go check out the hardware, I think.
I have turned the gain way down, and this has stabilized the MC REFL signal, as you can see from the StripTool screen. |
Attachment 1: mcwfs.jpg
435 | Tue Apr 22 10:59:24 2008 | rob | Update | SUS | MC1 electronics busted
Quote: | I spent some time trying to fix the utter programming fiasco which was our MCWFS diagonalization script. However, it still didn't work. Loops unstable. Using the matrix in the screen snapshot is OK, however.
Finally, I realized from looking at the imaginary part of the output matrix that there was something wrong with the MC1 drive. The attached JPG shows TFs from pit-drives of the MC mirrors to WFS1. MC1 & MC3 are supposed to have 28 Hz elliptic low-pass filters in hardware for dewhitening. The MC2 hardware is different, and so we have given it a software 28 Hz ELP to compensate. But it looks like MC1 doesn't have the low pass (no phase lag). I tried switching its COIL FM10 filters to make it switch, but no luck.
We'll have to engage the filters to make the MC WFS work right and to get the MC noise down. This needs someone to go check out the hardware, I think.
I have turned the gain way down, and this has stabilized the MC REFL signal, as you can see from the StripTool screen. |
This was just because the XYCOM was set to switch the "dewhites" based on FM9 rather than FM10. To check whether the hardware ellipDW filters were engaged, I drove MC1 & MC3 in position (using the MCL bank), and looked at the transfer functions MC2_MCL/MC1_MCL and MC2_MCL/MC3_MCL. This method uses the mode cleaner length servo to enable a relatively clear transfer function measurement of the ellipDW, modulo the loop gain of MCL and the fact that it's really hard to measure an ELP cascaded with a suspension. The hardware and the switching appear to be working fine.
It's now set up such that the hardware is ENGAGED when the coil FM10 filters are OFF, and I deleted all the FM10 filters from the coils of MC1 and MC3. Since we don't switch these filters on and off regularly, I see no need to waste precious SUS processor power on filters that just calculate "1". |
17463 | Tue Feb 14 10:49:04 2023 | yuta | Summary | BHD | MC1 electronics diagram and cable disconnection tests
Below is a summary of the electronics around MC1 and of the cable disconnection tests.
These suggest that the 60 Hz noise is probably from somewhere between the DAC and the coil driver.
For now, we can work on the IFO with SimDW off.
MC1 local damping electronics diagram:
Vacuum Flange
|| DB25 cable x2
Satellite Amp Chassis (LIGO-S2100029, LIGO-D1002818)
|| DB9 split cable
Suspension PD Whitening and Interface Board (LIGO-D000210)
||| 4pin LEMO x3
Anti-aliasing filter
|
ADC
|
CDS
(SimDW is zpk([35.3553+i*35.3553;35.3553-i*35.3553;250],[4.94975+i*4.94975;4.94975-i*4.94975;2500],1,"n") gain(1.020); InvDW is the inverse; see the sketch after this diagram)
|
DAC
|
SOS Dewhitening and Anti-Image Filter (LIGO-D000316) Shared with MC3
(has 2ea. 800 Hz LPF & 5th order, 1 dB ripple, 50 dB atten, 28Hz elliptic LPF that can be turned on or bypassed)
||||| SMA-LEMO cable x5
("test in" are used; inputs can be disconnected with watchdogs)
SOS Coil Driver Module (LIGO-D010001, LIGO-D1700218)
(HV offsets from Acromag are added at the output (independent from watchdogs))
|| DB9 split cable
Satellite Amp Chassis
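A minimal scipy sketch of that SimDW zpk, assuming foton's "n" convention maps each listed root frequency f (in Hz) to s = -2*pi*f, and applying the gain(1.020) factor as-is (foton's detailed normalization is not reproduced here):

import numpy as np
from scipy import signal

two_pi = 2 * np.pi
zeros = -two_pi * np.array([35.3553 + 35.3553j, 35.3553 - 35.3553j, 250.0])
poles = -two_pi * np.array([4.94975 + 4.94975j, 4.94975 - 4.94975j, 2500.0])
k = 1.020                                       # the gain(1.020) factor

f = np.logspace(0, 3, 400)                      # 1 Hz .. 1 kHz
w, h = signal.freqs_zpk(zeros, poles, k, worN=two_pi * f)
# Complex zero pair at |z| = 50 Hz, pole pair at |p| = 7 Hz, plus 250/2500 Hz;
# InvDW is the inverse of this shape, undoing it digitally when the analog DW is in.
print(f"|H| at 1 Hz ~ {abs(h[0]):.3g}, at 1 kHz ~ {abs(h[-1]):.3g}")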
Disconnecting cables:
- Disconnecting the cables between the Satellite Amp Chassis and the Suspension PD Whitening and Interface Board didn't reduce the 60 Hz noise.
- Disconnecting the LEMO cables between the Suspension PD Whitening and Interface Board and the anti-aliasing filter didn't reduce the 60 Hz noise.
- Turning off the C1:SUS-MC1_SUSPOS/PIT/YAW/SIDE outputs didn't reduce the 60 Hz noise.
- Turning off SimDW reduced the 60 Hz noise.
- Turning off the watchdogs reduced the 60 Hz noise.
Dewhitening filters:
- When the 60 Hz noise was high, SimDW was on but InvDW was off, which is a weird state.
- Now, all the MC suspensions have SimDW turned off and InvDW turned on (which is supposed to turn on the analog dewhitening filter, probably the 28 Hz ELP, which has a notch at 60 Hz).
- Probably, when the realtime model modifications for BH44 were made on Jan 17, the coil dewhitening filter state was not burt-restored correctly, and we started to notice the 60 Hz noise (which was already there, but unnoticed because of the dewhitening).
- See 40m/17431 for the timeline, possibly related elogs 40m/17359, 40m/17361 about MC1 dewhitening switching on Dec 14-16.
Next:
- Check whether the analog dewhitening filter actually has the 28 Hz ELP by measuring transfer functions
- Design SimDW and InvDW to correctly take into account the real dewhitening filters |
14852 | Thu Aug 22 12:54:06 2019 | Koji | Update | CDS | MC1 glitch removed (for now) and IMC locking recovered
I have checked the MC1 satellite box and made a bunch of changes. For now, the glitches coming from the satellite box are gone. I quickly tested the MC1 damping and the IMC locking. The IMC was locked as usual. I still have some cleaning up to do, but will work on it today and tomorrow.
Attachment 1: Result
The noise level of the satellite box was tested with the suspension simulator (i.e., five LED/PD pairs in a plastic box).
Each plot shows the ASD of the sensor outputs 1) before the modification, 2) after the change, and 3) with the satellite box disconnected (i.e., the noise from the PD whitening filter in the SUS rack).
Before the modification, these five signals showed significant (~0.9) correlation with each other, indicating that the noise source is common. After the modification, the spectra are lowered to the noise level of the whitening filters, and there is no correlation observed anymore. EXCEPT FOR the LR sensor: it seems that LR has an additional noise issue somewhere downstream. This is a separate issue.
Attachment 2: Photo of the satellite box before the modification
The thermal environment in the box is terrible. The components are too hot to touch. You can see that the flat ribbon cable was burned. The amps, buffers, and regulators generate a lot of heat.
Attachment 3: Where the board was modified
- (upper left corner) Every time I touched C51, the diode output went to zero. So C51 was replaced with a WIMA 10uF (50V) cap.
- (lower left area) I found a clear indication of the glitch coming from the PD bias path (U3C). So I first replaced another 10uF (C50) with a WIMA 10uF (50V). This did not change the glitch. So I replaced U3 (LT1125). This U3 had an unused opamp that had railed to the supply voltage. Pins 14 and 15 of U3 were shorted to ground.
- (lower right corner) Similarly to U3, U6 also had two opamps that were railed due to no termination. U6 was replaced, and pins 11, 12, 14, and 15 were shorted to ground.
- (middle right) During the course of the search, I suspected that the LR glitch comes from U5. So U5 was replaced with a new chip, but this had no effect.
Attachment 4: Thermal degradation of the internal ribbon cable
Because of the heat, the internal ribbon cable lost its flexibility. The cable is cracked and brittle, and now exposes some wires. This needs to be replaced. I'll work on this later this week.
Attachment 5: Thermal degradation of the board
Because of the excessive heat over those 20 years, the bond between the board and the copper pattern has degraded. In conjunction with the extremely thin wire pattern, desoldering the components (particularly the LT1125s) was very difficult. I'd want to throw away this board right now if it were possible...
Attachment 6: Shorting the unused opamps
This shows how the pieces of wires were soldered to ground vias to short the unused opamps.
Attachment 7: Comparison of the noise level with the sus simulator and the actual MC1 motion
After the satellite box fix, the sensor outputs were measured with the suspension connected. This shows that the suspension is moving much more than the noise level around 1 Hz. However, at the microseismic frequencies there is almost no margin. Considering the use of adaptive feedforward, we need to lower the noise of the satellite box as well as the noise of the whitening filters.
=> Use better chips (no LT1125, no current buffers), use low-noise resistors, and provide a better thermal environment.
|
Attachment 1: satellite_box.pdf
Attachment 2: before.jpg
Attachment 3: after.jpg
Attachment 4: P_20190821_194035.jpg
Attachment 5: P_20190821_174240.jpg
Attachment 6: P_20190821_194013.jpg
Attachment 7: comparison_satellite_box.pdf
14853 | Thu Aug 22 20:56:51 2019 | Koji | Update | CDS | MC1 glitch removed (for now) and IMC locking recovered
The internal ribbon cable for the MC1 satellite box was replaced with the one in the spare box. The MC1 box was closed and reinstalled as before. The IMC is locking well.
The burnt cable was then disassembled and reassembled with a new cable. It is now in the spare box.
The case is closed (literally). |
12664 | Mon Dec 5 15:05:37 2016 | gautam | Update | LSC | MC1 glitches are back
For no apparent reason, the MC1 glitches are back. Nothing has been touched near the PD whitening chassis today, and the trend suggests the glitching started about 3 hours ago. I had disabled the MC1 watchdog for a while to avoid the damping loop kicking the suspension around when these glitches occur, but have re-enabled it now. The IMC is holding lock for some minutes... I was hoping to do another round of ringdowns tonight, but if this persists, it's going to be difficult...

|
13187 | Thu Aug 10 21:01:43 2017 | gautam | Update | SUS | MC1 glitches debugging
I have squished cables in all the places I can think of - but MC1 has been glitching regularly today. Before starting to pull electronics out, I am going to attempt a more systematic debugging in the hope I can localize the cause.
To this end, I've disabled the MC autolocker and have shut down the MC1 watchdog. I plan to leave it in this state overnight. From this, I hope to look at the free-swinging optic spectra to check that this isn't a symptom of something funky with the suspension itself.
Some possible scenarios (assuming the free swinging spectra look alright and the various resonances are where we expect them to be):
- With the watchdog shut down, the PIT/YAW bias voltages still go to the coils (low-passed by 4 poles @ 1 Hz). So if the glitching happens in this path, we should see it in both the shadow sensors and the DC spot positions on the WFS.
- If the glitching happens in the shadow sensor readout electronics/cabling, we should see it in the shadow sensor channels, but NOT in the DC spot positions on the WFS (as the watchdog is shutdown, so there should be no actuation to the coils based on OSEM signals).
- If we don't see any glitches in WFS spot positions or shadow sensors, then it is indicative of the problem being in the coil driver board / dewhitening board/anti-aliasing board.
- I am discounting the problem being in the Satellite box, as we have switched around the MC1 satellite box multiple times - the glitches remain on MC1 and don't follow a Satellite Box. Of course there is the possibility that the cabling from 1X5/1X6 to the Satellite box is bad.
MC1 has been in a glitchy mood today, with large (MC-REFL spot shifts by ~1 beam diameter on the CCD monitor) glitches happening ~every 2-3 hours. Hopefully it hasn't gone into an extended quiet period. For reference, I've attached the screen-grab of the MC-QUAD and MC-REFL as they are now.
GV 9.20PM: Just to make sure of good SNR in measuring the pendulum eigenfreqs, I ran /opt/rtcds/caltech/c1/scripts/SUS/freeswing MC1 in a terminal. The result looked rather violent on the camera, but it's already settling down. The terminal output:
The following optics were kicked:
MC1
Thu Aug 10 21:21:24 PDT 2017
1186460502
Quote: |
Happened again just now, although the characteristics of the glitch are very different from the previous post, its less abrupt. Only actuation on MC1 at this point was local damping.
|
|
Attachment 1: MC_QUAD_10AUG2017.jpg
Attachment 2: MCR_10AUG2017.jpg
13195 | Fri Aug 11 12:32:46 2017 | gautam | Update | SUS | MC1 glitches debugging
Attachment #1: Free-swinging sensor spectra. I haven't done any peak fitting, but the locations of the resonances seem consistent with where we expect them to be.
The MC_REFL spot appears to not have shifted significantly (so slow bias voltages are probably not to blame). Now I have to look at trend data to see if there is any evidence of glitching.
I'm not sure I understand the input matrix though - the matrix elements would have me believe that the sensing of POS in UL is ~5x stronger than in UR and LL, but the peak heights don't back that up.
Attachment #3: Second trend over 5 hours (since frame writing was re-enabled this morning). Note that MC1 is still free-swinging, but there is no evidence of the ~30 ct steps that were observed some days ago. Also, from my observations yesterday, MC1 glitched multiple times over a few hours' timescale. More data will have to be looked at, but as things stand, Hypothesis #3 below looks the best.
Quote: |
Some possible scenarios (assuming the free swinging spectra look alright and the various resonances are where we expect them to be):
- With the watchdog shut down, the PIT/YAW bias voltages still go to the coils (low-passed by 4 poles @ 1 Hz). So if the glitching happens in this path, we should see it in both the shadow sensors and the DC spot positions on the WFS.
- If the glitching happens in the shadow sensor readout electronics/cabling, we should see it in the shadow sensor channels, but NOT in the DC spot positions on the WFS (as the watchdog is shutdown, so there should be no actuation to the coils based on OSEM signals).
- If we don't see any glitches in WFS spot positions or shadow sensors, then it is indicative of the problem being in the coil driver board / dewhitening board/anti-aliasing board.
- I am discounting the problem being in the Satellite box, as we have switched around the MC1 satellite box multiple times - the glitches remain on MC1 and don't follow a Satellite Box. Of course there is the possibility that the cabling from 1X5/1X6 to the Satellite box is bad.
|
|
Attachment 1: MC1_freeswinging.pdf
Attachment 2: MC1_inmatrix.png
Attachment 3: MC1_sensors.png
1442 | Mon Mar 30 12:29:17 2009 | Yoichi | Configuration | | MC1 glitches not seen during the weekend
The attached is the MC trend for the past 12 hours.
There are no MC1 glitches in the OSEM signals. Moreover, the total amplitude of the drift is smaller than it used to be (now the amplitude is less than 100, but it used to be a few hundred).
There is still a small step in the OSEM signals at around 6 AM this morning, but the jump is insignificant.
The cause of the glitch in the TMP1, DRUM1 and APTEMP (LL, UL and UR coils respectively) at 7AM is not known.
Since the MC1 has been behaving OK during the weekend, I removed the probes from the MC1 coil driver board and locked the MC.
Hopefully the replacement of the broken connector fixed the problem, but I'm not sure. |
Attachment 1: MC_drift.pdf
13168 | Sat Aug 5 11:04:07 2017 | gautam | Update | SUS | MC1 glitches return
See Attachment #1, which is full (2048Hz) data for a 3 minute stretch around when I saw the MC1 glitch. At the time of the glitch, WFS loops were disabled, so the only actuation on MC1 was via the local damping loops. The oscillations in the MC2 channels are the autolocker turning on the MC2 length tickle.
Nikhil and I tried the usual techniques of squishing cables at the satellite box, and also at 1X4/1X5, but the glitching persists. I will try to localize the problem this weekend. This thread details the investigations from the last time something like this happened. In the past, I was able to fix this kind of glitching by replacing the (high speed) current buffer IC LM6321M. These are present in two places: the Satellite box (for the shadow sensor LED current drive) and the coil driver boards. I think we can rule out the slow machine ADCs that supply the static PIT and YAW bias voltages to the optic, as that path is low-passed with a 4th order filter @ 1 Hz, while the glitches that show up in the OSEM sensor channels do not appear to be low-passed, as seen in the zoomed-in view of the glitch in Attachment #2 (but there is an LM6321 in this path as well). |
Attachment 1: MC1_glitch_Aug42017.png
Attachment 2: MC1_glitch_zoomed.png
13178 | Wed Aug 9 15:15:47 2017 | gautam | Update | SUS | MC1 glitches return
Happened again just now, although the characteristics of the glitch are very different from the previous post; it's less abrupt. The only actuation on MC1 at this point was local damping. |
Attachment 1: MC1_glitch.png
13418 | Wed Nov 8 14:28:35 2017 | gautam | Update | General | MC1 glitches return
For at least 3 months there hasn't been a glitch that misaligned MC1 by so much that the autolocker couldn't lock; seems like there was one ~an hour ago.
I disabled the autolocker and the feedback to the PSL, manually aligned MC1 till the MC_REFL spot looked right on the CCD to me, and then re-engaged the autolocker; all seems to have gone smoothly.
|
Attachment 1: MC1_glitchy.png
Attachment 2: 6AFDA67D-79B1-469C-A58A-9EC5F8F01D32.jpeg
13284 | Fri Sep 1 08:25:08 2017 | Steve | Update | SUS | MC1 glitching
At 9:57am, MC1, MC2 and MC3 damping was turned off to watch the glitching action.
Quote: |
There was a pretty large glitch in MC1 about an hour ago. The misalignment was so large that the autolocker wasn't able to lock the IMC. I manually re-aligned MC1 using the bias sliders, and now IMC locks fine. Attached is a 90 second plot of 2K data from the OSEMs showing the glitch. Judging from the wall StripTool, the IMC was well behaved for ~4 hours before this glitch - there is no evidence of any sort of misalignment building up, judging from the WFS control signals.
|
|
Attachment 1: MC1glitching.png
Attachment 2: MC1kicks.png
13286 | Fri Sep 1 16:27:39 2017 | gautam | Update | SUS | MC1 glitching
I re-enabled the MC SUS damping and IMC locking for some IFO work just now.
Quote: |
MC1, MC2 and MC3 damping turned off to see glitching action at 9:57am
|
|
13426 | Tue Nov 14 08:54:37 2017 | Steve | Update | IOO | MC1 glitching
Attachment 1: MC1_glitching.png
13253 | Fri Aug 25 11:11:26 2017 | gautam | Update | General | MC1 kicked again
Looks like MC1 got another big kick just under 4 hours ago. None of the other optics show any evidence of a glitch so it seems unlikely that this was some sort of global event. It's been well behaved for ~2weeks now. IMC was unlocked. I manually re-aligned MC1, at which point the autolocker was able to lock the IMC.
Looking at this plot, it seems that the LR and UL coils saw the largest kicks. UR barely saw it. Not sure what (if anything) to make of this - apparently the optic moved by ~20 urad with the UR magnet approximately at the pivot. |
Attachment 1: MC1_glitch.png
13283 | Thu Aug 31 21:40:24 2017 | gautam | Update | General | MC1 kicked again
There was a pretty large glitch in MC1 about an hour ago. The misalignment was so large that the autolocker wasn't able to lock the IMC. I manually re-aligned MC1 using the bias sliders, and now IMC locks fine. Attached is a 90 second plot of 2K data from the OSEMs showing the glitch. Judging from the wall StripTool, the IMC was well behaved for ~4 hours before this glitch - there is no evidence of any sort of misalignment building up, judging from the WFS control signals. |
Attachment 1: MC1_glitch.png
16159 | Tue May 25 10:22:16 2021 | Anchal, Paco | Summary | SUS | MC1 new input matrix calculated and uploaded
The test was successful and brought the IMC back to its lock point at the end.
We calculated the new input matrix using the same code in scripts/SUS/InMatCalc/sus_diagonalization.py. Attachment 1 shows the results.
The calculations are present in scripts/SUS/InMatCalc/MC1.
We uploaded the new MC1 input matrix at:
Unix Time = 1621963200
UTC     | May 25, 2021 | 17:20:00 UTC
Central | May 25, 2021 | 12:20:00 CDT
Pacific | May 25, 2021 | 10:20:00 PDT
GPS Time = 1305998418
This was done by running python scripts/SUS/general/20210525_NewMC1Settings/uploadNewConfigIMC.py on allegra. The old IMC settings (from before Paco and I started working on the 40m) can be restored by running python scripts/SUS/general/20210525_NewMC1Settings/restoreOldConfigIMC.py on allegra.
Everything looks as stable as before. We'll look into long term trends in a week to see if this helped at all. |
Attachment 1: SUS_Input_Matrix_Diagonalization.pdf
15434 | Sun Jun 28 15:30:52 2020 | gautam | Update | SUS | MC1 sat-box de-lidded
Judging by the summary pages, some 18 hours after this change was made and the board re-installed, the MC1 shadow sensors began to report frequent glitches. I can't think of a plausible causal connection, especially given the 18 hour time lag, but also hard to believe there isn't one? As a result, the IMC is no longer able to stay locked for extended periods of time. I did the usual cable squishing, and also took off the lid to see if that helps the situation.
While the reduced series resistance means there is more current flowing through the slow path,
- There isn't actually an increase in the net current flowing through the satellite box - this change just re-allocates the current from the fast path to the slow path, but by the time it reaches the satellite box, the current is flowing through the same conductor.
- afaik, the current buffers on the coil driver aren't overdriven - they are rated for 300 mA. No individual coil is drawing more than 30 mA.
- the resistors themselves should be running sufficiently below their rated power of 3W (I estimate 2.5 V ^2 / 100 ohms ~ 60 mW).
- The highest current should be through the UL and LR coils according to the voltage outputs from the Acromag. But the UL coil doesn't show significant glitching, and the LL one does despite drawing negligible DC current.
The attached FLIR camera image reinforces what we already know: the thermal environment inside the satellite box is horrible. The absolute temperature calibration may be off, but it was difficult to touch the components with a bare finger, so I'd say it's definitely > 70 C.
Quote: |
I implemented this change today. We only had 100 ohm, 3W resistors in stock (no 200 ohm with adequate power rating). Assuming 10 V is dropped across this resistor, the power dissipation is V^2/R ~ 1 W, so we should have sufficient margin. DCC entry has been updated with new schematic and photo of the component side of the board. Note that the series resistance of the fast actuation path was untouched.
|
|
Attachment 1: 20200628T144138.jpg
15435 | Sun Jun 28 16:29:58 2020 | rana | Update | SUS | MC1 sat-box de-lidded
does the FLIR have an option to export an image with a colorbar?
How about just leave the lid open? or more open? I don't know what else can be done in the near term. Maybe swap with the SRM sat box to see if that helps? |
15436 | Sun Jun 28 17:36:35 2020 | gautam | Update | SUS | MC1 sat-box de-lidded
Hmm, I can't seem to export with the colorbar - might just be my phone though. I tried to add some "cursors" with the temperature at a few spots, but the font color contrast is poor, so you have to squint really hard to see the temperatures in the photo I attached.
I'll leave the MC1 box open overnight and see if that improves the situation, and if not, I'll switch in the SRM satellite box tomorrow.
Quote: |
does the FLIR have an option to export an image with a colorbar?
How about just leave the lid open? or more open? I don't know what else can be done in the near term. Maybe swap with the SRM sat box to see if that helps?
|
|
15438 | Mon Jun 29 11:55:46 2020 | gautam | Update | SUS | MC1 sat-box de-lidded
There was no improvement to the situation overnight. So, I did the following today:
- Ramped bias voltages for SRM and MC1 to 0, shutdown watchdogs.
- Switched SRM and MC1 satellite boxes. The SRM satellite box lid was opened, while the MC1 lid was left open. The boxes have also been re-labelled lest there be some confusion about which box belongs where.
- Restored watchdogs and bias voltages. Curiously, the MC1 optic now only requires half the bias voltages it did before to have the correct DC alignment for the optic. The Satellite box is just supposed to be a passive conduit for the drive current, so this is indicative of some PCB traces/cabling being damaged inside what was previously the MC1 satellite box?
IMC is now locked again, I will monitor for glitching/stability.
Update 6pm PDT: as shown in Attachment #1, there is a huge difference in the stability of the lock after the sat box swap. Let's hope it stays this way for a while...
Quote: |
I'll leave the MC1 box open overnight and see if that improves the situation, and if not, I'll switch in the SRM satellite box tomorrow.
|
|
Attachment 1: SatBoxSwap.jpg
15440 | Mon Jun 29 20:30:53 2020 | Koji | Update | SUS | MC1 sat-box de-lidded
Sigh. Do we have a spare sat box? |
15712 | Mon Dec 7 11:25:31 2020 | gautam | Update | SUS | MC1 suspension glitchy again
The MC1 suspension has begun to show evidence of glitches again, from Friday/Saturday. You can look at the suspension Vmon tab a few days ago and see that the excess fuzz in the Vmon was not there before. The extra motion is also clearly evident on the MCREFL spot. I noticed this on Saturday evening as I was trying to recover the IMC locking, but I thought it might be Millikan so I didn't look into it further. Usually this is symptomatic of some Satellite box issues. I am not going to attempt to debug this anymore. |
16138 | Thu May 13 11:55:04 2021 | Anchal, Paco | Update | SUS | MC1 suspension misbehaving
We came in the morning with the following scene on the zita monitor:

The MC1 watchdog was tripped, and it seemed like the IMC struggled all night with misconfigured WFS offsets. After restoring the MC1 WD, clearing the WFS offsets, and seeing the suspension damp, the MC caught lock. It wasn't long before the MC unlocked and the MC1 WD tripped again.
We tried a few things, not sure in what order:
- Letting suspension loops damp without the WFS switched on.
- Letting suspension loops damp with PSL shutter closed.
- Restoring old settings of MC suspension.
- Doing burt restore with command:
burtwb -f /opt/rtcds/caltech/c1/burt/autoburt/snapshots/2021/May/12/08:19/c1mcsepics.snap -l /tmp/controls_1210513_083437_0.write.log -o /tmp/controls_1210513_083437_0.nowrite.snap -v
Nothing worked. We kept seeing that the UL PD var on MC1 shows kicks every few minutes, which jolt the suspension loops. So we decided to record some data with the PSL shutter closed and just the suspension loops on. Then we switched off the loops and recorded some data with the optic freely swinging. Even when the optic was freely swinging, we could see impulses in the MC1 OSEM UL PD var which were completely uncorrelated with any seismic activity. In fact, last night was one of the calmer nights, seismically speaking. See Attachment 2 for the time series of the OSEM PD variance. The red region is when the coil outputs were disabled.
Inference:
- We think something is wrong with the UL OSEM of MC1.
- It seems to show false spikes of motion when there is no such spike present in any other OSEM PD or the seismic data itself.
- Currently, this is still the case. We sometimes get 10-20 min of "Good behavior" when everything works.
- But then the impulses start occurring again and overwhelm the suspension loops and WFS loops.
- Note, that other optic in IMC behaved perfectly normally throughout this time.
- In the past, it seems like satellite box has been the culprit for such glitches.
- We should look into debugging this as ifo is at standstill because of this issue.
- Earlier, Gautam would post Vmon signals of the coil outputs only to show the glitches. We wanted to see if switching off the loops helps, so we recorded the OSEM PDs this time.
- In hindsight, we should probably look at the OSEM sensor outputs directly too rather than looking at the variance data only. I can do this if people are interested in looking at that too.
- We've disabled the coil ouputs in MC1 and PSL shutter is off.
Edit Thu May 13 14:47:25 2021:
Added the OSEM sensor time series to the plots as well. The UL OSEM sensor data is the only channel which jumps haphazardly (even during the free-swinging time), varying by +/- 30. The other sensors only show some noise around a stable position, as should be the case for a freely suspended optic. |
Attachment 2: MC1_Glitches_Invest2.pdf
14836 | Thu Aug 8 12:01:12 2019 | gautam | Update | IOO | MC1 suspension oddness
At ~1am PDT today, all the MC1 shadow sensor readbacks (fast CDS channels and Slow Acromag channels, latter not shown here) went to negative values. Of course a negative value makes no sense. After ~3 hours, they came back to positive values again. But since then, the shadow sensor RMS noise has been significantly higher in the >20 Hz band, and there are frequent glitches which kick the suspension. The IMC has been having trouble staying locked. I claim that this has to do with the Satellite box.
No action being taken now while I work on the ALS. In the past the problem has fixed itself. |
Attachment 1: MC1_suspension.png
Attachment 2: MC1_suspension.pdf
14842 | Mon Aug 12 19:58:23 2019 | gautam | Update | IOO | MC1 suspension oddness
Repair plan:
- Get "spare" satellite box working --- Chub
- According to elog 14441, this box has flaky connectors which probably need to be remade
- Re-make the 64-pin IDC crimped connection on the cable from the coil driver board to sat. box, at the Satellite box end --- Chub and gautam
Any other ideas? The problem persists and it's annoying that the IMC cannot be locked. |
2011 | Mon Sep 28 02:24:05 2009 | rana | Update | Locking | MC1/3 Dewhitening found OFF: Turned back ON
While trying to make the OAF work, I found that the XYCOM switches for MC1/3 had been set the bad way for a while. This means that the hardware filters were bypassed and that MC1 & MC3 were moving around too much at high frequency, possibly causing trouble with the locking. I have put them back into the default position.
On Friday, Jenne and I were playing around with turning the dewhitening off/on to see if it affected the OAF stability. At the time, I didn't pay too much attention to what the state was. Looks like it was in the wrong state (hardware bypassed) when we found it. For the OAF work, we generally want it in that bypassed state, but it's bad because it makes noise in the interferometer. The bits in question are bits 16-23 on the XYCOM screen.
I have updated the snapshot and set the screen in the appropriate settings. I used a swept sine measurement to verify the filter state. In the attached plot, green corresponds to XYCOM green and red corresponds to red. |
Attachment 1: C1SUS_SRM_XYCOM1.png
Attachment 2: Untitled.png
9521 | Mon Jan 6 18:32:17 2014 | RANA | Update | IOO | MC1/3 kicked this morning at 8:30
The trend shows a big jolt to the MC1/3 pointing this morning at 8:30.
Was anyone working anywhere near there today? There is no elog.
If not, we will have to put a 'no janitor' sign on all of the 40m doors permanently to prevent mops from misaligning our interferometer. |
Attachment 1: kicked.png