ID   Date   Author   Type   Category   Subject
  14179   Thu Aug 23 15:26:54 2018   Jon   Update   IMC   MC/PMC trouble

I tried unsuccessfully to relock the MC this afternoon.

I came in to find it in a trouble state with a huge amount of noise on C1:PSL-FSS_PCDRIVE visible on the projector monitor. Light was reaching the MC but it was unable to lock.

  • I checked the status of the fast machines on the CDS>FE STATUS page. All up.
  • Then I checked the slow machine status. c1iscaux and c1psl were both down. I manually reset both machines. The large noise visible on C1:PSL-FSS_PCDRIVE disappeared.
  • After the reset, light was no longer reaching the MC, which I take to mean the PMC was not locked. On the PSL>PMC page, I blanked the control signal, reenabled it, and attempted to relock by adjusting the servo gain as Gautam had showed me before. The PMC locks were unstable, with each one lasting only a second or so.
  • Next I tried restoring the burt states for c1iscaux and c1psl from a snapshot taken earlier today, before the machine reboots. That did not solve the problem either.
  14180   Thu Aug 23 16:05:24 2018   Koji   Update   IMC   MC/PMC trouble

I don't know what had been wrong, but I could lock the PMC as usual.
The IMC got relocked by AutoLocker. I checked the LSC and confirmed at least Y arm could be locked just by turning on the LSC servos.

  14181   Thu Aug 23 16:10:13 2018   not Koji   Update   IMC   MC/PMC trouble

Great, thanks!

Quote:

I don't know what had been wrong, but I could lock the PMC as usual.
The IMC got relocked by AutoLocker. I checked the LSC and confirmed at least Y arm could be locked just by turning on the LSC servos.

 

  4274   Fri Feb 11 16:43:09 2011   steve   Update   VIDEO   MC1 & 3 video monitor

I set up video monitoring of MC1 and MC3

Attachment 1: P1070415.JPG
  14851   Tue Aug 20 19:05:24 2019   Koji   Update   CDS   MC1 (and MC3) troubleshoot

Started the troubleshoot from the MC1 issue. Gautam showed me how to use the fake PD/LED pair to diagnose the satellite box without involving the suspension mechanics.

This revealed that MC1 has frequent light-level glitches which are common to all five sensors. This feature does not exist in the test with the MC3 satellite box. I will open and check the MC1 satellite box tomorrow to find the cause of these common glitches. MC1 is currently shut down and undamped.

BTW, during the MC3 test, I found that J2 of the satellite box (male D-sub) has all the pins sitting too low (or too short?). I brought the box outside and found that the housing of this connector had half fallen apart. The connector was reassembled and the metal parts of the housing were bent back so that the housing holds the connector body tightly.

The MC3 satellite box was restored and connected to the cables. Since I touched this box, it is still under probation.

Attachment 1: Screenshot_from_2019-08-20_17-26-01.png
Attachment 2: Screenshot_from_2019-08-20_17-43-03.png
  13196   Fri Aug 11 17:36:47 2017   gautam   Update   SUS   MC1 <--> MC3

About 30mins ago, I saw another glitch on MC1 - this happened while the Watchdog was shutdown.

In order to further narrow down the cause of the glitch, we switched the Coil Driver Board --> Satellite box DB(15?) connectors on the coil drivers between MC1 and MC3 coil driver boards. I also changed the static PIT/YAW bias voltages to MC1 and MC3 such that MC-REFL is now approximately back to the center of the CCD monitor.

 

Attachment 1: MC1_glitch_watchdog_shutdown.png
  13220   Wed Aug 16 19:50:17 2017   gautam   Update   SUS   MC1 <--> MC3 switched back

Now that all the CDS overview lights are green, I decided to switch back the coil driver outputs to their original state so that the MC optics could be damped and the IMC relocked. I also restored the static PIT/YAW bias values to their original values.

MC1 has been quiet over the last couple of days, lets see how it behaves in the next few days. In all the glitches I have observed, if the IMC is locked and WFS loops are enabled, the loops are able to correct for the DC misalignment caused by the glitch. But the mcwfs off script is currently set up in such a way that the output history is cleared between IMC locks. I made two copies of the mcwfson/mcwfsoff scripts, called mcwfsunhold/mcwfshold respectively. They live in /opt/rtcds/caltech/c1/scripts/MC/WFS. I've also modified the autolocker script to call these modified scripts, such that when the IMC loses lock, the WFS servo outputs are held, while the input is turned off. The hope is that in this configuration, the autolocker can catch a lock even if there is a glitch on MC1.

I haven't tried locking the arms yet, but I think other IFO work discussed at the meeting (like arm loss estimation / cavity scans etc) can proceed.
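For reference, the "hold" behaviour described above amounts to switching the WFS servo inputs off while leaving the accumulated outputs in place. A minimal sketch of that idea is below; the channel prefixes and the switch field name are placeholders, not the actual mcwfshold/mcwfsunhold scripts:

from epics import caput   # pyepics

# placeholder filter-bank prefixes for the IMC WFS servo outputs
WFS_BANKS = ["C1:IOO-WFS1_PIT", "C1:IOO-WFS1_YAW",
             "C1:IOO-WFS2_PIT", "C1:IOO-WFS2_YAW"]

def mcwfshold():
    """Freeze the servos: inputs off, outputs (history) deliberately NOT cleared."""
    for fb in WFS_BANKS:
        caput(fb + "_INPUT_SWITCH", 0)   # placeholder field name for the input switch

def mcwfsunhold():
    """Re-enable the inputs so the servos resume from the held outputs."""
    for fb in WFS_BANKS:
        caput(fb + "_INPUT_SWITCH", 1)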

Quote:

In order to further narrow down the cause of the glitch, we switched the Coil Driver Board --> Satellite box DB(15?) connectors on the coil drivers between MC1 and MC3 coil driver boards. I also changed the static PIT/YAW bias voltages to MC1 and MC3 such that MC-REFL is now approximately back to the center of the CCD monitor.

 

 

  13225   Thu Aug 17 11:17:49 2017   gautam   Update   SUS   MC1 <--> MC3 switched back

Seems like this modification didn't really work. There were several large MC1 glitches, and one of them misaligned MC1 so much that the IMC didn't relock for the last ~6 hours. I re-aligned MC1 manually, and now it is locked fine.

Quote:

Now that all the CDS overview lights are green, I decided to switch back the coil driver outputs to their original state so that the MC optics could be damped and the IMC relocked. I also restored the static PIT/YAW bias values to their original values.

MC1 has been quiet over the last couple of days, lets see how it behaves in the next few days. In all the glitches I have observed, if the IMC is locked and WFS loops are enabled, the loops are able to correct for the DC misalignment caused by the glitch. But the mcwfs off script is currently set up in such a way that the output history is cleared between IMC locks. I made two copies of the mcwfson/mcwfsoff scripts, called mcwfsunhold/mcwfshold respectively. They live in /opt/rtcds/caltech/c1/scripts/MC/WFS. I've also modified the autolocker script to call these modified scripts, such that when the IMC loses lock, the WFS servo outputs are held, while the input is turned off. The hope is that in this configuration, the autolocker can catch a lock even if there is a glitch on MC1.

I haven't tried locking the arms yet, but I think other IFO work discussed at the meeting (like arm loss estimation / cavity scans etc) can proceed.

 

 

Attachment 1: MC1_misaligned.png
Attachment 2: MC1_glitch.png
  13226   Thu Aug 17 17:33:01 2017   gautam   Update   SUS   MC1 <--> MC3 switched back

That's why the Autolocker clears the outputs; we don't want to be holding the offsets from the last ms of lock when it was all messed up. Instead it would be best to have a slow (~mHz) relief script that takes the WFS controls and puts them onto the MC SUS sliders. This would then re-align the MC to the input beam rather than the input to the MC, which is not the best idea.
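For illustration, a slow relief loop of the kind described above could be sketched like this; the channel names, gain, and cadence below are placeholders (this is not the actual 40m script), and the sign/scaling of the offload would have to be checked against the real output matrix:

import time
from epics import caget, caput   # pyepics

WFS_OUT     = "C1:IOO-MC1_PIT_OUTPUT"   # placeholder: WFS pitch control signal going to MC1
SUS_SLIDER  = "C1:SUS-MC1_PIT_COMM"     # placeholder: slow pitch bias slider for MC1
RELIEF_GAIN = 1e-4                      # fraction of the WFS output offloaded per step (~mHz bandwidth)
PERIOD      = 10.0                      # seconds between steps

while True:
    wfs_dc = caget(WFS_OUT)
    # nudge the slider so that the WFS control signal relaxes toward zero
    caput(SUS_SLIDER, caget(SUS_SLIDER) + RELIEF_GAIN * wfs_dc)
    time.sleep(PERIOD)

The tiny per-step gain keeps the offload far below the WFS servo bandwidth, so the two loops should not fight each other.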

Quote:

Seems like this modification didn't really work.

 

  1703   Thu Jun 25 21:00:30 2009   Clara   Update   PEM   MC1 Accelerometer set moved again; new XLR cables

I moved the MC1 set of accelerometers. I might have bumped things. If things aren't working, look around the MC1 chamber.

Also, I constructed two new XLR cables, but have not tested them yet.

  3154   Thu Jul 1 14:28:39 2010   Jenne   Update   PEM   MC1 Accelerometers in place

Kevin sent me an email with top secret info on where one of the other accelerometer cubes was hiding (it was with his shaker setup on the south side of the SP table), so I took it and put the 3 MC1 accelerometers in their 3-axis configuration. 

Also, I changed the orientation of both sets of 3 axis accelerometers to reflect a Right Handed configuration, to go along with the new and improved IFO configuration.  Previously (including last night), the MC2 accelerometers were together in a Left Handed configuration.

  13878   Tue May 22 17:26:25 2018   gautam   Update   IOO   MC1 Coil Driver pulled out

I have pulled out MC1 coil driver board from its Eurocrate, so IMC is unavailable until further notice. Plans:

  1. Thick film --> Thin Film
  2. AD797 --> Op27
  3. Remove Pots in analog actuation path.
  4. Measure noise
  5. Route HPF signal (UL DAQ Mon) to front panel. I think we should use the SMA connectors. That way, we have DC and AC voltage monitors available for debugging.

If there are no objections, I will execute Step #5 in the next couple of hours. I'm going to start with Steps 1-4.

  13880   Tue May 22 23:28:01 2018   gautam   Update   IOO   MC1 Coil Driver pulled out

This work is now complete. MC1 coil driver board has been reinstalled, local damping of MC1 restored, and IMC has been locked. Detailed report + photos to follow, but measurement of the noise (for one channel) on the electronics workbench shows a broadband noise level of 5nV/rtHz (yes) around 100Hz, which is lower than what was measured here and consistent with what we expect from LISO modeling (with fast input terminated with 50ohm, slow input grounded).

Quote:

I have pulled out MC1 coil driver board from its Eurocrate, so IMC is unavailable until further notice.

 

  13883   Wed May 23 17:58:48 2018   gautam   Update   IOO   MC1 Coil Driver pulled out
  • Marked up schematic + photo post changes uploaded to DCC page.
  • There was a capacitor in the DAQ monitor path making an 8 kHz corner, which I have now removed (since the main point of this front-panel HPF monitor point is to facilitate easy coil driver noise debugging, and I wanted to be able to use the SR785 out to high frequencies without accounting for an additional low pass). The transfer function from the front-panel LEMO input to the front-panel LEMO monitor is shown in Attachment #1.
  • Voltage noise measured at DB25 output (with the help of a breakout cable and SR560 G=100) with front panel LEMO input terminated to 50ohm, Bias input grounded, and pin1 of U21A grounded (i.e. watchdog enabled state) is shown in Attachment #2. This measurement was taken on the electronics bench.
  • Inside the lab (i.e. coil driver board plugged into eurocrate), the noise measured in the same way looks identical to what was measured in elog13870.
  • I tried repeating the measurement by powering the board using a bench power supply and grounding the bias input voltage near 1X6, and the strange noise profile persists. So this supports the hypothesis that some kind of environmental pickup is causing this noise profile. Needs more investigation.

In any case, if it is indeed true that the optic sees this current noise, the place to make the measurement is probably the Sat. Box. Who knows what the pickup is over the ~15m of cable from 1X6 to the optic.

Quote:

Detailed report + photos to follow

 

Attachment 1: MC1_monitorTF.png
Attachment 2: MC1_ULnoise.pdf
  13896   Wed May 30 10:17:46 2018   gautam   Update   IOO   MC1 Coil Driver pulled out

[rana,gautam]

Summary:

Last night, Rana fact-checked my story about the coil driver noise measurement. Conclusions:

  1. There is definitely pickup of strong lines (see Attachment #1; these are hypothesized to come from switching power supplies). Moreover, they breathe. Check out Rana's twitter page for the video.
  2. The lines are almost (but not quite) at integer multiples of 19.5 kHz. The cause of this anharmonicity is to be puzzled out.
  3. When the coil driver board is located ~1m away from the SR785 and the bench supply powering it, even though the lines are visible in the spectrum, the low frequency shape does not show the weird broad features I reported here. The measured noise floor level is ~5nV/rtHz, which is consistent with LISO noise + SR560 input noise (see Attachment #2). However, there is still some excess noise at 100 Hz above what the LISO model leads us to expect. 
  4. The location of the coil driver board and SR560 relative to the SR785 and the bench power supply I used to power the coil driver board can increase the line heights by ~x50. 
  5. The above changes the shape of the low frequency part of the spectrum as well, and it looks more like what is reported in elog13870. The hypothesis is that the high frequency lines are downconverted in the SR560.

Note: All measurements were made with the fast input of the coil driver board terminated with 50ohms and bias input shorted to ground with a crocodile clip cable.

Next steps:

The first goal is to figure out where this pickup is happening, and if it is actually going to the optic. To this end, I will put a passive 100 kHz filter between the coil driver output and the preamp (Busby Box instead of SR560). By getting a clean measurement of the noise floor with the coil driver board in the Eurocrate (with the bias input driven), we can confirm that the optic isn't being buffeted by the excess coil driver noise. If we confirm that the excess noise is not a measurement artefact, we need to think about where the pickup is actually happening and come up with mitigation strategies.

RXA: good section EMI/RFI in Op Amp Applications handbook (2006) by Walt Jung. Also this page: http://www.electronicdesign.com/analog/what-was-noise

Attachment 1: EM_pickup.pdf
Attachment 2: coilDriverNoiseComparison.pdf
  16157   Mon May 24 19:14:15 2021   Anchal, Paco   Summary   SUS   MC1 Free Swing Test set to trigger

We've set a free swing test to trigger at 3:30 am tomorrow for MC1. The script for tests is running on tmux session named 'freeSwingMC1' on rossa. The script will run for about 4.5 hrs and we'll correct the input matrix tomorrow from the results. If anyone wants to work during this time (3:30 am to 8:00 am), you can just kill the script by killing tmux session on rossa. ssh into rossa and type tmux kill-session -t freeSwingMC1.
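For context, the input matrix correction from a free-swing test boils down to measuring each OSEM's signed response at each suspension eigenfrequency, assembling the sensing matrix, and inverting it. A rough sketch of that calculation is below; the eigenfrequencies, sample rate, and the random placeholder arrays (standing in for the real free-swing DQ data) are assumptions, and this is not the script running in the tmux session:

import numpy as np
from scipy.signal import csd, welch

fs = 2048.0                       # assumed OSEM DQ sample rate
nseg = int(64 * fs)               # 64 s Welch segments -> ~16 mHz resolution

# placeholder eigenfrequencies (Hz); use the measured POS/PIT/YAW/SIDE peak frequencies
peaks = {"POS": 0.97, "PIT": 0.75, "YAW": 0.80, "SIDE": 0.99}

# placeholders for the free-swing C1:SUS-MC1_{UL,UR,LR,LL,SD}SEN_DQ time series
rng = np.random.default_rng(0)
osem = {n: rng.standard_normal(int(600 * fs)) for n in ("UL", "UR", "LR", "LL", "SD")}

ref = osem["UL"]                  # amplitude/sign reference sensor
_, Prr = welch(ref, fs=fs, nperseg=nseg)
A = np.zeros((len(peaks), len(osem)))
for i, f0 in enumerate(peaks.values()):
    for j, x in enumerate(osem.values()):
        f, Prx = csd(ref, x, fs=fs, nperseg=nseg)
        k = np.argmin(np.abs(f - f0))
        A[i, j] = np.real(Prx[k] / Prr[k])   # signed response of this OSEM, relative to UL, at the mode

# the sensing matrix maps DOFs to sensors, so the input matrix is its pseudo-inverse
inmat = np.linalg.pinv(A.T)       # shape: (4 DOFs) x (5 sensors)
print(np.round(inmat, 4))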

Quote:
 

We should redo the MC1 input matrix optimization and the coil balancing afterward as we did everything based on the noisy UL OSEM values.

 

  16209   Thu Jun 17 11:45:42 2021   Anchal, Paco   Update   SUS   MC1 Gave trouble again

TL;DR

MC1 LL Sensor showed signs of fluctuating large offsets. We tried to find the issue in the box but couldn't find any. On power cycling, the sensor got back to normal. But in putting back the box, we bumped something and c1susaux slow channels froze. We tried to reboot it, but it didn't work and the channels do not exist anymore.


Today morning we came to find that IMC struggled to lock all night (See attachment 1). We kind of had an indication yesterday evening that MC1 LL Sensor PD had a higher variance than usual and Paco had to reset WFS offsets because they had integrated the noise from this sensor. Something similar happened last night, that a false offset and its fluctuation overwhelmed WFS and MC1 got misaligned making it impossible for IMC to get lock.

In the morning, Paco again reset the WFS offsets, but now we were sure that the PD variance from the MC1 LL OSEM was very high. See Attachment 2 to see how only 1 OSEM is showing higher noise in comparison to the other 4 OSEMs. This behavior is similar to what we saw earlier in 16138 but for the UL sensor. Koji and I fixed it in 16139 and we tested all other channels too.

So Paco and I went ahead and took out the MC1 satellite amplifier box S2100029 (D1002812), opened the top, and checked all the PD channel testpoints with no input current. We didn't find anything odd. Next we checked the LED driver circuit testpoints with LED OUT and GND shorted. We got 4.997 V on all LED MON testpoints, which indicates normal functioning.

We just hooked everything back up on the MC1 satellite box and checked the sensor channels again on the MEDM screens. To our surprise, it started functioning normally. So maybe just a power cycle was required, but we still don't know what caused this issue.

BUT when I (Anchal) was plugging back the power cables and DB25 connectors on the back side in 1X4 after moving the box back into the rack, we found that the slow channels stopped updating. They just froze!

We got worried for some time as the negative power supply indicator LEDs on the acromag chassis (which is just below the MC1 satellite box) were not ON. We checked the power cables and had to open the side panel of the 1X4 rack to check how the power cables are connected. We found that there is no third wire in the power cables and the acromag chassis only takes in single rail supply. We confirmed this by looking at another acromag chassis on Xend. We pasted a note on the acromag chassis for future reference that it uses only positive rails and negative LED monitors are not usually ON.

Back to solving the frozen acromag issue, we conjectured that maybe the ethernet connection is broken. The DB25 cables for the satellite box are a bit short and pull other cables around with them when connected. We checked all the ethernet cabling; it looked fine. On the c1susaux computer, we saw that the monitor LED for ethernet port 2, which is connected to the acromag chassis, is solid ON while the other one (which is probably the connection to the switch) is blinking.

We tried telnetting to the computer; it didn't work. The host refused the connection from the pianosa workstation. We tried pinging the c1susaux computer, and that worked. So we concluded that most probably the EPICS modbus server hosting the slow channels on c1susaux is unable to communicate with the acromag chassis, hence the solid LED light on that ethernet port instead of a blinking one. We checked the computer restart procedure page for SLOW computers on the wiki and found that it said if telnet is not working, we can hard reboot the computer.
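A scriptable version of the reachability checks above (ping works, telnet refused) is to probe the relevant TCP ports directly; the sketch below assumes the standard telnet port (23) and the default EPICS CA server port (5064) on c1susaux:

import socket

def port_open(host, port, timeout=2.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for port, name in [(23, "telnet"), (5064, "EPICS CA")]:
    state = "open" if port_open("c1susaux", port) else "closed/refused"
    print(f"c1susaux {name} (port {port}): {state}")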

We hard rebooted the computer by long-pressing the power button and then pressing it back on. We did this process 3 times with the same result. The ethernet port 2 LED (Acromag chassis) would blink but the ethernet port 1 LED (connected to the switch) would not turn ON. We now cannot even ping the machine, let alone telnet into it. All SUS slow monitor channels are absent now, of course. We also tried pressing the reset button once (which the manual said would reboot the machine), but we got the same outcome.

Now, we decided to stop poking around until someone with more experience can help us on this.


Bottomline: We don't know what caused the LL sensor issue and hence it has not been fixed. It can happen again. We lost all C1SUSAUX slow channels which are the OSEM and COIL slow monitor channels for PRM, BS, ITMX, ITMY, MC1, MC2 and MC3.

Attachment 1: SummaryScreenShot.png
Attachment 2: MC1_LL_SENSOR_DEAD.png
  12657   Fri Dec 2 11:56:42 2016   gautam   Update   LSC   MC1 LEMO jiggled

I noticed 2 periods of frequent IMC locklosses on the StripTool trace, and so checked the MC1 PD readout channels to see if there were any coincident glitches. It turns out there weren't, BUT the LR and UR signals had changed significantly over the last couple of days, which is when I've been working at 1X5. The fast LR readback was actually showing ~0, but the slow monitor channel had been steady, so I suspected some cabling shenanigans.

Turns out, the problem was that the LEMO connector on the front of the MC1 whitening board had gotten jiggled ever so slightly - I re-jiggled it till the LR fast channel registered a similar number of counts to the other channels. All looks good for now. For good measure, I checked the 3 day trend of the fast PD readback for all 8 SOS optics (40 channels in all; I didn't look at the ETMs as their whitening boards are at the ends), and everything looks okay... This whole situation seems very precarious to me; perhaps we should have more robust signal routing from the OSEMs to the DAQ that is more immune to cable touching etc...

  4888   Sun Jun 26 22:38:20 2011   rana   Update   CDS   MC1 LR dead for > 1 month; now revived temporarily

 Since the MC1 LRSEN channel wasn't working, my input matrix diagonalization wasn't working. So I decided to fix it somehow.

I went to the rack and traced the signal: first at the LEMO monitor on the whitening card, secondly at the 4-pin LEMO cable which goes into the AA chassis.

The signal existed at the input to the AA chassis but not in the screen. So I pressed the jumper wire (used to be AA filter) down for the channel corresponding to the MC1 LRSEN channel.

It now has come back and looks like the other sensors. As you can see from this plot and Joe's entry from a couple weeks ago, this channel has been dead since May 17th.

The ELOG reveals that Kiwamu caught Steve doing some (un-elogged) fooling around there. Burnt Toast -> Steve.

bt.jpg

993190663   =      free swinging ringdown restarted again

Attachment 1: lrsen.png
  5938   Fri Nov 18 01:12:14 2011   Suresh   Update   CDS   MC1 LR dead for > 1 month; now revived temporarily

[Den, Mirko, Suresh]

    We were investigating why there is no correlation between MC1 OSEM signals and seismic motion. During this we noticed a recurrence of the old problem of the MC1_LR sensor being dead. I went and pressed down the chip holders where the AA filters used to sit and which now hold the jumper wire. The board is large and flexible; it is quite likely some solder joint is broken on the MC1_LR path on this board.

   The signal came back to life and is okay now. But it can break off again any time.

 

 

Quote:

 Since the MC1 LRSEN channel wasn't working, my input matrix diagonalization wasn't working. So I decided to fix it somehow.

I went to the rack and traced the signal: first at the LEMO monitor on the whitening card, secondly at the 4-pin LEMO cable which goes into the AA chassis.

The signal existed at the input to the AA chassis but not in the screen. So I pressed the jumper wire (used to be AA filter) down for the channel corresponding to the MC1 LRSEN channel.

It now has come back and looks like the other sensors. As you can see from this plot and Joe's entry from a couple weeks ago, this channel has been dead since May 17th.

The ELOG reveals that Kiwamu caught Steve doing some (un-elogged) fooling around there. Burnt Toast -> Steve.

bt.jpg

993190663   =      free swinging ringdown restarted again

 

  4776   Wed Jun 1 11:31:50 2011   josephb   Update   CDS   MC1 LR digital reading close to zero, readback ~0.7 volts

There appears to be a bad cable connection somewhere on the LR sensor path for the MC1 optic.

The channel C1:SUS-MC1_LRPDMon is reading back 0.664 volts, but the digital sensor channel, C1:SUS-MC1_LRSEN_INMON, is reading about -16.  This should be closer to +1000 or so.

We've temporarily turned off the LRSEN filter module output while this is being looked into.

I briefly went out and checked the cables around the whitening and AA boards for the suspension sensors, but nothing changed even after wiggling and making sure everything was plugged in solidly. There was one semi-loose connection; it wasn't on the MC1 board, but I pushed it all the way in anyway. The monitor point on the AA board looks correct for the LR channels, although ITMX LR struck me as being very low at about -0.05 Volts.

According to data viewer, the MC1 LR sensor channel went bad roughly two weeks ago, around 00:40 on 5/18 UTC, or 17:40 on 5/17 PDT.

 

UPDATE:

It appears the AA board (or possibly the SCSI cable connected to it) is the problem in the chain.

  16894   Mon Jun 6 21:01:22 2022   yuta   Update   IMC   MC1 OSEM sensor sign flipped, MC1/2/3 free swinging overnight for inmat diagonalization

[Tomislav Andric, Rana, Yuta]

We put -1 to MC1 OSEM sensor gains and re-tuned MC1 damping.
We also kicked MC1, MC2, MC3 tonight for input matrix diagonalization.

MC1 damping investigations:
 We put -1 to MC1 OSEM sensor gains so that UL/UR/LR/LL/SDSEN_OUT will be positive like other optics.
 OSEM damping filter gains were adjusted.
 We have also checked if having +1 for all UL/UR/LR/LL/SDCOIL_GAIN is correct or not. It has been like this at least for the past year.
 It should be -1 for UR and LL to account for the magnets, but if we did put -1 for them, a kick in C1:SUS-MC1_PIT_OFFSET mostly gave a yaw kick and a kick in C1:SUS-MC1_YAW_OFFSET mostly gave a pitch kick.
 So, we reverted them to be +1.
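The PIT/YAW swap described above is what one would expect if UR and LL carried an extra sign flip on top of the nominal face-coil output matrix. A small numerical check of that reasoning, assuming the standard SOS sign convention:

import numpy as np

coils = ["UL", "UR", "LR", "LL"]
# nominal DOF -> coil map; columns are POS, PIT, YAW
out = np.array([[1,  1,  1],    # UL
                [1,  1, -1],    # UR
                [1, -1, -1],    # LR
                [1, -1,  1]])   # LL

flip = np.diag([1, -1, 1, -1])  # extra -1 on UR and LL, as flipped magnets would require
pit_kick = np.array([0, 1, 0])  # a pure PIT offset

print(dict(zip(coils, (out @ pit_kick).tolist())))          # {UL:1, UR:1, LR:-1, LL:-1}: the PIT pattern
print(dict(zip(coils, (flip @ out @ pit_kick).tolist())))   # {UL:1, UR:-1, LR:-1, LL:1}: the YAW pattern
# i.e. with UR/LL flipped, a PIT offset actuates YAW, consistent with the observation above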

Input matrix diagonalization:
 We also kicked MC1, MC2, and MC3 tonight for input matrix diagonalization.
 Kick was done manually at the following times local.
  - MC1 20:08 June 6th, 2022
  - MC2 20:24 June 6th, 2022
  - MC3 20:21 June 6th, 2022
 We will leave watchdogs shutdown to free swing overnight (damping loops are "on").
 This will help get better angular sensor from OSEMs to calibrate WFS signals.

Next:
 - Investigate why MC1 coils gains have +1 for all
 - Calculate input matrix. Make sure SUSPOS/PIT/YAW/SIDE_IN will be in the units of um or urad.

Suggestions:
 - Add filter ramp time of 1sec for all by default
 - Make null stream channel from input matrix for diagnostics

Attachment 1: Screenshot_2022-06-06_21-05-28.png
  16895   Mon Jun 6 22:08:55 2022   Koji   Update   IMC   MC1 OSEM sensor sign flipped, MC1/2/3 free swinging overnight for inmat diagonalization

Note that MC1 has a new style sat amp because the old one collapsed. The sign flip might have been the result of the replacement

 

  1144   Tue Nov 18 19:37:23 2008   Yoichi   Update   IOO   MC1 OSEM signals sign flipped and c1susvme1 restart problem
Around 2PM, MC1 started to swing crazily.
The damping feedback was not working and it was actually exciting the mirror wildly.
It turned out that the sign of the UR and UL OSEM signals flipped at that time.
Restarting c1sosvme fixed the problem.

While I was looking for the cause of the problem, c1susvme1 and c1susvme2 failed several times.
I don't know if it is related to this problem.
Now it is not trivial to restart c1susvme1. It fails to restart if you just power cycle it.
Alberto and I had to connect an LCD and a keyboard to it to see what was going on. After pushing the reset button on the front panel,
I had to press Ctrl+x. Otherwise, the state LED of c1susvme1 stays red and nothing happens.
After Ctrl+x, the boot screen came up but the boot sequence failed and an error message something like the following was shown:
"PXE Boot failed, check the cable"
So I swapped the network cable with c1susvme2, which was already up and running.
This time, c1susvme1 started fine and surprisingly, c1susvme2 stayed alive.
Currently, both c1susvme1 and c1susvme2 are up and running with the LAN cables swapped.
We have to check the LAN cables.
  17238   Mon Nov 7 20:00:37 2022   Anchal   Update   SUS   MC1 OSEMs output is weird

Following up, I tried to do this exercise with MC1 and MC3. While MC3 shows the expected minute corrections to the previous values, MC1 showed much larger corrections, which led me to investigate further. Koji suggested taking a transfer function between MC_F and the OSEM outputs for both MC1 and MC3 in the same way to see if something is different. And Koji was absolutely right. The MC1 MC_F to OSEM output transfer function has a frequency-dependent value, with a slope of ~0.6. Very weird. I'm holding off on doing the OSEM calibration on both MC1 and MC3 until we know better what is happening. See the attached transfer functions.

Reminder: MC1 is using the new satellite amplifier box, but the OSEM outputs are read through the single-ended PDMon outputs rather than the differential PD Output port, because the rest of the MC1 electronics is still last generation and the whitening board for it takes a single-ended input.
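For reference, the MC_F-to-OSEM transfer function (and its log-log slope) can be estimated with a standard Welch/CSD calculation; in the sketch below the random arrays are placeholders for the synchronous C1:IOO-MC_F_DQ and OSEM DQ data, and the sample rate and fit band are assumptions:

import numpy as np
from scipy.signal import csd, welch

fs = 2048.0             # assumed DQ sample rate
nseg = int(64 * fs)     # 64 s segments -> ~16 mHz resolution

rng = np.random.default_rng(1)
mcf  = rng.standard_normal(int(600 * fs))   # placeholder for C1:IOO-MC_F_DQ
osem = rng.standard_normal(int(600 * fs))   # placeholder for one OSEM sensor channel

f, Pxy = csd(mcf, osem, fs=fs, nperseg=nseg)
_, Pxx = welch(mcf, fs=fs, nperseg=nseg)
H = Pxy / Pxx           # transfer function estimate, MC_F -> OSEM

band = (f > 0.2) & (f < 5.0)
slope = np.polyfit(np.log10(f[band]), np.log10(np.abs(H[band])), 1)[0]
print(f"log-log magnitude slope in 0.2-5 Hz: {slope:.2f}")
# a frequency-independent ratio (slope ~0) is what one would expect if the OSEM
# signal chain were compensated correctly; the ~0.6 above is the anomaly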

Attachment 1: MC1_MC_F_OSEM_TF.pdf
Attachment 2: MC3_MC_F_OSEM_TF.pdf
  17251   Wed Nov 9 20:01:38 2022   Anchal   Update   SUS   MC1 OSEMs output is weird

I took a coil to OSEM transfer function for the MC1 OSEMs (LL, UR) today, and again the slope of the transfer function was -1.4 instead of the expected -2. I compared this with the MC3 coil to OSEM transfer function (LL), which correctly had a slope of -2. See Attachments 1 and 2 for the results. This measurement was taken with the PSL shutter closed and the local damping loops turned off.

As I mentioned earlier, MC1 is using new satellite amplifier box (S2100029-v2) whose transfer function data exists and was actually measured by me in 40m/15776. Using this transfer function data, and the foton 3:30 (FM1) filter, I tried to recreate the product transfer function that should happen if both filters are working correctly. Attachment 3 shows these transfer function plots. I overlayed on top of this the measured transfer function of OSEM to position displacement as done in 40m/17238 by making the magnitude equal at 1 Hz. It is suspicious how nicely the measured transfer function overlay with the satellite amplifier measured transfer function, both in magnitude and phase. I'll investigate more tomorrow.

 

Attachment 1: MC1_COIL_to_OSEM_TF.pdf
Attachment 2: MC3_COIL_to_OSEM_TF.pdf
Attachment 3: MC1_UR_OSEM_TF.pdf
  17256   Fri Nov 11 11:29:11 2022   Anchal   Update   SUS   MC1 OSEMs output is weird

Late elog; original time Thursday, Nov 10 16:00 2022

MC1 is using a new satellite amplifier which has a whitening circuit on it with a 3 Hz zero and a 30 Hz pole. But to read out this signal, we use the old whitening board, as it also serves as the interface board with the ADC. This is the D000210 Whitening and Interface Board. This board has a switchable whitening filter for which our RTS models supply GND as the switch input. It was not immediately clear to me whether the GND input to this switch means the whitening is ON or not.

I disconnected inputs and outputs to the whitening Board used for MC1 OSEM PDs, and I used a moku:go to measure the transfer function for the UR channel. This confirmed that whitening is turned ON on this interface board as well, which means the MC1 OSEM signals are whitened twice, while digitally we have been dewhitening only once. To fix this there are two possible solutions:

  • We turn on another identical dewhitening filter in MC1 OSEM input filter modules (a 3:30 at FM3)
  • We can change the MC1 Simulink model to stop keeping whitening on by default.
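A quick numerical check of the double-whitening picture: with two analog 3:30 stages in the chain, a single digital 3:30 unwhitening leaves a residual 3:30 shape (a factor of ~10 at high frequency), while two digital stages flatten it. This is only a sketch, with an assumed unity-DC-gain normalization of each stage:

import numpy as np

f = np.logspace(-1, 3, 400)
s = 2j * np.pi * f

def whiten(s, fz=3.0, fp=30.0):
    """One 3:30 whitening stage, normalized to unity gain at DC."""
    return (1 + s / (2 * np.pi * fz)) / (1 + s / (2 * np.pi * fp))

analog = whiten(s) ** 2                          # sat-amp whitening times interface-board whitening
print(abs((analog * whiten(s) ** -1)[-1]))       # ~10: a single digital 3:30 (FM1 alone) under-compensates
print(abs((analog * whiten(s) ** -2)[-1]))       # ~1: FM1 plus the extra FM3 stage makes the response flat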
Attachment 1: MC1_UR_WhtnBrd_TF.png
  17259   Fri Nov 11 19:20:23 2022   rana   Update   SUS   MC1 OSEMs output is weird

I turned on the extra un-whitening filter (not the same as dewhitening) which Anchal has installed in the XXSEN filter banks of MC1. Seems good, so I'm leaving them on.


Anchal determined that the new satellite for MC1 was whitening, and also the old one was whitening, which made the whole thing non-white. So, I turned on the FM3 filter. I then checked that the ADCs were not saturating by looking at the spectrum of the IN1 channels (before the un-whitening). They are very far from saturating, but we should trend the ADC overflows on MC1 to make sure that this is not an issue (someone besides Tega should ask Tega how to add these to the summary pages so that not only Tega can edit summary pages).

In the attached plot, we can see that the reference trace for the unwhitened MC1 (FM1 ON, FM3 OFF) (black) sensor looks noisier than the others at 16 Hz, where we expect the suspensions to be mostly the same. This is because the analog whitening (amplification) was not being compensated properly. With FM1 and FM3 ON (RED) we can see that the spectra line up nicely below 20 Hz. Above 20 Hz the MC1 sensor is quieter than the others because the ADC noise is being reduced more.

Clearly, the other sensors could use some more whitening. If we find a reason to need lower damping noise in the future, let's remember this elog and remember that we ought to do proper signal conditioning on all our OSEMs. For now, probably doesn't matter.

Attachment 1: secretIMCsecrets.png
  12725   Mon Jan 16 23:25:07 2017   gautam   Update   SUS   MC1 SUS electronics investigation

[rana,gautam]

Summary:

  • MC1 glitchy behaviour is back
  • Found a broken LEMO cable, left unplugged for the night -> to be repaired tomorrow
  • Further diagnosis to follow

During the course of Rana's inspection of the general state of the IFO, he commented that there seemed to be several seismic-related IMC lock losses in the time that he had been observing it. This issue looked suspiciously like the MC1 glitches I had noticed sometime late last year, especially since each time the IMC would unlock, we could see significant amounts of motion on MC REFL. To diagnose, we did the following:

  1. Closed PSL shutter
  2. Ramped down the gains of the MC1 damping loops by factor of 1000 in ~4 secs using z step
  3. Shut down the watchdog for MC1
  4. Observed dataviewer traces for glitches

Sure enough, there were several glitches that occurred in all 5 sensor channels. These glitches varied in size from a few counts (the smaller ones) to 60-70counts for the bigger ones. In the past, squishing the LEMO connector on the front of the PD whitening board (D000210) had apparently made the glitching go away. So tonight, for starters, we squished everything else - Sat. Box connectors, the breakout board from Sat. Box to whitening board on the back of 1X6, and the DB connector on the front of the whitening board. This had no effect - the glitching remained consistent.

Next, Rana pulled out two of the three 4pin LEMOs, and left only those corresponding to UL/LL plugged in - but the glitching persisted in these two channels. We then pulled out the board. It was installed in 1998, but has a sticker on it that says "fixed in 2003". Not sure what the fix was. Visual inspection of the circuit didn't show anything obviously faulty, but it did look like the two MAX333A quad switches (these control whether the whitening is bypassed or not) had been replaced at some point. There are other undesirable features, such as the use of thick film resistors, but nothing that would explain the glitchy behaviour.

Next, we re-inserted the whitening board back into its original slot in the Eurocrate, but switched the cables (both D sub and LEMO, but only on the whitening board end) between the boards for MC1 and MC3 (i.e. MC1 cables were routed through the whitening board that was originally used for MC3, and vice-versa). But the glitches remained consistent on the MC1 channels. So it looks like the board is not a likely culprit.

Finally, we went in and squished all the cables from the PD whitening board to the ADC (via an AA filter board). For some of the LEMO cables from the whitening board, the LEMO backshells were not properly tightened. Rana fixed these before putting them back in. Some of the connectors were also not pushed in tightly enough, Rana heard the click when he pushed them in. The cables from the adaptor board to the ADC itself looked fine, it was screwed on at both ends, and all these connections looked snug enough. In the interest of completeness, Rana also pushed in the backplane connectors on the Eurocrate (these supply the signals from the BIO cards to switch the whitening ON/OFF). The one corresponding to MC1 was indeed a little loose.

Coming back to the control room, we saw that the MC1 LR sensor was dead. After some investigation, Rana found that on the AA filter board end, one of the 4pin LEMOs from the whitening board had one of its wires come unstuck from where it was soldered (this presumably happened while we were squishing cables tonight, as the LR channel was fine before that). Also, there was no heat shrink used on any of the solder joints. Could this explain the glitchy behaviour? Perhaps, but the glitches remained in the 3 channels that were connected. Anyways, I will repair this cable tomorrow, and we can see if this has fixed the problem or not..


Some misc points:

  1. Regarding the adaptor boards that take the PD signals from the satellite box and route it to the whitening board, there are some clamps that hold the IDE connectors in place for MC1, MC2 and MC3 boards, but not for the others (see attached picture). Steve, can we install clamps for all of the boards? [taken care of, see here]
  2. The whitening boards are not screwed in place into the Eurocrate. This should be rectified.

PSL shutter is closed, MC1 watchdog is shutdown for the night.

Attachment 1: 20170116_231625.png
Attachment 2: IMG_7175.JPG
Attachment 3: IMG_7174.JPG
  12728   Tue Jan 17 21:29:52 2017   gautam   Update   SUS   MC1 SUS electronics investigation

 

Quote:
 

After some investigation, Rana found that on the AA filter board end, one of the 4pin LEMOs from the whitening board had one of its wires come unstuck from where it was soldered (this presumably happened while we were squishing cables tonight, as the LR channel was fine before that). Also, there was no heat shrink used on any of the solder joints.

The faulty cable has been re-soldered (with heat shrink) and replaced. All 5 sensor signals appear normal on dataviewer now. I am leaving things in this state for the night, let us see if the glitches return overnight.

PSL shutter remains closed

  12731   Wed Jan 18 11:40:54 2017   gautam   Update   SUS   MC1 SUS electronics investigation

After the repair of the faulty LEMO cable, I left MC1 with its watchdog off overnight. Unfortunately, it looks like the problem still persists. The first attachment shows a second trend plot for the past 15 hours. Towards the left end of the plot, you can see where I re-connected the LEMO cable for the LR/UR channels.

A couple of months ago, I added a BLRMS block for the IMC optics that calculates BLRMS for the shadow sensor output as well as the coil output. Looking at this trend overnight, I noticed that the glitches appear in the coil outputs as well, as shown in the plot below, which is for a 1 hour stretch last night (I used the full data from a 16Hz coil output channel and not the BLRMS, I am not sure if there is a DQ'ed version of the coil outputs?).

Zooming in further to one of these glitches, we can see that the glitches in the coil and shadow sensor signals are in fact coincident.

But given that the watchdog was turned off all this time, the only voltage going to the coils should be the DC bias voltages. So does this not support the hypothesis that the problem lies in the part of the signal chain that supplies the bias voltage to the coils?

Never mind - the "coil output" channel isn't a true readback of the voltage to the coil, but is the calculated damping output (which is not sent to the coils when the watchdog is shut down).
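For reference, the >30 Hz BLRMS used here is conceptually just band-pass, square, low-pass, square-root, decimate; a rough offline equivalent is sketched below (the sample rate and corner frequencies are assumptions, not the values in the front-end block):

import numpy as np
from scipy.signal import butter, sosfilt

fs = 2048.0   # assumed sensor DQ rate

def blrms(x, f_lo=30.0, f_hi=500.0, f_out=16.0):
    """Band-limited RMS of x, returned at roughly f_out samples per second."""
    bp = butter(4, [f_lo, f_hi], btype="bandpass", fs=fs, output="sos")
    lp = butter(4, f_out / 2.0, btype="low", fs=fs, output="sos")
    y = sosfilt(lp, sosfilt(bp, x) ** 2)                  # band-pass, square, smooth
    return np.sqrt(np.clip(y[:: int(fs / f_out)], 0.0, None))

sig = np.random.default_rng(3).standard_normal(int(60 * fs))  # placeholder sensor data
print(blrms(sig)[:5])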

  12734   Wed Jan 18 14:23:47 2017   gautam   Update   SUS   MC1 SUS electronics investigation

As part of the ongoing debugging, I've switched the MC1 and MC3 satellite boxes. Both MC1 and MC3 have their watchdogs shut down for the moment.

  12736   Wed Jan 18 18:44:53 2017   gautam   Update   SUS   MC1 SUS electronics investigation
Quote:

As part of the ongoing debugging, I've switched the MC1 and MC3 satellite boxes. Both MC1 and MC3 have their watchdogs shudown for the moment.

In the last 3.5 hours, there has been nothing conclusive - no evidence of any glitching in either MC1 or MC3 sensor channels. I am going to hold off on doing the LEMO cable swap test for a few more hours, to see if we can rule out the satellite box.

  12737   Thu Jan 19 08:25:12 2017   Steve   Update   SUS   MC1 SUS electronics investigation
Quote:
Quote:

As part of the ongoing debugging, I've switched the MC1 and MC3 satellite boxes. Both MC1 and MC3 have their watchdogs shudown for the moment.

In the last 3.5 hours, there has been nothing conclusive - no evidence of any glitching in either MC1 or MC3 sensor channels. I am going to hold off on doing the LEMO cable swap test for a few more hours, to see if we can rule out the satellite box.

No change.

Attachment 1: MC1_MC3_ITMY_ETMX_sensors.png
Attachment 2: sensors_UL.png
  12739   Thu Jan 19 12:00:10 2017   gautam   Update   SUS   MC1 SUS electronics investigation

Going through the last ~20 hours of data, the MC1 sensor channels look glitch free the entire period. However, there is a ~10min period around 1PM UTC today when there were a couple of glitches ~80 counts in size in all the MC3 sensor channels. The attached shows the full 2k data from all 10 channels (MC1 and MC3 sensors) around this time.

Is this sufficient evidence to conclude that the Satellite boxes are to blame? It's hard to explain why the glitches come and go in this fashion, and also the apparent difference in the length of time for which the glitches persist. Here, in almost 24 hours, there is one incidence of glitching, but in yesterday's trend plot, the glitching remains present over several hours... The amplitude of the glitches, and their coincidence in all 5 channels, seems consistent with what we have been seeing though...

 

  12741   Thu Jan 19 19:56:09 2017   rana   Update   SUS   MC1 SUS electronics investigation

Might be. Or it might be in the satellite box cabling. Hard to tell without a tester. I recommend you squish the cables on there and lock the MC and get back to the usual business. I'll check on sat. box with Ben.

Quote:

 

Is this sufficient evidence to conclude that the Satellite boxes are to blame? It's hard to explain why the glitches come and go in this fashion, and also the apparent difference in the length of time for which the glitches persist. Here, in almost 24 hours, there is one incidence of glitching, but in yesterday's trend plot, the glitching remains present over several hours... The amplitude of the glitches, and their coincidence in all 5 channels, seems consistent with what we have been seeing though...

 

  12742   Fri Jan 20 11:16:30 2017   gautam   Update   SUS   MC1 SUS electronics investigation

Both suspensions have been relatively well behaved for the best part of the last two days, since I effected the Satellite Box swap. Today morning, I set about re-enabling the damping and locking the MC. Judging by the wall StripTool, it stayed locked for about 30 mins or so, after which the glitching returned.

Attached is a screenshot of the sensor signals from MC1 and MC3 (second trend), and also the highest band (>30Hz) BLRMS output for the same 10 channels (full data sampled at 16Hz). Note that MC1 and MC3 satellite boxes remain swapped. So the glitches now have migrated to the MC3 channels.

I need to think about whether this is just coincidence, or if me re-enabling the damping has something to do with the re-occurrence of the glitching...


Addendum 4.30pm: I've also re-aligned the Y arm. Its alignment has been stable over the last few hours; despite several mode cleaner lock losses in between, it recovers good IR transmission. The X arm has been re-aligned to green, but I can't get it locked to the IR - every time I turn the LSC to ETMX on, there seems to be some large misalignment applied to it. c1iscaux was dead; I restarted it by keying the crate. I haven't had time to investigate the X arm locking in detail, I will continue to debug this.

  12745   Mon Jan 23 10:24:01 2017   Steve   Update   SUS   MC1 SUS electronics investigation

Two-day plot of glitching suspensions: MC3, ITMY and ETMX

Attachment 1: 3glitchingSUS.png
  16139   Thu May 13 19:38:54 2021   Anchal   Update   SUS   MC1 Satellite Amplifier Debugged

[Anchal Koji]

Koji and I did a few tests with an OSEM emulator on the satellite amplifier box used for MC1, which is housed in 1X4. This sat box unit is S2100029 (D1002812), which was recently characterized by me in 15803. We found that the differential output driver chip AD8672ARZ, U2A section, for the UL PD was not working properly and had a fluctuating offset with no input current from the PD. This was the cause of the ordeal of the morning. The chip was replaced with a new one from our stock. The preliminary test with the OSEM emulator showed that the channel has the correct DC value.

In further testing of the board, we found that the channel 8 LED driver was not working properly. Although this channel is never used in our current cable convention, it might be used later in the future. In the quest of debugging the issue there, we replaced AD8672ARZ at U1 on channel 8. This did not solve the issue. So we opened the front panel and as we flipped the board, we found that the solder blob shorted the legs of the transistor Q1 2N3904. This was replaced and the test with the LED out and GND shorted indicated that the channel is now properly providing a constant current of 35mA (5V at the monitor out).


After the debugging, the UL channel became the least noisy among the OSEM channels! Mode cleaner was able to lock and maintain it.

We should redo the MC1 input matrix optimization and the coil balancing afterward as we did everything based on the noisy UL OSEM values.

Attachment 1: MC1_UL_Channel_Fixed.png
  15376   Thu Jun 4 20:54:40 2020   gautam   Update   SUS   MC1 Slow Bias issues

Summary:

I found that there is an issue with the MC1 slow bias voltages. 

Details:

I usually offload the DC part of the output voltage from the WFS servos to the slow bias voltage sliders, so as to preserve maximum actuation range from the fast system. However, today, I found that this servo wasn't working well at all. So I dug a little deeper. Looking at the EPICS database records:

  • The user-facing channels are "PIT" and "YAW" bias voltages.
  • These are converted to voltages to be sent to individual coils by some calc channels in the EPICS database record. So, for example, the voltage to be sent to the "UL" coil (Upper Left, as viewed from the AR side of the optic), is A+B, where A is the "PIT" voltage and B is the "YAW" voltage. Similar combinations of A and B are used for the other 3 face coils.
  • The problem is obvious - if either A or B > 5V, then the requested voltage to be sent to the UL coil is > 10 V, while the Acromag DACs can put out a maximum of 10 V
  • As it happens, with the IFO currently aligned, MC1 is the only optic which faces this problem. 
  • Why has this not been an issue before? In fact, looking at some old data, the "PIT" and "YAW" bias voltages to MC1 were both ~1-2 V in 2018. But I confirmed that something in the region of ~5 V is required from each of the "PIT" and "YAW" channels to bring the MCREFL spot back to the center of the camera, so something has changed the DC alignment of MC1, maybe an earthquake or something? Anyway, with these settings, 2/4 coils are basically saturated, and so we can only move the optic diagonally. 😢 
  • Other coils that have  requested output voltages > 5V (so more than half the range of the DAC) include MC2 LL (5.2V), and ETMX LL and LR (5.5 and 5.8 V respectively).
  • Either a factor of 0.5 should be included in all the EPICS database records, or else, we should make the "PIT" and "YAW" sliders range only from -5 to +5 V, so that this kind of misleading info isn't wasting time.
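A quick check of the arithmetic in the list above, and of the proposed 0.5 prefactor (the prefactor is only an illustration of the suggested fix, not something currently in the database):

ACROMAG_RANGE = 10.0   # volts, +/- output range of the slow DAC

def ul_request(pit, yaw, prefactor=1.0):
    """Voltage requested for the UL coil by the calc record: prefactor * (PIT + YAW)."""
    return prefactor * (pit + yaw)

print(ul_request(5.0, 5.5))                  # 10.5 V -> exceeds the Acromag range, so the coil saturates
print(ul_request(5.0, 5.5, prefactor=0.5))   # 5.25 V -> within range if a 0.5 factor is added to the record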
  15377   Thu Jun 4 21:32:00 2020   Koji   Update   SUS   MC1 Slow Bias issues

We can limit the EPICS values by giving some parameters to the channels. cf. https://epics.anl.gov/tech-talk/2012/msg00147.php

But this does not solve the MC1 issue. The only thing we can do right now is to halve the output resistor, for example.

  11446   Fri Jul 24 23:08:53 2015   Ignacio   Update   PEM   MC1 accelerometers moved for future huddle test

I have moved the MC1 accelerometers and cable to the huddle test setup, in order to see how a six-witness huddle test with the improved setup will do.

Here is a picture of the accelerometer set up,

Our motivation for doing this is to see if using more witness signals in the Wiener filter really does improve the subtraction, as was seen in previous huddle results, especially in the region above 10 Hz.

  15445   Wed Jul 1 12:50:40 2020   gautam   Update   PEM   MC1 accelerometers plugged in

I re-connected the 3 accelerometers located near the MC1/MC3 chamber. It was a bit tedious to get the cabling sorted - I estimate the cable is ~80m long, and the excess length had to be wound around a spool (see Attachment #1), which wasn't really a 1 person job. It's neat-ish for now, but I'm not entirely satisfied. I think we should get shorter cables (~20m), and also mount the pre-amp/power units in a rack instead of leaving it on the floor. The pre-amp settings are x100 for all three channels. The MC2 channels are powered, but are unconnected to the seismometers - it was too tedious to unroll the other spool yesterday. Apart from this, the cable for the "Z" channel had to be re-seated in the strain relief clamp.

I did not enable any of the CDS filters that convert the raw signal into physical units, so for now, these channels are just recording raw counts.

Update 7pm: the spectra in the current config are here - not sure what to make of the MC2_Z channel appearing to show lower noise?

Update July 13 2020 430pm: This afternoon, I hooked up the MC2 accelerometer channels too...

Attachment 1: IMG_8617.JPG
Attachment 2: IMG_8616.JPG
  16087   Tue Apr 27 10:05:28 2021   Anchal, Paco   Update   SUS   MC1 and MC3 F2A Filters Tested

We extended the f2a filter implementation and diagnostics as summarized in 16086 to MC1 and MC3.


MC1

Attachment 1 shows the filters with Q=3, 7, 10. We diagnosed using Q=3.

Attachment 2 shows the test summary, exciting with broadband noise on the LSC_EXC and measuring the CSD to estimate the transfer functions.


MC3

Attachment 3 shows the filters with Q=3, 7, 10. We diagnosed using Q=3.

Attachment 4 shows the test summary, exciting with broadband noise on the LSC_EXC and measuring the CSD to estimate the transfer functions.


Our main observation (and difference) with respect to MC2 is the filters have relative success for the PIT cross-coupling and not so much for YAW. We already observed this when we tuned the DC output gains to compute the filters.

Attachment 1: IMC_F2A_Params_MC1.pdf
Attachment 2: MC1_POStoAng_CrossCoupling.pdf
Attachment 3: IMC_F2A_Params_MC3.pdf
Attachment 4: MC3_POStoAng_CrossCoupling.pdf
  17286   Fri Nov 18 17:00:15 2022   Anchal   Update   SUS   MC1 and MC3 OSEMs calibrated using MC_F

After the MC1 OSEM dewhitening was fixed, I calibrated the MC1 OSEM signals against MC_F using this notebook. A 0.1 Hz oscillation with an amplitude of 1000 cts was sent to MC1 LOCKIN2 and was kept on between 1352851381 and 1352851881. Then I read back the data from the DQ channels and performed a Welch estimate, with the standard deviation calculated from the different segments used. From this measurement, I arrive at the following cts2um gain values, which were changed in the MC1 filter file. The damping remained stable after the changes:

MC1:
UL: 0.09 -> 0.105(12)
UR: 0.09 -> 0.078(9)
LR: 0.09 -> 0.065(7)
LL: 0.09 -> 0.087(10)

I followed the same method for MC3 as well, to get more meaningful error bars. This measurement was done between 1352856980 and 1352857480 using this notebook. Here are the changes made:

MC3
UL: 0.39827 -> 0.509(57)
UR: 0.33716 -> 0.424(48)
LR: 0.335 -> 0.365(40)
LL: 0.34469 -> 0.376(43)

The larger error bars could be due to noisier MC3 OSEM outputs, as the satellite amplifier gain is lower here.
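The calibration described above amounts to taking the ratio of the MC_F-inferred displacement to the OSEM counts at the drive line, segment by segment, and quoting the mean and standard deviation. A rough sketch follows; the sample rate, segment length, and the conversion of MC_F to micrometers are assumptions, and the random arrays stand in for the real DQ data:

import numpy as np
from scipy.signal import welch

fs, fdrive = 2048.0, 0.1     # assumed DQ rate and the 0.1 Hz LOCKIN2 drive frequency
nseg = int(100 * fs)         # 100 s segments resolve the 0.1 Hz line

rng = np.random.default_rng(2)
mcf_um  = rng.standard_normal(int(500 * fs))   # placeholder: MC_F already converted to um of length
osem_ct = rng.standard_normal(int(500 * fs))   # placeholder: one OSEM sensor channel in counts

def line_amps(x):
    """ASD value at the drive line for each non-overlapping segment."""
    segs = x[: (len(x) // nseg) * nseg].reshape(-1, nseg)
    amps = []
    for s in segs:
        f, p = welch(s, fs=fs, nperseg=nseg)
        amps.append(np.sqrt(p[np.argmin(np.abs(f - fdrive))]))
    return np.array(amps)

ratio = line_amps(mcf_um) / line_amps(osem_ct)   # um per count, one estimate per segment
print(f"cts2um = {ratio.mean():.4f} +/- {ratio.std():.4f}")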

  16072   Thu Apr 22 12:17:23 2021   Anchal, Paco   Update   SUS   MC1 and MC3 Suspension Optimization Summary
MC1 Coil Balancing DC and AC Gains
  POS (DC coil Gain) PIT (DC coil Gain) YAW (DC coil Gain) Coil Output Gains (AC)
UL 0.6613 1 1 0.5885
UR 0.7557 1 -1 0.1636
LL 1.3354 -1 1 1.8348
LR 1.0992 -1 -1 0.5101

Note: The AC gains were measured by keeping the output matrix at ideal values of 1s. When optimizing the DC gains, the AC gains were loaded into the coil output gains.


MC1 Diagonalized input matrix
  UL UR LR LL SIDE
POS 0.1700 0.1125 0.0725 0.1300 0.4416
PIT 0.1229 0.1671 -0.1021 -0.1463 0.1567
YAW 0.2438 -0.1671 -0.2543 0.1566 -0.0216
SIDE 0.0023 0.0010 0.0002 0.0015 0.0360

MC1 Suspension Damping Gains
  Old gains New Gains
SUSPOS 120 270
SUSPIT 60 180
SUSYAW 60 180


MC3 Coil Balancing DC and AC Gains
  POS (DC coil Gain) PIT (DC coil Gain) YAW (DC coil Gain) Coil Output Gains (AC)
UL 1.1034 1 1 0.8554
UR 1.1034 1 -1 -0.9994
LL 0.8845 -1 1 -0.9809
LR 0.8845 -1 -1 1.1434

Note: The AC gains were measured by keeping the output matrix at ideal values of 1s. When optimizing the DC gains, the AC gains were loaded into the coil output gains.


MC3 Input matrix (Unchanged from previous values)
  UL UR LR LL SIDE
POS 0.28799 0.28374 0.21201 0.21626 -0.40599
PIT 2.65780 0.04096 -3.2910 -0.67420 -0.72122
YAW 0.60461 -2.7138 0.01363 3.33200 0.66647
SIDE 0.16601 0.19725 0.10520 0.07397 1.00000

MC3 Suspension Damping Gains
  Old gains New Gains
SUSPOS 200 500
SUSPIT 12 35
SUSYAW 8 12
  17361   Fri Dec 16 14:52:42 2022   Paco   Summary   SUS   MC1 and MC3 coil dewhitening filters added, location corrected

I corrected the filter module location for the 28 Hz ELP filter on the MC1 and MC3 coil output filter banks to FM8 (from FM9). FM9 is always reserved for SimDW (the simulated dewhitening filter, which is supposed to be a copy of the dewhitening filter on the analog side). FM10 is reserved for the InvDW filter, which performs the anti-dewhitening before the DAC. This filter module, FM10, should remain on always. When FM9 is on (SimDW), the analog dewhitening turns off and we get a flat digital response as well. When FM9 is off, the analog dewhitening is turned on. The nominal operating configuration right now is to not use the coil dewhitening and keep FM9 and FM10 on always.

 

  15431   Thu Jun 25 15:11:00 2020   gautam   Update   SUS   MC1 coil driver resistance quartered

I implemented this change today. We only had 100 ohm, 3W resistors in stock (no 200 ohm with adequate power rating). Assuming 10 V is dropped across this resistor, the power dissipation is V^2/R ~ 1 W, so we should have sufficient margin. DCC entry has been updated with new schematic and photo of the component side of the board. Note that the series resistance of the fast actuation path was untouched.

As expected, the requested voltage no longer exceeds the Acromag DAC range; it is now more like 2.5 V. However, I still notice that the MC REFL spot moves somewhat diagonally on the camera image - so maybe the coil gains are seriously imbalanced? Anyway, the WFS control signals can once again be safely offloaded to the slow bias voltages, preserving the fast DAC range for other actuation.

The Johnson noise of the series resistor has now increased by a factor of 2, from ~6.4 pA/rtHz to 12.8 pA/rtHz. Assuming a current to force coefficient of 1.6 mN/A per coil, the length noise of the cavity is expected to be 12.8e-12 * 0.064/0.25/(2*pi*100)^2 ~ 8e-18 m/rtHz at 100 Hz. In frequency units, this is 80 uHz/rtHz. I think our IMC noise is at least 10 times higher than this at 100 Hz (in any case, the noise of the coil driver is NOT dominated by the series resistance). Attachment #1 confirms that there isn't any significant MCF noise increase, and I will check with the arm cavity too. Nevertheless, we should, if possible, align the optic better and use as high a series resistance as possible.
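The numbers quoted above can be reproduced in a few lines; the lab temperature is assumed, and the total force coefficient (0.064 N/A) is the value used in the estimate above:

import numpy as np

kB, T = 1.380649e-23, 295.0        # J/K and assumed lab temperature in K
R = 100.0                          # new series resistance, ohms
i_n = np.sqrt(4 * kB * T / R)      # Johnson current noise through the coil
print(f"{i_n*1e12:.1f} pA/rtHz")   # ~12.8 pA/rtHz, as quoted above

alpha = 0.064                      # N/A, total current-to-force coefficient used in the estimate
m, f0 = 0.25, 100.0                # optic mass in kg and the frequency of interest in Hz
x_n = i_n * alpha / m / (2 * np.pi * f0) ** 2
print(f"{x_n:.1e} m/rtHz at 100 Hz")   # ~8e-18 m/rtHz, matching the estimate above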

The watchdog for MC1 was disabled and the board was pulled out for this work. After it was replaced, the IMC re-locks readily.

Quote:

But this does not solve the MC1 issue. Only we can do right now is to make the output resister half, for example.

Attachment 1: MCF.pdf
  17222   Thu Nov 3 14:00:29 2022   Anchal   Update   SUS   MC1 coil strengths balanced

I balanced the face coil strengths of MC1 using following steps:

  • At all points, keep sum(abs(coil_gains)) = 4
  • After reading coil gains, remove the signs. Do the operations as below, and before writing put back the signs.
  • Butterfly to POS decoupling:
    • Drive butterfly mode at 13 Hz using LOCKIN2 on MC1 and look at C1:IOO-MC_F_DQ for position fluctuations
    • Subtract 0.05 times the BUT vector from the coil strengths to see the effect on C1:IOO-MC_F_DQ, using diaggui exponential averaging of 5, BW=1.
    • Use Newton-Raphson from here to reach no POS actuation when driving the butterfly mode.
  • POS to PIT decouping:
    • Drive LOCKIN2 in POS mode at 13 Hz and look for PIT signal at C1:IOO-MC_TRANS_P_DQ using diaggui exponential averaging of 5, BW=1.
    • Subtract 0.05 times the PIT vector from the coil strengths
    • Use Newton-Raphson from here to reach no PIT actuation when driving POS.
  • POS to YAW decoupling:
    • Drive LOCKIN2 in POS mode at 13 Hz and look for YAW signal at C1:IOO-MC_TRANS_Y_DQ using diaggui exponential averaging of 5, BW=1.
    • Subtract 0.05 times the YAW vector from the coil strengths
    • Use Newton-Raphson from here to reach no YAW actuation when driving POS.

By the end, I was able to see no actuation on POS when butterfly is driven with 30000 counts amplitude at 13 Hz. I was able to see no PIT or YAW actuation when POS is driven with 10000 counts at 13 Hz.

Final coil strengths found:

C1:SUS-MC1_ULCOIL_GAIN: -1.008
C1:SUS-MC1_URCOIL_GAIN: -0.98  
C1:SUS-MC1_LRCOIL_GAIN: -1.06
C1:SUS-MC1_LLCOIL_GAIN: -0.952

I used this notebook while doing the above work. It has a couple of functions that could be useful in future while doing similar balancing.
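The iteration described above can be viewed as a one-dimensional Newton-Raphson (secant) search along each decoupling vector, renormalizing to keep sum(abs(coil gains)) = 4 after every step. The sketch below mirrors that procedure but is not the notebook itself: the measurement function is left as a stub (in practice it is the signed, demodulated line height from the LOCKIN drive as read in diaggui), and the vectors and step size are the ones quoted above.

import numpy as np

# decoupling vectors over (UL, UR, LR, LL); BUT is the butterfly mode
VEC = {"BUT": np.array([1, -1, 1, -1]),
       "PIT": np.array([1,  1, -1, -1]),
       "YAW": np.array([1, -1, -1,  1])}

def renorm(g):
    """Keep sum(abs(coil gains)) = 4, as in the procedure above."""
    return 4.0 * g / np.sum(np.abs(g))

def measure_coupling(gains):
    """Stub: write the gains (signs restored) to the coil gain channels, drive
    LOCKIN2 at 13 Hz, and return the signed line height in the target channel
    (C1:IOO-MC_F_DQ for the butterfly-to-POS step)."""
    raise NotImplementedError

def balance(gains, direction, step=0.05, tol=1e-3, nmax=10):
    """Newton-Raphson on the measured coupling vs. the step along a decoupling vector."""
    a, v = 0.0, VEC[direction]
    f0 = measure_coupling(renorm(gains - a * v))
    for _ in range(nmax):
        f1 = measure_coupling(renorm(gains - (a + step) * v))
        a -= f0 * step / (f1 - f0)           # Newton update with a finite-difference slope
        f0 = measure_coupling(renorm(gains - a * v))
        if abs(f0) < tol:
            break
    return renorm(gains - a * v)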

 

  1406   Mon Mar 16 12:26:59 2009   Yoichi   Configuration   IOO   MC1 drift
There seems to be a large drift of MC1 even when there is no WFS feedback.
The attached plot is an example of a 20 min trend. You can see that the MC1 OSEM signals drift significantly more than those of MC2/MC3.
You can also be sure that there is no drifting voltage applied to the coils on the MC1 during this period.

If no one is working on the IFO today during the LV meeting, I'd like to leave the MC unlocked and see the trend of the MC1 OSEM signals.
Please do not turn on the MC auto locker unless you want to use the IFO.
If you want to do some measurements, please go ahead and lock the MC, but please write it down in the elog.
Thanks.
Attachment 1: MC1_Drift1.pdf