  40m Log, Page 305 of 339
ID   Date   Author   Type   Category   Subject
  12518   Mon Sep 26 19:48:09 2016   Lydia   Update   SUS   ITMX stuck again, ITMY whitening issue

This afternoon around 2:45, ITMX started ringing up at ~0.9 Hz for about a minute and then got stuck again. When I noticed this evening, I tried to free it with the alignment sliders but was unable to see any signal on UL or UR. It also looks like the damping for ITMY was turned off at the same time ITMX got stuck (not at the start of its ring-up). SRM also has a spike in its motion at this time, and another one minute later that left the LR OSEM at a much higher level, though the mirror does not appear to be stuck. We didn't see any strange behavior from any of the other optics.

Teng and I were working on diagnosing a problem with the ITMY UL whitening, but by the time we disconnected any applicable cables, the damping for ITMY was already off. Later we unplugged the ITMX PD whitening cables after verifying that the ITMX damping was also already off. This problem may have occurred earlier, while Teng, Eric, and I were examining and pushing in the cables at 1X5 without unplugging anything.

We found that the reason for the bad phase on the ITMY free swing data is that the whitening filter for UL is not being properly turned on. We are in the process of investigating the source of this problem. Right now all the cables to the PD whitening boxes in 1X5 are switched between ITMY and ITMX.
 

Attachment 1: 44.png
Attachment 2: 26.png
  12519   Tue Sep 27 08:49:47 2016   Steve   Update   SUS   seismic activity is up

The earthquake shook ITMX free for a short while.

 

Attachment 1: 4.3mSaltonSee.png
Attachment 2: ITMXstuck.png
  12520   Tue Sep 27 18:04:50 2016   Lydia   Update   SUS   ITMX slow channels down, ITMY diagonalization update

[Teng, Lydia]

When we plugged the back cables into the whitening boxes yesterday after switching them, two of the ITMX PDMon channels (UR and LR) got stuck at 0. This caused me to believe ITMX was still stuck even when it was freed. However, it was left in a stuck state overnight and freed again today after this issue was discovered. The alignment sliders have been set to 0 as a safety net to keep ITMX from getting stuck again if c1susaux is restarted. We switched the cables back and the problem was still there.

The ITMY UL whitening filter problem, which the cables were originally switched to diagnose, was also still there. Ericq suggested we turn off all the whitening filters in order to get diagonalization data that would not show a phase difference between coils. We ran the diagonalization again with all the whitening filters off and got much cleaner results, with no visible cross-coupling peaks remaining between the degrees of freedom (see attachment 1). We did not apply this matrix to the damping, however, because there are elements which have the wrong sign compared to the ideal matrix. Significant adjustments to the output matrix will probably need to be made if this result is to be used. We also verified that the phase problem had been solved in DTT, where we saw the same sign discrepancies as in the matrix below.

Damping can be turned back on, using the old, non-diagonalized matrix currently in effect. There is enough free swing data to diagonalize ITMY now, so feel free to mess with it. 

Matrix (wrong signs red, suspiciously small elements orange):

         pit      yaw      pos      side     butt
UL      1.633    0.138    1.224    0.136    0.984
UR     -0.202   -1.768    1.179    0.132   -1.028
LR     -2.000    0.094    0.776    0.107    1.001
LL     -0.165    2.000    0.821    0.111   -0.987
SD      0.900    1.131   -1.708    1.000   -0.107

 

Attachment 1: ITMY_diagsuccess.pdf
  12523   Thu Sep 29 16:19:29 2016   Lydia   Update   SUS   Free swing eigenmodes

[Lydia, Teng]

Motivated by the strange pitch/yaw coupling behavior we ran into while doing diagonalization, we looked at the oplev pitch and yaw free swing spectra for all 4 test masses (see attachment 1). We saw the same behavior there: at the peak frequencies for the angular degrees of freedom, the oplevs saw significant contributions from both pitch and yaw. We also examined the phase between pitch and yaw at these peaks and found that, consistently, pitch and yaw were in phase at one of the resonance frequencies and out of phase at the other (ignoring the pos and side peaks).

This corresponds physically to angular motion about some axis that is diagonal, i.e., not perfectly vertical or horizontal. If we trust the oplev calibration, and Eric says that we do, then the angle of this axis of rotation with respect to the horizontal (pitch) axis is

\theta = \arctan\left(\frac{Y(f_{peak})}{P(f_{peak})}\right)

where Y and P are the yaw and pitch ASD values. This will always give an angle between 0 and 90 degrees; which quadrant the axis of rotation occupies can be determined by looking at the phase between pitch and yaw at the same frequencies. A phase of 0 means that the axis of rotation lies somewhere less than 90 degrees counterclockwise from the horizontal as viewed from the AR face of the optic, and a phase of 180 degrees means the axis is clockwise from horizontal (see attachment 2). Qualitatively, these features show up the same way for segments of data taken at different times. In order to get some quantitative sense of the error in these angles, we found them using spectrogram values with a bandwidth of 0.02 Hz averaged over 4000 seconds.
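For reference, here is a minimal numerical sketch of this angle estimate (illustrative only; the function and example numbers below are not the actual analysis script, and the sign convention used to resolve the quadrant is my reading of the description above):

# Sketch of the rotation-axis angle estimate. P_asd and Y_asd are the pitch and yaw
# ASD values at the peak frequency; phase_deg is the measured pitch/yaw phase there.
# All inputs are placeholders, not measured values.
import numpy as np

def rotation_axis_angle(P_asd, Y_asd, phase_deg):
    """Angle of the rotation axis from the horizontal (pitch) axis, in degrees."""
    theta = np.degrees(np.arctan2(Y_asd, P_asd))   # between 0 and 90 deg for positive ASDs
    # Assumed convention: ~0 deg phase -> axis counterclockwise from horizontal,
    # ~180 deg phase -> axis clockwise (viewed from the AR face of the optic).
    if abs(((phase_deg + 180) % 360) - 180) > 90:
        theta = -theta
    return theta

# Example roughly matching the ITMY peak-1 numbers quoted below
print(rotation_axis_angle(P_asd=1.0, Y_asd=np.tan(np.radians(25.0)), phase_deg=-179.2))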

Results (all numbers in degrees unless otherwise specified):

Optic   Peak freq (Hz)   Mean angle   Std        Pitch/yaw phase
ITMY    0.692            24.991       1.23576    -179.181
ITMY    0.736            21.7593      0.575193    0.0123677
ITMX    0.502            17.4542      0.745867   -179.471
ITMX    0.688            74.822       0.455678   -0.43991
ETMX    0.730            43.1952      1.54336    -0.227034
ETMX    0.850            60.7117      0.29474    -179.856
ETMY    0.724            78.2868      2.18966     6.03312
ETMY    0.844            26.0415      2.10249    -176.838

ETMY and ITMX both show a more significant (~4x) contribution from pitch on one peak, and from yaw on the other. This is reflected in the fact that they each have one angle somewhat close to 0 (below 30 degrees) and one close to 90 (above 60 degrees). The other two test masses don't follow this rule, meaning that the 2 angular frequency peaks do not correspond to pitch and yaw straightforwardly. 

Also, besides ITMX, the axes of rotation are at least several degrees away from being perpendicular to each other. 

 

Attachment 1: 05.png
Attachment 2: SUS_eigenmodes.png
  12536   Thu Oct 6 15:42:51 2016   Lydia   Update   SUS   Output matrix diagonalization

Summary: At the 40m meeting yesterday, Eric Q. gave the suggestion that we accept the input matrix weirdness and adjust the output matrix by driving each coil individually so that it refers to the same degrees of freedom. After testing this strategy, I don't think it will work. 

Yesterday evening I tested this idea by driving one ITMY coil at a time and measuring the response of each of the free swing modes at the drive frequency. I followed more or less the same procedure as the standard diagonalization: responses to each of the possible stimuli are compared to build a matrix, which is inverted to describe the responses given the stimuli. For the input matrix, the sensor readings are the responses and the free swing peaks are the stimuli. For the output matrix, the responses are the sensor signals transformed into dofs by the diagonalized input matrix, and the stimulus is the peak at the drive frequency associated with each coil output. However, the normalization still happens for each dof independently, not for each coil independently.
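To make the bookkeeping concrete, here is a minimal sketch of how such a drive-generated matrix could be assembled and inverted. The response values, the per-dof normalization convention, and the inversion step are placeholders and assumptions based on the description above, not the actual procedure or numbers:

# Sketch of the drive-generated output-matrix construction. R[i, j] holds the
# response of dof i (from the diagonalized input matrix) while coil j is driven
# at the drive frequency. Near-identity placeholder values are used here.
import numpy as np

dofs  = ["pit", "yaw", "pos", "side", "butt"]
coils = ["UL", "UR", "LR", "LL", "SD"]

rng = np.random.default_rng(0)
R = np.eye(5) + 0.1 * rng.standard_normal((5, 5))   # stand-in for measured peak heights

# Normalize each dof (row) independently (assumed convention), then invert to get
# the matrix that maps dof drive signals onto the five coils.
R_norm = R / np.abs(R).max(axis=1, keepdims=True)
output_matrix = np.linalg.inv(R_norm)

print(np.round(output_matrix, 3))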

The output matrix I got had good agreement with the ITMY input matrix in the previous elog: for each dof/osem the elements had the same sign in both input and output matrices, so there are no positive feedback loops. The relative magnitude of the elements also corresponded well within rows of the input matrix. So the input and output matrices, while radically different from the ideal, were consistent with each other and referred to the same dof basis. So, I applied these new matrices (both input and output) to the damping loops to test whether this approach would work. 

drive-generated output matrix: 

      UL      UR      LR       LL      SD
pit    1.701  -0.188  -2.000  -0.111   0.452  
yaw    0.219  -1.424   0.356   2.000   0.370  
pos    1.260   1.097   0.740   0.903  -0.763  
sid    0.348   0.511   0.416   0.252   1.000  
but    0.988  -1.052   0.978  -0.981   0.060

However, when Gautam attempted to lock the Y arm, we noticed that this change significantly impacted alignment. The alignment biases were adjusted accordingly and the arm was locking. But when the dither was run, the lock was consistently destroyed. This indicates that the dither alignment signals pass through the SUS screen output matrix. If the output matrix pitch and yaw columns refer instead to the free swing eigenmodes, anything that uses the output matrix and attempts to align pitch and yaw will fail. So, the ITMY matrices were restored to their previous values: a close-to-ideal input matrix and a naive output matrix. We could try to change everything that is affected by the output matrices to be independent of a transformation to the free swing dof basis, and then implement this strategy. But to me, that seems like an unnecessary amount of changes with unpredictable consequences in order to fix something that isn't really broken. The damping works fine, maybe even better, when the input matrix is set by the output matrix: we define pitch, for example, to be "the mode of motion produced by a signal to the coils proportional to the pitch row of the naive output matrix," and the same for the other dofs. Then you can drive one of these "idealized" dofs at a time and measure the sensor responses to find the input matrix. (That is how the input matrix currently in use for ITMY was found, and it seems to work well.)

 

  12540   Fri Oct 7 20:56:15 2016   Koji   Update   SUS   Output matrix diagonalization

I wanted to see what the reason is for such a large coupling between the pitch and yaw motions.

The first test was to check orthogonality of the bias sliders. It was done by monitoring the suspension motion using the green beam.
The Y arm cavity was aligned to the green. The damping of ITMY was all turned off except for SD.
Then ITMY was misaligned with the bias sliders. The ITMY face CCD view shows that the beam responds reasonably orthogonally to the pitch and yaw sliders.
I also confirmed that the OPLEV signals showed a reasonably orthogonal response to the pitch and yaw misalignment.

=> My intuition was that the coils (including the gain balance) are OK for a first approximation.

Then, I started to excite the resonant modes. I agree that it is difficult to excite a pure pitch motion at the resonance.


So I wanted to see how the mixing is frequency dependent.

The transfer functions between ITMY_ASCPIT/YAW_EXC to ITMY_OPLEV_PERROR/YERROR were measured.

The attached PDFs show that the transfer functions are basically orthogonal (i.e. pitch exc goes to pitch, yaw exc goes to yaw) except at the resonant frequencies.

I think the problem is that the two modes are almost degenerate, but not completely. This elog shows that the resonant freqs of the ITMY modes are particularly close compared to the other suspensions.
If they were completely degenerate, the motion would just obey our excitation. However, they are slightly split. Therefore, we suffer from the coupled P and Y modes at the resonant freqs.
Off resonance, however, the mirror motion obeys the excitation, as these two modes are similar enough.

This means that the problem exists only at the resonant frequencies. If the damping servos have a 1/f slope around the resonant freqs (that's the usual case), the antiresonance due to the mode coupling does not cause servo instability, thanks to the sufficient phase margin.
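To illustrate the point numerically, here is a toy model of two slightly split modes whose eigenvectors are a rotated mixture of pitch and yaw. All numbers (frequencies, Q, mixing angle) are made up for illustration; it simply reproduces the behavior described above, with large pitch-to-yaw coupling only near the resonances:

# Toy model: two nearly degenerate modes in a basis rotated by phi from pitch/yaw.
# Off resonance the response follows the excitation; near the split resonances a
# pitch drive leaks strongly into yaw.
import numpy as np

f = np.linspace(0.3, 2.0, 2000)           # Hz
f1, f2, Q = 0.692, 0.736, 50.0             # slightly split mode frequencies, made-up Q
phi = np.radians(25.0)                     # made-up rotation of the mode basis

def mode_tf(f0):
    # simple harmonic oscillator response (arbitrary units)
    return 1.0 / (f0**2 - f**2 + 1j * f0 * f / Q)

D1, D2 = mode_tf(f1), mode_tf(f2)
c, s = np.cos(phi), np.sin(phi)
H_pp = c * c * D1 + s * s * D2             # pitch drive -> pitch readout
H_yp = c * s * (D1 - D2)                   # pitch drive -> yaw readout (cross-coupling)

ratio = np.abs(H_yp / H_pp)
for f_probe in (0.70, 1.50):
    i = np.argmin(abs(f - f_probe))
    print(f"yaw/pitch response to a pitch drive at {f_probe} Hz: {ratio[i]:.3f}")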

In conclusion, unfortunately we can't diagonalize the sensors and actuators using the natural modes because our assumption of mode purity is not valid.
We can leave the pitch/yaw modes undiagonalized, or just trust the oplevs as a relatively reliable reference for pitch and yaw and set the output matrix accordingly.

 

The figures will be rotated later.

Attachment 1: 161007_P.pdf
Attachment 2: 161007_Y.pdf
  12541   Mon Oct 10 09:31:25 2016   Steve   Update   SUS   PRM damping restored

A local 3.7-magnitude earthquake tripped the PRM.

ETMY_UL glitch

What about the MC?

 

Attachment 1: 3.7mLomaLinda.png
Attachment 2: ETMY_UL_glitch.png
Attachment 3: MC_glitcing?.png
  12546   Tue Oct 11 00:43:50 2016   ericq   Update   SUS   PRM LR problematic again

Tonight, and during last week's locking, we noticed something intermittently kicking the PRM. I've determined that PRM's LR OSEM is problematic again. The signal is coming in and out, which kicks the OSEM damping loops. I've had the watchdog tripped for a little bit, and here's the last ten minutes of the free swinging OSEM signal:

Here's the hour trend of the PRM OSEMs over the last 7 days, and a plot of just LR since the fix on the 9th of September.

It looks like it started misbehaving again on the evening of the 5th, which was right when we were trying to lock... Did we somehow jostle the suspension hard enough to knock the foil cap back into a bad spot?

  12549   Tue Oct 11 10:15:04 2016   Steve   Update   SUS   PRM LR problematic again

It started here

 

Attachment 1: PRMalfoiled.png
  12550   Tue Oct 11 10:38:51 2016   Steve   Update   SUS   wire standoffs update

100 sapphire prisms arrived.

 

Attachment 1: A028_-_20161010_170319.jpg
Attachment 2: A027_-_20161010_170202.jpg
  12551   Tue Oct 11 13:30:49 2016   gautam   Update   SUS   PRM LR problematic again

Perhaps the problem is electrical? The attached plot shows a downward trend for the LR sensor output over the past 20 days that is not visible in any of the other 4 sensor signals. The Al foil was shorting the electrical contacts for nearly 2 months, so perhaps some part of the driver circuit needs to be replaced? If so, a Satellite Box swap should tell us more; I will switch the PRM and SRM satellite boxes. It could also be a dying LED on the OSEM itself, I suppose. If we are accessing the chamber, we should come up with a more robust insulating cap solution for the OSEMs rather than this hacky Al foil + kapton arrangement.

The PRM and SRM Satellite boxes have been switched for the time being. I had to adjust some of the damping loop gains for both PRM and SRM and also the PRM input matrix to achieve stable damping as the PRM Satellite box has a Side sensor which reads out 0-10V as opposed to the 0-2V that is usually the case. Furthermore, the output of the LR sensor going into the input matrix has been turned off.

 

  12552   Wed Oct 12 13:34:28 2016   gautam   Update   SUS   PRM LR problematic again

Looks like what were PRM problems are now seen in the SRM channels, while PRM itself seems well behaved. This supports the hypothesis that the satellite box is problematic, rather than any in-vacuum shenanigans.

Eric noted in this elog that when this problem was first noticed, switching Satellite boxes didn't seem to fix the problem. I think that the original problem was that the Al foil shorted the contacts on the back of the OSEM. Presumably, running the current driver with (close to) 0 load over 2 months damaged that part of the Satellite box circuitry, which led to the subsequent observations of glitchy behaviour after the pumpdown. Which begs the question - what is the quick fix? Do we try swapping out the LM6321 in the LR LED current driver stage?

GV Edit Nov 2 2016: According to Rana, the load of the high speed current buffer LM6321 is 20 ohms (13 from the coil, and 7 from the wires between the Sat. Box and the coil). So, while the Al foil was shorting the coil, the buffer would still have seen at least 7 ohms of load resistance, not quite a short circuit. Moreover, the schematic suggests that the kind of overvoltage protection scheme suggested on page 6 of the LM6321 datasheet has been employed. So it is becoming harder to believe that the problem lies with the output buffer. In any case, we have procured 20 of these discontinued ICs for debugging should we need them, and Steve is looking to buy some more. Ben Abbot will come by later in the afternoon to try and help us debug.

  12553   Wed Oct 12 15:01:22 2016   steve   Update   SUS   SOS ITM baffle plates are ready

The two 40 mm aperture baffles at the ends were replaced by 50 mm ones. ITM baffles with 50 mm aperture are baked and ready for installation.

Quote:

 Green welding glass 7" x 9"   shade #14 with 40 mm hole and mounting fixtures are ready to reduce scatter light on SOS

PEEK 450CA shims and U-shaped clips  will keep these plates damped.

 

 

Attachment 1: baffle7x9_1.5.jpg
Attachment 2: baffle_holder.jpg
Attachment 3: baffle_top_view.jpg
  12569   Wed Oct 19 08:28:11 2016   Steve   Update   SUS   ITMY_UL

Everybody is happy, except ITMY_UL or its satellite box.

Gautam shows perfect form in the OMC chamber.

Attachment 1: 12hrs.png
Attachment 2: vent79.jpg
  12590   Tue Nov 1 09:03:08 2016   Steve   Update   SUS   seismic activity is up

The Salton Sea is shaking again.

 

Attachment 1: seismicActivity.png
  12598   Thu Nov 3 16:30:42 2016   Lydia   Configuration   SUS   ETMX to coil matrix expanded

[ericq, lydia]

Background: 

We believe the optimal OSEM damping would use an input matrix diagonalized to the free swing modes of the optic, and an output matrix which drives the coils appropriately to damp these free swing modes. As was discovered, a free swinging optic does not necessarily have eigenmodes that match up perfectly with pitch and yaw. However, in the current state the "TO_COIL" output matrix that determines the drive signals in response to the diagonalized sensor output also controls the drive signals for the oplevs, LSC/ASC, and alignment biases. So attempts to diagonalize the output matrix to agree with the input matrix have resulted in problems elsewhere (see previous elog). So, we want to expand the "TO_COIL" matrices to treat the OSEM sensor inputs separately from the others.

Today:

  • We modified the ETMX suspension model (c1scx) to use a modified copy of the sus_single_control block (sus_single_control_mod) that has 3 additional input columns. These are for the sensing modes determined by the input matrix, and are labeled "MODAL POS", "MODAL PIT", and "MODAL YAW." 
    • The regular POS, PIT, and YAW columns no longer include the diagonalized OSEM sensor signals for ETMX.
    • The suspension screen is now out of date; it doesn't show the new columns under Output Filters, and the summed values displayed for each damping loop do not include the OSEM damping.
  • The new matrix can be accessed at /opt/rtcds/caltech/c1/medm/c1scx/C1SUS_ETMX_TO_COIL.adl (see Attachment 1). For now, it has the naive values in the new columns so the damping behavior is the same. 
  • In trying to get a properly generated MEDM screen for the larger matrix, we discovered that the Simulink block for TO_COIL specifies in its description a custom template for the MEDM autogeneration. We made a new version of that template with extra columns and new labels, which can be reused for the other suspensions. These templates are in /opt/rtcds/userapps/release/sus/c1/medm/templates; the new one is SUS_TO_COIL_MTRX_EXTRA.adl
  • I will be setting the new column values to ones that represent the diagonalized free swing modes given by the input matrix. Hopefully this will improve OSEM damping without getting in the way of anything else. If this works well, the other SUS models can be changed the same way. 
Attachment 1: 01.png
  12601   Mon Nov 7 08:00:11 2016   Steve   Update   SUS   SRM - PRM sat. amp swap

I just realized that Gautam set this test up and turned the damping off. He will explain the details.

 

Attachment 1: SRM.jpg
Attachment 2: SRM-UR_OK.png
  12602   Mon Nov 7 16:05:55 2016   gautam   Update   SUS   PRM Sat. Box. Debugging

Short summary of my Sat. Box. debugging activities over the last few days. Recall that the SRM Sat. Box has been plugged into the PRM suspension for a while now, while the SRM has just been hanging out with no electrical connections to its OSEMs.

As Steve mentioned, I had plugged Ben's extremely useful tester box (I have added these to the 40m Electronics document sub-tree on the DCC) into the PRM Sat. Box and connected it to the CDS system over the weekend for observation. The problematic channel is LR. Judging by Steve's 2-day summary plots, LR looks fine. There is some unexplained behavior in the UR channel, but this is different from the glitchy behaviour we have seen in the LR channel in the past. Moreover, subsequent debugging activities did not suggest anything obviously wrong with this channel, so no changes were made to UR. I then pulled out the PRM Sat. Box for further diagnostics, and also, for comparison, the SRM Sat. Box, which has been hooked up to the PRM suspension, as we know this has been working without any issues.

Tracing out the voltages through the LED current driver circuit for the individual channels, and comparing the performance between PRM and SRM sat. boxes, I narrowed the problem down to a fault in either the LT1125CSW Quad Op-Amp IC or the LM6321M current driver IC in the LR channel. Specifically, I suspected the output of U3A (see Attachment #1) to be saturated, while all the other channels were fine. Looking at the spectrum at various points in the circuit with an SR785, I could not find significant difference between channels, or indeed, between the PRM/SRM boxes (up to 100kHz). So I decided to swap out both these ICs. Just replacing the OpAmp IC did not have any effect on the performance. But after swapping out the current buffer as well, the outputs of U3A and U11 matched those of the other channels. It is not clear to me what the mode of failure was, or if the problem is really fixed. I also checked to make sure that it was indeed the ICs that had failed, and not the various resistors/capacitors in the signal path. I have plugged in the PRM sat. box + tester box setup back into our CDS data acquisition for observation over a couple of days, but hopefully this does the job... I will update further details over the coming days.

I have restored control to PRM suspensions via the working SRM sat. box. The PRM Sat. Box and tester box are sitting near the BS/PRM chamber in the same configuration as Steve posted in his earlier elog for further diagnostics...


GV Edit 2230 hrs 7Nov2016: The signs from the last 6 hours have been good - see the attached minute trend plot. Usually, the glitches tend to show up in this sort of time frame. I am not quite ready to call the problem solved just yet, but I have restored the connections to the SRM suspension (the PRM and SRM Sat. Boxes are still switched). I've also briefly checked the SRM alignment, and am able to lock the DRMI, but the lock doesn't hold for more than a few seconds. I am leaving further investigations for tomorrow; let's see how the Sat. Box does overnight.

Attachment 1: D961289-B2.pdf
Attachment 2: PRMSatBoxtest.png
  12606   Tue Nov 8 11:54:38 2016   gautam   Update   SUS   PRM Sat. Box. looks to be fixed

Looks like the PRM Sat. Box is now okay, no evidence of the kind of glitchy behaviour we are used to seeing in any of the 5 channels.

Quote:
 
GV Edit 2230 hrs 7Nov2016: The signs from the last 6 hours has been good - see the attached minute trend plot. Usually, the glitches tend to show up in this sort of time frame. I am not quite ready to call the problem solved just yet, but I have restored the connections to the SRM suspension (the PRM and SRM Sat. Boxes are still switched). I've also briefly checked the SRM alignment, and am able to lock the DRMI, but the lock doesn't hold for more than a few seconds. I am leaving further investigations for tomorrow, let's see how the Sat. Box does overnight.

 

  12612   Sun Nov 13 23:42:43 2016   Lydia   Update   SUS   ETMX output matrix data

I took data of the ETMX SUSPOS, SUSPIT and SUSYAW channels while driving each of the 4 face coils. I manually turned off all the damping except the side. 

Excitation: I used white noise bandpassed from 0.4 to 5 Hz in order to examine the responses around the resonance frequencies. To avoid ringing things up too much, I started with a very weak drive signal and gradually increased it until it seemed to have an effect on the mirror motion by looking at the oplev signals/sensor RMS values on the SUS screen; it's possible I'll need to do it again with a stronger signal if there's not enough coherence in the data. 

Finding the matrix: The plan is to estimate the transfer function from the coil drive signal to the sensed degrees of freedom (specified by the already diagonalized input matrix). This transfer function can be averaged around the resonance peak for each dof to find the elements of the matrix that converts coil signals to dof responses (the "response matrix", which is the inverse of the output matrix). Each column of the response matrix gets normalized so that the degrees of freedom influence the drive signals in the right ratio.
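For illustration, here is a minimal sketch of that estimation step, assuming the drive and dof time series have already been fetched. The sample rate, resonance frequencies, band width, and averaging/sign conventions below are placeholders, not the actual script:

# Sketch of the response-matrix estimation: estimate the transfer function from each
# coil drive to each sensed dof, average it in a band around the dof's resonance,
# then (elsewhere) normalize each column and invert to get the output matrix.
import numpy as np
from scipy import signal

fs = 256.0                                          # assumed sample rate of the data
f_res = {"pos": 0.98, "pit": 0.69, "yaw": 0.74}     # placeholder resonance frequencies (Hz)
band = 0.05                                         # half-width of the averaging band (Hz)

def averaged_response(drive, dof_signal, f0):
    """Average the drive->dof transfer-function estimate in a band around f0."""
    freqs, Pxy = signal.csd(drive, dof_signal, fs=fs, nperseg=int(64 * fs))
    _, Pxx = signal.welch(drive, fs=fs, nperseg=int(64 * fs))
    tf = Pxy / Pxx
    sel = (freqs > f0 - band) & (freqs < f0 + band)
    avg = tf[sel].mean()
    return np.sign(avg.real) * np.abs(avg)          # keep the sign so the polarity is right

# response[i][j] would then be averaged_response(coil_drive[j], dof[i], f_res of dof i);
# normalizing each column and inverting gives the candidate output matrix.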

Other notes: 

  • I had some trouble getting the awg python library to work: I had to manually edit a CDLL statement to use the absolute path of an .so file. I wasn't sure what environment variable to set to make it look in the right folder automatically.
  • The awg ArbitraryLoop object seems to be affected by cds getdata calls (the EXC signal stops early and then stop() hangs), so I ended up doing the excitation and data reading in 2 separate scripts. 
  • Reminder that the watchdogs must be on "Normal" for the EXC signal to make it to the coils, so the damping must be turned off manually with the watchdogs still on if you want to drive without damping. 
  12643   Mon Nov 28 10:27:13 2016   gautam   Update   SUS   ITMY UL glitches are back

I left the tester box plugged in from Thursday night to Sunday afternoon, and in this period, the glitches still appeared in (and only in) the UL channel.

So yesterday evening, I pulled the Sat. Box. out and checked the DC voltages at various points in the circuit using a DMM, including the output of the high current buffer that supplies the drive current to the shadow sensor LEDs. When we had similar behaviour in the PRM box, this kind of analysis immediately identified the faulty component as the high current buffer IC (LM6321M) in the bad channel, but everything seems in order for the ITMY box. 

I then checked the Satellite Amplifier Termination Board, which basically just adds 100 ohm series resistors to the output of the PD readout; all the resistors seem fine, and the piece of insulating material affixed to the bottom of this board is also intact. I then used the SR785 in AC-coupled mode to look at the high-frequency spectrum at the same points where I checked the DC voltages with the DMM (namely the drive voltage to the LEDs, and the PD readout voltages on the PCB as well as on the pins of the connector on the outside of the box after the termination board, leading to the DAQ), and nothing sticks out here in the UL channel either. Of course it could be that the glitches are intermittent, and during my tests they just weren't there...

I am hesitant to start pulling out ICs and replacing them without any obvious signs of failure from them, but I am out of debugging ideas...


One possibility is that the problem lies upstream of the Sat. Box - perhaps the UL channel in the Suspension PD Whitening and Interface Board is faulty. To test, I have now hooked up ITMY Sat. Box. + tester box to the signal chain of ETMY. If I can get the other tester box back from Ben, I will plug in the ETMY sat. box. + tester to the ITMY signal chain. This should tell us something...

Attachment 1: ITMY_satboxSpectra.pdf
  12650   Wed Nov 30 14:53:56 2016   Steve   Update   SUS   new sus wire stored in N2 filled desiccator

The new SOS sus wire is finally stored in a nitrogen-filled desiccator. This was recommended by Ca. Fine Wire to minimize aging (oxidation).

The desiccator was pumped down with the "aux-drypump" to 1 Torr and then filled up with N2 to 760 Torr. This was repeated 2x and the desiccator was sealed off.

Attachment 1: dessicatorC.jpg
Attachment 2: wireN2c.jpg
  12720   Sat Jan 14 22:39:30 2017   rana   Summary   SUS   ITMY is drifting?

https://nodus.ligo.caltech.edu:30889/detcharsummary/day/20170114/sus/susdrift/

ITMY is not like the others. Real or just OSEM madness?

  12722   Mon Jan 16 18:54:01 2017   rana   Update   SUS   BS: whitening re-engaged

Found that the BS whitening was off. Gautam says that "it has always been that way" and "there's nothing in the elog about this" and "I have no special relationship with Putin".

I looked at DV and DTT while turning the OSEM whitening back on. As expected, the sensor noise improved by 10x above 10 Hz. The time series shows no problems - it's just less fuzzy now.

All OSEM spectra after the switch are shown on the upper panel of the plot. The lower panel shows a comparison of BS UL before/after. To rotate the DTT PDF landscape output I typed this:

pdftk BS-white.pdf cat 1N output BSwhite.pdf

"if you see something, do something"

Attachment 1: BSwhite.pdf
  12725   Mon Jan 16 23:25:07 2017   gautam   Update   SUS   MC1 SUS electronics investigation

[rana,gautam]

Summary:

  • MC1 glitchy behaviour is back
  • Found a broken LEMO cable, left unplugged for the night -> to be repaired tomorrow
  • Further diagnosis to follow

During the course of Rana's inspection of the general state of the IFO, he commented that there seemed to be several seismic-related IMC lock losses in the time that he had been observing it. This issue looked suspiciously like the MC1 glitches I had noticed sometime late last year, especially since each time the IMC would unlock, we could see significant amounts of motion on MC REFL. To diagnose, we did the following:

  1. Closed PSL shutter
  2. Ramped down the gains of the MC1 damping loops by factor of 1000 in ~4 secs using z step
  3. Shut down the watchdog for MC1
  4. Observed dataviewer traces for glitches

Sure enough, there were several glitches that occurred in all 5 sensor channels. These glitches varied in size from a few counts (the smaller ones) to 60-70 counts for the bigger ones. In the past, squishing the LEMO connector on the front of the PD whitening board (D000210) had apparently made the glitching go away. So tonight, for starters, we squished everything else - Sat. Box connectors, the breakout board from Sat. Box to whitening board on the back of 1X6, and the DB connector on the front of the whitening board. This had no effect - the glitching remained consistent.

Next, Rana pulled out two of the three 4-pin LEMOs, and left only those corresponding to UL/LL plugged in - but the glitching persisted in these two channels. We then pulled out the board. It was installed in 1998, but has a sticker on it that says "fixed in 2003". Not sure what the fix was. Visual inspection of the circuit didn't show anything obviously faulty, but it did look like the two MAX333A quad switches (these control whether the whitening is bypassed or not) had been replaced at some point. There are other undesirable features, such as the use of thick film resistors, but nothing that would explain the glitchy behaviour.

Next, we re-inserted the whitening board back into its original slot in the Eurocrate, but switched the cables (both D sub and LEMO, but only on the whitening board end) between the boards for MC1 and MC3 (i.e. MC1 cables were routed through the whitening board that was originally used for MC3, and vice-versa). But the glitches remained consistent on the MC1 channels. So it looks like the board is not a likely culprit.

Finally, we went in and squished all the cables from the PD whitening board to the ADC (via an AA filter board). For some of the LEMO cables from the whitening board, the LEMO backshells were not properly tightened. Rana fixed these before putting them back in. Some of the connectors were also not pushed in tightly enough, Rana heard the click when he pushed them in. The cables from the adaptor board to the ADC itself looked fine, it was screwed on at both ends, and all these connections looked snug enough. In the interest of completeness, Rana also pushed in the backplane connectors on the Eurocrate (these supply the signals from the BIO cards to switch the whitening ON/OFF). The one corresponding to MC1 was indeed a little loose.

Coming back to the control room, we saw that the MC1 LR sensor was dead. After some investigation, Rana found that on the AA filter board end, one of the 4pin LEMOs from the whitening board had one of its wires come unstuck from where it was soldered (this presumably happened while we were squishing cables tonight, as the LR channel was fine before that). Also, there was no heat shrink used on any of the solder joints. Could this explain the glitchy behaviour? Perhaps, but the glitches remained in the 3 channels that were connected. Anyways, I will repair this cable tomorrow, and we can see if this has fixed the problem or not..


Some misc points:

  1. Regarding the adaptor boards that take the PD signals from the satellite box and route it to the whitening board, there are some clamps that hold the IDE connectors in place for MC1, MC2 and MC3 boards, but not for the others (see attached picture). Steve, can we install clamps for all of the boards? [taken care of, see here]
  2. The whitening boards are not screwed in place into the Eurocrate. This should be rectified.

PSL shutter is closed, MC1 watchdog is shutdown for the night.

Attachment 1: 20170116_231625.png
Attachment 2: IMG_7175.JPG
Attachment 3: IMG_7174.JPG
  12728   Tue Jan 17 21:29:52 2017   gautam   Update   SUS   MC1 SUS electronics investigation

 

Quote:
 

After some investigation, Rana found that on the AA filter board end, one of the 4pin LEMOs from the whitening board had one of its wires come unstuck from where it was soldered (this presumably happened while we were squishing cables tonight, as the LR channel was fine before that). Also, there was no heat shrink used on any of the solder joints.

The faulty cable has been re-soldered (with heat shrink) and replaced. All 5 sensor signals appear normal on dataviewer now. I am leaving things in this state for the night, let us see if the glitches return overnight.

PSL shutter remains closed

  12731   Wed Jan 18 11:40:54 2017   gautam   Update   SUS   MC1 SUS electronics investigation

After the repair of the faulty LEMO cable, I left MC1 with its watchdog off overnight. Unfortunately, it looks like the problem still persists. The first attachment shows a second trend plot for the past 15 hours. Towards the left end of the plot, you can see where I re-connected the LEMO cable for the LR/UR channels.

A couple of months ago, I added a BLRMS block for the IMC optics that calculates BLRMS for the shadow sensor output as well as the coil output. Looking at this trend overnight, I noticed that the glitches appear in the coil outputs as well, as shown in the plot below, which is for a 1 hour stretch last night (I used the full data from a 16 Hz coil output channel and not the BLRMS; I am not sure if there is a DQ'ed version of the coil outputs).

Zooming in further to one of these glitches, we can see that the glitches in the coil and shadow sensor signals are in fact coincident.

But given that the watchdog was turned off all this time, the only voltage going to the coils should be the DC bias voltages. So does this not support the hypothesis that the problem lies in the part of the signal chain that supplies the bias voltage to the coils?

Never mind: the "coil output" channel isn't a true readback of the voltage to the coil, but is the calculated damping output (which is not sent to the coils when the watchdog is shut down).

  12734   Wed Jan 18 14:23:47 2017   gautam   Update   SUS   MC1 SUS electronics investigation

As part of the ongoing debugging, I've switched the MC1 and MC3 satellite boxes. Both MC1 and MC3 have their watchdogs shut down for the moment.

  12736   Wed Jan 18 18:44:53 2017   gautam   Update   SUS   MC1 SUS electronics investigation
Quote:

As part of the ongoing debugging, I've switched the MC1 and MC3 satellite boxes. Both MC1 and MC3 have their watchdogs shudown for the moment.

In the last 3.5 hours, there has been nothing conclusive - no evidence of any glitching in either MC1 or MC3 sensor channels. I am going to hold off on doing the LEMO cable swap test for a few more hours, to see if we can rule out the satellite box.

  12737   Thu Jan 19 08:25:12 2017   Steve   Update   SUS   MC1 SUS electronics investigation
Quote:
Quote:

As part of the ongoing debugging, I've switched the MC1 and MC3 satellite boxes. Both MC1 and MC3 have their watchdogs shudown for the moment.

In the last 3.5 hours, there has been nothing conclusive - no evidence of any glitching in either MC1 or MC3 sensor channels. I am going to hold off on doing the LEMO cable swap test for a few more hours, to see if we can rule out the satellite box.

No change.

Attachment 1: MC1_MC3_ITMY_ETMX_sensors.png
Attachment 2: sensors_UL.png
  12739   Thu Jan 19 12:00:10 2017   gautam   Update   SUS   MC1 SUS electronics investigation

Going through the last ~20 hours of data, the MC1 sensor channels look glitch free the entire period. However, there is a ~10min period around 1PM UTC today when there were a couple of glitches ~80 counts in size in all the MC3 sensor channels. The attached shows the full 2k data from all 10 channels (MC1 and MC3 sensors) around this time.

Is this sufficient evidence to conclude that the Satellite boxes are to blame? It's hard to explain why the glitches come and go in this fashion, and also the apparent difference in the length of time for which the glitches persist. Here, in almost 24 hours, there is one incidence of glitching, but in yesterday's trend plot, the glitching remains present over several hours... The amplitude of the glitches, and their coincidence in all 5 channels, seems consistent with what we have been seeing though...

 

  12741   Thu Jan 19 19:56:09 2017   rana   Update   SUS   MC1 SUS electronics investigation

Might be. Or it might be in the satellite box cabling. Hard to tell without a tester. I recommend you squish the cables on there and lock the MC and get back to the usual business. I'll check on sat. box with Ben.

Quote:

 

Is this sufficient evidence to conclude that the Satellite boxes are to blame? It's hard to explain why the glitches come and go in this fashion, and also the apparent difference in the length of time for which the glitches persist. Here, in almost 24 hours, there is one incidence of glitching, but in yesterday's trend plot, the glitching remains present over several hours... The amplitude of the glitches, and their coincidence in all 5 channels, seems consistent with what we have been seeing though...

 

  12742   Fri Jan 20 11:16:30 2017   gautam   Update   SUS   MC1 SUS electronics investigation

Both suspensions have been relatively well behaved for the best part of the last two days, since I effected the Satellite Box swap. This morning, I set about re-enabling the damping and locking the MC. Judging by the wall StripTool, it stayed locked for about 30 mins or so, after which the glitching returned.

Attached is a screenshot of the sensor signals from MC1 and MC3 (second trend), and also the highest band (>30Hz) BLRMS output for the same 10 channels (full data sampled at 16Hz). Note that MC1 and MC3 satellite boxes remain swapped. So the glitches now have migrated to the MC3 channels.

I need to think about whether this is just coincidence, or if my re-enabling the damping has something to do with the recurrence of the glitching...


Addendum 4.30pm: I've also re-aligned the Y arm. Its alignment has been stable over the last few hours; despite several mode cleaner lock losses in between, it recovers good IR transmission. The X arm has been re-aligned to green, but I can't get it locked to the IR - every time I turn the LSC output to ETMX on, there seems to be some large misalignment applied to it. c1iscaux was dead; I restarted it by keying the crate. I haven't had time to investigate the X arm locking in detail; I will continue to debug this.

  12745   Mon Jan 23 10:24:01 2017   Steve   Update   SUS   MC1 SUS electronics investigation

Two-day plot of glitching suspensions: MC3, ITMY and ETMX

Attachment 1: 3glitchingSUS.png
  12753   Wed Jan 25 10:46:58 2017   steve   Summary   SUS   oplev laser summary updated

Oct. 5, 2015    ETMY He/Ne replaced by 1103P, sr P919645, made Dec 2014, after 2 years

Jan. 24, 2017   ETMY He/Ne replaced by 1103P, sr P947049, made Apr 2016, after 477 hrs running hot

Attachment 1: oplev_sums.png
  12760   Fri Jan 27 14:50:04 2017   Steve   Update   SUS   wire standoffs update

The 3 pieces of sapphire v-groove test cuts are back. They look good. The suspension wire, 0.0017" (43 micron), is shown in some of the pictures.

  12761   Fri Jan 27 15:36:17 2017   Koji   Update   SUS   wire standoffs update

Very nice! I got excited.

  • Why don't you ask Calum and co. to check the groove size with their microscopes? Just give them the samples and the wire.
  • Do we want to make a simple "guitar" setup to measure the vibration Qs with Al piece, glass prism, ungrooved Sapphire, this grooved sapphire, grooved ruby, etc?
  12802   Mon Feb 6 10:05:28 2017   Steve   Update   SUS   clamped cables

The bottom 5 cable connections from the Sat-Amp to the Whitening Filter at 1X5 were clamped today.

Attachment 1: clamped.jpg
  12811   Wed Feb 8 10:16:39 2017   steve   Update   SUS   clipping ITMX oplev

The ITMX oplev beam is clipping. It will be corrected with the arm locked.

 

Attachment 1: ITMX_oplev_clipping.jpg
Attachment 2: ITMX_clipping.jpg
  12846   Thu Feb 23 09:32:20 2017   Koji   Update   SUS   wire standoffs update

Kyle took high quality images of the three sapphire prisms using the microscope @Downs. He analyzed the images to determine the radius of the groove.

They all look sufficiently sharp for a 46um steel wire. Thumbs up.
I am curious to see how the wire Q is with grooved sapphires, ungrooved sapphires, grooved ruby, grooved aluminum stand off, and so on.

Attachment 1: Sapphire_prism_1(A015).png
Attachment 2: Sapphire_prism_2(A016).png
Attachment 3: Sapphire_prism_3(A014).png
  12886   Tue Mar 14 10:40:30 2017   Steve   Update   SUS   OSEM filters are in

We have 50 pieces in the clean cabinet.

Attachment 1: filters.jpg
  12889   Thu Mar 16 08:22:16 2017   Steve   Update   SUS   ETMX damping

Finally I see what kicks the sus damping off

Quote:

Huh? So should we ask them to put the container back? Or do you have some other theory about ETMX tripping that is not garbage related?

Quote:

ETMX sus damping recovered.

Note: The giant metal garbage container was moved from the south west corner of CES months ago.

 

 

Attachment 1: laser_power_glitch.png
  12892   Fri Mar 17 15:30:39 2017   Steve   Summary   SUS   oplev laser summary updated

         March  17,  2017         ETMX laser replaced at LT 3y with 1103P, sn T8070866

Attachment 1: oplev_sums.png
  12931   Fri Apr 7 13:46:23 2017   Steve   Update   SUS   ETMX enclosure feedthrough

ETMX enclosure feedthrough cabling corrected.

Attachment 1: bad.jpg
Attachment 2: good.jpg
  12933   Mon Apr 10 09:58:35 2017   Steve   Summary   SUS   oplev laser summary updated

 

Quote:

                    Oct.  5, 2015              ETMY He/Ne replaced by 1103P, sr P919645,  made Dec 2014, after 2 years

                   Jan. 24, 2017              ETMY He/Ne replaced by 1103P,  sr P947049,  made Apr 2016,  after 477 hrs running hot

    Jan. 26, 2017              RIN test started with P947034, made Apr. 2016  

    Apr.  10,  2017              purchased two 1103P from Edmund:  sr P964438 & sr P964431, made 02/2017

   

  12941   Thu Apr 13 09:48:37 2017   Steve   Update   SUS   ITMY-UL and ETMX sensors

Why can ITMY UL not see this earthquake? SRM and PRM are misaligned. ETMX is still not well.

We have to remember to check OSEM - magnet alignment when vented.

Attachment 1: ITMY.png
Attachment 2: ITMY-UL.png
Attachment 3: ETMX?.png
  12995   Wed May 17 08:19:59 2017   Steve   Update   SUS   4.1M earthquake

Sus damping recovered. The ETMY oplev needs to be recentered.

GV May 17 11am: I shut down the BS, SRM, ITMX and ITMY watchdogs, as the coil-driver boards for these optics are presently not installed.
 

Attachment 1: eq_4.1_SantaBarbara.png
Attachment 2: 4.1m_Isla_Vista_CA.png
  13030   Thu Jun 1 16:21:55 2017   Steve   Update   SUS   wire standoffs update

Ruby wire standoffs received from China. I looked at one of them with our small USB camera. They did a good job. The long edges of the prism are chipped.

The v-groove cutter must avoid them. Pictures will follow.

 

  13039   Mon Jun 5 10:30:45 2017   Steve   Update   SUS   ruby wire standoff pictures

Atm. 1 & 5 show the ruby (R ~10 mm) as it is seated on the Al SOS test mass.

Atm. 2, 3 & 4 show the chipped long edges, with the 43 micron OD SOS sus wire for calibration.

Quote:

Ruby wire standoff received from China. I looked one of them with our small USB camera.  They did a good job. The  long edges of the prism are chipped.

The v-groove cutter must avoid them. Pictures will follow.

 

 

Attachment 1: A087_R.png.bmp
Attachment 2: A097_chipped_edges.png.bmp
Attachment 3: A099_cal_wire.png.bmp
Attachment 4: A101_cal_wire_43_micron.png.bmp
Attachment 5: Al_SOS_R39mm.jpg
  13123   Mon Jul 17 16:22:01 2017   Steve   Update   SUS   ruby wire standoff pictures

Bluebean Optical Tech Limited of Shanghai delivered 50 pieces of red ruby prisms with radius. The first prism's pictures were taken on June 5 and were retaken later, labeled BB#1.

More samples were selected randomly, one from each bag of 5, and labeled BB#2 through BB#6.

The R10 mm radius can be seen against the ruler edge. The v-groove ridge was marked with blue marker, and pictures were taken from both sides of this ridge. The top view is shown with the wire lying across it.

SOS sus wire of 43 micron OD was used for calibration; it was placed close to the side being focused on.

The v-groove ridge surface quality was evaluated on a scale of 1 to 10, with 10 being the most positive.

BB#   Edge quality score
1     4
2     8
3     3
4     9.5
5     2
6     9

Remaining thing to examine: take a picture of the ridge contacting the SOS, viewed from the side.

Attachment 1: contacting_ridge.bmp
Attachment 2: contacting_ridge.png