In the morning, Steve will start opening the north BS door so that we can enter to inspect the PRM LR OSEM.
For the ITMY, I squished together the cables which are in the 'Cable Interface Board' which lives in the rack. This thing takes the 64 pin IDC from the satellite module and converts it into 2 D-sub connectors to go to the PD whitening board and the coil driver board. Let's see if the ITMY OSEM glitches change character overnight.
[Teng, Johannes, Lydia, gautam]
The modes look like they're at the right frequencies, so this points more and more towards an LED or satellite box issue.
We peeked into the BS-PRM chamber via the ITMX chamber to see if we could shed any light on this situation. It's hard to get a picture that is in focus, but it looks quite clear that the LR LED (in the lower left when viewed from the HR side) isn't anywhere near as bright as the rest (see Attachment #1). Various hypotheses include a failed LED / a piece of Al foil blocking the LED / the teflon aperture having slipped over the LED. But it looks like we can't solve this without opening up the BS-PRM chamber. The plan tomorrow is to open up the chamber and pull out the problematic coil. Once we have a better idea of what is going wrong, we can decide what the appropriate course of action is - replace the OSEM or something else.
As part of the diagnosis, I switched the PRM and SRM satellite boxes earlier this evening, around 6pm. They remain in this switched state for now.
Steve, we plan to take the BS-PRM heavy door off tomorrow morning.
Edit 7.30pm: I have managed to recover in-air locking of the Y arm, and the transmission is up at ~0.6 again, which is what we were seeing prior to touching anything on the BS-PRM table, so it looks like the tip-tilt has not gone badly astray. I have also restored the Satellite boxes so that both PRM and SRM have their designated boxes.
Detailed elog to follow, but here is a summary of today's activities:
[steve, teng, johannes, lydia, gautam]
Depending on the X arm situation, we will finish putting back all the heavy doors on Monday and start the pumpdown.
GV Edit 11.30pm:
Looks like on Monday, we will look to put the heavy doors on ITMY, ITMX and ETMX chambers, and begin the pumpdown
We ran the scripts to diagonalize the damping matrices using the free swinging data from Saturday night/Sunday morning. The actual entries used for damping have not been changed. However, we did generate updated matrices for all the main optics (not including the mode cleaner optics, which were not free swinging over the weekend).
Last night from 8:30 pm to 8:30 am PDT, ETMY UL signal was glitchy again. As of now it seems to have quieted back down, but we pushed on the cables on the board at the Y end to hopefully prevent it from coming back. After doing so it still seems to be behaving well.
We continued to work on the diagonalization scripts today and devised a way of choosing starting parameters that seems to work much better, and is easier to use, than tuning up to 15 parameters by hand per optic.
We still noticed phase problems with ITMY, which appear to be preventing good diagonalization (See Attachment 1). Almost every degree of freedom has a significant imaginary part in the sensing matrix. We looked at the phases of the cross spectra in DDT and saw that indeed, the OSEM signals do not have the appropriate relative phases at the peak frequencies, especially in PIT and YAW (see Attachment 2: the phase at the peak is about 30 degrees when it should be 180). These phases are different for data takes ~24 hours apart, but are still wrong. We also looked at this information for ETMY and saw the correct behavior. We temporarily moved the pitch and yaw sliders for ITMY and looked at the OSEM response on a striptool, and the signals moved in the expected way. Can anyone suggest a reason why this would be happening? Is there another stretch of data (besides this past weekend) which would be good to compare to?
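For reference, here is a minimal offline version of the cross-spectrum phase check we did in DTT, written as a Python sketch. The sample rate, segment length, and the choice of the UL/LL pair at the pitch peak are illustrative assumptions; the arrays ul and ll stand for already-loaded OSEM sensor time series.

import numpy as np
from scipy.signal import csd

fs = 2048.0        # assumed sample rate of the sensor channels
f_pit = 0.692      # pitch peak frequency from the free-swing spectra

# cross spectrum with ~0.01 Hz resolution (100 s segments)
f, Pxy = csd(ul, ll, fs=fs, nperseg=int(100 * fs))
idx = np.argmin(np.abs(f - f_pit))
print("UL/LL phase at %.3f Hz: %.1f deg" % (f[idx], np.degrees(np.angle(Pxy[idx]))))
# For a pure pitch mode, the upper and lower OSEMs should be ~180 deg apart;
# we were instead seeing ~30 deg.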
Today the main optics were free swinging for several hours, so I attempted diagonalization in vacuum.
We built matrices for ITMY and ETMY by driving one degree of freedom at a time with awggui, while the damping was on. These have been applied to the damping loops.
Pitch: 1158085097  Yaw: 1158086537  Pos: 1158089237  Side: 1158087977
Pitch: 1158095897  Yaw: 1158097577  Pos: 1158099377  Side: 1158100817
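If anyone wants to repeat this, the free-swing stretches above can be pulled with the nds2 client along these lines. The server name/port, channel names, and duration here are assumptions, so adjust to taste:

import nds2

chans = ['C1:SUS-ITMY_SENSOR_%s' % s for s in ('UL', 'UR', 'LR', 'LL', 'SD')]
start = 1158085097      # the first "Pitch" stretch listed above
dur = 1200              # seconds of data to grab

conn = nds2.connection('nds40.ligo.caltech.edu', 31200)
buffers = conn.fetch(start, start + dur, chans)
data = {b.channel.name: b.data for b in buffers}   # one numpy array per channel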
All is not lost. I've stuck and unstuck optics around a half dozen times. Can you please post the zoomed in time series (not trend) from around the time it got stuck? Sometimes the bias sliders have to be toggled to make the bias correct. From the OSEM trend it seems like it got a large Yaw bias. May also try to reseat the satellite box cables and the cable from the coil driver to the cable breakout board in the back of the rack.
Here are the timeseries plots. I've zoomed in to right after the problem - did you want before? We pretty much know what happened: c1susaux was restarted from the crate but the damping was on, so as soon as the machine came back online the damping loops sent a huge signal to the coils. (Also, it seems to be down again. Now we know what to do first before keying the crate.) It seems like both right side magnets are stuck, and this could probably be fixed by moving the yaw slider. Steve advised that we wait for an experienced hand to do so.
susaux is responsible for turning on/off the inputs to the coil driver, but not the actual damping loops. So rebooting susaux only does the same as turning the watchdogs on/off, and it shouldn't be a big issue.
Both before and after would be good. We want to see how much bias and how much voltage from the front ends were applied. c1susaux could have put in a huge bias, but NOT a huge force from the damping loops. But I've never seen it put in a huge bias, and there's no way to prevent this anyway without disconnecting cables.
I think it's much more likely that it's a little stuck due to static charge on the rubber EQ stop tips and that we can shake it loose with the damping loops.
ITMX is free, OSEM signals all roughly centered.
This was accomplished by rocking the static alignment (i.e. slow controls) pitch and yaw offsets until the optic broke free. This took a few volts back and forth. At this point, I tried to find a point where the optic seemed to freely swing, and hopefully have signals in all 5 OSEMS. It seemed to be free sometimes but mostly settling into two different stationary states. I realized that it was becoming torqued enough in pitch to be leaning on the top-front or top-back EQ stops. So, I slowly adjusted the pitch from one of these states until it seemed to be swinging a bit on the camera, and three OSEM signals were showing real motion. Then, I slowly adjusted the pitch and yaw alignments to get all OSEMS signals roughly centered at half of their max voltage.
I had hoped to do some ALS work, but I realized too late that we loaned our HP analyzer to Andrew. I decided instead to do some ETMX testing.
I have a script running that'll misalign both ETMs by about 0.5 mrad and back, with half hour rests in between. It'll be done around 6AM.
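For the record, the script is essentially a loop like the following sketch (done here with pyepics; the bias channel names and the slider offset standing in for ~0.5 mrad are illustrative, not the exact values used):

import time
from epics import caget, caput

step = 500        # counts on the pitch bias slider, standing in for ~0.5 mrad
dwell = 1800      # half hour rests between moves

for optic in ('ETMX', 'ETMY'):
    chan = 'C1:SUS-%s_PIT_COMM' % optic    # assumed bias channel name
    nominal = caget(chan)
    caput(chan, nominal + step)            # misalign
    time.sleep(dwell)
    caput(chan, nominal)                   # restore
    time.sleep(dwell)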
Seems like the angular position was fairly stable, though there is some change in the ETMX pitch that could be hysteresis or normal drift. I didn't mention it explicitly in the previous log, but the misalignment was purely in pitch. I'll give it another shot with a bigger misalignment, and maybe a mix of pitch and yaw.
This afternoon around 2:45, ITMX started ringing up at ~0.9 Hz for about a minute and then got stuck again. When I noticed this evening, I tried to free it with the alignment sliders but was unable to see any signal on UL or UR. It also looks like the damping for ITMY was turned off at the same time ITMX got stuck (not at the start of its ring up). SRM also has a spike in its motion at this time, and another one a minute later that ended up with the LR OSEM at a much higher level, though the mirror does not appear to be stuck. We didn't see any strange behavior from any of the other optics.
Teng and I were working on diagnosing a problem with the ITMY UL whitening, but by the time we disconnected any applicable cables, the damping for ITMY was already off. Later we unplugged the ITMX PD whitening cables after verifying that the ITMX damping was also already off. This problem may have occurred earlier, while Teng, Eric, and I were examining and pushing in the cables at 1X5 without unplugging anything.
We found that the reason for the bad phase on the ITMY free swing data is because the whitening filter for UL is not being properly turned on. We are in the process of investigating the source of this problem. Right now all the cables to the PD whitening boxes in 1X5 are switched between ITMY and ITMX.
The earthquake shook ITMX free for a short while.
When we plugged the back cables into the whitening boxes yesterday after switching them, two of the ITMX PDMon channels (UR and LR) got stuck at 0. This caused me to believe ITMX was still stuck even after it had been freed. However, it was left in a stuck state overnight and freed again today after this issue was discovered. The alignment sliders have been set to 0 as a safety net to keep ITMX from getting stuck again if c1susaux is restarted again. We switched the cables back and the problem was still there.
The ITMY UL whitening filter problem, which the cables were originally switched to diagnose, was also still there. Ericq suggested we turn off all the whitening filters in order to get diagonalization data that would not show a phase difference between coils. We ran the diagonalization again with all the whitening filters off and got much cleaner results, with no visible cross-coupling peaks remaining between the degrees of freedom (see attachment 1). We did not apply this matrix to the damping, however, because there are elements which have the wrong sign compared to the ideal matrix. Significant adjustments to the output matrix will probably need to be made if this result is to be used. We also verified that the phase problem had been solved in DTT, where we saw the same sign discrepancies as in the matrix below.
Damping can be turned back on, using the old, non-diagonalized matrix currently in effect. There is enough free swing data to diagonalize ITMY now, so feel free to mess with it.
Matrix (wrong signs red, suspiciously small elements orange):

         pit      yaw      pos     side     butt
UL     1.633    0.138    1.224    0.136    0.984
UR    -0.202   -1.768    1.179    0.132   -1.028
LR    -2.000    0.094    0.776    0.107    1.001
LL    -0.165    2.000    0.821    0.111   -0.987
SD     0.900    1.131   -1.708    1.000   -0.107
Motivated by the strange pitch/yaw coupling behavior we ran into while doing diagonalization, we looked at the oplev pitch and yaw free swing spectra for all 4 test masses (see attachment 1). We saw the same behavior there: at the peak frequencies for the angular degrees of freedom, the oplevs saw significant contributions from both pitch and yaw. We also examined the phase between pitch and yaw at these peaks and found that consistently, pitch and yaw were in phase at one of the resonance frequencies and out of phase at the other (ignoring the pos and side peaks).
This corresponds physically to angular motion about some axis that is diagonal, i.e. not perfectly vertical or horizontal. If we trust the oplev calibration, and Eric says that we do, then the angle of this axis of rotation with the horizontal (pitch axis) is

    theta = arctan(Y / P)

where Y and P are the yaw and pitch ASD values at the peak. This will always give an angle between 0 and 90 degrees; which quadrant the axis of rotation occupies can be determined by looking at the phase between pitch and yaw at the same frequencies. 0 phase means that the axis of rotation lies somewhere less than 90 degrees counterclockwise from the horizontal as viewed from the AR face of the optic, and a phase of 180 degrees means the axis is clockwise from horizontal (see attachment 2). Qualitatively, these features show up the same way for segments of data taken at different times. In order to get some quantitative sense of the error in these angles, we found them using spectrogram values with a bandwidth of 0.02 Hz averaged over 4000 seconds.
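In code form, the estimate is just the following (a sketch; P, Y, and the pitch/yaw phase are assumed to come from the spectrogram analysis described above):

import numpy as np

def rotation_axis_angle(P, Y, phase_deg):
    # P, Y: pitch/yaw ASD values at the peak frequency
    theta = np.degrees(np.arctan2(Y, P))   # 0 to 90 deg for positive P, Y
    # phase near 0: axis counterclockwise from horizontal (viewed from the
    # AR face); phase near 180: clockwise. Return a signed angle.
    return -theta if abs(phase_deg) > 90 else theta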
Results (all numbers in degrees unless otherwise specified):

peak 1 (0.692 Hz): pitch/yaw phase: -179.181
peak 2 (0.736 Hz): pitch/yaw phase: 0.0123677

peak 1 (0.502 Hz): pitch/yaw phase: -179.471
peak 2 (0.688 Hz): pitch/yaw phase: -0.43991

peak 1 (0.73 Hz): pitch/yaw phase: -0.227034
peak 2 (0.85 Hz): pitch/yaw phase: -179.856

peak 1 (0.724 Hz): pitch/yaw phase: 6.03312
peak 2 (0.844 Hz): pitch/yaw phase: -176.838
ETMY and ITMX both show a more significant (~4x) contribution from pitch on one peak, and from yaw on the other. This is reflected in the fact that they each have one angle somewhat close to 0 (below 30 degrees) and one close to 90 (above 60 degrees). The other two test masses don't follow this rule, meaning that the 2 angular frequency peaks do not correspond to pitch and yaw straightforwardly.
Also, besides ITMX, the axes of rotation are at least several degrees away from being perpendicular to each other.
Summary: At the 40m meeting yesterday, Eric Q. gave the suggestion that we accept the input matrix weirdness and adjust the output matrix by driving each coil individually so that it refers to the same degrees of freedom. After testing this strategy, I don't think it will work.
Yesterday evening I tested this idea by driving one ITMY coil at a time, and measuring the response of each of the free swing modes at the drive frequency. I followed more or less the same procedure as the standard diagonalization: responses to each of the possible stimuli are compared to build a matrix, which is inverted to describe the responses given the stimuli. For the input matrix, the sensor readings are the responses and the free swing peaks are the stimuli. For the output matrix, the sensor signals transformed by the diagonalized input matrix are the dof responses which are compared, and the drive frequency peak associated with a coil output is the stimulus. However, the normalization still happens to each dof independently, not to each coil independently.
The output matrix I got had good agreement with the ITMY input matrix in the previous elog: for each dof/osem the elements had the same sign in both input and output matrices, so there are no positive feedback loops. The relative magnitude of the elements also corresponded well within rows of the input matrix. So the input and output matrices, while radically different from the ideal, were consistent with each other and referred to the same dof basis. So, I applied these new matrices (both input and output) to the damping loops to test whether this approach would work.
drive-generated output matrix:

          UL       UR       LR       LL       SD
pit    1.701   -0.188   -2.000   -0.111    0.452
yaw    0.219   -1.424    0.356    2.000    0.370
pos    1.260    1.097    0.740    0.903   -0.763
sid    0.348    0.511    0.416    0.252    1.000
but    0.988   -1.052    0.978   -0.981    0.060
However, when Gautam attempted to lock the Y arm, we noticed that this change significantly impacted alignment. The alignment biases were adjusted accordingly and the arm was locking. But when the dither was run, the lock was consistently destroyed. This indicates that the dither alignment signals pass through the SUS screen output matrix. If the output matrix pitch and yaw columns refer instead to the free swing eigenmodes, anything that uses the output matrix and attempts to align pitch and yaw will fail. So, the ITMY matrices were restored to their previous values: a close to ideal input matrix and naive output matrix. We could try to change everything that is affected by the output matrices to be independent of a transformation to the free swing dof basis, and then implement this strategy. But to me, that seems like an unnecessary amount of changes with unpredictable consequences in order to fix something that isn't really broken. The damping works fine, maybe even better, when the input matrix is set by the output matrix: we define pitch, for example, to be "the mode of motion produced by a signal to the coils proportional to the pitch row of the naive output matrix," and the same for the other dofs. Then you can drive one of these "idealized" dofs at a time and measure the sensor responses to find the input matrix. (That is how the input matrix currently in use for ITMY was found, and it seems to work well.)
I wanted to see what the reason is for such large coupling between the pitch and yaw motions.
The first test was to check orthogonality of the bias sliders. It was done by monitoring the suspension motion using the green beam.
The Y arm cavity was aligned to the green. The damping of ITMY was all turned off except for SD.
Then ITMY was misaligned by the bias sliders. The ITMY face CCD view shows that the beam is reasonably orthogonally responding to the pitch and yaw sliders.
I also confirmed that the OPLEV signals showed a reasonably orthogonal response to the pitch and yaw misalignment.
=> My intuition was that the coils (including the gain balance) are OK for a first approximation.
Then, I started to excite the resonant modes. I agree that it is difficult to excite a pure pitch motion at the resonance.
So I wanted to see how the mixing is frequency dependent.
The transfer functions between ITMY_ASCPIT/YAW_EXC to ITMY_OPLEV_PERROR/YERROR were measured.
The attached PDFs show that the transfer functions are basically orthogonal (i.e. pitch exc goes to pitch, yaw exc goes to yaw) except at the resonant frequencies.
I think the problem is that the two modes are almost degenerate but not completely. This elog shows that the resonant freq of the ITMY modes are particularly close compared to the other suspensions.
If they are completely degenerate, the motion just obeys our excitation. However, they are slightly split. Therefore, we suffer from the coupled modes of P and Y at the resonant freq.
However, the mirror motion obeys the excitation off resonance, as these two modes are similar enough.
This means that the problem exists only at the resonant frequencies. If the damping servos have a 1/f slope around the resonant freqs (that's the usual case), the antiresonance due to the mode coupling does not cause servo instability, thanks to the sufficient phase margin.
In conclusion, unfortunately we can't diagonalize the sensors and actuators using the natural modes, because our assumption of mode purity is not valid.
We can leave the pitch/yaw modes undiagonalized, or just trust the oplevs as a relatively reliable reference of pitch and yaw and set the output matrix accordingly.
The figures will be rotated later.
Local 3.7 mag earthquake trips PRM
What about the MC?
Tonight, and during last week's locking, we noticed something intermittently kicking the PRM. I've determined that PRM's LR OSEM is problematic again. The signal is coming in and out, which kicks the OSEM damping loops. I've had the watchdog tripped for a little bit, and here's the last ten minutes of the free swinging OSEM signal:
Here's the hour trend of the PRM OSEMs over the last 7 days, and a plot of just LR since the fix on the 9th of September.
It looks like it started misbehaving again on the evening of the 5th, which was right when we were trying to lock... Did we somehow jostle the suspension hard enough to knock the foil cap back into a bad spot?
It started here
100 sapphire prisms arrived.
Perhaps the problem is electrical? The attached plot shows a downward trend for the LR sensor output over the past 20 days that is not visible in any of the other 4 sensor signals. The Al foil was shorting the electrical contacts for nearly 2 months, so perhaps some part of the driver circuit needs to be replaced? If so, a Satellite Box swap should tell us more; I will switch the PRM and SRM satellite boxes. It could also be a dying LED on the OSEM itself, I suppose. If we are accessing the chamber, we should come up with a more robust insulating cap solution for the OSEMs rather than this hacky Al foil + kapton arrangement.
The PRM and SRM Satellite boxes have been switched for the time being. I had to adjust some of the damping loop gains for both PRM and SRM and also the PRM input matrix to achieve stable damping as the PRM Satellite box has a Side sensor which reads out 0-10V as opposed to the 0-2V that is usually the case. Furthermore, the output of the LR sensor going into the input matrix has been turned off.
Looks like what were PRM problems are now seen in the SRM channels, while PRM itself seems well behaved. This supports the hypothesis that the satellite box is problematic, rather than any in-vacuum shenanigans.
Eric noted in this elog that when this problem was first noticed, switching Satellite boxes didn't seem to fix the problem. I think that the original problem was that the Al foil shorted the contacts on the back of the OSEM. Presumably, running the current driver with (close to) 0 load over 2 months damaged that part of the Satellite box circuitry, which led to the subsequent observations of glitchy behaviour after the pumpdown. Which begs the question - what is the quick fix? Do we try swapping out the LM6321 in the LR LED current driver stage?
GV Edit Nov 2 2016: According to Rana, the load of the high speed current buffer LM6321 is 20 ohms (13 from the coil, and 7 from the wires between the Sat. Box and the coil). So, while the Al foil was shorting the coil, the buffer would still have seen at least 7 ohms of load resistance, not quite a short circuit. Moreover, the schematic suggests that the kind of overvoltage protection scheme described on page 6 of the LM6321 datasheet has been employed. So it is becoming harder to believe that the problem lies with the output buffer. In any case, we have procured 20 of these discontinued ICs for debugging should we need them, and Steve is looking to buy some more. Ben Abbot will come by later in the afternoon to try and help us debug.
The two 40 mm aperture baffles at the ends were replaced by 50 mm ones. ITM baffles with 50 mm aperture are baked and ready for installation.
Green welding glass (7" x 9", shade #14) with a 40 mm hole and mounting fixtures are ready to reduce scattered light on the SOS.
PEEK 450CA shims and U-shaped clips will keep these plates damped.
Everybody is happy, except ITMY_UL or its satellite box.
Gautam shows perfect form in the OMC chamber.
Salton Sea is shaking again.
We believe the optimal OSEM damping would use an input matrix diagonalized to the free swing modes of the optic, and an output matrix which drives the coils appropriately to damp these free swing modes. As was discovered, a free swinging optic does not necessarily have eigenmodes that match up perfectly with pitch and yaw; however, in the current state the "TO_COIL" output matrix that determines the drive signals in response to the diagonalized sensor output also controls the drive signals for the oplevs, LSC/ASC, and alignment biases. So attempts to diagonalize the output matrix to agree with the input matrix have resulted in problems elsewhere (see previous elog). So, we want to expand the "TO_COIL" matrices to treat the OSEM sensor inputs separately from the others, as sketched below.
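In matrix terms, the proposal looks like the following numpy sketch. The matrix values are the usual ideal placeholders, and the drive-diagonalized matrix would come from the measurement; this is an illustration of the signal routing, not the front-end implementation.

import numpy as np

# rows: UL, UR, LR, LL, SD coils; columns: pit, yaw, pos, side
ideal = np.array([[ 1,  1,  1, 0],    # UL
                  [ 1, -1,  1, 0],    # UR
                  [-1, -1,  1, 0],    # LR
                  [-1,  1,  1, 0],    # LL
                  [ 0,  0,  0, 1]],   # SD
                 dtype=float)
diagonalized = ideal.copy()   # stand-in for the measured free-swing-mode matrix

def coil_drive(damp, other):
    # damp:  damping filter outputs, in the free-swing mode basis
    # other: oplev + LSC/ASC + alignment bias, in the naive pit/yaw basis
    return diagonalized @ damp + ideal @ other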
I just realized that Gautam set this test up and turned the damping off... He will explain the details.
Short summary of my Sat. Box debugging activities over the last few days. Recall that the SRM Sat. Box has been plugged into the PRM suspension for a while now, while the SRM has just been hanging out with no electrical connections to its OSEMs.
As Steve mentioned, I had plugged in Ben's extremely useful tester box (I have added these to the 40m Electronics document sub-tree on the DCC) into the PRM Sat. Box and connected it to the CDS system over the weekend for observation. The problematic channel is LR. Judging by Steve's 2 day summary plots, LR looks fine. There is some unexplained behavior in the UR channel - but this is different from the glitchy behaviour we have seen in the LR channel in the past. Moreover, subsequent debugging activities did not suggest anything obviously wrong with this channel. So no changes were made to UR. I then pulled out the PRM sat.box for further diagnostics, and also, for comparison, the SRM sat. box which has been hooked up to the PRM suspension as we know this has been working without any issues.
Tracing out the voltages through the LED current driver circuit for the individual channels, and comparing the performance between PRM and SRM sat. boxes, I narrowed the problem down to a fault in either the LT1125CSW Quad Op-Amp IC or the LM6321M current driver IC in the LR channel. Specifically, I suspected the output of U3A (see Attachment #1) to be saturated, while all the other channels were fine. Looking at the spectrum at various points in the circuit with an SR785, I could not find significant difference between channels, or indeed, between the PRM/SRM boxes (up to 100kHz). So I decided to swap out both these ICs. Just replacing the OpAmp IC did not have any effect on the performance. But after swapping out the current buffer as well, the outputs of U3A and U11 matched those of the other channels. It is not clear to me what the mode of failure was, or if the problem is really fixed. I also checked to make sure that it was indeed the ICs that had failed, and not the various resistors/capacitors in the signal path. I have plugged in the PRM sat. box + tester box setup back into our CDS data acquisition for observation over a couple of days, but hopefully this does the job... I will update further details over the coming days.
I have restored control to PRM suspensions via the working SRM sat. box. The PRM Sat. Box and tester box are sitting near the BS/PRM chamber in the same configuration as Steve posted in his earlier elog for further diagnostics...
GV Edit 2230 hrs 7Nov2016: The signs from the last 6 hours have been good - see the attached minute trend plot. Usually, the glitches tend to show up in this sort of time frame. I am not quite ready to call the problem solved just yet, but I have restored the connections to the SRM suspension (the PRM and SRM Sat. Boxes are still switched). I've also briefly checked the SRM alignment, and am able to lock the DRMI, but the lock doesn't hold for more than a few seconds. I am leaving further investigations for tomorrow; let's see how the Sat. Box does overnight.
Looks like the PRM Sat. Box is now okay, no evidence of the kind of glitchy behaviour we are used to seeing in any of the 5 channels.
I took data of the ETMX SUSPOS, SUSPIT and SUSYAW channels while driving each of the 4 face coils. I manually turned off all the damping except the side.
Excitation: I used white noise bandpassed from 0.4 to 5 Hz in order to examine the responses around the resonance frequencies. To avoid ringing things up too much, I started with a very weak drive signal and gradually increased it until it seemed to have an effect on the mirror motion by looking at the oplev signals/sensor RMS values on the SUS screen; it's possible I'll need to do it again with a stronger signal if there's not enough coherence in the data.
Finding the matrix: The plan is to estimate the transfer function of the coil drive signal with the sensed degrees of freedom (specified by the already diagonalized input matrix). This transfer function can be averaged around the resonance peak for each dof to find the elements of the matrix that converts signals to dof responses, (the "response matrix", which is the inverse of the output matrix). Each column of the response matrix gets normalized so that the degrees of freedom influence the drive signals in the right ratio.
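A sketch of that computation follows (the coil drive and dof time series are assumed to be already loaded as numpy arrays, and the resonance frequencies and averaging bandwidth are placeholders):

import numpy as np
from scipy.signal import csd, welch

fs = 2048.0
f_res = {'pos': 0.95, 'pit': 0.73, 'yaw': 0.85}   # placeholder frequencies

def tf_at(drive, dof_sig, f0, half_bw=0.05):
    f, Pxy = csd(drive, dof_sig, fs=fs, nperseg=int(50 * fs))
    f, Pxx = welch(drive, fs=fs, nperseg=int(50 * fs))
    band = (f > f0 - half_bw) & (f < f0 + half_bw)
    return np.mean(Pxy[band] / Pxx[band])   # complex TF averaged over the band

# response[i, j] = response of dof i to drive on face coil j (UL, UR, LR, LL)
response = np.zeros((3, 4))
for i, dof in enumerate(('pos', 'pit', 'yaw')):
    for j, drive in enumerate(coil_drives):
        val = tf_at(drive, dof_sigs[dof], f_res[dof])
        # near resonance the response is in quadrature with the drive,
        # so take the signed magnitude from the imaginary part
        response[i, j] = np.sign(val.imag) * np.abs(val)

# 3 dofs x 4 coils isn't square, so use the pseudo-inverse
output = np.linalg.pinv(response)        # coils x dofs
output /= np.abs(output).max(axis=0)     # normalize each dof column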
I left the tester box plugged in from Thursday night to Sunday afternoon, and in this period, the glitches still appeared in (and only in) the UL channel.
So yesterday evening, I pulled the Sat. Box. out and checked the DC voltages at various points in the circuit using a DMM, including the output of the high current buffer that supplies the drive current to the shadow sensor LEDs. When we had similar behaviour in the PRM box, this kind of analysis immediately identified the faulty component as the high current buffer IC (LM6321M) in the bad channel, but everything seems in order for the ITMY box.
I then checked the Satellite Amplifier Termination Board, which basically just adds 100 ohm series resistors to the output of the PD readout. All the resistors seem fine, and the piece of insulating material affixed to the bottom of this board is also intact. I then used the SR785 in AC coupled mode to look at the high frequency spectrum at the same points I checked the DC voltages with the DMM (namely the drive voltage to the LEDs, and the PD readout voltages on the PCB as well as on the pins of the connector on the outside of the box after the termination board, leading to the DAQ), and nothing sticks out in the UL channel here either. Of course it could be that the glitches are intermittent, and during my tests they just weren't there...
I am hesitant to start pulling out ICs and replacing them without any obvious signs of failure from them, but I am out of debugging ideas...
One possibility is that the problem lies upstream of the Sat. Box - perhaps the UL channel in the Suspension PD Whitening and Interface Board is faulty. To test, I have now hooked up ITMY Sat. Box. + tester box to the signal chain of ETMY. If I can get the other tester box back from Ben, I will plug in the ETMY sat. box. + tester to the ITMY signal chain. This should tell us something...
The new SOS suspension wire is finally stored in a nitrogen filled desiccator. This was recommended by Ca. Fine Wire to minimize aging (oxidation).
The desiccator was pumped down with the aux drypump to 1 Torr and then filled with N2 to 760 Torr. This was repeated 2x and the desiccator was sealed off.
ITMY is not like the others. Real or just OSEM madness?
Found that the BS whitening was off. Gautam says that "it has always been that way" and "there's nothing in the elog about this" and "I have no special relationship with Putin".
I looked at DV and DTT while turning the OSEM whitening back on. As expected, the sensor noise improved by 10x above 10 Hz. The time series shows no problems - it's just less fuzzy now.
All OSEM spectra after the switch are shown on the upper panel of the plot. The lower panel shows a comparison of BS UL before/after. To rotate the DTT PDF landscape output I typed this:
pdftk BS-white.pdf cat 1N output BSwhite.pdf
"if you see something, do something"
During the course of Rana's inspection of the general state of the IFO, he commented that there seemed to be several seismic-related IMC lock losses in the time that he had been observing it. This issue looked suspiciously like the MC1 glitches I had noticed sometime late last year, especially since each time the IMC would unlock, we could see significant amounts of motion on MC REFL. To diagnose, we did the following:
Sure enough, there were several glitches that occurred in all 5 sensor channels. These glitches varied in size from a few counts (the smaller ones) to 60-70counts for the bigger ones. In the past, squishing the LEMO connector on the front of the PD whitening board (D000210) had apparently made the glitching go away. So tonight, for starters, we squished everything else - Sat. Box connectors, the breakout board from Sat. Box to whitening board on the back of 1X6, and the DB connector on the front of the whitening board. This had no effect - the glitching remained consistent.
Next, Rana pulled out two of the three 4pin LEMOs, and left only those corresponding to UL/LL plugged in - but the glitching persisted in these two channels. We then pulled out the board. It was installed in 1998, but has a sticker on it that says "fixed in 2003". Not sure what the fix was. Visual inspection of the circuit didn't show anything obviously faulty, but it did look like the two MAX333A quad switches (these control whether the whitening is bypassed or not) had been replaced at some point. There are other undesirable features, such as the use of thick film resistors, but nothing that would explain the glitchy behaviour.
Next, we re-inserted the whitening board back into its original slot in the Eurocrate, but switched the cables (both D sub and LEMO, but only on the whitening board end) between the boards for MC1 and MC3 (i.e. MC1 cables were routed through the whitening board that was originally used for MC3, and vice-versa). But the glitches remained consistent on the MC1 channels. So it looks like the board is not a likely culprit.
Finally, we went in and squished all the cables from the PD whitening board to the ADC (via an AA filter board). For some of the LEMO cables from the whitening board, the LEMO backshells were not properly tightened. Rana fixed these before putting them back in. Some of the connectors were also not pushed in tightly enough, Rana heard the click when he pushed them in. The cables from the adaptor board to the ADC itself looked fine, it was screwed on at both ends, and all these connections looked snug enough. In the interest of completeness, Rana also pushed in the backplane connectors on the Eurocrate (these supply the signals from the BIO cards to switch the whitening ON/OFF). The one corresponding to MC1 was indeed a little loose.
Coming back to the control room, we saw that the MC1 LR sensor was dead. After some investigation, Rana found that on the AA filter board end, one of the 4pin LEMOs from the whitening board had one of its wires come unstuck from where it was soldered (this presumably happened while we were squishing cables tonight, as the LR channel was fine before that). Also, there was no heat shrink used on any of the solder joints. Could this explain the glitchy behaviour? Perhaps, but the glitches remained in the 3 channels that were connected. Anyways, I will repair this cable tomorrow, and we can see if this has fixed the problem or not..
Some misc points:
PSL shutter is closed, MC1 watchdog is shutdown for the night.
The faulty cable has been re-soldered (with heat shrink) and replaced. All 5 sensor signals appear normal on dataviewer now. I am leaving things in this state for the night, let us see if the glitches return overnight.
PSL shutter remains closed
After the repair of the faulty LEMO cable, I left MC1 with its watchdog off overnight. Unfortunately, it looks like the problem still persists. The first attachment shows a second trend plot for the past 15 hours. Towards the left end of the plot, you can see where I re-connected the LEMO cable for the LR/UR channels.
A couple of months ago, I added a BLRMS block for the IMC optics that calculates BLRMS for the shadow sensor output as well as the coil output. Looking at this trend overnight, I noticed that the glitches appear in the coil outputs as well, as shown in the plot below, which is for a 1 hour stretch last night (I used the full data from a 16Hz coil output channel and not the BLRMS, I am not sure if there is a DQ'ed version of the coil outputs?).
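As an aside, the BLRMS computation is essentially the following (offline Python sketch; the band edges, sample rate, and filter orders here are illustrative assumptions, not the front-end block's exact settings):

import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 256.0     # assumed rate of the DQ'ed sensor channel

def blrms(x, f_lo, f_hi, f_smooth=0.1):
    sos_bp = butter(4, [f_lo, f_hi], btype='bandpass', fs=fs, output='sos')
    sos_lp = butter(2, f_smooth, btype='lowpass', fs=fs, output='sos')
    bp = sosfiltfilt(sos_bp, x)                          # select the band
    return np.sqrt(np.abs(sosfiltfilt(sos_lp, bp**2)))   # smoothed RMS

# e.g. 0.3-1 Hz band of a shadow sensor signal:
# rms = blrms(ul_sensor, 0.3, 1.0)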
Zooming in further to one of these glitches, we can see that the glitches in the coil and shadow sensor signals are in fact coincident.
But given that the watchdog was turned off all this time, the only voltage going to the coils should be the DC bias voltages. So does this not support the hypothesis that the problem lies in the part of the signal chain that supplies the bias voltage to the coils?
Never mind, the "coil output" channel isn't a true readback of the voltage to the coil, but is the calculated damping output (which is not sent to the coils when the watchdog is shutdown)...
As part of the ongoing debugging, I've switched the MC1 and MC3 satellite boxes. Both MC1 and MC3 have their watchdogs shut down for the moment.
In the last 3.5 hours, there has been nothing conclusive - no evidence of any glitching in either MC1 or MC3 sensor channels. I am going to hold off on doing the LEMO cable swap test for a few more hours, to see if we can rule out the satellite box.
Going through the last ~20 hours of data, the MC1 sensor channels look glitch free the entire period. However, there is a ~10min period around 1PM UTC today when there were a couple of glitches ~80 counts in size in all the MC3 sensor channels. The attached shows the full 2k data from all 10 channels (MC1 and MC3 sensors) around this time.
Is this sufficient evidence to conclude that the Satellite boxes are to blame? It's hard to explain why the glitches come and go in this fashion, and also the apparent difference in the length of time for which the glitches persist. Here, in almost 24 hours, there is one incidence of glitching, but in yesterday's trend plot, the glitching remains present over several hours... The amplitude of the glitches, and their coincidence in all 5 channels, seems consistent with what we have been seeing though...
Might be. Or it might be in the satellite box cabling. Hard to tell without a tester. I recommend you squish the cables on there and lock the MC and get back to the usual business. I'll check on sat. box with Ben.