ID   Date   Author   Type   Category   Subject
  12583   Thu Oct 27 12:06:39 2016   gautam   Update   General   PRFPMI locked, arms loss improved
Quote:

Great to hear that we have the PRG of ~16 now!

Is this 150ppm an avg loss per mirror, or per arm?

I realized that I did not have a Finesse model to reflect the current situation of flipped folding mirrors (I've been looking at 'ideal' RC cavity lengths with folding mirrors oriented with HR side inside the cavity so we didn't have to worry about the substrate/AR surface losses), and it took me a while to put together a model for the current configuration. Of course this calculation does not need a Finesse model but I thought it would be useful nevertheless. 

In summary - the model with which the attached plot was generated assumes the following:

  • Arm lengths of 37.79m, given our recent modification of the Y arm length
  • RC lengths are all taken from here, I have modelled the RC folding mirrors as flipped with the substrate and AR surface losses taken from the spec sheet
  • The X axis is the average arm loss - i.e. (LITMX+LITMY+LETMX+LETMY)/2. In the model, I have distributed the loss equally between the ITMs and ETMs.

This calculation agrees well with the analytic results Yutaro computed here - the slight difference is possibly due to assuming different losses in the RC folding mirrors. 
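
As a rough cross-check of the model (not a substitute for it), the PRG can also be estimated analytically by treating each arm as an over-coupled two-mirror cavity and lumping all recycling-cavity losses into a single number. A minimal sketch is below; T_PRM and T_ITM are nominal values and the PRC loss figure is a placeholder assumption, so only the trend should be taken seriously.

    import numpy as np

    T_PRM = 0.055     # PRM power transmission (nominal)
    T_ITM = 0.014     # ITM power transmission (nominal)
    L_PRC = 0.02      # total PRC round-trip loss, placeholder assumption

    arm_loss = np.linspace(25e-6, 300e-6, 200)      # average arm round-trip loss

    # on-resonance amplitude reflectivity of an over-coupled arm cavity
    r_arm = (T_ITM - arm_loss) / (T_ITM + arm_loss)
    # effective reflectivity on the PRM side, folding in the PRC losses
    r_prc = np.sqrt((1 - T_PRM) * (1 - L_PRC))

    PRG = T_PRM / (1 - r_prc * r_arm)**2
    print(np.interp(150e-6, arm_loss, PRG))         # ~16 at 150 ppm average arm loss

With these assumed numbers the PRG comes out in the mid-teens for ~150 ppm average arm loss, consistent with the attached plot, but the absolute value depends strongly on the assumed PRC loss.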

The conclusion from this study seems to be that the arm loss is now in the 100-150 ppm range (i.e. 50-75 ppm loss per mirror). But these numbers are only as reliable as the model, so we need an independent loss measurement to verify them. In fact, during last night's locking efforts, the arm transmission sometimes touched 400 (=> PRG ~22), which according to these plots suggests a total arm loss of ~50 ppm, i.e. only ~25 ppm per mirror, which seems a bit hard to believe.

Attachment 1: PRG.pdf
  12586   Fri Oct 28 01:44:48 2016   gautam   Update   General   PRFPMI model vs data studies

Following Koji's suggestion, I decided to investigate the relation between my Finesse model and the measured data.

For easy reference, here is the loss plot again:

Sticking with the model, I used the freedom Finesse offers to place photodiodes wherever I like, to monitor the circulating power in the PRC directly, and also REFL DC. Note that REFL DC goes to 0 because I am using Finesse's amplitude detector at the carrier frequency for the 00 mode only.

  

Both the above plots essentially show the same information, except that the X axis is different. So the model tells me to expect the point of critical coupling to be at an average arm loss of ~100 ppm, corresponding to a PRG of ~17.

Eric has already put up a scatter plot, but I reproduce another from a fresh lock tonight. The data shown here corresponds to the IFO initially being in the 'buzzing' state where the arms are still under ALS control and we are turning up the REFL gain - then engaging the QPD ASC really takes us to high powers. The three regimes are visible in the data. I show here data sampled at 16 Hz, but the qualitative shape of the scatter does not change even with the full data. As an aside, today I saw the transmission hit ~425!

  

I have plotted the scatter between TRX and REFL DC; the scatter between POP DC and REFL DC looks similar - specifically, there is an 'upturn' in the REFL DC values in a region similar to that seen in the above scatter plot. POP DC is a proxy for the PRG, and I confirmed that for the above dataset there is a monotonic, linear relationship between TRX and POP DC, so I think it is legitimate to compare the plot on the RHS in the row directly above to the plot from the Finesse model one row further up. In the data, REFL DC seems to hit a minimum around TRX=320. Assuming a PRM transmission of 5.5%, TRX of 320 corresponds to a PRG of 17.5, which is in the ballpark of where the model tells us to expect it. Based on this, I conclude the following:

  • It seems like the Finesse model I have is quite close to the current state of the IFO 
  • Given that we can trust the model, the PRC is now OVERCOUPLED - the scatter plot of data supports this hypothesis
  • Given that in today's lock, I saw the arm transmission go up to ~425, this suggests that at optimal alignment the PRG can reach 23 (see the quick check below). Attachment #1 then suggests the average arm loss is <50ppm, which means the average loss per optic is <25ppm. I am not sure how physical this is, given that I remember seeing the specs for the ITMs and ETMs calling for scatter of less than 25ppm (I had initially recalled 40ppm) - perhaps the optics exceed the specs, or I remember the wrong numbers, or the model is wrong
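
As a quick check of the TRX-to-PRG conversion used above (using the TRX normalization and the assumed T_PRM = 5.5% stated in this entry):

    T_PRM = 0.055                  # assumed PRM power transmission
    for TRX in (320, 425):
        print(TRX, TRX * T_PRM)    # 320 -> ~17.6 (REFL DC minimum), 425 -> ~23.4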

In other news, I wanted to try and do the sensing matrix measurements which we neglected to do yesterday. I turned on the notches in CARM, DARM, PRCL and MICH, and then tuned the LO amplitudes until I saw a peak in the error signal for that particular DOF with peak height a factor of >10 above the noise floor. The LO amplitudes I used are 

MICH: 40

PRCL: 0.7

CARM: 0.08

DARM: 0.08

There should be about 15 minutes of good data. More impressively, the lock tonight lasted 1 hour (see Attachment #6; unfortunately FB crashed in between). Last night we lost lock while trying to transition control to the 1f signals; tonight, I believe a P.C. drive excursion of the kind we are used to seeing was responsible for the lockloss. So the PRFPMI lock itself seems pretty stable.

With regards to the step in the lock acquisition sequence where the REFL gain is turned up, I found in my (4) attempts tonight that I had most success when I adjusted the CARM A slider while turning up the REFL gain, to offload the CARM B servo. Of course, this may mean nothing...

Attachment 1: loss.pdf
Attachment 2: REFLDC.pdf
Attachment 3: CriticalCoupling.pdf
Attachment 4: PRFPMI_Oct282016.pdf
Attachment 5: PRFPMI_scatter.pdf
Attachment 6: 1hourPRFPMILock.png
  12587   Fri Oct 28 15:46:29 2016   gautam   Summary   LSC   X/Y green beat mode overlap measurement redone

I've been meaning to do this analysis ever since putting in the new laser at the X-end, and finally got down to getting all the required measurements. Here is a summary of my results, in the style of the preceding elogs in this thread. I dither-aligned the arms and maximized the green transmission DC levels, and also tweaked the alignment on the PSL table to maximize the beat note amplitude (both near- and far-field alignment was done), before taking these measurements. I measured the beat amplitude in a few ways, and have reported all of them below...

             XARM   YARM 
o BBPD DC output (mV), all measured with Fluke DMM
 V_DARK:     +1.0    +3.0
 V_PSL:      +8.0    +14.0
 V_ARM:      +175.0  +11.0


o BBPD DC photocurrent (uA)
I_DC = V_DC / R_DC ... R_DC: DC transimpedance (2kOhm)

 I_PSL:       3.5    5.5
 I_ARM:      87.0    4.0


o Expected beat note amplitude
I_beat_full = I1 + I2 + 2 sqrt(e I1 I2) cos(w t) ... e: mode overlap (in power)

I_beat_RF = 2 sqrt(e I1 I2)

V_RF = 2 R sqrt(e I1 I2) ... R: RF transimpedance (2kOhm)

P_RF = V_RF^2/2/50 [Watt]
     = 10 log10(V_RF^2/2/50*1000) [dBm]
     = 10 log10(e I1 I2) + 82.0412 [dBm]
     = 10 log10(e) + 10 log10(I1 I2) + 82.0412 [dBm]

for e=1, the expected RF power at the PDs [dBm]
 P_RF:      -13.1  -24.5


o Measured beat note power (measured with oscilloscope, 50 ohm input impedance)      
 P_RF:      -17.8dBm (81.4mVpp)  -29.8dBm (20.5mVpp)   (38.3MHz and 34.4MHz)  
    e:        34                    30  [%]                          
o Measured beat note power (measured with Agilent RF spectrum analyzer)       
 P_RF:      -19.2  -33.5  [dBm] (33.2MHz and 40.9MHz)  
    e:       25     13    [%]                          
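
The mode overlap numbers quoted above follow from inverting the power budget; a minimal sketch of that inversion, using the DC photocurrents and measured RF powers listed in this entry:

    import numpy as np

    R_RF = 2e3                       # RF transimpedance [V/A]

    def mode_overlap(I1, I2, P_RF_dBm):
        # invert P_RF = 10 log10(e I1 I2) + 82.04 dBm for the overlap e
        P_e1_dBm = 10 * np.log10(I1 * I2) + 10 * np.log10(4 * R_RF**2 / (2 * 50) * 1000)
        return 10 ** ((P_RF_dBm - P_e1_dBm) / 10)

    print(mode_overlap(3.5e-6, 87e-6, -17.8))    # X arm, scope measurement: ~0.34
    print(mode_overlap(5.5e-6, 4.0e-6, -29.8))   # Y arm, scope measurement: ~0.30
    print(mode_overlap(3.5e-6, 87e-6, -19.2))    # X arm, spectrum analyzer: ~0.25
    print(mode_overlap(5.5e-6, 4.0e-6, -33.5))   # Y arm, spectrum analyzer: ~0.13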

I also measured the various green powers with the Ophir power meter: 

o Green light power (uW) [measured just before PD, does not consider reflection off the PD]
 P_PSL:       16.3    27.2
 P_ARM:       380     19.1

Measured beat note power at the RF analyzer in the control room
 P_CR:      -36    -40.5    [dBm] (at the time of measurement with oscilloscope)
Expected    -17    - 9    [dBm] (TO BE UPDATED)

Expected Power: (TO BE UPDATED)
Pin + External Amp Gain (25dB for X, Y from ZHL-3A-S)
    - Isolation trans (1dB)
    + GAV81 amp (10dB)
    - Coupler (10.5dB)


The expected numbers for the control room analyzer (marked TO BE UPDATED above) still have to be filled in.

The main difference seems to be that the PSL green power on the Y broadband PD has gone down by about 50% from what it used to be. In either measurement, it looks like the mode matching is only 25-30%, which is pretty abysmal. I will investigate the situation further - I have been wanting to fiddle around with the PSL green path in any case, so as to facilitate having an IR beat even when the PSL green shutter is closed, and I will try to optimize the mode matching at the same time... I should point out that at this point the poor mode matching on the PSL table isn't limiting the ALS noise performance, as we are able to lock reliably...

  12592   Wed Nov 2 22:56:45 2016   gautam   Update   CDS   c1pem revamped

Installing the BLRMS 2k blocks turned out to be quite non-trivial due to a whole host of CDS issues that had to be debugged, but I've restored everything to a good state now, and the channels are being logged. A detailed entry with all the changes will follow.

  12594   Thu Nov 3 11:33:24 2016   gautam   Update   General   power glitch - recovery

I did the following:

  • Hard reboots for fb, megatron, and all the frontends, in that order
  • Checked time on all FEs, ran sudo ntpdate -b -s -u pool.ntp.org where necessary
  • Restarted all realtime models
  • Restarted monit on all FEs
  • Reset Marconi to nominal settings, fCarrier=11.066209MHz, +13dBm amplitude
  • In the control room, restarted the projector and set up the usual StripTool traces
  • Realigned PMC
  • Slow machines did not need any touchups - interestingly, ITMX did not get stuck during this power glitch!

There was a regular beat coming from the speakers. After muting all the channels on the mixer and pulling the 3.5mm cable out, the sound persisted. It now looks like the mixer is broken.

(Mixer: ProFX8v2)

  12595   Thu Nov 3 12:38:42 2016   gautam   Update   CDS   c1pem revamped

A number of changes were made to C1PEM and some library parts. Recall that the motivation was to add BLRMS channels for all our suspension coils and shadow sensor PDs, which we are first testing out on the IMC mirrors.

Here is the summary:

BLRMS_2k library block

  • The custom C code block in this library part was named 'BLRMSFILTER', which conflicted with the name of the function call in the C code it is linked to, and this led to compilation errors
  • Even though the part was found in /opt/rtcds/userapps/release/cds/c1/models and not in the common repository, just to be safe, I made a copy of the part called BLRMS_2k_40m which lives in the above directory. I also made a copy of the code it calls in /opt/rtcds/userapps/release/cds/c1/src

C1PEM model + filter channels

  • Adding the updated BLRMS_2k_40m library part still resulted in some compilation errors - specifically, it was telling me to check for missing links around the ADC parts
  • Eric suggested that the error messages might not be faithfully reporting what the problem is - true enough, the problem lay in the fact that c1pem wasn't updated to follow the namespace convention that we now use in all the RT models - the compiler was getting confused by the fact that the BLRMS stuff was in a namespace block called 'SUS', but the rest of the PEM stuff wasn't in such a block
  • I revamped c1pem to add namespace blocks called PEM and DAF, and put the appropriate stuff in the blocks, after which there were no more compilation errors
  • However, this namespace convention messed up the names of the filter modules and associated channels - this was resolved with Eric's help (find and replace did the job, this is a familiar problem that we had encountered not too long ago when C1IOO was similarly revamped...)
  • There was one last twist in that the model would compile and install, but just would not start. I tried the usual voodoo of restarting all the models, and even did a soft reboot of c1sus, to no avail. Looking at dmesg, I tracked the problem down to a burt restore issue - the solution was to press the little 'BURT' button next to c1pem on the CDS overview MEDM screen as soon as it appeared while restarting the model

All the channels seem to exist, and FB seems to not be overloaded judging by the performance overnight up till the power outage. I will continue to monitor this...

GV Edit 3 Nov 2016 7pm:

I had meant to check the suitability of the filters used - there is a detailed account of the filters implemented in BLRMSFILTER.c here, and I quickly looked at the file on hand to make sure the BP filters made sense (see Attachment #1). The BP filters are 8th-order elliptic filters and the lowpass filters are 16th-order elliptic filters, scaled for the appropriate frequency band. This is somewhat different from what we use on the seismometer BLRMS channels, where the filters are 4th order, but I don't think we are significantly loaded computationally, and since the lowpass filters have sufficiently steep roll-off, these should be okay...
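
For reference, a minimal scipy sketch of this kind of BLRMS filter chain (band-pass, square, low-pass). The band, ripple and stopband numbers below are placeholder assumptions; the actual coefficients are the ones in BLRMSFILTER.c.

    import numpy as np
    from scipy import signal

    fs = 2048.0                 # BLRMS_2k model rate [Hz]
    band = (10.0, 30.0)         # example BLRMS band [Hz], placeholder

    # 8th-order elliptic band-pass (scipy's N is the prototype order, so N=4
    # gives an 8th-order band-pass); 1 dB ripple / 60 dB stopband are assumptions
    bp = signal.ellip(4, 1, 60, [f / (fs / 2) for f in band],
                      btype='bandpass', output='sos')
    # 16th-order elliptic low-pass, corner well below the band (assumption)
    lp = signal.ellip(16, 1, 60, (band[0] / 10.0) / (fs / 2),
                      btype='lowpass', output='sos')

    def blrms(x):
        # band-pass -> square -> low-pass -> sqrt
        return np.sqrt(np.abs(signal.sosfilt(lp, signal.sosfilt(bp, x)**2)))

    x = np.random.randn(int(10 * fs))    # 10 s of white noise as a test input
    print(blrms(x)[-1])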

Attachment 1: BLRMSresp.pdf
  12596   Thu Nov 3 12:40:10 2016   gautam   Update   General   projector light bulb is out

The projector failed just now with a pretty loud 'pop' sound - I've never been present when the lamp goes out, so I don't know if this is usual. I have left the power cable unplugged for now...

A replacement was ordered on Nov 4.

  12602   Mon Nov 7 16:05:55 2016   gautam   Update   SUS   PRM Sat. Box. Debugging

Short summary of my Sat. Box. debugging activities over the last few days. Recall that the SRM Sat. Box has been plugged into the PRM suspension for a while now, while the SRM has just been hanging out with no electrical connections to its OSEMs.

As Steve mentioned, I had plugged in Ben's extremely useful tester box (I have added these to the 40m Electronics document sub-tree on the DCC) into the PRM Sat. Box and connected it to the CDS system over the weekend for observation. The problematic channel is LR.  Judging by Steve's 2 day summary plots, LR looks fine. There is some unexplained behavior in the UR channel - but this is different from the glitchy behaviour we have seen in the LR channel in the past. Moreover, subsequent debugging activities did not suggest anything obviously wrong with this channel. So no changes were made to UR. I then pulled out the PRM sat.box for further diagnostics, and also, for comparison, the SRM sat. box which has been hooked up to the PRM suspension as we know this has been working without any issues. 

Tracing out the voltages through the LED current driver circuit for the individual channels, and comparing the performance between PRM and SRM sat. boxes, I narrowed the problem down to a fault in either the LT1125CSW Quad Op-Amp IC or the LM6321M current driver IC in the LR channel. Specifically, I suspected the output of U3A (see Attachment #1) to be saturated, while all the other channels were fine. Looking at the spectrum at various points in the circuit with an SR785, I could not find significant difference between channels, or indeed, between the PRM/SRM boxes (up to 100kHz). So I decided to swap out both these ICs. Just replacing the OpAmp IC did not have any effect on the performance. But after swapping out the current buffer as well, the outputs of U3A and U11 matched those of the other channels. It is not clear to me what the mode of failure was, or if the problem is really fixed. I also checked to make sure that it was indeed the ICs that had failed, and not the various resistors/capacitors in the signal path. I have plugged in the PRM sat. box + tester box setup back into our CDS data acquisition for observation over a couple of days, but hopefully this does the job... I will update further details over the coming days.

I have restored control to PRM suspensions via the working SRM sat. box. The PRM Sat. Box and tester box are sitting near the BS/PRM chamber in the same configuration as Steve posted in his earlier elog for further diagnostics...


GV Edit 2230 hrs 7Nov2016: The signs from the last 6 hours have been good - see the attached minute trend plot. Usually, the glitches tend to show up in this sort of time frame. I am not quite ready to call the problem solved just yet, but I have restored the connections to the SRM suspension (the PRM and SRM Sat. Boxes are still switched). I've also briefly checked the SRM alignment, and am able to lock the DRMI, but the lock doesn't hold for more than a few seconds. I am leaving further investigations for tomorrow; let's see how the Sat. Box does overnight.

Attachment 1: D961289-B2.pdf
Attachment 2: PRMSatBoxtest.png
  12603   Mon Nov 7 17:24:12 2016   gautam   Update   Green Locking   Green beat setup on PSL table

I've been trying to understand the green beat setup on the PSL table to see if I can explain the abysmal mode-matching of the arm and PSL green beams on the broadband beat PDs. My investigations suggest that the mode-matching is very sensitive to the position of one of the lenses in the arm green path. I will upload a sketch of the PSL beat setup along with some photos, but here is the quick summary.

  1. I first mapped the various optical components and distances between them on the PSL table, both for the arm green path and the PSL green path
  2. Next, setting the PSL green waist at the center of the doubling oven and the arm green waist at the ITMs (in-vacuum distances for the arm green backed out of the CAD drawing), I used a la mode to trace the Gaussian beam profile for our present configuration. The main aim here was to see what sort of mode matching we can achieve theoretically, assuming perfect alignment onto the BBPDs. The simulation is simplified: the various beam splitters and other transmissive optics are treated as having zero thickness
  3. It is pretty difficult to accurately measure path lengths to mm accuracy, so to validate my measurement, I measured the beam widths of the arm and PSL green beams at a few locations, and compared them to what my simulation told me to expect. The measurements were taken with a beam profiler I borrowed from Andrew Wade, and both the arm and PSL green beams have smooth Gaussian intensity profiles for the TEM00 mode (as they should!). I will upload some plots shortly. The agreement is pretty good, to within 10%, although geometric constraints on the PSL table limited the number of measurements I could take (I didn't want to disturb any optics at this point)
  4. I then played around with the position of a fast (100mm EFL) lens in the arm green path, to which the mode matching efficiency on the BBPD is most sensitive, and found that in a +/- 1cm range, the mode matching efficiency changes dramatically

Results:

Attachments #1 and 2: Simulated and measured beam profiles for the PSL and arm green beams. The origin is chosen such that both beams have travelled to the same coordinate when they arrive at the BBPD. The agreement between simulation and measurement is pretty good, suggesting that I have modelled the system reasonably well. The solid black line indicates the (approximate) location of the BBPD

     

Attachment #3: Mode matching efficiency as a function of shift of the above-mentioned fast lens. Currently, my best efforts to align the arm and PSL green beams in the near and far fields before sending them to the BBPD result in a mode matching efficiency of ~30% - the corresponding coordinate in the simulation is not 0 because my length measurements are evidently not precise to the mm level. But clearly the mode matching efficiency is strongly sensitive to the position of this lens. Nevertheless, I believe that the conclusion that shifting this lens by just 2.5mm from its optimal position degrades the theoretical maximum mode matching efficiency from >95% to 50% remains valid. I propose that we align the beams onto the BBPD in the near and far fields, and then shift this lens, which is conveniently mounted on a translation stage, by a few mm to maximize the beat amplitude from the BBPDs.
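
To illustrate the kind of sensitivity described above, here is a minimal ABCD/complex-beam-parameter sketch. It is not the a la mode model: the waist size, lens focal length and distances are made-up placeholders, and the arm beam is constructed to be perfectly matched at the nominal lens position, so the scan isolates the effect of the lens shift alone.

    import numpy as np

    lam = 532e-9                  # green wavelength [m]

    def prop(q, d):               # free-space propagation of the beam parameter q
        return q + d

    def lens(q, f):               # thin lens of focal length f
        return 1.0 / (1.0 / q - 1.0 / f)

    def overlap(q1, q2):
        # TEM00 power overlap of two beams evaluated at the same plane
        return 4 * q1.imag * q2.imag / abs(np.conj(q1) - q2)**2

    # PSL green beam: 30 um waist at the BBPD (placeholder)
    q_psl = 1j * np.pi * (30e-6)**2 / lam

    # arm green path: f = 100 mm lens, 0.3 m lens-to-BBPD, reference plane 0.5 m
    # before the lens (all placeholders); arm beam back-propagated from q_psl so
    # that the match is perfect at zero lens shift
    f, d2, d1 = 0.1, 0.3, 0.5
    q_arm0 = prop(1.0 / (1.0 / prop(q_psl, -d2) + 1.0 / f), -d1)

    for dx in np.linspace(-10e-3, 10e-3, 9):      # lens shift of +/- 1 cm
        q = prop(lens(prop(q_arm0, d1 + dx), f), d2 - dx)
        print(f"lens shift {dx*1e3:+6.1f} mm : overlap = {overlap(q, q_psl):.3f}")

Even with these made-up numbers, a 2-3 mm lens shift already costs tens of percent of overlap and a 1 cm shift drops it to ~10%, qualitatively the behaviour seen in Attachment #3.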

Unrelated to this work: I also wish to shift the position of the PSL green shutter. Currently, it is located before the doubling oven. But the IR pickoff for the IR beat setup currently is located after the doubling oven, so when the PSL green shutter is closed, we don't have an IR beat. I wish to relocate the shutter to a position such that it being open or closed does not affect the IR beat setup. Eventually, we want to implement some kind of PID control to make the end laser frequencies track the PSL frequency continuously using the frequency counter setup, for which we need this change...

Attachment 1: CurrentX.pdf
Attachment 2: CurrentY.pdf
Attachment 3: ProposedShift_copy.pdf
  12606   Tue Nov 8 11:54:38 2016   gautam   Update   SUS   PRM Sat. Box. looks to be fixed

Looks like the PRM Sat. Box is now okay, no evidence of the kind of glitchy behaviour we are used to seeing in any of the 5 channels.

Quote:
 
GV Edit 2230 hrs 7Nov2016: The signs from the last 6 hours have been good - see the attached minute trend plot. Usually, the glitches tend to show up in this sort of time frame. I am not quite ready to call the problem solved just yet, but I have restored the connections to the SRM suspension (the PRM and SRM Sat. Boxes are still switched). I've also briefly checked the SRM alignment, and am able to lock the DRMI, but the lock doesn't hold for more than a few seconds. I am leaving further investigations for tomorrow; let's see how the Sat. Box does overnight.

 

  12609   Wed Nov 9 23:21:44 2016   gautam   Update   Green Locking   Green beat setup on PSL table

I tried to realize an improvement in the mode matching onto the BBPDs by moving the lens mentioned in the previous elog in this thread. My best efforts today yielded X and Y beats at amplitudes -15.9dBm (@37MHz) and -25.9dBm (@25MHz) respectively. The procedure I followed was roughly:

  1. Do the near-field far-field alignment of the arm and PSL green beams
  2. Steer beam onto BBPD, center as best as possible using the usual technique of walking the beam across the photodiode
  3. Hook up the output of the scope to the Agilent network analyzer. Tweak the arm and PSL green alignments to maximize the beat amplitude. Then move the lens to maximize the beat amplitude.

As per my earlier power budget, these numbers translate to a mode matching efficiency of ~53% for the X arm beat and ~58% for the Y arm beat, which is a far cry from the numbers promised by the a la mode simulation (~90% at the optimal point - I could not achieve this for either arm, even though I scanned the lens through a maximum of the beat amplitude). It looks like this is the best we can do without putting in any extra lenses. Still, it is a marginal improvement over the previous state...

  12610   Thu Nov 10 19:02:03 2016   gautam   Update   CDS   EPICS Freezes are back

I've been noticing over the last couple of days that the EPICS freezes are occurring more frequently again. Attached is an instance of StripTool traces flatlining. Not sure what has changed recently in terms of the network to cause the return of this problem... Also, they don't occur simultaneously on multiple workstations, but they do pop up on both pianosa and rossa.

Not sure if it is related, but we have had multiple slow machine crashes today as well. Specifically, I had to power cycle C1PSL, C1SUSAUX, C1AUX, C1AUXEX, C1IOOL0 at some point today

Attachment 1: epicsFreezesBack.png
  12611   Sat Nov 12 01:09:56 2016   gautam   Update   LSC   Recovering DRMI locking

Now that we have all Satellite boxes working again, I've been working on trying to recover the DRMI 1f locking over the last couple of days, in preparation for getting back to DRFPMI locking. Given that the AS light levels have changed, I had to change the whitening gains on the AS55 and AS110 channels to take this into account. I found that I also had to tune a number of demod phases to get the lock going. I had some success with the locks tonight, but noticed that the lock would be lost when the MICH/SRCL boosts were triggered ON - when I turned off the triggering for these, the lock would hold for ~1min, but I couldn't get a loop shape measurement in tonight.


As an aside, we have noticed in the last couple of months glitchy behaviour in the ITMY UL shadow sensor PD output - qualitatively, these were similar to what was seen in the PRM sat. box, and since I was able to get that working again, I did a similar analysis on the ITMY sat. box today with the help of Ben's tester box. However, I found nothing obviously wrong, as I did for the PRM sat. box. Looking back at the trend, the glitchy behaviour seems to have stopped some days ago, the UL channel has been well behaved over the last week. Not sure what has changed, but we should keep an eye on this...

  12613   Mon Nov 14 14:21:06 2016   gautam   Summary   CDS   Replacing DIMM on Optimus

I replaced the suspected faulty DIMM earlier today (actually I replaced a pair of them as per the Sun Fire X4600 manual). I did things in the following sequence, which was the recommended set of steps according to the maintenance manual and also the set of graphics on the top panel of the unit:

  1. Checked that Optimus was shut down
  2. Removed the power cables from the back to cut the standby power. Two of the fan units near the front of the chassis were displaying fault lights, perhaps this has been the case since the most recent power outage after which I did not reboot Optimus
  3. Took off the top cover, removed CPU 6 (labelled "G" in the unit). The manual recommends finding faulty DIMMs by looking for an LED that is supposed to indicate the location of the bad card, but I couldn't find any such LEDs in the unit we have, perhaps this is an addition to the newer modules?
  4. Replaced the topmost (w.r.t the orientation the CPU normally sits inside the chassis) DIMM card with one of the new ones Steve ordered
  5. Put everything back together, powered Optimus up again. Reboot went smoothly, fan unit fault lights which I mentioned earlier did not light up on the reboot so that doesn't look like an issue.

I then checked for memory errors using edac-utils, and over the last couple of hours, found no errors (corrected or otherwise, see Praful's earlier elog for the error messages that we were getting prior to the DIMM swap)- I guess we will need to monitor this for a while more before we can say that the issue has been resolved.
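
To keep an eye on this over the next few days without re-running edac-utils by hand, something like the sketch below could poll the kernel's EDAC counters; the sysfs layout assumed here (mc*/csrow*/ce_count and ue_count) is the standard one, but should be checked against what this particular kernel actually exposes.

    import glob, time

    def edac_counts():
        # read the corrected/uncorrected error counters exposed by the EDAC driver
        counts = {}
        for path in glob.glob('/sys/devices/system/edac/mc/mc*/csrow*/[cu]e_count'):
            with open(path) as f:
                counts[path] = int(f.read())
        return counts

    baseline = edac_counts()
    while True:
        time.sleep(600)                        # check every 10 minutes
        for path, n in edac_counts().items():
            if n > baseline.get(path, 0):
                print('EDAC counter increased:', path, n)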

Looking at dmesg after the reboot, I noticed the following error messages (not related to the memory issue I think):

[   19.375865] k10temp 0000:00:18.3: unreliable CPU thermal sensor; monitoring disabled
[   19.375996] k10temp 0000:00:19.3: unreliable CPU thermal sensor; monitoring disabled
[   19.376234] k10temp 0000:00:1a.3: unreliable CPU thermal sensor; monitoring disabled
[   19.376362] k10temp 0000:00:1b.3: unreliable CPU thermal sensor; monitoring disabled
[   19.376673] k10temp 0000:00:1c.3: unreliable CPU thermal sensor; monitoring disabled
[   19.376816] k10temp 0000:00:1d.3: unreliable CPU thermal sensor; monitoring disabled
[   19.376960] k10temp 0000:00:1e.3: unreliable CPU thermal sensor; monitoring disabled
[   19.377152] k10temp 0000:00:1f.3: unreliable CPU thermal sensor; monitoring disabled

I wonder if this could explain why the fans on Optimus often go into overdrive and make a racket? For the moment, the fan volume seems normal, comparable to the other SunFire X4600s we have running like megatron and FB...

  12616   Tue Nov 15 19:22:17 2016   gautam   Update   General   housekeeping

PRM and SRM sat. boxes have been switched for some time now - but the PRM sat. box has one channel with a different transimpedance gain, and the damping loops for the PRM and SRM were not systematically adjusted to take this into account (I just tweaked the gain for the PRM and SRM side damping loops till the optic damped). Since both sat. boxes are nominally functioning now, I saw no reason to maintain this switched configuration so I swapped the boxes back, and restored the damping settings to their values from March 29 2016, well before either of this summer's vents. In addition, I want to collect some data to analyze the sat. box noise performance so I am leaving the SRM sat. box connected to the DAQ, but with the tester box connected to where the vacuum feedthroughs would normally go (so SRM has no actuation right now). I will collect a few hours of data and revert later tonight for locking activities....

  12619   Wed Nov 16 03:10:01 2016   gautam   Update   LSC   DRMI locked on 1f and 3f signals

After much trial and error with whitening gains, demod phases and overall loop gains, I was finally able to lock the DRMI on both 1f and 3f signals! I went through things in the following order tonight:

  1. Lock the arms, dither align
  2. Lock the PRMI on carrier and dither align the PRM to get good alignment
  3. Tried to lock the DRMI on 1f signals - this took a while. I realized that the reason I had little to no success with this over the last few days was that I did not turn off the automatic unwhitening filter triggering on the demod screens. I had to tweak the SRM alignment while looking at the AS camera, and also adjust the demod phases for AS55 (MICH is on AS55Q) and REFL55 (SRCL is on REFL55I). Once I was able to get locks of a few seconds, I used the UGF servos to set the overall loop gain for MICH, PRCL and SRCL, after which I was able to revert the filter triggering to the usual settings
  4. Once I adjusted the overall gains and demod phases, the DRMI locks were very stable - I left a lock alone for ~20mins, and then took loop shape measurements for all 3 loops
  5. Then I decided to try transferring to the 3f signals - I first averaged the IN1s to the 'B' channels for the 3 vertex DOFs using cds avg while locked on the 1f signals. I then set a ramp time of 5 seconds and turned the gain of the 'A' channels to 0 and 'B' channels to 1 (a sketch of this handoff is shown after this list). The transition wasn't smooth in that the lock was broken, but it was reacquired in a couple of seconds.
  6. The lock on 3f signals was also pretty stable, the current one has been going for >10 minutes and even when it loses lock, it is able to reacquire in a few seconds
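
A minimal sketch of the gain handoff described in step 5, written with pyepics. The channel names here are illustrative placeholders (not necessarily the real C1:LSC channel names), and in practice the ramping was done with the gain sliders' built-in ramp time rather than by stepping:

    import time
    import numpy as np
    from epics import caput

    def handoff(dof='MICH', t_ramp=5.0, n_steps=50):
        # ramp the 'A' (1f) error-signal gain down and the 'B' (3f) gain up
        chan_a = f'C1:LSC-{dof}_A_GAIN'     # hypothetical channel name
        chan_b = f'C1:LSC-{dof}_B_GAIN'     # hypothetical channel name
        for x in np.linspace(0.0, 1.0, n_steps):
            caput(chan_a, 1.0 - x)
            caput(chan_b, x)
            time.sleep(t_ramp / n_steps)

    # handoff('MICH'); handoff('PRCL'); handoff('SRCL')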

I have noted all the settings I used tonight, I will post them tomorrow. I was planning to try a DRFPMI lock if I was successful with the DRMI earlier tonight, but I'm calling it a night for now. But I think the DRMI locking is now back to a reliable level, and we can push ahead with the full IFO lock...

It remains to update the auto-configure scripts to restore the optimized settings from tonight, I am leaving this to tomorrow as well...


Updated 16 Nov 2016 1130am

Settings used were as follows:

DRMI locking settings, 16 Nov 2016:

1f/3f   DOF        Error signal           Whitening gain (dB)   Demod phase (deg)   Loop gain   Trigger
1f      MICH (A)   AS55Q                  0                     -42                 -0.026      POP22I=1
1f      PRCL (A)   REFL11I                18                    18                  -0.0029     POP22I=1
1f      SRCL (A)   REFL55I                18                    -175                -0.035      POP22I=10
3f      MICH (B)   REFL165Q               24                    -86                 -0.026      POP22I=1
3f      PRCL (B)   REFL33I                30                    136                 -0.0029     POP22I=1
3f      SRCL (B)   REFL165I and REFL33I   -                     -                   -0.035      POP22I=10

 

  12623   Thu Nov 17 15:17:16 2016   gautam   Update   IMC   MCL Feedback

As a starting point, I looked at some of the old elogs and tried turning on the MCL feedback path with the existing control filters today. I tried various combinations of MCL feedback and FF on and off, and looked at the MCL error signal spectrum (which I believe comes from the analog MC servo board?) for each case. We had used this path earlier this year, when EricQ and I were debugging the EX laser frequency noise, to stabilize the low-frequency excursions of the PSL frequency. The low-frequency suppression can be seen in Attachment #1; there looks to be some excess MCL noise around 16 Hz when the servo is turned on. But the MC transmission (and hence the arm transmission) decays and gets noisier when the MCL feedback path is turned on (see the attached StripTool screenshots).

Attachment 1: MCLerror.pdf
Attachment 2: MCLtest.png
Attachment 3: YarmCtrl.pdf
  12627   Fri Nov 18 17:52:42 2016   gautam   Update   PSL   FSS Slow control -> Python, WFS re-engaged

[yinzi, craig, gautam]

Yinzi had translated the Perl PID script used to implement the discrete-time PID control into Python, and had tested it with Andrew at the PSL lab. This afternoon we made some minor edits to make it suitable for the FSS slow loop (essentially just putting the right channel names into her Python script). I then made an init file to run this script on megatron, and it looks to be working fine over the last half hour or so of observation. I am going to leave things in this state over the weekend to see how it performs.
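
For reference, a minimal sketch of this kind of discrete-time PID loop written against EPICS with pyepics. The channel names and gains below are placeholders for illustration only; the real script (and its channel list) is the one now running on megatron.

    import time
    from epics import caget, caput

    ERR_CH = 'C1:PSL-FSS_FAST_MON'    # hypothetical error-signal readback channel
    ACT_CH = 'C1:PSL-FSS_SLOWDC'      # hypothetical slow actuator channel
    KP, KI, KD = -1e-3, -5e-4, 0.0    # placeholder gains
    SETPOINT, DT = 5.0, 1.0           # placeholder setpoint [V] and loop period [s]

    integral, prev_err = 0.0, 0.0
    while True:
        err = caget(ERR_CH) - SETPOINT
        integral += err * DT
        deriv = (err - prev_err) / DT
        prev_err = err
        # textbook PID output written directly to the slow actuator
        caput(ACT_CH, KP * err + KI * integral + KD * deriv)
        time.sleep(DT)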


We have been running with just the MC2 Transmission QPD for angular control of the IMC for a couple of months now because the WFS loops seemed to drag the alignment away from the optimum. We did the following to try and re-engage the WFS feedback:

  • Close the PSL shutter, turned off all the lights in the lab and ran the WFS DC offsets script : /opt/rtcds/caltech/c1/scripts/MC/WFS/WFS_DC_offsets
  • Locked the IMC, optimized alignment by hand (WFS feedback turned off)
  • Unlocked the IMC, went to the AS table and centered the spots on the WFS
  • Ran WFS RF offsets script - this should be done with the IMC unlocked (after good alignment has been established) /opt/rtcds/caltech/c1/scripts/MC/WFS/WFS_DC_offsets
  • Re-engaged WFS servo

GV addendum 23Nov2016: The WFS have been working well over the last few days - I've had to periodically (~once a day) run the WFS relief script to keep the outputs to the suspension PIT and YAW DOFs below 50 cts, but the WFS aren't dragging the alignment away as we had noticed before. The only thing I did differently is to follow Rana's suggestion and set the RF offsets with the MC unlocked as opposed to locked. I've added a line to the script to remind the user to do so... Also, note that EricQ has recently cleaned up the scripts directory to remove the numerous obsolete scripts in there...

 

  12630   Mon Nov 21 14:02:32 2016   gautam   Update   LSC   DRMI locked on 3f signals, arms held on ALS

Over the weekend, I was successful in locking the DRMI with the arms held on ALS. The locks were fairly robust, lasting on the order of minutes, and the DRMI was able to reacquire by itself in <1 min when it lost lock. I had to tweak the demod phases and loop gains further compared to the 1f lock with no arms, but eventually I was able to run a sensing matrix measurement as well. A summary of the steps I had to follow:

  • Lock on 1f signals, no arms, and run sensing lines; adjust the REFL33 and REFL165 demod phases to align PRCL, MICH and SRCL as well as possible to REFL33I, REFL165Q and REFL165I respectively
  • I also set the offsets to the 'B' inputs at this stage
  • Lock arms on ALS, engage DRMI locking on 3f signals (the restore script resets some values like the 'B' channel offsets, so I modified the restore script to set the offsets I most recently measured)
  • I was able to achieve short locks on the settings from the locking with no arms - I set the loop gains using the UGF servos and ran some sensing lines to get an idea of what the final demod phases should be
  • Adjusted the demod phases, locked the DRMI again (with CARM offset = -4.0), and took another sensing matrix measurement (~2mins). The data was analyzed using the set of scripts EricQ has made for this purpose, here is the result from a lock yesterday evening (the radial axis is meant to be demod board output volts per meter but the calibration I used may be wrong)

I've updated the appropriate fields in the restore script. Now that the DRMI locking is somewhat stable again, I think the next step towards the full lock is to zero the CARM offset and turn on the AO path.
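
For reference, a minimal sketch of how a single sensing-matrix element is extracted from the data: digital I/Q demodulation of one excitation line in one error signal. The averaging time, line frequency and calibration are placeholders; the actual analysis is done with EricQ's scripts.

    import numpy as np

    def sense_line(err, fs, f_line, t_avg=120.0):
        # err    : error-signal samples as a numpy array (e.g. REFL33_I_ERR)
        # fs     : sample rate [Hz]
        # f_line : sensing-line frequency [Hz]
        # returns the complex amplitude (magnitude and phase) of the line
        n = int(min(t_avg, len(err) / fs) * fs)
        t = np.arange(n) / fs
        return 2 * np.mean(err[:n] * np.exp(-2j * np.pi * f_line * t))

    # dividing this by the calibrated drive (meters of DOF motion at f_line)
    # gives the sensing matrix element in error-signal counts (or volts) per meter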

On the downside, I noticed yesterday that the ITMY UL shadow sensor readback was glitching again - for the locking yesterday, I simply held the output of that channel to the input matrix, which worked fine. I had already done some debugging on the Sat. Box with the help of the tester box, but unlike the PRM sat. box, I did not find anything obviously wrong with the ITMY one... I also ran into a CDS issue when I tried to run the script that sets the phase tracker UGF - the script reported that the channels it was supposed to read (the I and Q outputs of the ALS signal, e.g. C1:ALS-BEATX_FINE_I_OUT) did not exist. The same channels worked in dataviewer though, so I am not sure what the problem was. Some time later, the script worked fine too. Something to look out for in the future I guess..

Attachment 1: DRMIArms_Nov20.pdf
  12631   Mon Nov 21 15:34:24 2016   gautam   Update   COC   RC folding mirrors - updated specs

Following up on the discussion from last week's Wednesday meeting, two points were raised:

  1. How do we decide what number we want for the coating on the AR side for 532nm?
  2. Do we want to adjust T@1064nm on the HR side to extract a stronger POP beam?

With regard to the coating on the AR side, I've put in R<300ppm @1064nm and R<1000ppm @532nm. On the HR side, we have T>97% @532nm (copied from the current PR3/SR3 spec), and T<50ppm @1064nm. What are the ghost beams we need to be worried about?

  • Scattered light the AR side interfering with the main transmitted green beam possibly making our beat measurement noisier
    • With the above numbers, accounting for the fact that we ask for a 2 degree wedge on PR3, the first ghost beam from reflection on the AR side will have an angular separation from the main beam of ~7.6 degrees. So over the ~4m the green beam travels before reaching the PSL table, I think there is sufficient angular separation for us to catch this ghost and dump it. 
    • Moreover, the power in this first ghost beam will be ~30ppm relative to the main green beam. If we can get R<100ppm @532nm on the AR side, the number becomes 3ppm
  • Prompt reflection from the HR surface of PR3 scattering green light back into the arm cavity mode 
    • The current spec has T>97% @532nm. So 3% is promptly reflected at the HR side of PR3
    • I'm not sure how much of a problem this really will be - I couldn't find the reflectivities of PR2 and PRM @532nm (were these ever measured?)
    • In any case, if we can have T<50ppm @1064nm and T>99.9% @532nm on the HR side, that would be better

So in conclusion, with the specs as they are now, I don't think the ALS noise performance is adversely affected. I have updated the spec to have the following numbers now.

HR side: T < 50ppm @1064nm, T>99.9% @532nm

AR side: R < 100ppm @1064nm and @532nm
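
As a quick numerical cross-check of the ghost-beam power numbers quoted above (the ghost transmits through the HR side, reflects once off the AR surface and once off the HR coating from inside the substrate; coating absorption and scatter are neglected):

    def ghost_fraction(R_AR_532, T_HR_532):
        # power in the first AR-side ghost relative to the main transmitted green beam
        R_HR_inside = 1.0 - T_HR_532
        return R_AR_532 * R_HR_inside

    print(ghost_fraction(1000e-6, 0.97))   # ~30 ppm for the T>97% HR / R<1000 ppm AR spec
    print(ghost_fraction(100e-6, 0.97))    # ~3 ppm if the AR side meets R<100 ppm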

 

As for the POP question, if we want to extract a stronger POP beam, we will have to relax the requirement on the transmission @1064nm on the HR side. But recall that the approach we are now considering is to replace only PR3, and flip PR2 back the right way around. Currently, POP is extracted at PR2, so if we want to stick with the idea of getting a new PR3 and extracting a stronger POP beam, there needs to be a major optical layout reshuffle in the BS/PRM chamber. Koji suggested that in the interest of keeping things moving along, we don't worry about POP for the time being...


Alternatively, if it turns out that the vendor can meet the specs for our second requirement (which requires 1.5% of lambda @632nm measurement precision to meet the 10+/-5km RoC tolerance on PR3), then we can ask for T<1000ppm @1064nm for the HR coating on PR2, and keep the coating specs on PR3 as above.

 

Attached is a pdf with the specs updated to reflect all the above considerations...

Attachment 1: Recycling_Mirrors_Specs_Nov2016.pdf
  12635   Wed Nov 23 01:13:02 2016   gautam   Update   IMC   MCL Feedback

I wanted to get a clearer idea of the FSS servo and the various boxes in the signal chain and so Lydia and I poked around the IOO rack and the PSL table - I will post a diagram here tomorrow.

We then wanted to characterize the existing loop. It occurred to me later in the evening to measure the plant itself to verify the model shape used to construct the invP filter in the feedback path. I made the measurement with a unity gain control path, and I think there may be an extra zero @10Hz in the model.

Earlier in the evening, we measured the OLG of the MCL loop using the usual IN1/IN2 prescription; above 10Hz the measurement and the FOTON model disagree, which is not surprising given Attachment #1.

I didn't play around with the loop shape too much tonight, but we did perform some trials using the existing loop, taking into account some things I realized since my previous attempts. The summary of the performance of the existing loop is:

  • Below 1Hz, MCL loop injects noise to the arm control signal. I need to think about why this is, but perhaps it is IMC sensing noise?
  • Between 1-4Hz, the MCL loop suppresses the arm control signal
  • Between 4-10Hz (and also between 60-100Hz for the Xarm), the MCL loop injects noise. Earlier in the evening, we had noticed that there was a bump in the X arm control signal between 60-100Hz (which was absent in the Y arm control signal). Koji later helped me diagnose this as too low loop gain, this has since been rectified, but the HF noise of the X arm remains somewhat higher than the Y arm.

All of the above is summarized in the below plots - this behaviour is (not surprisingly) in line with what Den observed back when he put these in.

  

 

The eventual goal here is to figure out if we can get an adaptive feedback loop working in this path, which can take into account prevailing environmental conditions and optimally shape the servo to make the arms follow the laser frequency more closely at low frequencies (i.e. minimize the effect of the noise injected by IMC length fluctuations at low frequency). But first we need to make a robust 'static' feedback path that doesn't inject control noise at higher frequencies; I need to think a little more about this and work out the loop algebra to figure out how best to do this...
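
To make the three regimes above concrete, here is a minimal loop-algebra sketch: with the MCL loop closed, the MC length/laser-frequency noise seen by the arms is suppressed by |1/(1+G)| while MCL sensing noise is re-injected with |G/(1+G)|. The open-loop model G below is a made-up placeholder (UGF ~5 Hz, 1/f^2 slope), not the measured loop shape in Attachment 2.

    import numpy as np

    f = np.logspace(-1, 2, 500)                    # frequency [Hz]
    G = (5.0 / f)**2 * np.exp(-1j * np.pi / 6)     # placeholder open-loop gain

    supp  = np.abs(1.0 / (1.0 + G))    # suppression of MC length / frequency noise
    reinj = np.abs(G / (1.0 + G))      # re-injection of MCL sensing noise

    for fi in (0.3, 3.0, 30.0):
        i = np.argmin(np.abs(f - fi))
        print(f"{fi:5.1f} Hz  noise suppression {supp[i]:.3f}   sensing-noise gain {reinj[i]:.3f}")

Below the UGF the arm sees the frequency noise suppressed but any MCL sensing noise impressed almost in full, which is consistent with the excess seen below 1 Hz; well above the UGF the loop does essentially nothing.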

Attachment 1: MCL_plant.pdf
Attachment 2: OLG.pdf
Attachment 3: MC_armSpectra_X.pdf
Attachment 4: MC_armSpectra_Y.pdf
  12638   Wed Nov 23 16:21:02 2016   gautam   Update   LSC   ITMY UL glitches are back

 

Quote:

As an aside, we have noticed in the last couple of months glitchy behaviour in the ITMY UL shadow sensor PD output - qualitatively, these were similar to what was seen in the PRM sat. box, and since I was able to get that working again, I did a similar analysis on the ITMY sat. box today with the help of Ben's tester box. However, I found nothing obviously wrong, as I did for the PRM sat. box. Looking back at the trend, the glitchy behaviour seems to have stopped some days ago, the UL channel has been well behaved over the last week. Not sure what has changed, but we should keep an eye on this...

I've noticed that the glitchy behaviour in ITMY UL shadow sensor readback is back - as mentioned above, I looked at the Sat. Box and could not find anything wrong with it, perhaps I'll plug the tester box in over the Thanksgiving weekend and see if the glitches persist...

  12643   Mon Nov 28 10:27:13 2016   gautam   Update   SUS   ITMY UL glitches are back

I left the tester box plugged in from Thursday night to Sunday afternoon, and in this period, the glitches still appeared in (and only in) the UL channel.

So yesterday evening, I pulled the Sat. Box. out and checked the DC voltages at various points in the circuit using a DMM, including the output of the high current buffer that supplies the drive current to the shadow sensor LEDs. When we had similar behaviour in the PRM box, this kind of analysis immediately identified the faulty component as the high current buffer IC (LM6321M) in the bad channel, but everything seems in order for the ITMY box. 

I then checked the Satellite Amplifier Termination Board, which basically just adds 100ohm series resistors to the output of the PD readout, and all the resistors seem fine, the piece of insulating material affixed to the bottom of this board is also intact. I then used the SR785 in AC coupled mode to look at the high frequency spectrum at the same points I checked the DC voltages with the DMM (namely the drive voltage to the LEDs, and the PD readout voltages on the PCB as well as on the pins of the connector on the outside of the box after the termination board (leading to the DAQ), and nothing sticks out here in the UL channel either. Of course it could be that the glitches are intermittent, and during my tests they just weren't there...

I am hesitant to start pulling out ICs and replacing them without any obvious signs of failure from them, but I am out of debugging ideas...


One possibility is that the problem lies upstream of the Sat. Box - perhaps the UL channel in the Suspension PD Whitening and Interface Board is faulty. To test, I have now hooked up ITMY Sat. Box. + tester box to the signal chain of ETMY. If I can get the other tester box back from Ben, I will plug in the ETMY sat. box. + tester to the ITMY signal chain. This should tell us something...

Attachment 1: ITMY_satboxSpectra.pdf
  12648   Wed Nov 30 01:47:56 2016   gautam   Update   LSC   Suspension woes

Short summary:

  • Looks like Satellite boxes are not to blame for glitchy behaviour of shadow sensor PD readouts
  • Problem may lie at the PD whitening boards (D000210) or with the Contec binary output cards in c1sus
  • Today evening, similar glitchy behaviour was observed in all MC1 PD readout channels, leading to frequent IMC unlocking. Cause unknown, although I did work at 1X5, 1X6 today, and pulled out the PD whitening board for ITMY which sits in the same eurocrate as that for MC1. MC2/MC3 do not show any glitches.

Detailed story below...


Part 1: Satellite box swap

Yesterday, I switched the ITMY and ETMY satellite boxes, to see if the problems we have been seeing with ITMY UL move with the box to ETMY. It did not, while ITMY UL remained glitchy (based on data from approximately 10pm PDT on 28Nov - 10am PDT 29 Nov). Along with the tabletop diagnosis I did with the tester box, I concluded that the satellite box is not to blame.


Part 2: Tracing the signal chain (actually this was part 3 chronologically but this is how it should have been done...)

So if the problem isn't with the OSEMs themselves or the satellite box, what is wrong? I attempted to trace the signal chain from the satellite box into our CDS system as best as I could. The suspension wiring diagram on our wiki page is (I think) a past incarnation. Of course putting together a new diagram was a monumental task I wasn't prepared to undertake tonight, but in the long run this may be helpful. I will put up a diagram of the part I did trace out tomorrow, but the relevant links for this discussion are as follows (? indicates I am unsure):

  1. Sat box (?)--> D010069 via 64pin IDE connector --> D000210 via DB15 --> D990147 via 4pin LEMO connectors --> D080281 via DB25 --> ADC0 of c1sus
  2. D000210 backplane --> cross-connect (mis)labelled "ITMX white" via IDE connector
  3. c1sus CONTEC DO-32L-PE --> D080478 via DB37 --> BO0-1 --> cross-connect labelled "XY220 1Y4-33-16A" via IDE --> (?)  cross-connect (mis)labelled "ITMX white" via IDE connector

I have linked to the DCC page for the various parts where available. Unfortunately I can't locate (on the new DCC or the old one, or the elog, or the wiki) drawings for D010069 (Satellite Amplifier Adapter Board), D080281 (the "anti-aliasing interface") or D080478 (which is the binary output breakout box). I have emailed Ben Abbott who may have access to some other archive - the diagrams would be useful, as it is looking likely that the problem may lie with the binary output.

So presumably the first piece of electronics after the Satellite box is the PD whitening board. After placing tags on the 3 LEMOs and 1 DB15 cable plugged into this board, I pulled out the ITMY board to do some tabletop diagnosis in the afternoon around 2pm 29Nov.


Part 3: PD whitening board debugging

This particular board has been reported as problematic in the recent past. I started by inserting a tester board into the slot occupied by this board - the LEDs on the tester board suggested that the power supplies from the backplane connectors were alright, which I confirmed with a DMM.

Looking at the board itself, C4 and C6 are tantalum capacitors, and I have faced problems with this type of capacitor in the past. In fact, on the corresponding MC3 board (the only one visible - I didn't want to pull out boards unnecessarily), these have been replaced with electrolytic capacitors, which are presumably more reliable. In any case, these capacitors do not seem to be at any fault; the board receives +/-15 V as advertised.

The whitening switching is handled by the MAX333 - this is what I looked at next. This IC is essentially a quad SPDT switch, and a binary input supplied via the backplane connector serves to route the PD input either through a whitening filter, or bypass it via a unity gain buffer. The logic levels that effect the switching are +15V and 0V (and not the conventional 5V and 0V), but according to the MAX333 datasheet, this is fine. I looked at the supply voltage to all ICs on the board, DC levels seemed fine (as measured with a DMM) and I also looked at it on an oscilloscope, no glitches were seen in ~30sec viewing stretch. I did notice something peculiar in that with no input supplied to the MAX333 IC (i.e. the logic level should be 15V), the NO and NC terminals appear shorted when checked with a DMM. Zach has noticed something similar in the past, but Koji pointed out that the DMM can be fooled into thinking there is a short. Anyway, the real test was to pull the logic input of the MAX333 to 0, and look at the output, this is what I did next.

The schematic says the whitening filter has poles at 30,100Hz and a zero at 3 Hz. So I supplied as "PD input" a 12Hz 1Vpp sinewave - there should be a gain of ~x4 when this signal passes through the path with the whitening filter. I then applied a low frequency (0.1Hz) square wave (0-5V) to the "bypass" input, and looked at the output, and indeed saw the signal amplitude change by ~4x when the input to the switch was pulled low. This behaviour was confirmed on all five channels, there was no problem. I took transfer functions for all 5 channels (both at the "monitor" point on the backplane connector and on the front panel LEMOs), and they came out as expected (plot to be uploaded soon).
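
The factor-of-~4 expectation quoted above follows directly from the stated whitening shape; a quick sketch evaluating it (pole/zero frequencies as on the schematic, overall gain assumed to be unity at DC):

    import numpy as np

    def whitening_gain(f, f_zero=3.0, f_poles=(30.0, 100.0)):
        # |H(f)| for a zero at 3 Hz and poles at 30 Hz and 100 Hz, unity at DC
        h = 1.0 + 1j * f / f_zero
        for fp in f_poles:
            h /= 1.0 + 1j * f / fp
        return np.abs(h)

    print(whitening_gain(12.0))    # ~3.8, i.e. the ~x4 gain seen on the scope at 12 Hz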

Next, I took the board back to the eurocrate. I first put in a tester box into the slot and measured the voltage levels on the backplane pins that are meant to trigger bypassing of the whitening stage, all the pins were at 0V. I am not sure if this is what is expected, I will have to look inside D080478 as there is no drawing for it. Note that these levels are set using a Contec binary output card. Then I attached the PD whitening board to the tester board, and measured the voltages at the "Input" pins of all the 5 SPDT switches used under 2 conditions - with the appropriate bit sent out via the Contec card set to 0 or 1 (using the button on the suspension MEDM screens). I confirmed using the BIO medm screen that the bit is indeed changing on the software side, but until I look at D080478, I am not sure how to verify the right voltage is being sent out, except to check at the pins on the MAX333. For this test, the UL channel was indeed anomalous - while the other 4 channels yielded 0V (whitening ON, bit=1) and 15V (whitening OFF, bit=0), the corresponding values for the UL channel were 12V and 10V.

I didn't really get any further than this tonight. But this still leaves unanswered questions - if the measured values are faithful, then the UL channel always bypasses the whitening stage. Can this explain the glitchy behaviour?


Part 4: MC1 troubles

At approximately 8pm, the IMC started losing lock far too often - see the attached StripTool trace. There was a good ~2hour stretch before that when I realigned the IMC, and it held lock, but something changed abruptly around 8pm. Looking at the IMC mirror OSEM PD signals, all 5 MC1 channels are glitching frequently. Indeed, almost every IMC lockloss in the attached StripTool is because of the MC1 PD readouts glitching, and subsequently, the damping loops applying a macroscopic drive to the optic which the FSS can't keep up with. Why has this surfaced now? The IMC satellite boxes were not touched anytime recently as far as I am aware. The MC1 PD whitening board sits in the same eurocrate I pulled the ITMY board out of, but squishing cables/pushing board in did not do anything to alleviate the situation. Moreover, MC2 and MC3 look fine, even though their PD whitening boards also sit in the same eurocrate. Because I was out of ideas, I (soft) restarted c1sus and all the models (the thinking being if something was wrong with the Contec boards, a restart may fix it), but there was no improvement. The last longish lock stretch was with the MC1 watchdog turned off, but as soon as I turned it back on the IMC lost lock shortly after.

I am leaving the autolocker off for the night, hopefully there is an easy fix for all of this...

Attachment 1: IMCwoes.png
  12652   Wed Nov 30 17:08:56 2016   gautam   Update   LSC   Binary output breakout box removed

[ericq, gautam]

To diagnose the glitches in OSEM readouts, we have removed one of the PCIe BO DB37-to-IDE50 adaptor boxes from 1X5. All the watchdogs were turned off, and the power to the unit was cut before the cables on the front panel were removed. I am working on the diagnosis, I will update more later in the evening. Note that according to the c1sus model, the box we removed supplies backplane logic inputs that control whitening for ITMX, ITMY, BS and PRM (in case anyone is wondering/needs to restore damping to any of these optics). The whitening settings for the IMC mirrors reside on the other unit in 1X5, and should not be affected.

  12653   Thu Dec 1 02:19:13 2016   gautam   Update   LSC   Binary output breakout box restored

As we suspected, the binary breakout board (D080478, no drawing available) is simply a bunch of tracks printed on the PCB to route the DB37 connector pins to two IDE50 connectors. There was no visible damage to any of the tracks (some photos uploaded to the 40m picasa). Further, I checked the continuity between pins that should be connected using a DMM.

I got a slightly better understanding of how the binary output signal chain is - the relevant pages are 44 and 48 in the CONTEC manual. The diagram on pg44 maps the pins on the DB37 connector, while the diagram on pg 48 maps how the switching actually occurs. The "load" in our case is the 4.99kohm resistor on the PD whitening board D000210. Following the logic in the diagram on pg48 is easy - setting a "high" bit in the software should pull the load resistor to 0V while setting a "low" bit keeps the load at 15V (so effectively the whole setup of CONTEC card + breakout board + pull-up resistor can be viewed as a simple NOT gate, with the software bit as the input, and the output connected to the "IN" pin of the MAX333).

Since I was satisfied with the physical condition of the BO breakout board, I re-installed the box on 1X5. Then, with the help of a breakout board, I diagnosed the situation further - I monitored the voltage to the pins on the backplane connector to the whitening boards while switching the MEDM switches to toggle the whitening state. For all channels except ITMY UL, the behaviour was as expected, in line with the preceeding paragraph - the voltage swings between ~0V and ~15V. As mentioned in my post yesterday, the ITMY UL channel remains dodgy, with voltages of 12.84V (bit=1) and 10.79V (bit=0). So unless I am missing something, this must point to a faulty CONTEC card? We do have spares, do we want to replace this? It also looks like this problem has been present since at least 2011...

In any case, why should this lead to ITMY UL glitching? According to the MAX333 datasheet, the switch wants "low"<0.8V and "high">2.4V - so even if the CONTEC card is malfunctioning and the output is toggling between these two states, the condition should be that the whitening stage is always bypassed for this channel. The bypassed route works just fine, I measured the transfer function and it is unity as expected.

So what could possibly be leading to the glitches? I doubt that replacing the BO card will solve this problem. One possibility that came up in today's meeting is that perhaps the +24V to the Sat. Box (which is used to derive the OSEM LED drive current) may be glitching - of course we have no monitor for this, but given that all the Sat. Amp. adaptor boards are on 1X5 near the Acromag, perhaps Lydia and Johannes can recommission the PSL diagnostic Acromag as a power-supply-monitoring Acromag?


What do these glitches look like anyway? Here is a few-second snapshot from one of the many MC1 excursions yesterday - the glitch itself is very fast, and it then gives an impulse to the damping loop, which eventually damps it away.

And here is one from a glitch that occurred while the tester box was plugged into the ITMY signal chain (so we can rule out anything in the vacuum, and also the satellite box itself, as the glitches seem to remain even when boxes are shuffled around and don't migrate with the box). So even though the real glitch happens in the UL channel (note that the y axes are very different for the channels), the UR, LR and LL channels also "feel" it. Recall that this is with the tester box (so no damping loops involved), and the fact that the side channel is more immune to it than the others is hard to explain. Could this just be electrical cross-coupling?

Still beats me what in the signal chain could cause this problem.


Some good news - Koji was running some tests on the modified WFS demod board and locked the IMC for this. We noticed that MC1 seemed well behaved for extended periods of time, unlike last night. I realigned the PMC and IMC, and we have been having lock stretches of a few hours, as we usually do. I looked at the MC1 OSEM PD readbacks during the couple of lock losses in the last few hours and didn't notice anything dramatic. So if things remain in this state, at least we can do other stuff with the IFO... I have plugged in the ITMY sat. box again, but have left the watchdog disabled; let's see what the glitching situation is overnight... The original ITMY sat. box has been plugged into the ETMY DAQ signal chain with a tester box. The 3 day trend supports the hypothesis that the sat. box is not to blame, so I am plugging the ETMY suspension back in as well...

Attachment 4: ULcomparison.pdf
ULcomparison.pdf
  12655   Thu Dec 1 20:20:15 2016 gautamUpdateIMCIMC loss measurement plan

We want to measure the IMC round-trip loss using the Isogai et al. ringdown technique. I spent some time today looking at the various bits and pieces needed to make this measurement; this elog is meant to be a summary of my thoughts.

  1. Inventory
    • The AOM (in its new mount to get the right polarization) has been installed upstream of the PMC by Johannes. He did a brief check to see that the beam is indeed diffracted, but a more thorough evaluation has to be done. There is currently no input to the AOM; the function generator on the PSL table is OFF.
    • The Isogai paper recommends 3 high-BW PDs for the ringdown measurement. Scouring through some old elogs, I gather that the QPDs aren't good for this kind of measurement, but the PDA255 (50MHz BW) is a suitable candidate. I found two in the lab today - one I used to diagnose the EX laser intensity noise, so I know it works; I still need to check the other one. We also have a working PDA10CF detector (150 MHz BW). In principle, we could get away with just two, as the ringdowns in reflection and transmission do not have to be measured simultaneously, but it would be nice to have 3.
    • DAQ - I think the way to go is to use a fast scope triggered on the signal sent to the AOM to cut the light to the IMC. I still need to figure out how to script this, though judging by some 2007 elogs by rana, it shouldn't be too hard...
  2. Layout plans
    • Where to put the various PDs? Keeping with the terminology of the Isogai paper, the "Trans diode" can go on the MC2 table - from past measurements, there is already a pickoff from the beam going to the MC TRANS QPD which is currently being dumped, so this should be straightforward...
    • For the "Incident Diode", we can use the beam that was used for the 3f cancellation trials - I checked that the beam still runs along the edge of the PSL table, we can put a fast PD in there...
    • For the "REFL diode" - I guess the MC REFL PD is high BW enough, but perhaps it is better to stick another PD in on the AS table, we can use one of the existing WFS paths? That way we avoid the complicated transfer function of the IMC REFL PD which is tuned to have a resonance at 29.4MHz, and keeps interfacing with the DAQ also easy, we can just use BNC cables...
    • We should be able to measure and calibrate the powers incident on these PDs relatively easily.
       
  3. Other concerns
    • I have yet to do a thorough characterization of the AOM performance; there have been a number of elogs noting possible problems with the setup. For one, the RF driver datasheet recommends a 28V supply voltage but we are currently giving it 24V. In the (not too distant) past, the AOM has not been very efficient at cutting the power - the datasheet suggests we should be able to diffract away 80% of the central beam, but only 10-15% was realized, though this may have been due to sub-optimal alignment or to the AOM receiving the wrong polarization...
  4. Plan of action
    • Check RF driver, AOM performance, I have in mind following the methodology detailed here
    • Measure PMC ringdown - this elog says we want it to be faster than 1us
    • Put in the three high BW PDs required for the IMC ringdown, check that these PDs are working
    • Do the IMC ringdown

Does this sound like a sensible plan? Or do I need to do any further checks?

  12657   Fri Dec 2 11:56:42 2016 gautamUpdateLSCMC1 LEMO jiggled

I noticed 2 periods of frequent IMC locklosses on the StripTool trace, and so checked the MC1 PD readout channels to see if there were any coincident glitches. Turns out there weren't any, BUT the LR and UR signals had changed significantly over the last couple of days, which is when I've been working at 1X5. The fast LR readback was actually showing ~0, but the slow monitor channel had been steady, so I suspected some cabling shenanigans.

Turns out, the problem was that the LEMO connector on the front of the MC1 whitening board had gotten jiggled ever so slightly - I re-jiggled it till the LR fast channel registered a similar number of counts to the other channels. All looks good for now. For good measure, I checked the 3 day trend of the fast PD readbacks for all 8 SOS optics (40 channels in all; I didn't look at the ETMs as their whitening boards are at the ends), and everything looks okay... This whole situation seems very precarious to me; perhaps we should have a more robust signal routing from the OSEMs to the DAQ that is less sensitive to cable touching etc...

  12659   Fri Dec 2 16:21:12 2016 gautamUpdateGeneralrepaired projector, new mixer arrived and installed

The most recent power outage took out our projector and mixer. The projector was sent for repair while we ordered a new mixer. Both arrived today. Steve is working on re-installing the projector right now, and I installed the mixer which was verified to be working with our DAFI system (although the 60Hz issue still remains to be sorted out). The current channel configuration is:

Ch1: 3.5mm stereo output from pianosa

Ch2: DAFI (L)

Ch3: DAFI (R)

I've set some random gains for now, but we will have audio again when locking.

  12660   Fri Dec 2 16:40:29 2016 gautamUpdateIMC24V fuse pulled out

I've pulled out the 24V fuse block which supplies power to the AOM RF driver. The way things are set up on the PSL table, this same voltage source powers the RF amplifiers which amplify the green beatnote signals before sending them to the LSC rack. So I turned off the green beat PDs before pulling out the fuse. I then disconnected the input to the RF driver (it was plugged into a DS345 function generator on the PSL table) and terminated it with a 50 ohm terminator. I want to figure out a smart way of triggering the AOM drive and recording a ringdown on the scope, after which I will re-connect the RF driver to the DS345. The RF driver, as well as the green beat amplifiers and green beat PDs, remain unpowered for now...

  12663   Mon Dec 5 01:58:16 2016 gautamUpdateIMCIMC ringdowns

Over the weekend, I worked a bit on getting these ringdowns going. I will post a more detailed elog tomorrow but here is a quick summary of the changes I made hardware-wise in case anyone sees something unfamiliar in the lab...

  • PDA10CF PD installed on PSL table in the beam path that was previously used for the 3f cancellation trials
  • PDA255 installed on MC2 trans table, long BNC cable running from there to vertex via overhead cable tray
  • PDA255 installed on AS table in front of one of the (currently unused) WFS

I spent a while in preparation for these trials (details tomorrow) - optimizing the AOM alignment/diffracted power ratio, checking AOM and PMC switching times, etc. - but once the hardware is laid out, it is easy to do a bunch of ringdowns in quick succession with an ethernet scope. Tonight I did about 12 ringdowns - but stupidly, for the first 10, I was only saving 1 channel from the oscilloscope instead of the 3 needed to apply the MIT method.

Here is a representative plot of the ringdown - at the moment, I don't have an explanation for the funky oscillations in the reflected PD signal, need to think on this.. More details + analysis to follow...


Dec 5 2016, 1:30pm:

Actually, the plot I meant to put up is this one, which has a slightly longer acquisition window. The feature I am referring to is the 100kHz oscillation in the REFL signal. Any ideas as to what could be causing this?

Attachment 1: IMCringdown.pdf
IMCringdown.pdf
Attachment 2: IMCringdown_2.pdf
IMCringdown_2.pdf
  12664   Mon Dec 5 15:05:37 2016 gautamUpdateLSCMC1 glitches are back

For no apparent reason, the MC1 glitches are back. Nothing has been touched near the PD whitening chassis today, and the trend suggests the glitching started about 3 hours ago. I had disabled the MC1 watchdog for a while to avoid the damping loop kicking the suspension around when these glitches occur, but have re-enabled it now. The IMC is holding lock for some minutes at a time... I was hoping to do another round of ringdowns tonight, but if this persists, it's going to be difficult...

  12665   Mon Dec 5 15:55:25 2016 gautamUpdateIMCIMC ringdowns

As promised, here is the more detailed elog.


Part 1: AOM alignment and diffraction efficiency optimization

I started out by plugging the input of the AOM driver back into the DS345 on the PSL table, after which I re-inserted the 24V fuse that had been removed. I first wanted to optimize the AOM alignment and see how well we could cut the input power by driving the AOM. To investigate this, I closed the PMC, unlocked the PSL shutter, and dialed the PSL power down to ~100mW using the waveplate in front of the laser. The power just before the AOM, before touching anything, was 1.36W as measured with the Coherent power meter.

The photodiode (PDA255) for this experiment was placed downstream of the 1%(?) transmissive optic that steers the beam into the PMC (this PD would also be used in Part 2, but has since been removed)...

Then I tuned the AOM alignment till I maximized the DC power on this newly installed PD. It would have been nicer to have the AOM installed on the mount such that the alignment screws were more easily accessible, but I opted against doing any major re-organization for the time being. Even after optimizing the AOM alignment, the diffraction efficiency was only ~15%, for 1V to the AOM driver input. So I decided to play with the AOM driver a bit.

Note that the AOM driver is powered by 24V DC, even though the spec sheet says it wants 28V. Also, the "ALC" input is left unconnected, which should be fine for our purposes. I opted not to mess with this for the time being - rather, I decided to tweak the RF adjust potentiometer on the front of the unit, which the spec sheet says can adjust the RF power between 1W and 2W. By iteratively tuning this pot and the AOM alignment, I was able to achieve a diffraction efficiency of ~87% (the spec sheet tells us to expect 80%) with a switching time of ~130ns (the spec sheet tells us to expect 200ns, but this is presumably a function of the beam size in the AOM). These numbers seemed reasonable to me, so I decided to push on. Note that I did not do a thorough check of the linearity of the AOM driver after touching the RF adjust potentiometer, as Koji did - this would be relevant if we want to use the AOM as an ISS servo actuator, but for the ringdown, all that matters is the diffraction efficiency and switching time, both of which seemed satisfactory.
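For the record, the efficiency and switching-time numbers above were read off the scope trace of the PD downstream of the AOM; a rough python sketch of that estimate is below (the file name, column format and time windows are placeholders, not the actual saved data):

    import numpy as np

    # Sketch: estimate diffraction efficiency and 90%->10% switching time from a
    # saved scope trace of the PD monitoring the undiffracted beam after the AOM.
    # 'trace.txt' (two columns: time [s], volts) is a placeholder file/format,
    # and the assumption is that the trigger (RF drive turning on) is at t = 0.
    t, v = np.loadtxt('trace.txt', unpack=True)

    v_on  = np.median(v[t < 0])        # before the switch: full power on the PD
    v_off = np.median(v[t > 5e-6])     # well after the switch: undiffracted leftover
    efficiency = 1.0 - v_off / v_on    # fraction of the carrier diffracted away

    # 90% -> 10% fall time of the transmitted (zeroth-order) power
    v90 = v_off + 0.9 * (v_on - v_off)
    v10 = v_off + 0.1 * (v_on - v_off)
    t90 = t[np.argmax(v < v90)]        # first sample below the 90% level
    t10 = t[np.argmax(v < v10)]        # first sample below the 10% level
    print('efficiency ~ %.2f, fall time ~ %.0f ns' % (efficiency, (t10 - t90) * 1e9))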

At this point, I turned the PSL power back up (measured 1.36W just before the AOM). Before doing so, I estimated the PD would have ~10mW incident on it, and I wanted it to be more like 1mW, so I put an ND 1.0 filter on to avoid saturation.


Part 2: PMC "ringdown"

As mentioned in my earlier elog, we want the PMC to cut the light to the IMC in less than 1us. While I was at it, I decided to see if I could do a ringdown measurement for the PMC. For this, I placed two more PDs in addition to the one mentioned in Part 1. One monitored the transmitted intensity (PDA10CF, installed in the old 3f cancellation trial beam path, ~1mW incident on it when PMC is locked and well aligned). I also split off half the light to the PMC REFL CCD (2mW, so after splitting, PMC CCD gets 1mW through some ND filters, and my newly installed PD (PDA255) receives ~1mW). Unfortunately, the PMC ringdown attempts were not successful - the PMC remains locked even if we cut the incident light by 85%. I guess this isn't entirely surprising, given that we aren't completely extinguishing the input light - this document deals with this issue.... But the PMC transmitted intensity does fall in <200ns (see plot in earlier elog), which is what is critical for the IMC ringdown anyways. So I moved on.


Part 3: IMC ringdown

The PDA10CF installed in Part 2 was left where it was. The reflected and transmitted light monitors are both PDA255s. The former was installed in front of the WFS2 QPD on the AS table (it needed an ND1.0 filter to avoid damage if the IMC loses lock outside of a ringdown, in which case ~6mW of power would be incident on this PD), while the latter was installed on the MC2 transmission table. We may have to remove the former, but I don't see any reason to remove the latter PD. I also ran a long cable from the MC2 trans table to the vertex area, which is where I am monitoring the various signals.

  

The triggering arrangement is shown below.

  

To actually do the ringdown, here is the set of steps I followed.

  1. Make sure the settings on the scope (X & Y scales, triggering) are optimized for data capture. All channels are set to 50ohm input impedance. The trigger comes from the "TTL" output of the DS345, whose "signal" output drives the AOM driver. Set the trigger to external; the mode should be "normal" and not "auto" (this keeps the data on the screen until the next trigger, allowing us to download the data via ethernet).
  2. The DS345 is set to output a low frequency (0.005Hz) square wave, with 1Vpp amplitude and 0.5V offset (so the AOM driver input is driven between 0V and 1V DC, which is what we want). This gives us ~100 seconds to re-lock the IMC and download the data, all while chilling in the control room.
  3. The autolocker was excellent yesterday, re-acquiring the IMC lock in ~30secs almost every time. In the few instances it didn't work, turn the autolocker off (but make sure the MC2 tickle is on - it helps) and manually lock the IMC by twiddling the gain slider (basically doing by hand what the autolock script does). As mentioned above, you have ~100 secs to do this; if you miss the window, just wait ~200secs for the next trigger...
  4. In the meantime, download the data (script details to follow). I've made a little wrapper script (/users/gautam/2016_12_IMCloss/grabChans.sh) which uses Tobin's original python script, which unfortunately only grabs data one channel at a time. The shell script just calls it three times, and takes two command line arguments: the base name for the files to which the data will be written, and the IP address of the scope... A rough sketch of what such a per-channel grab looks like is shown right after this list.
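For illustration only - this is not Tobin's script - here is a minimal python sketch of pulling one channel from an ethernet scope over a raw SCPI socket. The port (often 5025), the Tektronix-style commands and the IP address are assumptions that depend on the actual scope model:

    import socket

    # Sketch of grabbing one channel from an ethernet scope over a raw SCPI socket.
    # NOT Tobin's script: the port, the Tektronix-style commands and the IP below
    # are placeholders that depend on the actual scope in use.
    def grab_channel(ip, channel, port=5025, timeout=5.0):
        with socket.create_connection((ip, port), timeout=timeout) as s:
            s.sendall(b'DATA:SOURCE CH%d\n' % channel)   # select the channel
            s.sendall(b'DATA:ENCDG ASCII\n')             # ask for ASCII-encoded samples
            s.sendall(b'CURVE?\n')                       # request the waveform
            chunks = []
            while True:
                try:
                    chunk = s.recv(4096)
                except socket.timeout:
                    break
                if not chunk:
                    break
                chunks.append(chunk)
            return b''.join(chunks).decode().strip()

    if __name__ == '__main__':
        for ch in (1, 2, 3):                              # e.g. trans, REFL and incident PDs
            data = grab_channel('192.168.0.10', ch)       # placeholder IP address
            with open('ringdown_CH%d.txt' % ch, 'w') as f:
                f.write(data)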

It is possible to do ~15 ringdowns in an hour, provided the seismic activity is low and the IMC is in a good mood. Unfortunately, I messed up my data acquisition yesterday, so I only have data from 2 ringdowns, which I will work on fitting and extracting a loss number from. The ringing in the REFL signal is also a mystery to me. I will try using another PDA255 and see if this persists. In the worst case, I think we can exclude the later part of the REFL signal and fit only the early exponential decay. The ringdown signal plots have been uploaded to my previous elog. Also, the triggering arrangement can be optimized further, for example by using the binary output from one of our FEs to trigger the actual waveform instead of leaving it in this low frequency oscillation, but given our recent experience with the Binary Output cards, I thought this was unnecessary for the time being...

Data analysis to follow.


I have left in place all the PDs I put in for this measurement. If anyone needs to remove the one in front of WFS2, go ahead, but I think we can leave the one on the MC2 trans table where it is...

Attachment 2: AOMswitching.pdf
AOMswitching.pdf
Attachment 6: electricalLayout.pdf
electricalLayout.pdf
  12666   Mon Dec 5 19:29:52 2016 gautamUpdateIMCIMC ringdowns

The MC1 suspension troubles vanished as suddenly as they came - the IMC was staying locked stably, so I decided to do another round of ringdowns and investigate this feature in the reflected light a bit more closely. Over 9 ringdowns, as seen in the figure below, the feature doesn't quite remain the same, but qualitatively the behaviour is similar.

Steve helped me find another PDA255 and so I will try switching out this detector and do another set of ringdowns later tonight. It just occurred to me that I should check the spectrum of the PD output out to high frequencies, but I doubt I will see anything interesting as the waveform looks clean (without oscillations) just before the trigger...

Attachment 1: REFLanomaly.pdf
REFLanomaly.pdf
  12667   Tue Dec 6 00:43:41 2016 gautamUpdateIMCmore IMC ringdowns

In an effort to narrow down the cause of the 100kHz ringing seen in the reflected PD signal, I tried a few things.

  1. Changed the PD - there was a PDA 255 sitting on the PSL table by the RefCav. Since it wasn't being used, I swapped the PD I was using with this. Unfortunately, this did not solve the problem.
  2. Used a different channel on the oscilloscope - ringing persisted
  3. Changed BNC cable running from PD to oscilloscope - ringing persisted
  4. Checked the spectrum of the PD under dark and steady illumination conditions for any features at 100kHz, saw nothing (as expected) 

I was working under the hypothesis that the ringing was due to some impedance mismatch between the PD output and the oscilloscope, and 4 above supports this. However, most documents I can find online, for example this one, recommend connecting the PD output via 50ohm BNC to a scope with input impedance 50ohms to avoid ringing, which is what I have done. But perhaps I am missing something.

Moreover, the ringdown in reflection actually supplies two of the five variables needed to apply the MIT method of loss estimation. I suppose we could fit the parameter "m4" from the ringdown in transmission, and then use this fitted value on the ringdown in reflection to see where the reflected power settles (i.e. the parameter "m3" as per the MIT paper). I will try analyzing the data on this basis.
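As a starting point for that analysis, here is a rough python sketch of fitting a single exponential decay to the transmission ringdown to pull out the decay time; mapping the fit onto the m1...m5 parameters of the Isogai et al. formulas is not shown, and the file name/format and initial guesses are placeholders:

    import numpy as np
    from scipy.optimize import curve_fit

    # Sketch: fit a single exponential to the transmission ringdown to get the decay
    # time. 'trans.txt' (two columns: time [s], volts) is a placeholder file/format,
    # and the initial guesses below are just that - guesses.
    def decay(t, v0, tau, v_offset):
        return v0 * np.exp(-t / tau) + v_offset

    t, v = np.loadtxt('trans.txt', unpack=True)
    mask = t >= 0                      # keep only the post-switch data (trigger at t = 0)
    p0 = (v[mask][0], 10e-6, 0.0)      # amplitude, ~10us decay time, zero offset
    popt, pcov = curve_fit(decay, t[mask], v[mask], p0=p0)
    print('decay time = %.2f us' % (popt[1] * 1e6))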

I also measured the power levels at each of the PDs; these should allow us to calibrate the PD voltage outputs to power in Watts. All readings were taken with the Ophir power meter, with the filter removed and the IMC locked.

PD Power level
REFL 0.47 mW (measured before 1.0 ND filter)
Trans 203 uW
Incident 1.06 mW

 

  12701   Tue Jan 10 22:55:43 2017 gautamUpdateCDSpower glitch - recovery steps

Here is a link to an elog with the steps I had to follow the last time there was a similar power glitch.

The RAID array restart was also done not too long ago; we should also do a data consistency check as detailed here, if it hasn't been done already...

If someone hasn't found the time to do this, I can take care of it tomorrow afternoon after I am back.

Quote:

Does "done" mean they are OK or they are somehow damaged? Do you mean the workstations or the front end machines?

The computers are all done.

megatron and optimus are not responding to ping commands or ssh -- please power them up if they are off; we need them to get data remotely

 

  12702   Wed Jan 11 16:35:03 2017 gautamUpdateCDSpower glitch - recovery progress

[lydia, ericq, gautam]

We set about following the instructions linked in the previous elog. A few notes/remarks:

  1. It is important to run the ntpdate commands before restarting the models. Sometimes, multiple restarts of the models were required to turn all the indicator blocks on the MEDM screen green.
  2. There was also an issue of multiple ntpd processes running on the same machine, which obviously caused all sorts of timing havoc. EricQ helped us diagnose and fix these. At the moment, all the lights are green on the CDS status MEDM screen.
  3. On the hardware side, apart from the usual suspects of frontends/megatron/optimus/fb needing to be rebooted, I noticed that the ETMX OSEM lights were off on the control room monitors. Investigation pointed to the two 20V Sorensens at the X end outputting 0V, 0A after the power glitch. We turned down both dials and then gradually ramped them up again. Both Sorensens now read +/-20V, 0.3A, which is in agreement with the labels stuck onto them.
  4. Restarted MC autolocker and FSS Slow scripts on megatron. I have not yet looked at the status of the nds2 server on megatron.
  5. The 11 MHz Marconi has yet to be restarted - but I am unable to get even the IMC locked at the moment. For some reason, the RMS of the MC1 and MC3 coil outputs is way higher than usual (~5mV rms as compared to the <1mV rms I am used to seeing for a damped optic). I will investigate further. Leaving the MC autolocker disabled for now.
  12708   Thu Jan 12 17:31:51 2017 gautamUpdateCDSDC errors

The IFO is more or less back to an operational state. Some details:

  1. The IMC mirror excess motion alluded to in the previous elog was due to some timing issues on c1sus. The "DAC" and "DK" blocks in the c1x02 diag word were red instead of green. Restarting all the models on c1sus fixed the problem
  2. When c1ioo was restarted, all of Koji's (digital) changes to the MC WFS servo were lost as they had not been committed to the SDF. Eric suggested that I could just restore them from burt snapshots, which is what I did. I used the c1iooepics.snap file from 12:19PM PST on 26 December 2016, which was a time when the WFS servo was working well as per this elog by Koji. I have also committed all the changes to the SDF. IMC alignment has been stable for the last 4 hours.
  3. Johannes aligned and locked the arms today. There was a large DC offset on POX11, which was zeroed out by closing the PSL shutter and running LSC offsets. Both arms lock and stay aligned now.
  4. The doubling oven controller at the Y end was switched off. Johannes turned it on.
  5. Eric and I started a data consistency check on the RAID array yesterday; it completed today and indicated no issues.
  6. NDS2 is now running again on megatron so channel access from outside should(???) be possible again.

One error persists - the "DC" indicators (data concentrator?) on the CDS MEDM screen for the various models often spontaneously go red and then return to green. Is this a known issue with an easy fix?

  12716   Fri Jan 13 23:39:46 2017 gautamUpdateGeneralETMX suspension electronics problems?

[Koji,gautam]

After Koji's leap second fix, we were playing around with the X arm locking. In particular, we experimented with the limit value on the X arm LSC filter bank - the nominal value is 4000, and we wanted to see if we could increase this without kicking the optic while acquiring arm lock. We initially increased it to 8000, and then turned it off altogether. Then we rapidly turned the output of the servo ON/OFF and looked at the arm transmission to see if it came back to the level before unlocking, as an indication of whether the optic was kicked.

These trials suggested that a value of 8000 for the limiter was OK, so we left the LSC mode on with the limiter set to 8000. But just as we were about to leave for the night, I noticed on the wall StripTool that the X arm was unlocked. Investigating, we found that the green wasn't even locking to a HOM. Further investigation of the Oplev spot showed that ETMX had received a large kick (both pitch and yaw errors were ~200urad). ITMX was unaffected.

We initially tried lowering the LSC limit value back to 4000, then used first the Oplev spot and then the green to align the arm. But turning on LSC misaligned the arm after acquiring lock. So we decided to leave LSC off, thinking that the notorious ETMX suspension problems had resurfaced. As a diagnostic, we figured we'd leave the watchdog tripped and use the Oplev to see if the optic was getting kicked. But the act of turning the watchdog off kicked the optic again (WHY?!).

Looking at the ETMX sus screen, turning off all the damping and LSC (but watchdog on) still leaves a non-zero offset in the "Vmon" field, between 0.02-0.05V depending on the coil. Turning the watchdog OFF takes all these to 0.009V, although I can see the LR value fluctuating between 0.004V and 0.009V. I went to the Xend and squished all the cables on the Sat. Box, but the problem persisted.

At this time, I can't think of any explanation, so I am giving up for the night. To avoid unnecessarily kicking the optic, I am going to unplug the suspension from the Sat. Box and leave one of our tester boxes plugged in, lets see if that sheds any light on the situation...


Notes:

  1. The +/-20V sorensens at this end were "tripped" for a few days after the power glitch until they were reset and turned back on yesterday. But this should not affect Vmon, as these Sorensens only supply the DC voltage for the coil bias, which is a slow machine channel?
  2. The X arm was staying locked and well aligned for hours on end earlier this afternoon - in fact it was locked for about 2 hours some 6-8 hours ago; I can still see the trace on the wall StripTool...
  12725   Mon Jan 16 23:25:07 2017 gautamUpdateSUSMC1 SUS electronics investigation

[rana,gautam]

Summary:

  • MC1 glitchy behaviour is back
  • Found a broken LEMO cable, left unplugged for the night -> to be repaired tomorrow
  • Further diagnosis to follow

During the course of Rana's inspection of the general state of the IFO, he commented that there seemed to be several seismic-related IMC lock losses in the time he had been observing it. This issue looked suspiciously like the MC1 glitches I had noticed late last year, especially since each time the IMC unlocked, we could see significant motion on MC REFL. To diagnose, we did the following:

  1. Closed PSL shutter
  2. Ramped down the gains of the MC1 damping loops by a factor of 1000 in ~4 secs using z step (a rough sketch of such a ramp is shown right after this list)
  3. Shut down the watchdog for MC1
  4. Observed dataviewer traces for glitches
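For reference, the 'z step' command-line tool did the ramp in step 2 above; an equivalent ramp can also be scripted with pyepics, as in the rough sketch below (the channel name is assumed for illustration and not verified):

    import time
    import numpy as np
    from epics import caget, caput   # pyepics

    # Rough sketch of ramping one damping-loop gain down by 1000x over ~4 s.
    # The channel name below is an assumed example, not verified.
    chan = 'C1:SUS-MC1_SUSPOS_GAIN'
    g0 = caget(chan)
    for scale in np.logspace(0, -3, 40):   # 1 -> 1e-3 in 40 logarithmic steps
        caput(chan, g0 * scale)
        time.sleep(0.1)                    # 40 steps x 0.1 s ~ 4 s total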

Sure enough, there were several glitches that occurred in all 5 sensor channels. These glitches varied in size from a few counts up to 60-70 counts for the bigger ones. In the past, squishing the LEMO connector on the front of the PD whitening board (D000210) had apparently made the glitching go away. So tonight, for starters, we squished everything else - the Sat. Box connectors, the breakout board from the Sat. Box to the whitening board on the back of 1X6, and the DB connector on the front of the whitening board. This had no effect - the glitching persisted unchanged.

Next, Rana pulled out two of the three 4-pin LEMOs, leaving only the one corresponding to UL/LL plugged in - but the glitching persisted in these two channels. We then pulled out the board. It was installed in 1998, but has a sticker on it that says "fixed in 2003". Not sure what the fix was. Visual inspection of the circuit didn't show anything obviously faulty, but it did look like the two MAX333A quad switches (these control whether the whitening is bypassed or not) had been replaced at some point. There are other undesirable features, such as the use of thick film resistors, but nothing that would explain the glitchy behaviour.

Next, we re-inserted the whitening board back into its original slot in the Eurocrate, but switched the cables (both D sub and LEMO, but only on the whitening board end) between the boards for MC1 and MC3 (i.e. MC1 cables were routed through the whitening board that was originally used for MC3, and vice-versa). But the glitches remained consistent on the MC1 channels. So it looks like the board is not a likely culprit.

Finally, we went in and squished all the cables from the PD whitening board to the ADC (via an AA filter board). For some of the LEMO cables from the whitening board, the LEMO backshells were not properly tightened; Rana fixed these before putting them back in. Some of the connectors were also not pushed in tightly enough - Rana heard the click when he pushed them in. The cable from the adaptor board to the ADC itself looked fine; it was screwed on at both ends, and all these connections looked snug enough. In the interest of completeness, Rana also pushed in the backplane connectors on the Eurocrate (these supply the signals from the BIO cards to switch the whitening ON/OFF). The one corresponding to MC1 was indeed a little loose.

Coming back to the control room, we saw that the MC1 LR sensor was dead. After some investigation, Rana found that on the AA filter board end, one of the 4pin LEMOs from the whitening board had one of its wires come unstuck from where it was soldered (this presumably happened while we were squishing cables tonight, as the LR channel was fine before that). Also, there was no heat shrink used on any of the solder joints. Could this explain the glitchy behaviour? Perhaps, but the glitches remained in the 3 channels that were connected. Anyways, I will repair this cable tomorrow, and we can see if this has fixed the problem or not..


Some misc points:

  1. Regarding the adaptor boards that take the PD signals from the satellite box and route them to the whitening board: there are clamps that hold the IDE connectors in place for the MC1, MC2 and MC3 boards, but not for the others (see attached picture). Steve, can we install clamps for all of the boards? [taken care of, see here]
  2. The whitening boards are not screwed in place into the Eurocrate. This should be rectified.

PSL shutter is closed, MC1 watchdog is shutdown for the night.

Attachment 1: 20170116_231625.png
20170116_231625.png
Attachment 2: IMG_7175.JPG
IMG_7175.JPG
Attachment 3: IMG_7174.JPG
IMG_7174.JPG
  12728   Tue Jan 17 21:29:52 2017 gautamUpdateSUSMC1 SUS electronics investigation

 

Quote:
 

After some investigation, Rana found that on the AA filter board end, one of the 4pin LEMOs from the whitening board had one of its wires come unstuck from where it was soldered (this presumably happened while we were squishing cables tonight, as the LR channel was fine before that). Also, there was no heat shrink used on any of the solder joints.

The faulty cable has been re-soldered (with heat shrink) and replaced. All 5 sensor signals appear normal on dataviewer now. I am leaving things in this state for the night, let us see if the glitches return overnight.

PSL shutter remains closed

  12729   Tue Jan 17 21:31:57 2017 gautamUpdateGeneralETMX suspension electronics problems?

Last night, I plugged the ETMX suspension coils back into the satellite box. Tonight, we turned on the damping loops for ETMX. Rana centered the Oplev so we can use that as an additional diagnostic to see if the optic gets kicked around overnight. We will re-assess the situation tomorrow.

Sometime earlier today, Lydia noticed that the +/- 5V Sorensens at the X end were not displaying their nominal voltage/current values (as per the stickers on them). She corrected this.

  12730   Wed Jan 18 10:41:14 2017 gautamUpdateGeneralETMX suspension electronics problems?

Summary pages show no kicking in the ETMX watchdogs from midnight to 6 AM (0800 - 1400 UTC):

https://nodus.ligo.caltech.edu:30889/detcharsummary/day/20170118/sus/watchdogs/

  12731   Wed Jan 18 11:40:54 2017 gautamUpdateSUSMC1 SUS electronics investigation

After the repair of the faulty LEMO cable, I left MC1 with its watchdog off overnight. Unfortunately, it looks like the problem still persists. The first attachment shows a second-trend plot for the past 15 hours. Towards the left end of the plot, you can see where I re-connected the LEMO cable for the LR/UR channels.

A couple of months ago, I added a BLRMS block for the IMC optics that calculates BLRMS for the shadow sensor output as well as the coil output. Looking at this trend overnight, I noticed that the glitches appear in the coil outputs as well, as shown in the plot below, which is for a 1 hour stretch last night (I used the full data from a 16Hz coil output channel and not the BLRMS, I am not sure if there is a DQ'ed version of the coil outputs?).

Zooming in further to one of these glitches, we can see that the glitches in the coil and shadow sensor signals are in fact coincident.

But given that the watchdog was turned off all this time, the only voltage going to the coils should be the DC bias voltages. So does this not support the hypothesis that the problem lies in the part of the signal chain that supplies the bias voltage to the coils?

Never mind, the "coil output" channel isn't a true readback of the voltage to the coil, but is the calculated damping output (which is not sent to the coils when the watchdog is shutdown...

  12734   Wed Jan 18 14:23:47 2017 gautamUpdateSUSMC1 SUS electronics investigation

As part of the ongoing debugging, I've switched the MC1 and MC3 satellite boxes. Both MC1 and MC3 have their watchdogs shut down for the moment.

  12736   Wed Jan 18 18:44:53 2017 gautamUpdateSUSMC1 SUS electronics investigation
Quote:

As part of the ongoing debugging, I've switched the MC1 and MC3 satellite boxes. Both MC1 and MC3 have their watchdogs shut down for the moment.

In the last 3.5 hours, there has been nothing conclusive - no evidence of any glitching in either MC1 or MC3 sensor channels. I am going to hold off on doing the LEMO cable swap test for a few more hours, to see if we can rule out the satellite box.

  12739   Thu Jan 19 12:00:10 2017 gautamUpdateSUSMC1 SUS electronics investigation

Going through the last ~20 hours of data, the MC1 sensor channels look glitch free the entire period. However, there is a ~10min period around 1PM UTC today when there were a couple of glitches ~80 counts in size in all the MC3 sensor channels. The attached shows the full 2k data from all 10 channels (MC1 and MC3 sensors) around this time.

Is this sufficient evidence to conclude that the satellite boxes are to blame? It's hard to explain why the glitches come and go in this fashion, and also the apparent difference in how long the glitching persists. Here, in almost 24 hours, there is one instance of glitching, but in yesterday's trend plot, the glitching remains present over several hours... The amplitude of the glitches, and their coincidence in all 5 channels, seems consistent with what we have been seeing, though...

 

  12742   Fri Jan 20 11:16:30 2017 gautamUpdateSUSMC1 SUS electronics investigation

Both suspensions have been relatively well behaved for the better part of the last two days, since I effected the satellite box swap. This morning, I set about re-enabling the damping and locking the MC. Judging by the wall StripTool, it stayed locked for about 30 mins or so, after which the glitching returned.

Attached is a screenshot of the sensor signals from MC1 and MC3 (second trend), and also the highest band (>30Hz) BLRMS output for the same 10 channels (full data sampled at 16Hz). Note that MC1 and MC3 satellite boxes remain swapped. So the glitches now have migrated to the MC3 channels.

I need to think about whether this is just coincidence, or if me re-enabling the damping has something to do with the re-occurrence of the glitching...


Addendum 4.30pm: I've also re-aligned the Y arm. Its alignment has been stable over the last few hours; despite several mode cleaner lock losses in between, it recovers good IR transmission. The X arm has been re-aligned to green, but I can't get it locked to the IR - every time I turn the LSC output to ETMX on, there seems to be some large misalignment applied to it. c1iscaux was dead; I restarted it by keying the crate. I haven't had time to investigate the X arm locking in detail; I will continue to debug this.

  12746   Mon Jan 23 15:16:52 2017 gautamUpdateOptical LeversETMY Oplev HeNe needs to be replaced

On the control room monitors, I noticed that the IR TEM00 spot was moving around rather a lot in the Y arm. The last time this happened, it had something to do with the ETMY Oplev, so I took a look at the 30 day trend of the QPD sum and saw that it was decaying steeply (Steve will update with a long term trend plot shortly). I noticed the RIN also seemed rather high, judging by how much the EPICS channel reading for the QPD sum was jumping around. Attached are the RIN spectra, taken with the OL spot well centered on the QPD and the arms locked to IR. Steve will swap the laser out if it is indeed the culprit.

Attachment 1: ETMY_Oplev.pdf
ETMY_Oplev.pdf
  12748   Tue Jan 24 01:04:16 2017 gautamSummaryIOOIMC WFS RF power levels

Summary:

I got around to doing this measurement today, using a minicircuits bi-directional coupler (ZFBDC20-61-HP-S+), along with some SMA-LEMO cables.

  • With the IMC "well aligned" (MC transmission maximized, WFS control signals ~0), the RF power per quadrant into the Demod board is of the order of tens of pW up to a 100pW.
  • With MC1 misaligned such that the MC transmission dropped by ~10%, the power per quadrant into the demod board is of the order of hundreds of pW.
  • In both cases, the peak at 29.5MHz was well above the analyzer noise floor (>20dB for the smaller RF signals), which was all that was visible in the 1MHz span centered around 29.5 MHz (except for the side-lobes described later).
  • There is an anomalously large reflection from the Quadrant 2 input to the Demod board for both WFS
  • The LO levels are ~-12dBm, ~2dB lower than the -10dBm that I gather is the recommended level from the AD831 datasheet
Quote:

We should insert a bi-directional coupler (if we can find some LEMO to SMA converters) and find out how much actual RF is getting into the demod board.


Details:

I first aligned the mode cleaner, and offloaded the DC offsets from the WFS servos.

The bi-directional coupler has 4 ports: Input, Output, Coupled Forward RF and Coupled Reverse RF. I connected the LEMO going to the input of the Demod board to the Input port, and connected the Output of the coupler to the Demod board (via some SMA-LEMO adaptor cables). The two (20dB) coupled ports were connected to the Agilent spectrum analyzer, whose inputs have 50ohm impedance and hence should be well matched to the coupled outputs. I set the analyzer to a 1MHz span (29-30MHz), 30Hz IF BW, and 0dB input attenuation. It was not necessary to turn on averaging to resolve the peaks at ~29.5MHz since the IF bandwidth was fine enough.

I took two sets of measurements: one with the IMC well aligned (I maximized the MC Trans as best I could, to ~15,000 cts), and one with a macroscopic misalignment to MC1 such that the MC Trans fell to 90% of its usual value (~13,500 cts). The peak function on the analyzer was used to read off the peak height in dBm. I then converted this to RF power, which is summarized in the tables below. I did not account for the main line loss of the coupler, but according to the datasheet, the maximum value is 0.25dB, so these numbers should be accurate to ~10% (so I'm really quoting more S.F.s than I should be).
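For concreteness, the dBm-to-pW conversion behind the tables below is just the following (the 0.25dB main-line loss is neglected, as noted above; the -93dBm value is only an illustrative input):

    # Convert an analyzer peak reading [dBm] at the 20 dB coupled port into RF power
    # in the main line [pW]. The ~0.25 dB main-line insertion loss is neglected.
    def coupled_dBm_to_main_pW(p_dBm, coupling_dB=20.0):
        p_main_dBm = p_dBm + coupling_dB          # undo the 20 dB coupling factor
        return 10.0 ** (p_main_dBm / 10.0) * 1e9  # dBm -> mW, then mW -> pW

    print(coupled_dBm_to_main_pW(-93.0))   # illustrative: -93 dBm at the coupled port ~ 50 pW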

IMC well aligned:

WFS   Quadrant   P_in (pW)   P_refl (pW)   P into demod board (pW)
1     1          50.1        12.6          37.5
1     2          20.0        199.5         -179.6
1     3          28.2        10.0          18.2
1     4          70.8        5.0           65.8
2     5          100         19.6          80.0
2     6          56.2        158.5         -102.3
2     7          125.9       6.3           119.6
2     8          17.8        6.3           11.5

MC1 Misaligned:

WFS   Quadrant   P_in (pW)   P_refl (pW)   P into demod board (pW)
1     1          501.2       5.0           496.2
1     2          630.6       208.9         422
1     3          871.0       5.0           866
1     4          407.4       16.6          190.8
2     5          407.4       28.2          379.2
2     6          316.2       141.3         175.0
2     7          199.5       15.8          183.7
2     8          446.7       10.0          436.7

For the well aligned measurement, there was ~0.4mW incident on WFS1, and ~0.3mW incident on WFS2 (measured with Ophir power meter, filter out).

I am not sure how to interpret the numbers for quadrants #2 and #6 in the first table, where the reverse coupled RF power was greater than the forward coupled RF power. But this measurement was repeatable, and even in the second table, the reverse coupled power from these quadrants is more than 10x that of the other quadrants. The peaks were also well above (>10dB) the analyzer noise floor.

I haven't gone through the full misalignment -> power coupled to the TEM10 mode algebra to see if these numbers make sense, but assuming a photodetector responsivity of 0.8A/W, the product (P1P2) of the powers of the beating modes works out to ~tens of pW (for the IMC well aligned case), which seems reasonable, as something like P1~10uW, P2~5uW would lead to P1P2~50pW. This discussion was based on me wrongly looking at numbers for the aLIGO WFS heads; Koji pointed out that we have a much older generation here. I will try to find numbers for the version we have and update this discussion.

Misc:

  1. For the sake of completeness, the LO levels are ~ -12.1dBm for both WFS demod boards (reflected coupling was negligible)
  2. In the input signal coupled spectrum, there were side lobes (about 10dB lower than the central peak) at 29.44875 MHz and 29.52125 MHz (central peak at 29.485MHz) for all of the quadrants. These were not seen for the LO spectra.
  3. Attached is a plot of the OSEM sensor signals during the time I misaligned MC1 (in pitch and yaw by approximately equal amounts). Assuming 2V/mm for the OSEM calibration, the misalignment was approximately ~10urad in each direction.
  4. No IMC suspension glitching the whole time I was working today.

 

Attachment 1: MC1_misalignment.png
MC1_misalignment.png