The IFO is more or less back to an operational state. Some details:
One error persists - the "DC" indicator (data concentrator?) on the CDS MEDM screens for the various models spontaneously goes red and then returns to green, quite often. Is this a known issue with an easy fix?
After Koji's leap second fix, we played around with the X arm locking. In particular, we experimented with the limit value on the X arm LSC filter bank - the nominal value is 4000, and we wanted to see if we could increase this without kicking the optic while acquiring arm lock. We initially increased it to 8000, and then turned it off altogether. Then we rapidly toggled the output of the servo ON/OFF, and looked at the arm transmission to see if it came back to the level before unlocking, as an indication of whether the optic was kicked.
These trials suggested that a value of 8000 for the limiter was OK, so we left the LSC mode on with the limiter set to 8000. But just as we were about to leave for the night, I noticed on the wall StripTool that the X arm was unlocked. Investigating, we found that the green wasn't even locking to a HOM. Further investigation of the Oplev spot showed that ETMX had received a large kick (both pitch and yaw errors were ~200urad). ITMX was unaffected.
We initially tried lowering the LSC limit value back to 4000, then used first the Oplev spot and then the green to align the arm. But turning on LSC misaligned the arm after acquiring lock. So we decided to leave LSC off, thinking that the notorious ETMX suspension problems had resurfaced. As a diagnostic, we figured we'd leave the watchdog tripped and use the Oplev to see if the optic was getting kicked. But the act of turning the watchdog off kicked the optic again (WHY?!).
Looking at the ETMX sus screen, turning off all the damping and LSC (but leaving the watchdog on) still leaves a non-zero offset in the "Vmon" field, between 0.02-0.05V depending on the coil. Turning the watchdog OFF takes all of these to 0.009V, although I can see the LR value fluctuating between 0.004V and 0.009V. I went to the X end and squished all the cables on the Sat. Box, but the problem persisted.
At this time, I can't think of any explanation, so I am giving up for the night. To avoid unnecessarily kicking the optic, I am going to unplug the suspension from the Sat. Box and leave one of our tester boxes plugged in - let's see if that sheds any light on the situation...
During the course of Rana's inspection of the general state of the IFO, he commented that there seemed to be several seismic-related IMC lock losses in the time that he had been observing it. This issue looked suspiciously like the MC1 glitches I had noticed sometime late last year, especially since each time the IMC would unlock, we could see significant amounts of motion on MC REFL. To diagnose, we did the following:
Sure enough, there were several glitches that occurred in all 5 sensor channels. These glitches varied in size from a few counts (the smaller ones) to 60-70 counts for the bigger ones. In the past, squishing the LEMO connector on the front of the PD whitening board (D000210) had apparently made the glitching go away. So tonight, for starters, we squished everything else - the Sat. Box connectors, the breakout board from the Sat. Box to the whitening board on the back of 1X6, and the DB connector on the front of the whitening board. This had no effect - the glitching remained consistent.
Next, Rana pulled out two of the three 4pin LEMOs, and left only those corresponding to UL/LL plugged in - but the glitching persisted in these two channels. We then pulled out the board. It was installed in 1998, but has a sticker on it that says "fixed in 2003". Not sure what the fix was. Visual inspection of the circuit didn't show anything obviously faulty, but it did look like the two MAX333A quad switches (these control whether the whitening is bypassed or not) had been replaced at some point. There are other undesirable features, such as the use of thick film resistors, but nothing that would explain the glitchy behaviour.
Next, we re-inserted the whitening board back into its original slot in the Eurocrate, but switched the cables (both D sub and LEMO, but only on the whitening board end) between the boards for MC1 and MC3 (i.e. MC1 cables were routed through the whitening board that was originally used for MC3, and vice-versa). But the glitches remained consistent on the MC1 channels. So it looks like the board is not a likely culprit.
Finally, we went in and squished all the cables from the PD whitening board to the ADC (via an AA filter board). For some of the LEMO cables from the whitening board, the LEMO backshells were not properly tightened; Rana fixed these before putting them back in. Some of the connectors were also not pushed in tightly enough - Rana heard the click when he pushed them in. The cables from the adaptor board to the ADC itself looked fine: they were screwed on at both ends, and all these connections looked snug enough. In the interest of completeness, Rana also pushed in the backplane connectors on the Eurocrate (these supply the signals from the BIO cards to switch the whitening ON/OFF). The one corresponding to MC1 was indeed a little loose.
Coming back to the control room, we saw that the MC1 LR sensor was dead. After some investigation, Rana found that, on the AA filter board end, one of the 4pin LEMOs from the whitening board had one of its wires come unstuck from where it was soldered (this presumably happened while we were squishing cables tonight, as the LR channel was fine before that). Also, there was no heat shrink used on any of the solder joints. Could this explain the glitchy behaviour? Perhaps, but the glitches remained in the 3 channels that were connected. Anyways, I will repair this cable tomorrow, and we can see if this has fixed the problem or not...
Some misc points:
PSL shutter is closed, MC1 watchdog is shut down for the night.
The faulty cable has been re-soldered (with heat shrink) and replaced. All 5 sensor signals appear normal on dataviewer now. I am leaving things in this state for the night; let us see if the glitches return overnight.
PSL shutter remains closed
Last night, I plugged the ETMX suspension coils back into the satellite box. Tonight, we turned on the damping loops for ETMX. Rana centered the Oplev so we can use that as an additional diagnostic to see if the optic gets kicked around overnight. We will re-assess the situation tomorrow.
Sometime earlier today, Lydia noticed that the +/- 5V Sorensens at the X end were not displaying their nominal voltage/current values (as per the stickers on them). She corrected this.
Summary pages show no kicking in the ETMX watchdogs from midnight to 6 AM (0800 - 1400 UTC):
After the repair of the faulty LEMO cable, I left MC1 with its watchdog off overnight. Unfortunately, it looks like the problem still persists. The first attachment shows a second trend plot for the past 15 hours. Towards the left end of the plot, you can see where I re-connected the LEMO cable for the LR/UR channels.
A couple of months ago, I added a BLRMS block for the IMC optics that calculates BLRMS for the shadow sensor outputs as well as the coil outputs. Looking at this trend overnight, I noticed that the glitches appear in the coil outputs as well, as shown in the plot below, which covers a 1 hour stretch last night (I used the full data from a 16Hz coil output channel and not the BLRMS; I am not sure if there is a DQ'ed version of the coil outputs?).
Zooming in further to one of these glitches, we can see that the glitches in the coil and shadow sensor signals are in fact coincident.
But given that the watchdog was turned off all this time, the only voltage going to the coils should be the DC bias voltages. So does this not support the hypothesis that the problem lies in the part of the signal chain that supplies the bias voltage to the coils?
Never mind - the "coil output" channel isn't a true readback of the voltage to the coil, but is the calculated damping output (which is not sent to the coils when the watchdog is shut down)...
As part of the ongoing debugging, I've switched the MC1 and MC3 satellite boxes. Both MC1 and MC3 have their watchdogs shut down for the moment.
In the last 3.5 hours, there has been nothing conclusive - no evidence of any glitching in either MC1 or MC3 sensor channels. I am going to hold off on doing the LEMO cable swap test for a few more hours, to see if we can rule out the satellite box.
Going through the last ~20 hours of data, the MC1 sensor channels look glitch-free over the entire period. However, there is a ~10min period around 1PM UTC today during which there were a couple of glitches ~80 counts in size in all the MC3 sensor channels. The attached plot shows the full 2k data from all 10 channels (MC1 and MC3 sensors) around this time.
Is this sufficient evidence to conclude that the satellite boxes are to blame? It's hard to explain why the glitches come and go in this fashion, and also the apparent difference in the length of time for which the glitches persist. Here, in almost 24 hours, there is one incidence of glitching, but in yesterday's trend plot, the glitching remains present over several hours... The amplitude of the glitches, and their coincidence in all 5 channels, seem consistent with what we have been seeing though...
Both suspensions have been relatively well behaved for the best part of the last two days, since I effected the satellite box swap. This morning, I set about re-enabling the damping and locking the MC. Judging by the wall StripTool, it stayed locked for about 30 mins or so, after which the glitching returned.
Attached is a screenshot of the sensor signals from MC1 and MC3 (second trend), and also the highest band (>30Hz) BLRMS output for the same 10 channels (full data sampled at 16Hz). Note that MC1 and MC3 satellite boxes remain swapped. So the glitches now have migrated to the MC3 channels.
I need to think about whether this is just coincidence, or if my re-enabling the damping has something to do with the recurrence of the glitching...
Addendum 4.30pm: I've also re-aligned the Y arm. Its alignment has been stable over the last few hours; despite several mode cleaner lock losses in between, it recovers good IR transmission. The X arm has been re-aligned to green, but I can't get it locked to the IR - every time I turn on the LSC output to ETMX, there seems to be some large misalignment applied to it. c1iscaux was dead; I restarted it by keying the crate. I haven't had time to investigate the X arm locking in detail, but I will continue to debug this.
On the control room monitors, I noticed that the IR TEM00 spot was moving around rather a lot in the Y arm. The last time this happened, it had something to do with the ETMY Oplev, so I took a look at the 30 day trend of the QPD sum, and saw that it was decaying steeply (Steve will update with a long term trend plot shortly). I noticed the RIN also seemed rather high, judging by how much the EPICS channel reading for the QPD sum was jumping around. Attached are the RIN spectra, taken with the OL spot well centered on the QPD and the arms locked to IR. Steve will swap the laser out if it is indeed the culprit.
I got around to doing this measurement today, using a Minicircuits bi-directional coupler (ZFBDC20-61-HP-S+), along with some SMA-LEMO cables.
> We should insert a bi-directional coupler (if we can find some LEMO to SMA converters) and find out how much actual RF is getting into the demod board.
I first aligned the mode cleaner, and offloaded the DC offsets from the WFS servos.
The bi-directional coupler has 4 ports: Input, Output, Coupled Forward RF and Coupled Reverse RF. I connected the LEMO going to the input of the demod board to the Input port, and connected the Output of the coupler to the demod board (via some SMA-LEMO adaptor cables). The two (20dB) coupled ports were connected to the Agilent spectrum analyzer, whose inputs have 50 ohm impedance and hence should be impedance-matched to the coupled outputs. I set the analyzer to a span of 1MHz (29-30MHz), IF BW 30Hz, and 0dB input attenuation. It was not necessary to turn on averaging to resolve the peaks at ~29.5MHz, since the IF bandwidth was fine enough.
I took two sets of measurements, one with the IMC well aligned (I maximized the MC Trans as best as I could, to ~15,000 cts), and one with a macroscopic misalignment applied to MC1 such that the MC Trans fell to 90% of its usual value (~13,500 cts). The peak function on the analyzer was used to read off the peak height in dBm. I then converted this to RF power, which is summarized in the table below. I did not account for the main line loss of the coupler, but according to the datasheet, the maximum value is 0.25dB, so these numbers should be accurate to ~10% (so I'm really quoting more significant figures than I should be).
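For reference, here is a minimal sketch of the conversion from an analyzer peak reading back to the RF power on the main line into the demod board. It assumes the nominal 20dB coupling factor of the ZFBDC20-61-HP-S+ and neglects the <=0.25dB main line loss; the example input value is illustrative.

```python
def coupled_dbm_to_line_mw(peak_dbm, coupling_db=20.0):
    """Analyzer reading (dBm) at the 20 dB coupled port -> main-line power in mW."""
    line_dbm = peak_dbm + coupling_db   # undo the 20 dB coupling factor
    return 10 ** (line_dbm / 10.0)      # dBm -> mW

# Example: a -50 dBm peak at the forward coupled port corresponds to
# -30 dBm = 1 uW of forward RF on the main line.
print(coupled_dbm_to_line_mw(-50.0))    # 0.001 mW = 1 uW
```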
For the well aligned measurement, there was ~0.4mW incident on WFS1, and ~0.3mW incident on WFS2 (measured with Ophir power meter, filter out).
I am not sure how to interpret the numbers for quadrants #2 and #6 in the first table, where the reverse coupled RF power was greater than the forward coupled RF power. But this measurement was repeatable, and even in the second table, the reverse coupled power from these quadrants is more than 10x that of the other quadrants. The peaks were also well above (>10dB) the analyzer noise floor.
I haven't gone through the full misalignment -> power coupled to TEM10 mode algebra to see if these numbers make sense, but assuming a photodetector responsivity of 0.8A/W, the product (P1P2) of the powers of the beating modes works out to ~tens of pW (for the IMC well aligned case), which seems reasonable, as something like P1~10uW, P2~5uW would lead to P1P2~50pW. This discussion was based on me wrongly looking at numbers for the aLIGO WFS heads, and Koji pointed out that we have a much older generation here. I will try and find numbers for the version we have and update this discussion.
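For bookkeeping, here is a sketch of how I back out P1P2 from a measured beat power. It assumes the beat photocurrent amplitude is 2R*sqrt(P1P2) and that the analyzer reads the mean electrical power into 50 ohms; the -60dBm input is an illustrative value, not one from the tables above.

```python
import math

R = 0.8          # A/W, assumed photodiode responsivity
Z = 50.0         # ohm, analyzer input impedance
P_RF = 10 ** ((-60.0 - 30.0) / 10.0)    # -60 dBm -> watts (illustrative)

# Beat photocurrent amplitude is i = 2*R*sqrt(P1*P2), so the mean
# electrical power is P_RF = (i**2 / 2) * Z = 2 * R**2 * P1P2 * Z.
P1P2 = P_RF / (2 * R**2 * Z)
print(P1P2)              # ~1.6e-11 W^2 for this illustrative input

# For comparison, P1 = 10 uW and P2 = 5 uW give P1*P2 = 5e-11 W^2
# (the "~50pW" quoted loosely above).
print(10e-6 * 5e-6)
```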
This is probably just a confirmation of something we discussed a couple of weeks back, but I wanted to get more familiar with using the multi-coherence (using EricQ's nice function from the pynoisesub package) as an indicator of how much feedforward noise cancellation can be achieved. In particular, in light of our newly improved WFS demod/whitening boards, I wanted to see if there was anything to be gained by adding the WFS to our current MCL feedforward topology.
I used a 1 hour data segment - the channels I looked at were the vertex seismometer (X, Y, Z) and the pitch and yaw signals of the two WFS - and computed the multi-coherence, i.e. the coherence of the uncorrelated parts of these multiple witnesses with MCL. I tried a few combinations to see what the theoretical best achievable subtraction is:
The attached plot suggests that there is negligible benefit from adding the WFS in any combination to the MCL feedforward, at least from the point of view of theoretical achievable subtraction.
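For reference, here is a minimal scipy-only sketch of the multi-coherence estimator (the actual calculation used EricQ's pynoisesub package; this just illustrates the standard multiple-coherence formula). The ideal feedforward residual is then S_yy * (1 - C_multi).

```python
import numpy as np
from scipy.signal import csd, welch

def multi_coherence(y, witnesses, fs, nperseg=2**12):
    """Multiple coherence of target y with a list of witness time series."""
    f, S_yy = welch(y, fs=fs, nperseg=nperseg)
    n = len(witnesses)
    S_ww = np.zeros((len(f), n, n), dtype=complex)   # witness CSD matrix
    S_wy = np.zeros((len(f), n), dtype=complex)      # witness-target CSDs
    for i in range(n):
        _, S_wy[:, i] = csd(witnesses[i], y, fs=fs, nperseg=nperseg)
        for j in range(n):
            _, S_ww[:, i, j] = csd(witnesses[i], witnesses[j],
                                    fs=fs, nperseg=nperseg)
    # C_multi(f) = S_yw . S_ww^-1 . S_wy / S_yy, evaluated at each frequency
    C = np.einsum('fi,fij,fj->f', np.conj(S_wy),
                  np.linalg.inv(S_ww), S_wy).real / S_yy
    return f, np.clip(C, 0.0, 1.0)

# Ideal subtraction residual: S_yy * (1 - C_multi)
```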
I also wanted to put up a plot of the current FF filter performance, for which I collected 1 hour of data tonight with the FF on. While the feedforward does improve the MCL spectrum, I expected better performance judging by previous entries in the elog, which suggest that the FIR implementation almost saturates the achievable lower bound. The performance seems to have degraded particularly around 3Hz, despite the multi-coherence being near unity at these frequencies. Perhaps it is time to retrain the Wiener filter? I will also look into the installation of the accelerometers on the MC2 chamber, which we have been wanting to do for a while now...
We rebooted c1psl, c1iscaux and c1aux, which were all showing the typical symptoms of responding to ping but not to telnet (and also blanked-out EPICS fields on the MEDM screens). Keyed all these crates.
Restored burt snapshots for c1psl; the PMC locked fine, and the IMC is also locked now.
Johannes forgot to elog this yesterday, but he rebooted c1susaux following the usual procedure to avoid getting ITMX stuck.
It was raised at the Wednesday meeting that I did not check the RF pickup levels while measuring the RF error signal levels into the demod board. So I closed the PSL shutter, and re-did the measurement with the same measurement scheme. The detailed power levels (with no light incident on the WFS, so all RF pickup) are reported in the table below.
These numbers can be subtracted from the corresponding columns in the previous elog to get a more accurate estimate of the true RF error signal levels. Note that the abnormal behaviour of Quadrant #2 on both WFS demod boards persists.
I'm not sure if this is related, but since this morning, I've noticed that the data concentrator errors have returned. Looking at daqd.log, there is a 1 second timing mismatch error that is being generated. Usually, manually running ntpdate on the front ends fixes this problem, but it did not work today.
The coil and PD BLRMS are useful tools for identifying when glitches occur in the PD readout, so I thought it would be good to install them for ITMY, ETMX and SRM (since I plan to switch the MC3 satellite box, which we suspect to be problematic, with the SRM one). For this purpose, I had to install some IPC SHMEM blocks in C1SUS and recompile. 24 IPC channels were added to pipe the coil, PD and Oplev signals from C1SUS to C1PEM - the recompilation went smoothly, and it doesn't look like the model computation time has increased significantly or that the model is any closer to timing out.
However, I was unable to install the BLRMS blocks in C1PEM: when I tried to compile the model with BLRMS for these extra 24 channels, I got a compilation error saying that I had exceeded the maximum allowed 499 testpoints per model. Is there any workaround for this? It would be possible to create a custom BLRMS block that doesn't have all those testpoints - maybe this is the way to go? Especially if we want to install these channels for all our SOS optics, and also replace the current seismic BLRMS with this scheme for consistency.
GV edit: I have implemented this scheme - after backing up the original BLRMS_2k part, I made a new one with no testpoints and only EPICS readouts. Doing so allowed me to recompile c1pem without any issues; the CPU time seems to have gone up by 3us, from ~55us to ~58us. The BLRMS data record is thus only available at 16Hz, since there are no DQ channels in the BLRMS block - do we want these in any case? Let's see how this does over the weekend...
Some more details of our investigation:
We pulled out the RF AM stabilization box from the 1X2 rack. The PSL shutter was closed; the Marconi output, RF distribution box and RF AM stabilization box were turned off, in that order. We had to remove the 4 rack nut screws on the RF distribution box because the stiff cables prevented the RF AM stabilization box from being extracted. I've left the Marconi output and the RF distribution box off, and have terminated all open SMA connections with 50 ohm terminators, just in case. The rack nuts for the RF distribution box have been removed; it is currently sitting on a metal plate that is itself screwed onto the rack. I deemed this a stable enough ledge for the box to sit on in the short run, while we debug the RF AM stabilization box. We will work on the debugging and re-install the box as soon as we are done...
> What is the probe situation? Ought to use a high impedance FET probe to measure this or else the scope would load the circuit.
We did indeed use the active probe, with the 100:1 attenuator in place. The values Lydia has quoted have 40dB added to account for this.
> What kind of HELA are the HELA amplifiers? Please post a link to the data sheet if you can find it. I wonder what the gain and NF are at 30 MHz. I think the HELA-10D should be a good variant
The HELA is marked as HELA-10. It doesn't have the '+' suffix, but according to the datasheet, this seems to mean only that it is not RoHS compliant. It isn't indicated, either on the schematic or on the IC itself, which of the varieties (A-D) is used; only B and D are 50 ohms. For all of them, the typical gain is 11-12dB, with an NF of 3.5dB.
I've been suggesting that there may be something wonky with the Seismic Rainbow Striptool on the wall for the last couple of weeks. Here are a few things that were verified today.
I've added the schematic of the RF AM stabilization board to the 40m PSL document tree, after having created a new DCC document for our 40m edits. Pictures of the board before and after modification will also be uploaded here...
I had noticed something wonky with the microphone, but neglected to elog it. I had tested it after installation by playing a sine wave from my laptop and looking at the signal on the PSL table, and it worked fine. But you can see in the attached minute trend plot that the signal characteristics changed abruptly ~half a day after installation, and never quite recovered.
Rana motivated me to take a step back and reframe the objectives and approach for this project, so I am collecting some thoughts here on my understanding of it. As I write this, some things still remain unclear to me, so I am leaving these as questions here for me to think about...
and come up with the best loop that meets all our requirements? What constitutes the "best" loop? How do we weight the relative importance of our various requirements?
For the specific problem of making the MCL feedback loop better, the approach I have in mind right now is the following:
My immediate goal is to have the Simulink model updated.
Thoughts/comments on the above will be appreciated...
Had to reboot c1psl, c1susaux, c1auxex, c1auxey and c1iscaux today. The PMC has been relocked. ITMX didn't get stuck. According to this thread, there have been two instances in the last 10 days in which c1psl and c1susaux have failed. Since we seem to be doing this often lately, I've made a little script that uses the netcat utility to check which slow machines respond to telnet; it is located at /opt/rtcds/caltech/c1/scripts/cds/testSlowMachines.bash.
The script can be executed by ./testSlowMachines.bash.
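For the record, here is a sketch of the same check in Python (the actual script uses netcat; the host list below is illustrative). The symptom being tested for is a machine that answers ping but no longer accepts telnet (TCP port 23) connections.

```python
import socket

# Hypothetical host list - adjust to the slow machines of interest
SLOW_MACHINES = ['c1psl', 'c1susaux', 'c1auxex', 'c1auxey', 'c1iscaux']

for host in SLOW_MACHINES:
    try:
        # A healthy slow machine accepts a TCP connection on the telnet port
        with socket.create_connection((host, 23), timeout=3):
            print(host, ': responding to telnet')
    except OSError:
        print(host, ': NOT responding to telnet - crate probably needs keying')
```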
I've edited Rana's Simulink model to reflect the current IMC servo topology (to the best of my understanding). I've tried to use Transfer Function blocks wherever possible so that we can just put in the appropriate zpk model in the script that will linearize the whole loop. I've also omitted the FSS SLOW loop for now.
I've been looking through some old elogs and it looks like there have been several modifications to both the MC servo board (D040180) and the TT FSS Box (D040105). I think it is easiest just to measure these TFs since the IMC is still down, so I will set about doing that today. There is also a Pomona Box between the broadband EOM and the output of the TT FSS box, which is meant to sum in the modulation for PMC locking, about which I have not yet found anything on the elog.
So the next steps are:
If anyone sees something wrong with this topology, please let me know so that I can make the required changes.
A few minutes back, I glanced up at the control room StripTool and noticed that the MC REFL PD DC level had gone up from ~0 to ~0.7, even though the PSL shutter was closed. This seemed bizarre to me. Strangely, simply cycling the shutter returned the value to the expected value of 0. I wonder if this is just a CDS problem to do with c1iool0 or c1psl? (Both seem to be responding to telnet though...)
Since things look to be back to normal, I am going to start with my characterization of the various TFs in the IMC FSS loop...
Quick summary elog, details to follow. I did the following:
The measurements I have look reasonable. But I had a hard time looking at the schematic and determining the appropriate number and locations of poles/zeros with which to fit the measured transfer function. Koji and I spent some time going through the MC servo board schematic, but it looks like the version uploaded to the 40m DCC tree doesn't reflect the changes that have been made to the board (comparing against pictures on the 40m Google Photos page, we saw that a number of component values differ). Since the deviation between fit and measurement only occurs above 1MHz (while using poles/zeros inferred from the schematic), we decided against pulling out the servo board and investigating further - but this should be done at the next opportunity. I've marked the changes we caught on a schematic and will upload it to the 40m DCC page, and we can update this when we get the chance.
So it remains to fit the other two measured TFs and add them to the Simulink model. Then the only unknown will be the PDH discriminant, which we anyway want to characterize, given that we will soon have a much larger modulation depth.
Data + plots + fits + updated schematics to follow...
I'd like to fix a few things at 1X1 when we plug in the new amplifier for the 29.5MHz modulation signal.
Steve has ordered rolls of pre-twisted wire to run from 1X1 to the PSL table, so that part can be handled later.
But at 1X1, we need to tap new paths from +/- 24V to the DIN connectors. I think it's probably fine to turn off the two Sorensens, do the wiring, and then turn them back on, but is there any procedure for how this should be done?
Here are the details as promised.
Attachment #1: Updated Simulink model. Since I haven't actually run this model, all the TF blocks are annotated "???", but I will post an updated version once I have run the model (and fixed some of the questionable aesthetic choices).
Attachment #2: Measured and fitted transfer function from the "IN1" input (where the demodulated MC REFL signal goes) to the "SERVO" output of the MC servo board (to the FSS box). As mentioned in my previous elog, I had to put in a pole (fitted to be at ~2MHz, called pole 9 in the plot) in order to get good agreement between fit and measurement up to 10MHz. I didn't bother fitting all the high frequency features. Both gain sliders on the MEDM screen ("IN1 Gain" and "VCO gain") were set to 0dB for this measurement, while the super boosts were all OFF.
Attachment #3: Measured and fitted transfer function from "TEST 1 IN" to "FAST OUT" of the FSS box. Both gains on the FSS MEDM screen ("Common gain adjust" and "fast gain adjust") were set to 0dB for this measurement. I didn't need any ad-hoc poles and zeros for this fit (i.e. I can map all the fitted poles and zeros to the schematic), but the fit starts to deviate from the measurement just below 1MHz... perhaps I need to add a zero above 1MHz, but I can't see why from the schematic...
Attachment #4: Measured TF from "TEST 1 IN" to "PC OUT" on the FSS box. MEDM gains were once again 0dB. I can't get a good fit to this, mainly because I can't decipher the poles and zeros for this path from the schematic (there are actually deviations, in terms of component values, from the schematic posted on the 40m DCC page; I will try and correct whatever I notice). I'll work on this...
Attachment #5: Data files + .fil files used to fit the data with LISO
Most of the model has come together, I am not too far from matching the modelled OLG to the measured OLG. So I will now start thinking about designing the controller for the MCL part (there are a couple of TFs that have to be measured for this path).
Lydia finished up installing the new RF amplifier, and will elog the details of the installation.
I wanted to try and measure the IMC OLG to compare against my Simulink model. So I went about performing a few checks. Summary of my findings:
TBC tomorrow, I'm leaving the PSL shutter closed and the RF source off for tonight...
Rana and I spent some time looking at the IMC demod board earlier today. I will post the details shortly, but there was a label on the front panel which said that the nominal LO level at the input should be -8dBm. The new 29.5MHz routing scheme meant that the LO board was actually being driven at 0dBm (and that was with the input to the RF distribution box attenuated by 5dB).
An elog search revealed this thread, where Koji made some changes to the demod board input attenuators. Rana commented that it isn't a good idea to have the LO input be below 0dBm, so after consulting with Koji, we decided that we will
After implementing these changes, and testing the board with a Marconi on the workbench, I found that the measured power levels (measured with an active FET probe) behave as expected up to the ERA-5SM immediately prior to the LO (U4 and U6 on the schematic). However, the power after this amplifier (i.e. at the LO input of the on-circuit mixer, a Minicircuits JMS-1H, which we want to be +17dBm) is only +16dBm. The input to these ERA-5SMs, which are only ~2 years old, is -2dBm, so with the typical gain of +20dB, I should have +18dBm at their output. Moreover, increasing the input power to the board from the Marconi doesn't linearly increase the output from the ERA-5SM. Just in case, I replaced one of the ERA-5SMs, but observed the same behaviour, even though the amplifier shouldn't be near saturation (the power upstream of the ERA-5SM does scale linearly).
This needs to be investigated further, so I am leaving the demod board pulled out for now...
29.5 MHz RF Modulation Source
IMC Demodulation Board
I wanted to do a quick check to see if the observed signal levels were in agreement with tests done on the workbench with the Marconi. The mixers used, JMS-1H, have an advertised conversion loss of ~7dB (it may be a little higher if we are not driving the LO at +17dBm). The Lissajous ellipse above is consistent with these values. I didn't measure powers with the MC REFL PD plugged into the demod board, but the time series plot above suggests that I should have ~0dBm of power in the MC REFL PD signal at 29.5MHz for the strongest flashes (~0.3Vpp IF signal for the strong flashes).
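As a sanity check on that consistency claim, here is the arithmetic, assuming the IF port sees a 50 ohm termination:

```python
import math

# 0 dBm at the RF port, minus ~7 dB conversion loss, appears at the IF port
P_if_w = 10 ** ((0.0 - 7.0 - 30.0) / 10.0)   # -7 dBm expressed in watts

# Into 50 ohm, this power corresponds to a sinusoid of rms sqrt(P*Z)
v_rms = math.sqrt(P_if_w * 50.0)
print(2 * math.sqrt(2) * v_rms)   # ~0.28 Vpp, consistent with the ~0.3 Vpp seen
```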
MC Servo Board
Some general remarks
I was a little confused about why the In1 Gain had to be as high as +10dB - before the changes to the RF chain, we were using +27dB, and we expect the changes made to have increased the modulation depth by a factor of ~25, so I would have expected the new In1 Gain to be more like 0dB.
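The arithmetic behind that expectation: a ~25x larger modulation depth means a ~25x larger PDH error signal slope, so the electronic gain should come down by the same factor to keep the overall loop gain fixed.

```python
import math

old_in1_gain_db = 27.0
depth_increase_db = 20 * math.log10(25)       # ~28 dB more error signal
print(old_in1_gain_db - depth_increase_db)    # ~ -1 dB, i.e. roughly 0 dB
```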
While walking by the PSL table, I chanced upon the scope monitoring the PMC transmission, and I noticed that the RIN was unusually high (see the scope screenshot below). We don't have the projector on the wall anymore, but it doesn't look like this has shown up in the SLOW monitor channel anyway. Disabling the MC autolocker / closing the PSL shutter had no effect. I walked over to the amplifier setup at 1X2, and noticed that the SMA cable connecting the output of the amplifier to the EOM drive was flaky. When I touched the cable a little, the trace on the scope appeared normal again. Turning the 29.5MHz modulation source off completely also returned the trace to normal.
So I just made a new cable of similar length (with the double heat shrink prescription). The PMC transmission looks normal on the scope now. I also re-aligned the PMC for good measure. So presumably, we were not driving the EOM with the full +27dBm of available power. Now, the In1 Gain on the MC servo board is set to +2dB, and I changed the nominal FSS FAST gain to +18dB. The IMC OLTF now has a UGF of ~165kHz, though the phase margin is only ~27 degrees...
MC Servo Board
Following the discussion at the meeting today, I wanted to finish up the WFS tuning and then hand over the IFO to Johannes for his loss stuff. So I did the following:
At this point, I figured I would leave the WFS in this state and observe their behaviour overnight. But abruptly, the IMC behaviour changed dramatically. I saw first that the IMC had trouble re-acquiring lock. Moreover, the PC Drive seemed saturated at 10.0V, even when there was no error signal going to the MC servo board. Looking at the MEDM screen, I noticed that the "C1:IOO-MC_SUM_MON" channel had picked up a large (~3V) DC offset, even with In1 and In2 disabled. Moreover, this phenomenon seemed completely correlated with opening/closing the PSL shutter. Johannes and I did some debugging to make sure that this wasn't a sticky button/slider issue, by disconnecting all the cables from the front panel of the servo board - but the behaviour persisted; there seemed to be some integration of the above-mentioned channel as soon as I opened the PSL shutter.
Next, I blocked first the MC REFL PD, and then each of the WFS - turns out, if the light to WFS2 was blocked and the PSL shutter opened, there was no integrating behaviour. But still, locking the MC was impossible. So I suspected that something was wrong with the LO inputs to the WFS Demod Boards. Sure enough, when I disconnected and terminated those outputs of the RF distribution box, I was able to re-lock the MC fine.
I can't explain this bizarre behaviour - why should an internal monitor channel of the MC servo board integrate anything when the only input to it is the backplane connector (all front panel inputs physically disconnected, In1 and In2 MEDM switches off)? Also, I am not sure how my work on the WFS could have affected any hardware - I did not mess around at the 1X1 rack in the evening, and the light has been incident on the WFS heads for the past few days. The change in modulation depth shouldn't have resulted in the RF power in this chain crossing any sort of damage threshold, since the measured power before the changes was at the level of -70dBm, and so should be at most -40dBm now (at the WFS demod board input). The only thing different today was that the digital inputs of the WFS servos were turned on...
So for tonight I am leaving the two outputs of the RF distribution box that serve as the LO for the WFS demod boards terminated, and have also blocked the light to both WFS with beam blocks. The IMC seems to be holding lock steady, PC drive levels look normal...
Unrelated to this work, but I have committed to the svn the updated versions of the mcup and mcdown scripts, to reflect the new gains for the autolocker...
Craig and I have been trying to put together a Simulink diagram of the proposed alternative calibration scheme. Each time I talk the idea over with someone, I convince myself it makes sense, but then I try and explain it to someone else and get more confused. Probably I am not even thinking about this in the right way. So I am putting what I have here for comments/suggestions.
What's the general idea?
Suppose the PSL is locked to the MC cavity, and the AUX laser is locked to the arm cavity (with sufficiently high BW). Then by driving a line in the arm cavity length, and beating the PSL and AUX lasers, we can determine how much we are modulating the arm cavity length in metres by reading out the beat frequency between the two lasers, provided the arm cavity length is precisely known.
So we need:
To be able to sense a 1kHz line being driven at 1e-16 m amplitude, I estimate we need a beat note stability of ~1mHz/rtHz at 1kHz.
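A quick check of that number, using df/nu = dL/L for a laser locked to a cavity (the ~38m arm length is an assumption here):

```python
c, lam, L = 299792458.0, 1064e-9, 38.0   # assumed arm length ~38 m
nu = c / lam          # optical frequency, ~2.8e14 Hz
dL = 1e-16            # length line amplitude in metres
print(dL * nu / L)    # ~7e-4 Hz, hence the ~1 mHz/rtHz requirement at 1 kHz
```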
Requirements and what we have currently:
On the hardware side of things, we need:
Koji and I briefly looked through our fiber inventory yesterday. We have some couplers (one mounted) and short (5m) patch fibers. But I think the fiber infrastructure we have in place currently is adequate - we have the AUX light brought to the PSL table, and there is a spare fiber running the other way if we want to bring the PSL IR to the end as well.
I need to also think about where we can fit the EOM, given the physical constraints on the EX table and the beam diameter / aperture of the EOM...
Turns out the "problem" with WFS2 and the apparent offset accumulation on the IMC Servo board is probably a slow machine problem.
Today, Koji and I looked at the situation a little more closely. This anomalous behaviour of the C1:IOO-MC_SUM channel picking up an offset seems correlated with light being incident on the WFS2 head. Placing an ND filter in front of WFS2 slowed down the rate of accumulation (though it was still present). But we also looked at the in-loop error signal on the IMC board (using the "Out 2" BNC on the front panel), and this didn't seem to show any offset accumulation. Anyways, the Autolocker's performance doesn't seem to be affected by this change, so I am leaving the WFS servo turned on.
The new demod phases (old values +45 degrees) and gains (old gains x0.2) have been updated in the SDF table. It remains to be seen that the WFS loops don't drag the alignment over longer timescales. I will post a more detailed analysis here over the weekend...
Also, we thought it would be nice to have DQ channels for the WFS error signals, for analysis of the servo (rather than waiting ~30 mins to grab live fine-resolution spectra of the error signals with the loop on/off). So I have added 16 DQ channels [recorded at 2048 Hz] to the c1ioo model (the I and Q demodulated signals from each of the 8 quadrants). The "DRATE" for the c1ioo model has increased from ~200 to 410. Comparing to the "DRATE" of c1lsc, which is around 3200, we think this isn't significantly stretching the DAQ abilities of the c1ioo model...
Here is a comparison of the error signal spectra after increasing the IMC modulation depth, to the contribution with RF inputs / whitening inputs terminated (which I borrowed from Koji's characterization of the same in Dec 2016, these shouldn't have changed).
Some general observations:
I will update with the in-loop error signal spectra, which should give us some idea of the loop bandwidth.
I will look into lowering the sampling rate, and how much out-of-band power is aliasing into the 0-256 Hz band and update with my findings.
Yikes. Please change all the WFS DQ channel sample rates from 2048 down to 512 Hz. I doubt we ever need anything above 180 Hz.
There is sometimes an issue with this: if our digital AA filters are not strong enough, the noise above 256 Hz can alias into the 0-256 Hz band. We ought to check this quantitatively and make some elog statement about our AA filters. This issue is also seen in DTT when requesting a low frequency spectrum: DTT uses FIR filters which are sometimes not sharp enough to prevent this issue.
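A sketch of the kind of quantitative check meant here: inject a tone above the decimated Nyquist (256 Hz) into 2048 Hz data, downsample to 512 Hz with and without an anti-alias filter, and compare the height of the aliased line (scipy's decimate is a stand-in for the RCG's actual decimation filter):

```python
import numpy as np
from scipy.signal import decimate

fs, f_tone = 2048.0, 300.0            # a 300 Hz tone aliases to 212 Hz at 512 Hz
t = np.arange(0, 64, 1 / fs)
x = np.sin(2 * np.pi * f_tone * t)

y_aa = decimate(x, 4, ftype='fir')    # downsample by 4 with an AA filter
y_raw = x[::4]                        # naive downsample, no AA filter

for y, label in [(y_aa, 'with AA'), (y_raw, 'no AA')]:
    spec = np.abs(np.fft.rfft(y)) / (len(y) / 2)     # amplitude spectrum
    f = np.fft.rfftfreq(len(y), 4 / fs)
    print(label, spec[np.argmin(np.abs(f - 212.0))]) # aliased line height
```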
I've now made a DCC page for the mirror specifications, all revisions should be reflected there.
Over the last couple of days, I've been playing around with Rana's coating optimization code to come up with a coating design that will work for us. The basic idea is to use MATLAB's particle swarm constrained optimization tool to minimize an error function that is a composite of four penalties:
On the AR side, I only considered 2 and 3. The weighting of these four components was set somewhat arbitrarily, but I seem to be able to get reasonable results, so I am going with this for now.
From my first pass at it, the numbers I've been able to get, for 19 layer pairs, are (along with some plots):
(in this picture, the substrate is to the right of layer 38)
(substrate to the right of layer 38)
These numbers already match the specs we currently have on the DCC page. I am not sure how much better we can get the specs on the HR side while keeping to 19 layer pairs...
All of this data, plus the code used to generate them, is on the gitlab coatings page...
Since it would be nice to have the latest version of Matlab, with all its swanky new features (?), available on the control room computers and Optimus, I downloaded Matlab R2016b and activated it with the Caltech Campus license. I installed it into /cvs/cds/caltech/apps/linux64/matlab16b. Specifically, I would like to run the coating optimization code on Optimus, where I can try giving it more stringent convergence criterion to see if it converges to a better spot.
I trust that this way, we don't interfere with any of the rtcds stuff.
If I've done something illegal license-wise or if this is likely to cause havoc, please point me to what is the correct way to do this.
GV 18 Mar 2017: Though I installed this using the campus network license key, this seems to only work on Rossa. If I run it on the other control room machines/Optimus, it throws up a licensing error. I will check with Larry W. as to how to resolve this...
The alignment wasn't disturbed for the photo-taking - I just re-checked that the spot is indeed incident on the MC REFL PD. MC REFL appeared dark because I had placed a physical beam block in the path, to avoid an accidental PSL shutter opening sending a high power beam in during the photo-taking. I removed this beam block, but the MC wouldn't lock. I double checked the alignment onto the MC REFL PD, and verified that it was OK.
Walking over to the 1X1, I noticed that the +24V Sorensen that should be pushing 2.9A of current when our new 29.5MHz amplifier is running, was displaying 2.4A. This suggests the amplifier is not being powered. I toggled the power switch at the back and noticed no difference in either the MC locking behaviour or the current draw from the Sorensen.
To avoid driving a possibly un-powered RF amplifier, I turned off the Marconi and the 29.5MHz source. I can't debug this anymore tonight so I'm leaving things in this state so that Lydia can check that her box works fine...
I turned the RF sources back on and opened the PSL shutter. MC REFL was dark on the camera; people were taking pictures of the PD face today so I assume it just needs to be realigned before the mode cleaner can be locked again.
I've been sitting on some data for a while now which I finally got around to plotting. Here is a quick summary:
Attachment #1: I applied a step input to the offset of each of the six WFS loops and observed the step response. The 1/e time constant for all 4 WFS loops is <10s, suggesting a bandwidth a little above 0.1Hz. However, the MC2 P and Y loops have a much longer time constant of ~150s. Moreover, it looks like the DC centering of the spot on the QPD isn't great - the upper two quadrants (as per the MEDM screen) have ~3x the counts of the lower pair.
I did not (yet) try increasing the gain of this loop to see if this could be mitigated. I accidentally saved this as a png; I will put up the pdf plot.
Attachment #2: This is a comparison of the WFS error signals with the loops engaged (solid lines) vs disabled (dashed lines). Though these measurements were taken at slightly different times, they are consistent with the WFS loop bandwidths being ~0.1Hz.
Attachment #3: Comparison of the spectra of the testpoint channels and their DQ counterparts (sampled at 512Hz) at the same time. It does not look like there is any dramatic aliasing going on, although it is hard to tell what exactly the order of the digital AA filter implemented by the RCG is. Further investigation remains to be done... For reference, here are some notes: T1600059, T1400719
GV 7 March 2017 6pm: It looks like we use RCG v2.9.6, so it should be the latter document that is applicable. I've been going through some directories to try and find the actual C code where the filter coefficients are defined, but have been unsuccessful so far...
For a few days now, the "code status" page has been telling us that the summary pages are DEAD, even though the pages themselves seemed to be generating plots. I logged into the 40m shared account on the cluster and checked the status of the condor job (with condor_q), and did not find anything odd there. I decided to consult Max, who pointed out that the script that checks the code status (/home/40m/DetectorChar/bin/checkstatus) was looking for a particular string in the log files ("gw_daily_summary"), while the recent change in the default output of condor_q meant that the string actually being written to the log files was "gw_daily_summa". This script has now been modified to look for instances of "gw_daily" instead, and so the code status indicator seems to be working again...
The execution of the summary page scripts has also been moved back to pcdev1 (from pcdev2, where it was moved to temporarily because of some technical problems with pcdev1).
This was still running at ~9.30 this morning, at which point I manually terminated it after confirming with Johannes that it was okay to do so. Judging by the StripTool traces in the control room, the mode cleaner remained locked for most of the night, so there should be plenty of usable data...
Note that I re-aligned the Y-arm (to experiment further with photo-taking) at about 9.30am, so the data after this time should be disregarded...
The loss map script that moves the beam on ETMX is running on Rossa. The Y arm was misaligned for this; the most recent PIT and YAW settings were saved beforehand. This will take until late at night - I estimate 2-3 am.
Rana suggested including some additional terms in the cost function to penalize high sensitivity to deviations in the layer thickness (L). So the list of terms contributing to the cost function now reads:
I did not include other sensitivity terms, like sensitivity to the refractive index values for the low and high index materials (which are just taken from GWINC).
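For concreteness, here is a sketch of the kind of sensitivity term I mean, as a finite difference of a placeholder transmission function T(layers) (the real code uses the GWINC / coating-optimization machinery, and the weight is one of the hand-tuned values mentioned below):

```python
def thickness_sensitivity(T, layers, frac=0.01):
    """Fractional change in transmission for a coherent +1% thickness perturbation.

    T is a placeholder: a function mapping a list of physical layer
    thicknesses to a transmission at some wavelength/polarization.
    """
    T0 = T(layers)
    T1 = T([d * (1.0 + frac) for d in layers])
    return abs(T1 - T0) / T0

# One term of the composite cost, with a hand-tuned weight w_sens:
# cost += w_sens * thickness_sensitivity(T_1064_p, layer_thicknesses)
```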
There is still some arbitrariness in how I chose to weight the relative contributions to the cost function, but after some playing around, I have a solution that I think will work. Here are the spectral reflectivity and layer thickness plots for the HR and AR sides respectively.
HR side: for a 1% increase in the thickness of all layers, the transmission changes by 5% @ 1064nm p-pol and 0.5% @ 532nm s and p-pol
AR side: for a 1% change in the thickness of all layers, the transmission changes by <0.5% @ 532nm s and p-pol
(substrate to the right of layer 38)
I've also checked that we need 19 layer pairs to meet the spec requirements, running the code with fewer layer pairs leads to (in particular) large deviations from the target value of 50ppm @ 1064nm p-pol.
Do these look reasonable?
I did a quick measurement of the beam size on the MC REFL PD today morning. I disabled the MC autolocker while this measurement was in progress. The measurement set up was as follows:
This way I was able to get right up to the heat sink - so this is approximately 2cm away from the active area of the PD. I could also measure the beam size in both the horizontal and vertical directions.
The measured and fitted data are:
The beam size is ~0.4mm in diameter, while the active area of the photodiode is 2mm in diameter according to the datasheet. So the beam is ~5x smaller than the active area of the PD. I couldn't find anything in the datasheet about the damage threshold in terms of incident optical power, but there is ~100mW on the MC REFL PD when the MC is unlocked, which corresponds to a peak intensity of ~1.7 W/mm^2...
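The peak intensity figure follows from the Gaussian-beam relation I_pk = 2P / (pi*w^2), assuming the ~0.4mm diameter is the 1/e^2 intensity diameter (so w ~ 0.2mm):

```python
import math

P = 0.1    # W, ~100 mW on MC REFL with the MC unlocked
w = 0.2    # mm, assumed 1/e^2 intensity radius from the fitted diameter
print(2 * P / (math.pi * w**2))   # ~1.6 W/mm^2, consistent with the quoted ~1.7
```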
Even though no optics were intentionally touched for this measurement, I quickly verified that the spot is centered on the MC REFL PD by looking at the DC output of the PD, and then re-enabled the autolocker.
There is no internet connectivity on any of the control room machines.
I have been trying to debug by tracing the cabling situation in the rack in the office area, and will update if/when this problem has been resolved. I had last come into the lab on Saturday and there was no problem then. The 40m wireless network serving the office area seems to work fine.
Koji diagnosed that the NAT router was to blame for this problem. I simply power cycled this router, and now the connectivity has been restored.
It was possible to log into nodus, and then into pianosa - and it was also possible to log into the various control room machines once logged into nodus. However, outward packets seemed to not get transmitted. Anyways, power cycling the NAT router unit seems to have done the job.