Note the effect of quadrature rotation for small offsets.
Can't find hostname 'fb40m'
it only lasted a few hours
We were stymied tonight by a problem which began late this afternoon. The MC would periodically go angularly unstable, breaking lock and tripping the MC2 watchdogs. Suspicion fell naturally upon McWFS.
Eventually I traced the problem to the MC3 SIDE damping, which appeared to not work--it wouldn't actually damp, and the Vmon values did not correspond to the SDSEN outputs. Suspicion fell on the coil driver.
Looking at the LEMO monitors on the MC3 coil driver, with the damping engaged, showed clear bit resolution at the 100mV level, indicating a digital/DAC problem. Rebooting c1sosvme, which acquires all the OSEM sensor signals and actually does the side damping, resolved the issue.
Lies! The problem was not resolved. The plot shows a 2-day trend, with the onset of the problem yesterday clearly visible as well as the ineffectiveness of the soft-reboot done yesterday. So we'll try a hard-reboot.
This is the two arms locked, for an hour. No integrator in either loop, but from this it looks like ETMY may have a bigger length2angle problem than ETMX. I'll put some true integrators in the loops and do this again.
There appear to be at least two independent problems: the coil balancing for ETMY is bad, and something about ITMX is broken (maybe a coil driver).
The Y-arm becomes significantly misaligned during long locks, causing the arm power to drop. This misalignment tracks directly with the DC drive on ETMY. Power returns to the maximum after breaking and re-establishing lock.
ITMX alignment wanders around sporadically, as indicated by the oplevs and the X-arm transmitted power. Power returns to previous value (not max) after breaking and re-establishing lock.
Both loops have integrators.
I've disabled the alarm for PEM_count_half, using the mask in the 40m.alhConfig file. We can't do anything about it, and it's just annoying.
I edited the configure scripts (those called from the C1IFO_CONFIGURE screen) for restore XARM and YARM. These used to misalign the ITM of the unused arm, which is totally unnecessary here, as we have both POX and POY. They also used to turn off the drive to the unused ETM. I've commented out these lines, so now running the two restores in series will leave a state where both arms can be locked. This also means that the ITMs will never be deliberately mis-aligned by the restore scripts.
I just restarted the elog. It was crashed for unknown reasons. The restarting instructions are in the wiki.
steve, rob, alberto
Steve installed two rotary flow meters into the MOPA chiller system--one at the chiller flow output and one in the NPRO cooling line. After some hijinks, we discovered that the long, insulated chiller lines have the same labels at each end. This means that if you match up the labels at the chiller end, at the MOPA end you need to switch labels: out goes to in and vice versa. This means that, indubitably, we have at some point had the flow going backwards through the MOPA, though I'm not sure if that would make much of a difference.
Steve also installed a new needle valve in the NPRO cooling line, which works as expected as confirmed by the flow meter.
We also re-discovered that the 40m procedures manual contains an error. To turn on the chiller in the MOPA start-up process, you have to press ON, then RS-232, then ENTER. The proc man says ON, RS-232, RUN/STOP.
The laser power is at 1.5W and climbing.
The chiller HT alarm started blinking, as the water temperature had reached 40 degrees C, and was still rising. We turned off the MOPA and the chiller. Maybe we need to open the needle valve a bit more? Or maybe the flow needs to be reversed? The labels on the MOPA are backwards?
The chiller appears to be broken. We currently have it on, with both the SENSOR and RS-232 unplugged. It's running, circulating water, and the COOL led is illuminated. But the temperature is not going down. The exhaust out the back is not particularly warm. We think this means the refrigeration unit has broken, or the chiller computer is not communicating with the refrigerator/heat exchanger. Regardless, we may need a new chiller and a new laser.
steve, alberto, rob
After some futzing around with the chiller, we have come to the tentative conclusion that the refrigeration unit is not working. Steve called facilities to try to get them to recharge the refrigerant (R-404a) tomorrow, and we're also calling around for a spare chiller somewhere in the project (without luck so far).
The repair man thinks it's a bad start capacitor, which is 240uF at 120V. Steve has ordered a new one which should be here tomorrow, and with luck we'll have lasing by tomorrow afternoon.
I'm setting SLOWDC to about -5.
I had to edit FSSSlowServo because it had hard limits on SLOWDC at -5 and 5. It now goes from -10 to 10.
I locked the PSL loops, then tweaked the alignment of the MC to get it to lock.
I first steered MC1 until all the McWFS quads were saturated. This got the MC locking on a 01 mode, so I steered MC1 a little more until it was 00. Then I steered MC2 to increase the power a little bit. After that, I just enabled the MC autolocker.
c1susvme2 has been running just a bit late for about a week. I rebooted it.
The plot shows SRM_FE_SYNC, which is the number of times in the last second that c1susvme2 was late for the 16k cycle. Similarly for ETMX.
The reboot appears to have worked.
I added op540m's display 0 (the northern-most monitor in the control room) to the MEDM screens webpage: https://nodus.ligo.caltech.edu:30889/medm/screenshot.html
Now we can see the StripTool displays that are usually parked on that screen.
Some thoughts on what happened with the MOPA cooling.
Some unknown thing happened to precipitate the initial needle valve jiggle, which unleashed a torrent of flow through the NPRO. This flow was made possible by the fact that the cooling lines are labeled confusingly, and so flow was going backwards through the needle valve, which was thus powerless to restrict it. The NPRO got extremely cold, and most of the chiller's cooling power was being used to unnecessarily cool the NPRO. So, the PA was not getting cooled enough. At this point, reversing the flow probably would have solved everything. Instead, we turned off the chiller and thus discovered the flaky start-motor capacitor.
Now we have much more information: flow meters in the NPRO and main cooling lines, a brand-new functioning needle valve, a better understanding of the chiller/MOPA settings necessary for operation, and the knowledge of what happens when you install a needle valve backwards.
I restarted ntpd on op440m to solve a "synchronization error" that we were having in DTT. I also edited the config file (/etc/inet/ntp.conf) to remove the lines referring to rana as an ntp server; now only nodus is listed.
To do this:
log in as root
/usr/local/bin/ntpd -c /etc/inet/ntp.conf
Here is a set of mode scans of the AS port, using the OMC as a mode scanner. The plot overlays various configurations of the IFO.
To remove PZT nonlinearity, each scan was individually flattened in FSR-space by fitting a 3rd-order polynomial to some known peak locations (the carrier and RF sidebands).
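For reference, the flattening step amounts to something like this sketch in Python (numpy assumed; the peak locations below are made-up placeholders, not values from the actual scans):

import numpy as np

# Known peaks in the raw PZT sweep: where each appears in sweep voltage,
# and where it should sit in FSR units (carrier at 0, RF sidebands at
# their known offsets). Placeholder values.
peak_volts = np.array([0.7, 1.9, 3.2, 4.4])   # measured peak locations [V]
peak_fsr   = np.array([0.0, 0.4, 1.0, 1.4])   # known locations [FSR]

# Fit a 3rd-order polynomial mapping sweep voltage -> FSR coordinate
coeffs = np.polyfit(peak_volts, peak_fsr, 3)

# Re-map the whole scan onto the flattened, linear-in-FSR axis
scan_volts = np.linspace(0.0, 5.0, 4096)
scan_fsr = np.polyval(coeffs, scan_volts)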
tdsavg 5 C1:LSC-PD4_DC_IN1
was causing grievous woe in the cm_step script. It turned out to fail intermittently at the command line, as did other LSC channels. (But non-LSC channels seem to be OK.) So we power cycled c1lsc (we couldn't ssh).
Then we noticed that computers were out of sync again (several timing fields said 16383 in the C0DAQ_RFMNETWORK screen). We restarted c1iscey, c1iscex, c1lsc, c1susvme1, and c1susvme2. The timing fields went back to 0. But the tdsavg command still intermittently said "ERROR: LDAQ - SendRequest - bad NDS status: 13".
The channel C1:LSC-SRM_OUT16 seems to work with tdsavg every time.
Let us know if you know how to fix this.
Did you try restarting the framebuilder?
What you type is shown after the prompt:
op440m> telnet fb40m 8087
Lock acquisition is proceeding smoothly for the most part, but there is a very consistent failure point near the end of the cm_step script.
Near the end of the procedure, while in RF common mode, the sensing for the MCL path of the common mode servo is transitioned from a REFL 166I signal, which comes into the LSC whitening board straight from the demodulator, to another copy of the signal which has passed through the common mode board and comes out of its Length output. We do this because the signal which comes through the CM board sees the switchable low-frequency boost filter, so both paths of the CM servo (AO and MCL) can get that filter switched on at the same time.
The problem occurs after this transition, which itself works reliably. When the script tries to remove the final CARM offset and bring it to zero, lock is abruptly lost. DARM, CM, and the crossover all look stable, and no excess noise appears in the DARM, CARM, or MCF spectra, but lock is always lost at about the same offset.
I've seen this before. At that time, the problem went away spontaneously the next day.
You could stop just before the offset reaches zero and then try to slowly reduce the offset manually to see where the threshold is.
Well, it hasn't gone away yet. It happened Sat, Mon, and Tues afternoon, as well as Friday. The threshold varies slightly, but is always around ~200-300 cnts. I've tried reducing the offset with the signal coming from the CM board and with the signal not going through the CM board; I've also tried jumping the signal to zero (rather than a gradual reduction).
Tonight we'll measure the MC length and set the modulation frequencies, and maybe try some MZ tweaking to do RFAMMon minimization.
I started two scripts, senseDRM and loadDRMImatrixData.m, which Peter will bang on until they're correct. They're in the $SCRIPTS/LSC directory. The first is a perl script which uses TDS tools to drive the DRM optics and measure the response at the double demod photo-detectors, and write these results to a series of files loadable by matlab. The second loads the output from the first script, inverts the resulting sensing matrix to get an input matrix, and spits out a tdswrite command which can be copied and pasted into a terminal to load the new input matrix values.
What's left is mainly in figuring out how to do the matrix inversion properly. Right now the script does not account for the output matrix, the gains in the feedback filters at the measurement frequency, or the fact that we'll likely want the UGF of our loops to be less than the measurement frequency. Peter's going to hash out these details.
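To illustrate the core of the second step, here is a minimal Python sketch of the inversion (the numbers are placeholders, and the corrections discussed above are only noted in comments; the real code is loadDRMImatrixData.m):

import numpy as np

# Measured sensing matrix: element [i, j] is the response of
# double-demod PD i to a drive on DRM degree of freedom j.
# Placeholder numbers.
M = np.array([[ 1.0,  0.2,  0.1],
              [ 0.1,  0.9, -0.2],
              [-0.3,  0.1,  1.1]])

# In principle each element should first be corrected for the output
# matrix and for the closed-loop suppression 1/(1+G) of each loop at
# the excitation frequency -- the details still to be hashed out.

# The pseudo-inverse diagonalizes the sensing (and handles the
# non-square case when there are more PDs than degrees of freedom).
input_matrix = np.linalg.pinv(M)

# Sanity check: the product should be close to the identity.
print(np.round(input_matrix @ M, 3))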
I had trouble getting the SRC handoff from SD to DD to work. I tried different gains, flipping the PD7 & 8 demod phases by 180 degrees, and messing with the output matrix to reduce cross-couplings in the state with MICH & PRC on DD and SRC on SD. Eventually I decided to try to make the DRM matrix diagonalization work.
It does, mostly. The handoff is now stable, and the loops all have UGFs around 100Hz. So, tonight anyways, it's possible to run senseDRM and then loadDRMImatrixData.m and run the resulting tdswrite command, and have a working handoff. I had to eliminate a few PDs (PD5 & PD10) to get it to work properly.
It would be nice if this script would measure all the PDs and allow individual setting of loop UGFs and measurement frequencies.
Lock is still being lost, right at the end of the process when trying to reduce the CARM offset to zero.
The IFO is locked, at the operating point (zero CARM offset). The problem with reducing the residual CARM offset in the last stage appears to have been that the common mode gain was getting too high, so the loop was going unstable at high frequencies.
The cm_step script is currently a confusing mess, with all the debugging statements. I'll clean it up this afternoon and check that it still works.
With the common mode servo bandwidth above 30kHz and the BOOST on (1), I was able to switch on the test mass dewhitening. Finally.
I checked out a copy of matapps into /cvs/cds/caltech/apps/lscsoft so that I could find the matlab function strassign.m, which is necessary for some old mDV commands to run. I don't know why it became necessary or why it disappeared if it did.
Here's a noise spectrum of the RSE interferometer, in anti-spring mode, with RF readout. I'd say the calibration is "loose."
I used the Buonanno & Chen modification of the KLMTV IFO transfer functions to model the DARM opto-mechanical response. I just guessed at the quadrature, and normalized the optical gain at the frequency of the calibration line used (927Hz, not visible on the plot).
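The normalization step looks roughly like this (a sketch only: the single-pole response below is a stand-in for the actual Buonanno & Chen transfer function, and the line amplitude is made up):

import numpy as np

f_cal = 927.0                        # calibration line frequency [Hz]
f = np.logspace(1, 4, 2000)          # frequency vector [Hz]

# Stand-in opto-mechanical response; the real model is the Buonanno &
# Chen modification of the KLMTV transfer functions.
H_model = 1.0 / (1.0 + 1j * f / 200.0)
H_at_cal = 1.0 / (1.0 + 1j * f_cal / 200.0)

A_meas = 3.2e-3                      # measured line amplitude (made up)

# Scale the model so its magnitude matches the measured line at f_cal.
H_calibrated = H_model * (A_meas / abs(H_at_cal))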
This is a ratio of PD1_I to PD1_Q (so a ratio of the two quadratures of AS166), measured in an anti-spring state. It's not flat because our setup has single-sideband RF heterodyne detection, and using a single RF sideband as a local oscillator allows one to detect different quadratures by using different RF demodulation phases. So the variation with frequency is actually a measure of how the frequency response of DARM changes when you vary the detection quadrature. This measure is imperfect because it doesn't account for the effect of the DARM loop.
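Schematically (a hand-waving sketch, not the loop-corrected model): with a single RF sideband at frequency $\Omega$ as the local oscillator, the beat note at $\Omega$ carries both audio quadratures $a_1$, $a_2$ of the signal field,

$$ I_\Omega(t) \propto a_1(t)\cos\Omega t + a_2(t)\sin\Omega t, $$

so demodulating at phase $\varphi$ gives $I_{\mathrm{demod}} \propto a_1\cos\varphi - a_2\sin\varphi$, i.e. the RF demodulation phase directly selects the detected signal quadrature.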
Even though you can choose your detection quadrature with this setup, you can't get squeezed quantum noise with a single RF sideband. The quantum noise around the other (zero-amplitude) RF sideband still gets mixed in, and negates any squeezing benefits.
tdsresp is broken on our linux control room machines. I made a little perl replacement which uses the DiagTools.pm perl module, called pzresp. It's in the $SCRIPTS/general directory, and so in the path of all the machines. I also edited the cshrc.40m file so that on linux machines tdsresp points to this perl replacement.
I've patched DiagTools.pm to circumnavigate the tdsdmd bug described here. I also added a function to DiagTools.pm called diagRespNoLog, which is just like diagResp but without that pesky log file.
Here's the output from the tdsresp binary on CentOS:
allegra:~>tdsresp 941.54 10000 100 10 C1:LSC-ITMX_EXC C1:LSC-PD1_Q C1:LSC-PD1_I
nan nan nan nan nan nan nan
nan nan nan nan nan nan nan
nan nan nan nan nan nan nan
*** glibc detected *** tdsresp: free(): invalid next size (fast): 0x089483e8 ***
======= Backtrace: =========
======= Memory map: ========
[~45 lines of loader/library memory mappings elided]
Abort
It's railed. This is what halted locking progress on Monday night, as this channel is used by the offloadMCF script, which slowly feeds back a CARM signal to the ETMs to prevent the VCO from saturating.
Attached is a 5 day trend, which shows that the channel went dead a few days ago. All the channels shown are being collected from the same ICS110B (I think), but only some are dead. It looks like they went dead around the time of the "All computers down" from Sunday.
Attached are the channels being recorded from the ICS110B in 1Y2 (the IOO rack). Channels 12, 13, 16, 17, 22, 24, 25 appear to have gone dead after the computer problems on Sunday.
This has been fixed by one of the two most powerful & useful IFO debugging techniques: rebooting. I keyed the crate in 1Y2.
Before heading back to the 40m to check on the computer situation, I thought I'd check the web screenshots page that Kakeru worked on, and it looks like none of the screens have been updated since June 1st. I don't know what the story is on that one, or how to fix it, but it'd be handy if it were fixed.
Apparently I broke this when I added op540m to the webstatus page. It's fixed now.
I found the alignment biases for the PRM and the SRM in a funny state. It seemed like they had been "saved" in badly misaligned position, so the restore scripts on the IFO configure screen were not working. I've manually put them into a better alignment.
All suspensions are kicked up. SUS damping and oplev servos are turned off.
c1iscey and c1lsc are down. c1asc and c1iovme are cycling up and down.
The computers and RFM network are up and working again. A boot fest was necessary. Then I restored all the parameters with burtgooey.
The mode cleaner alignment is in a bad state. The autolocker can't get it locked. I don't know what caused it to move so far from the good state it was in until this afternoon. I started tuning the periscope, but the cavity alignment is so bad that it's taking more time than expected. I'll continue working on that tomorrow morning.
I now suspect that after the reboot the MC mirrors didn't really go back to their original places, even though the MC sliders were at the same positions as before.
We diagnosed the problem: it was related to sticky sliders. After a reboot of C1:IOO, the actual output of the DAC no longer corresponds to the values read on the sliders. To update the actual output it is necessary to change the values of the sliders, i.e., to jiggle them a bit.
I've updated the slider twiddle script to include the MC alignment biases. We should run this script whenever we reboot all the hardware, and add any new sticky sliders you find to the end of the script. It's at
in the rack next to the printer. It sounds like a fan is hitting something.
When I turned them on, the control signal in pitch from WFS2 started ramping up without stopping, as if the integrator in the loop were being fed a DC bias. The effect was to misalign the MC cavity from the good state it was in with only the length control on (that is, transmission ~2.7, reflection ~0.4).
I don't know why that is happening. To rule out a computer problem, I first burt-restored C1IOO to July 18th; since that did not help, I even restarted it. That didn't solve the problem either.
At least one problem is the mis-centering of the resonant spot on MC2, which can be viewed on the video monitors. It's very far from the center of the optic, which causes length-to-angle coupling that makes the multiple servos which actuate on MC2 (MCL, WFS, local damping) fight each other and go unstable.
I tilted the periscope beam and aligned the MC. Now the spot at the Faraday entrance is near the center of the aperture in up/down space. The arm powers are only going up to ~0.8, though. Maybe we should try a little bit of left/right.
I looked at the IP POS spot with a viewer card, and it looked round, so there's no obvious egregious clipping in the Faraday. Someone might take a picture with one of the GigE cameras and get us a beam profile there.
We no longer have an MC1 and MC3 camera view.
There is a bright scatterer visible from the east viewport of the BSC, but I can't tell what it is. It could be a ghost beam.
It would be nice to get an image looking into the north viewport of the IOO chamber. I can't see in there because the BS oplev table is in the way.
Last night Rana noticed that the overflows on the ITM and ETM coils were a crazy huge number. Today I rebooted c1dcuepics, c1iovme, c1sosvme, c1susvme1 and c1susvme2 (in that order). Rob helped me burt restore losepics and iscepics, which needs to be done whenever you reboot the epics computer.
Unfortunately this didn't help the overflow problem at all. I don't know what to do about that.
Just start by re-setting them to zero. Then you have to figure out what's causing them to saturate by watching time series and looking at spectra.
Steve, Rana, Ben, Jenne, Alberto, Rob
UPS in the vacuum rack failed this afternoon, cutting off power to the vacuum control system. After plugging all the stuff that had been plugged into the UPS into the wall, everything came back up. It appears that V1 closed appropriately, TP1 spun down gracefully on its own battery, and the pressure did not rise much above 3e-6 torr.
The UPS fizzed and smelled burnt. Rana will order a new, bigger, better, faster one.
We've had a devil of a time getting V1 to open, due to the Interlock code.
The short story is that if C1:Vac-PTP1_pressure > 1.0, the interlock code won't let you push the button to open V1 (but it won't close V1).
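In other words, the interlock condition behaves something like this Python paraphrase of the observed behavior (made-up function name, not the actual interlock code):

def v1_open_permitted(ptp1_pressure):
    # Opening V1 is blocked while PTP1 reads above 1.0; the same
    # condition does not force an already-open V1 closed.
    return ptp1_pressure <= 1.0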
PTP1 is broken, so the interlock was frustrating us. It's been broken for a while, but this hasn't bitten us till now.
We tried swapping out the controller for PTP1 with one of Bob's from the Bake lab, but it didn't work.
It said "NO COMM" in the C1:Vac-PTP1_status, so I figured it wouldn't update if we just used tdswrite to change C1:Vac-PTP1_pressure to 0.0. This actually worked, and V1 is now open. This is a temporary fix.