Seems like DTT also works now. The trick is to run sudo /usr/bin/diaggui instead of just diaggui, which is indicative of some conflict between the yum-installed gds and the relic gds from our shared drive. I also have to manually change the NDS settings each time; there is probably a way to set all of this up more smoothly, but I don't know what it is. awggui still doesn't get the correct channels, and I'm not sure where I can change the settings to fix that.
I think if the magnet fell off, we would see high DC signal, and not 0 as we do now. I suspect satellite box or PD readout board/cabling. I am looking into this, tester box is connected to ITMY sat. box for now. I will restore the suspension later in the evening.
Suspension has now been restored. With combination of multimeter, octopus cable and tester box, the problem is consistent with being in the readout board in 1X5/1X6 or the cable routing the signals there from the sat. box.
We may have lost the UL magnet or LED.
For a series resistance of 4.5 kohm, we are suffering from the noise-gain amplified voltage noise of the Op27 (2*3.2nV/rtHz), and the Johnson noise of the two 1 kohm input and feedback resistances. As a result, the current noise is ~2.7 pA/rtHz, instead of the 1.9 pA/rtHz we expect from just the Johnson noise of the series resistance. For the present EX coil driver configuration of 2.25 kohm, the Op27 voltage noise is actually the dominant noise source. Since we are modeling small amounts (<1dB) of measurable squeezing, such factors are important I think.
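As a sanity check of these numbers, here is a minimal calculation of the quadrature sum, assuming the simple inverting-amplifier topology sketched in Attachment #1 with Rin = Rf = 1 kohm (the resistor values and temperature below are my assumptions for illustration, not read off the schematic):

import numpy as np

kB, T = 1.38e-23, 298.0            # Boltzmann constant [J/K], room temperature [K]
Rin = Rf = 1e3                     # input / feedback resistors [ohm] (assumed)
Rs = 4.5e3                         # series resistance to the coil [ohm]
en_op27 = 3.2e-9                   # Op27 input-referred voltage noise [V/rtHz]

# Voltage noises referred to the op-amp output
vn_opamp = 2 * en_op27                           # noise gain of 2 for Rf/Rin = 1
vn_Rin = np.sqrt(4 * kB * T * Rin) * (Rf / Rin)
vn_Rf = np.sqrt(4 * kB * T * Rf)

# Convert to current noise through the series resistor, and add (in quadrature)
# the Johnson current noise of the series resistor itself
in_series = np.sqrt(4 * kB * T / Rs)
in_total = np.sqrt((vn_opamp**2 + vn_Rin**2 + vn_Rf**2) / Rs**2 + in_series**2)
print("Johnson (Rs only): %.2f pA/rtHz" % (in_series * 1e12))   # ~1.9 pA/rtHz
print("Total            : %.2f pA/rtHz" % (in_total * 1e12))    # ~2.7 pA/rtHz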
[Attachment #1] --- Sketch of the fast signal path in the coil driver board, with resistors labelled as in the following LISO model plots. Note that as long as the resistance of the coil itself << the series resistance of the coil driver fast and slow paths, we can just add their individual current noise contributions, hence why I have chosen to model only this section of the overall network.
[Attachment #2] --- Noise breakdown per LISO model with top 5 noises for choice of Rseries = 2.25 kohm. The Johnson noise contributions of Rin and Rf exactly overlap, making the color of the resulting line a bit confusing, due to the unfortunate order of the matplotlib default color cycler. I don't want to make a custom plot, so I am leaving it like this.
[Attachment #3] --- Noise breakdown per LISO model with top 5 noises for choice of Rseries = 4.5 kohm. Same comments about color of trace representing Johnson noise of Rin and Rf.
Possible mitigation strategies:
I've chosen to ignore the noise contribution of the high current buffer IC that is inside the feedback loop. Actually, it may be interesting to compare the noise measurements (on the electronics bench) of the circuit as drawn in Attachment #1, without and with the high current buffer, to see if there is any difference.
This study also informs what level of electronics noise is tolerable from the de-whitening stage (aim for a factor of ~5 below the Rseries Johnson noise).
Finally, in doing this model, I now understand the observation that the voltage noise of the coil driver apparently decreased after increasing the series resistance, as reported in my previous elog. This is due to the network formed by the fast and slow paths (during the measurement, the series resistance in the slow path makes a voltage divider to ground), and is consistent with the LISO modeling. If we really want to measure the noise of the fast path alone, we will have to isolate it by removing the series resistance of the slow bias path.
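To illustrate the divider effect with a toy calculation (the resistor and noise values below are placeholders, not the actual board values): the voltage noise measured at the output node is the fast-path noise attenuated by the divider formed with the slow-path series resistance, so a larger fast-path series resistance can make the measured voltage noise go down even though the fast-path noise itself has not.

import numpy as np

def measured_voltage_noise(vn_fast, R_fast, R_slow):
    # Fast-path output noise vn_fast [V/rtHz], seen at the node where the fast
    # and slow paths join, with the slow-path series resistance acting as a
    # divider to ground during the measurement.
    return vn_fast * R_slow / (R_fast + R_slow)

vn_fast = 10e-9                    # placeholder fast-path output noise [V/rtHz]
R_slow = 10e3                      # placeholder slow (bias) path series resistance [ohm]
for R_fast in [1e3, 2.25e3, 4.5e3]:
    print(R_fast, measured_voltage_noise(vn_fast, R_fast, R_slow))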
Comment about LISO breakdown plots: for the OpAmp noises, the index "0" corresponds to the Voltage noise, "1" and "2" correspond to the current noise from the "+" and "-" inputs of the OpAmp respectively. In future plots, I'll re-parse these...
I will upload more details + photos + data + schematic + LISO model breakdown tomorrow to a DCC page.
I wanted to investigate my coil driver noise measurement technique under more controlled circumstances, so I spent yesterday setting up various configurations on a breadboard in the control room. The overall topology was as sketched in Attachment #1 of the previous elog, except for #4 below. Summary of configurations tried (series resistance was 4.5k ohm in all cases):
Attachment #1: Picture of the breadboard setup.
Attachment #2: Noise measurements (input shorted to ground) with 1 Hz linewidth from DC to 4 kHz.
Attachment #3: Noise measurements for full SR785 span.
Attachment #4: Apparent coupling due to PSRR.
Attachment #5: Comparison of low frequency noise with and without the LM6321 part of the fast DAC path implemented.
All SR785 measurements were made with input range fixed at -42dBVpk, input AC coupled and "Floating", with a Hanning window.
I've been thinking about what we need to do to the de-whitening boards for the ITMs and ETMs, in order to have low noise actuators. Noting down what I have so far, so that people can comment / point out things I've overlooked.
Attachment #1: Block diagram schematic of the de-whitened signal path on D000183 as it currently exists. I've omitted the unity gain buffer stage at the output, though this is important for noise considerations.
Some considerations, in rough order of priority:
I will experiment with a few different shapes and investigate noise and de-whitened digital signal levels based on these considerations. At the very least, I guess we should remove the x3 gain on the ETM boards, they have already been bypassed for the ITMs.
I moved the N2 check script and the disk usage checking script from the (sudo) crontab of nodus to the controls user crontab on megatron.
For the upcoming vent, we'd like to rotate the SOS towers to correct for the large YAW bias voltages used for DC alignment of the ITMs and ETMs. We could then use a larger series resistance in the DC bias path, and hence, reduce the actuation noise on the TMs.
Today, I used the calibrated Oplev error signals to estimate what angular correction is needed. I disabled the Oplev loops, and drove a ~0.1 Hz sine wave to the EPICS channel for the DC yaw bias. Then I looked at the peak-to-peak Oplev error signal, which should be in urad, and calibrated the slider counts to urad of yaw alignment, since I know the pk-to-pk counts of the sine wave I was driving. With this calibration, I know how much DC Yaw actuation (in mrad) is being supplied by the DC bias. I also know the directions the ETMs need to be rotated, I want to double check the ITMs because of the multiple steering mirrors in vacuum for the Oplev path. I will post a marked up diagram later.
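For the record, the calibration is just the ratio of the two peak-to-peak values; a minimal sketch of the arithmetic (the numbers below are placeholders, not the measured values):

# Calibrate DC yaw bias slider counts into yaw actuation, using the Oplev
# error signal (assumed already calibrated into urad). Placeholder numbers.
drive_pp_counts = 100.0     # pk-pk of the ~0.1 Hz sine driven on the yaw bias EPICS channel
oplev_pp_urad = 25.0        # resulting pk-pk excursion of the Oplev yaw error signal

cal_urad_per_count = oplev_pp_urad / drive_pp_counts

dc_bias_counts = 5000.0     # present DC yaw bias setting
dc_yaw_mrad = dc_bias_counts * cal_urad_per_count / 1000.0
print("%.3f urad/count -> DC yaw actuation = %.2f mrad" % (cal_urad_per_count, dc_yaw_mrad))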
Steve is going to come up with a strategy to realize this rotation - we would like to rotate the tower through an axis passing through the CoM of the suspended optic in the vertical direction. I want to test out whatever approach we come up with on the spare cage before touching the actual towers.
Here are the numbers. I've not posted any error analysis, but the way I'm thinking about it, we'd do some in air locking so that we have the cavity axis as a reference and we'd use some fine alignment adjust (with the DC bias voltages at 0) until we are happy with the DC alignment. Then hopefully things change by so little during the pumpdown that we only need small corrections with the bias voltages.
Oplev error signal readback
The PRM watchdog was tripped around 7:15 am PT this morning. I restored it.
Aaron and I are going to do the checkout of the OMC electronics outside vacuum today. At some point, we will also want to run a c1omc model to integrate with rtcds. Barring objections, I will set up this model on one of the spare cores on the physical machine c1ioo tomorrow.
We only anticipate opening up the IOO chamber and the EY chamber.
Vent preparation: see here.
@SV, we are ready to vent tomorrow. Aaron is supposed to show up ~8:30 am to assist.
After getting the go-ahead from Steve, I removed the physical beam block on the PSL table, sent the beam into the IFO, and re-aligned the MC to lock at low power. I've also revived my low power autolocker (running on megatron); it seems to work okay, though the gains may not be optimal, and it does the job for now. Nominal transmission when well aligned at low power is ~1200 cts. I briefly checked the Y arm alignment with the green, which seems okay, but didn't try locking the Y arm yet. All doors are still on, and I'm closing the PSL shutter again while Keerthana and Sandrine are working near the AS table.
For the heater setup on EY table, I EQ-stopped ETMY. Only the face EQ stops (3 on HR face, 2 on AR face) were engaged. The EY Oplev HeNe was also shutdown during this procedure.
Per Steve's instructions, we did the following:
I implemented this today. For now, the LSC output matrix is set to actuate on MC2 for Y arm locking. As expected, the transmission was much more stable, and the PLL control signal RMS was also reduced by factor of ~3. MC2 control signal does pick up a large (~2000 cts) DC component over a few hours, so we need to relieve this periodically.
Now that we have a workable ASS for the Y arm as well, we should be able to have more confidence in returning to the same beam spot position on the ETM and staying there during a scan using this technique.
The main improvement to be trialled next in the scanning is to improve the speed of scanning. As things stand, my script takes ~2.5 seconds per datapoint. If we can cut this in half, that'd be huge. On Wednesday night, we were extraordinarily lucky to avoid MC3 glitching, EPICS/slow machine failures, and GPIB freezes. Today, the latter reared its head. Fortunately, since I'm dumping data to file for each datapoint, this means we at least have data till the GPIB freeze.
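The per-datapoint dumping that saved the data is just an append-to-disk inside the scan loop; a minimal sketch of the pattern (set_offset and read_instrument are placeholders standing in for the actual EPICS/GPIB calls in my script):

import json, time, random

def set_offset(x):                  # placeholder for the EPICS write in the real script
    pass

def read_instrument():              # placeholder for the GPIB query (the slow part)
    return random.random()

def run_scan(setpoints, outfile="scan_data.jsonl"):
    # Append each datapoint to disk as soon as it is taken, so a GPIB freeze
    # mid-scan does not cost us the points already measured.
    for x in setpoints:
        set_offset(x)
        time.sleep(0.5)             # settling time
        y = read_instrument()
        with open(outfile, "a") as f:
            f.write(json.dumps({"t": time.time(), "x": x, "y": y}) + "\n")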
For future measurements, we should consider locking the IMC length to the arm cavity - this would eliminate such alignment drifts, and maybe also make the PLL control signal RMS smaller.
Not related to this work: Terra, Sandrine, Keerthana and I cleaned up the lab a bit today, and made better cable labels. Aaron and I have to clean up the OMC area a bit. Huge thanks to Steve for taking care of our mess elsewhere in the lab!
We did a quick check of this board today. Main takeaways:
With the correct , we expect 0V from the DAC to result in 0 actuation on the mirror, assuming that an equal 75V goes to 2 PZTs mounted diametrically opposite on the optic. Hopefully, this means we have sufficient range to scan the input pointing into the OMC and get some sort of signal in the REFL signal (while length PZT is being scanned) which indicates a resonance.
We plan to carve out some IFO time for this work next week.
I was thinking a little more about the way we are training the network for the current topology - because the network has no recurrent layers, I guess it has no memory of past samples, and so it doesn't have any sense of the temporal axis. In fact, Keras by default shuffles the training data you give it randomly so the time ordering is lost. So the training amounts to requiring the network to identify the center of the Gaussian beam and output that. So in the training dataset, all we need is good (spatial) coverage of the area in which the spot is most likely to move? Or is the idea to develop some tools to generate video with spot motion close to that on the ETM in lock, so that we can use it with a network topology that has memory?
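For reference, Keras' fit() shuffles the training samples every epoch unless told otherwise, so any time ordering is discarded; a minimal sketch with a toy feed-forward regression network (the architecture and data here are placeholders, not the actual spot-tracking network):

import numpy as np
from tensorflow import keras

# Toy frame-to-centroid regression: flattened image in, (x, y) spot position out.
model = keras.Sequential([
    keras.layers.Dense(64, activation="relu", input_shape=(64 * 64,)),
    keras.layers.Dense(2),
])
model.compile(optimizer="adam", loss="mse")

x_train = np.random.rand(1000, 64 * 64).astype("float32")   # placeholder frames
y_train = np.random.rand(1000, 2).astype("float32")         # placeholder centroids

# Default: samples are shuffled every epoch, so the time ordering is lost.
model.fit(x_train, y_train, epochs=2, batch_size=32)
# Keep the time ordering (a feed-forward net still has no memory across samples):
model.fit(x_train, y_train, epochs=2, batch_size=32, shuffle=False)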
This looks like good progress. Instead of fixed sines or random noise, you should generate now a time series for the motion which is random noise but with a power spectrum similar to what we see for the ETM pitch motion in lock. You can use inverse FFT to get the time series from the open loop OL spectra (being careful about edge effects)
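A minimal sketch of that procedure, assuming we have the target pitch motion as a one-sided ASD (random phases + inverse FFT; handling of the edge effects is noted in a comment):

import numpy as np

def timeseries_from_asd(freqs, asd, fs, N, seed=0):
    # Generate an N-sample series at sample rate fs whose one-sided amplitude
    # spectral density approximates asd(freqs) [units/rtHz].
    rng = np.random.default_rng(seed)
    f = np.fft.rfftfreq(N, d=1.0 / fs)
    target = np.interp(f, freqs, asd)              # target ASD on the FFT grid
    mag = target * np.sqrt(N * fs / 2.0)           # one-sided ASD -> FFT magnitude
    phase = rng.uniform(0.0, 2.0 * np.pi, size=f.size)
    spec = mag * np.exp(1j * phase)
    spec[0] = 0.0                                  # no DC offset
    x = np.fft.irfft(spec, n=N)
    # For the edges: taper with a window, or generate a longer series and
    # discard the ends, before feeding this to the video simulation.
    return x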
After this work, I've been having some trouble getting data with Python NDS. Eventually, I figured out that the nds connection request has to be pointed at '188.8.131.52' (the address of the NAT router which faces the outside world), port 31200 (it used to work with 'nds40.ligo.caltech.edu' or '184.108.40.206'). So the following snippet in python allows a connection to be opened. Offline access of frame data via NDS2 now seems possible.
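The snippet itself didn't make it into this entry; roughly, with the standard nds2-client Python bindings, it looks like the following (the channel name and GPS times below are just examples):

import nds2

# Point the connection at the NAT router address, port 31200
conn = nds2.connection('188.8.131.52', 31200)

# Example fetch: 60 seconds of data for one channel (GPS start/stop times)
bufs = conn.fetch(1186000000, 1186000060, ['C1:LSC-TRX_OUT_DQ'])
print(bufs[0].data)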
So far, ssh (22), web services (30889), and elog (8081, 8080) were tested. We also need to test megatron NDS port forwarding and rsync for nodus. Finally, I turned off the shorewall firewall rules on nodus as they are no longer necessary.
Kevin and I saw some weird IMC / PEM BLRMS behaviour today - see Attached screenshot. Not sure what was happening with the IMC, but MCtrans was oscillating at ~3Hz for a good 20 minutes or so. I just killed the lock, and restarted MCautolocker on megatron. There was a strange feature in the 3-10Hz BLRMS around that time as well. All seems back to normal now...
When I came in this morning:
Checking the status of the slow machines, it looked like c1sus, c1aux, and c1iscaux needed reboots, which I did. Still the PMC would not lock. So I did a burtrestore, and then the PMC locked. But there seemed to be way too much motion of MCREFL, so I checked the suspension. The shadow sensor EPICS channels are reporting ~10,000 cts, while they used to be ~1000 cts. No unusual red flags on the CDS side. Everything looked nominal when I briefly came in at 6:30 pm PT yesterday; I'm not sure if anything was done with the IFO last night.
Pending further investigation, I'm leaving all watchdogs shutdown and the PSL shutter closed.
A quick look at the Sorensens in 1X6 revealed that the +/- 20V DC power supplies were current overloaded (see Attachment #1). So I set those two units to zero until we figure out what's going on. Possibly something is shorted inside the ITMX satellite box and a fuse is blown somewhere. I'll look into it more once Steve is back.
[koji, steve, gautam]
We debugged this in the following way:
So for now, the power cable to the box is disconnected on the back end. We have to pull it out and debug it at some point.
Apart from this, megatron was un-sshable so I had to hard reboot it, and restart the MCautolocker, FSSslowPy and nds2 processes on it. I also restarted the modbusIOC processes for the PSL channels on c1auxex (for which the physical Acromag units sit in 1X5 and hence were affected by our work), mainly so that the FSS_RMTEMP channel worked again. Now, IMC autolocker is working fine, arms are locked (we can recover TRX and TRY~1.0), and everything seems to be back to a nominal state. Phew.
After this work, we recovered the nominal RTCDS state. The main points were:
Some stuff that is not working as usual:
I made a model change in c1x03 (the IOP model on c1ioo) to add a DAC part. The model compiled, installed and started correctly, and looking at dmesg on c1ioo, it recognises the DAC card as what it is. Next step is to use a core on c1ioo for a c1omc model, and actually try driving some signals.
Note that the only change made to the c1ioo expansion chassis was that a DAC card was installed into the PCIe bus. The adaptor card which allows interfacing the DAC card to an AI board was already in the expansion chassis, presumably from whenever the DAC was removed from this machine.
*I think I forgot to restart optimus after this work...
The main motivation behind adding a DAC card in c1ioo was to setup an RTCDS model for the OMC. Attachment #1 shows the new look CDS overview screen. Here is what I did.
Mostly, I followed instructions from when I setup the model for the EX green PZTs.
The model is just a toy for now (CDS parameters, ADC block and 2 CDS filter modules). I leave it to Aaron to actually populate it, check functionality etc. The path to the model is /opt/rtcds/caltech/c1/userapps/release/isc/c1/models/c1omc.mdl. I am listing the parameters set on the CDS_PARAMETERS block:
Building and installing model:
Once the model was installed, I logged into c1ioo, and built and installed the models using the usual rtcds make and rtcds install instructions. Before starting the model, I edited /diskless/root.jessie/etc/rtsystab to allow c1omc to be run on c1ioo. Using sudo cset set, I verified that CPU #6 is no longer listed (if I understand correctly, the RTCDS system takes over the core).
To reflect all this on the MEDM CDS OVERVIEW screen, I just edited the screen.
Finally, I followed the instructions here to get the channels into frames and make all the indicators green. Went into fb and restarted the daqd processes. All looks good. I'm going to leave the model running overnight to investigate stability. I forgot to svn commit the model tonight; I will do it tomorrow.
The testing plan (at least initially) is to install the AA and AI boards from the OMC rack in 1X1/1X2. Then we will have short SCSI cables running from the ADC/DAC to these. The actual HV driving stages will remain in the OMC rack (NE corner of AS table).
@Steve, can we get 10 Male-Female D9 cables so that we can run them from 1X1/1X2 to the OMC rack?
Unrelated to this work: There were 2 crashes of the models on c1lsc, one ~6pm and one right now ~1030pm. The restart script brought everything back gracefully ...
I walked down to the X end and found that the entire AUX laser electronics rack isn't getting any power. There was no elog about this.
I couldn't find any free points in the power strip where I think all this stuff was plugged in so I'm going to hold off on resurrecting this until tomorrow when I'll work with Steve.
The X arm green does not stay locked to the cavity - the alignment looks fine, and the green flashes are strong, but the lock does not hold. This shouldn't be directly connected to anything we did today since the Green PDH servo is entirely analog.
Actually, c1lsc had crashed again sometime last night so I had to reboot everything this morning. I used the reboot script again, but I increased the sleep time between trying to start up the models again so that I could walk into the VEA and power cycle the c1lsc expansion chassis, as this kind of frequent model crash has been fixed by doing so in the past. Sure enough, there have been no issues since I rebooted everything at ~1030 in the morning.
The c1omc model itself has been stable as well, though of course, there is nothing in there at the moment. I may do a check of the newly installed DAC tomorrow just to see that we can put out a sine wave.
Steve has ordered the D-sub cabling that will allow us to route signals between the AA/AI boards in 1X1/1X2 and the HV PZT electronics in the OMC rack. Things look set up for a measurement next week. Aaron will post a block diagram + photos of which box goes where in the electronics racks.
Steve and I restored the power to the EX AUX electronics rack. The power strip on the lowest shelf of the AUX rack now goes to another power strip laid out vertically along the NW corner of 1X9. The EX green locks to the arm just fine now.
The idea we are going with to push the coil driver noise contribution down is to simply increase the series resistance between the coil driver board output and the OSEM coil. But there are two paths, one for fast actuation and one that provides a DC current for global alignment. I think the simplest way to reduce the noise contribution of the latter, while preserving reasonable actuation range, is to implement a precision DC high-voltage source. A candidate that I pulled off an LT application note is shown in Attachment #1.
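The trade-off motivating the HV source, in numbers (the DC current and candidate resistances below are assumptions for illustration): the Johnson current noise falls as 1/sqrt(R), but the voltage the bias source must supply for the same DC alignment current grows linearly with R, quickly exceeding the +/-15 V rails of the existing driver.

import numpy as np

kB, T = 1.38e-23, 298.0
I_dc = 3e-3                        # assumed worst-case DC alignment current [A]
for R in [4.3e3, 10e3, 25e3]:      # candidate bias-path series resistances [ohm]
    i_johnson = np.sqrt(4 * kB * T / R)     # Johnson current noise of the series R
    V_needed = I_dc * R                     # voltage headroom the bias source must supply
    print("R = %5.1f kohm: %.2f pA/rtHz, needs %6.1f V" % (R / 1e3, i_johnson * 1e12, V_needed))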
If all this seems reasonable, I'd like to prototype this circuit and test it with ETMX, which already has the high series resistance for the fast path. So I will ask Steve to order the OpAmp and transistors.
The wall StripTool indicated that the IMC wasn't too happy when I came in today. Specifically:
The last time this happened, it was due to the Sorensens not spitting out the correct voltages. This time, there were no indications on the Sorensens that anything was funky. So I just disabled the MCautolocker and figured I'd debug later in the evening.
However, around 5pm, the shadow sensor values looked nominal again, and when I re-enabled the local damping, the MC REFL spot suggested that the local damping was working just fine. I re-enabled the MCautolocker, MC re-locked almost immediately. To re-iterate, I did nothing to the electronics inside the VEA. Anyways, this enabled us to work on the X arm ASS (next elog).
After I effected the series resistance change for ETMX, the X arm ASS didn't work (i.e. IR transmission would degrade if the servo was run). Today, we succeeded in recovering a functional ASS servo.
So both arms have working dither alignment servos now. But remember that the Y arm ASS gains have been set for locking the Y arm with MC2 as the actuator, not ETMY.
We then tried to maximize GTRX using the PZT mirrors, but were only successful in reaching a maximum of 0.41. The value I remember from before the vent was 0.5, and indeed, with the IR alignment not quite optimized before we began this work, I saw GTRX of 0.48. But the IR dither servo signals indicate that the cavity axis may have shifted (the spot position on the ITM, which is uncontrolled, seems to have drifted significantly; the pitch signal doesn't stay on the StripTool scale anymore). So we may have to double check that the transmitted beam isn't falling off the GTRX DC PD.
Since the lab-wide computer shutdown last Wednesday, all the realtime models running on c1lsc have been flaky. The error is always the same:
[58477.149254] c1cal: ADC TIMEOUT 0 10963 19 11027
[58477.149254] c1daf: ADC TIMEOUT 0 10963 19 11027
[58477.149254] c1ass: ADC TIMEOUT 0 10963 19 11027
[58477.149254] c1oaf: ADC TIMEOUT 0 10963 19 11027
[58477.149254] c1lsc: ADC TIMEOUT 0 10963 19 11027
[58478.148001] c1x04: timeout 0 1000000
[58479.148017] c1x04: timeout 1 1000000
[58479.148017] c1x04: exiting from fe_code()
This has happened at least 4 times since Wednesday. The reboot script makes recovery easier, but doing it once in 2 days is getting annoying, especially since we are running many things (e.g. ASS) in custom configurations which have to be reloaded each time. I wonder why the problem persists even though I've power-cycled the expansion chassis? I want to try and do some IFO characterization today so I'm going to run the reboot script again but I'll get in touch with J Hanks to see if he has any insight (I don't think there are any logfiles on the FEs anyways that I'll wipe out by doing a reboot). I wonder if this problem is connected to DuoTone? But if so, why is c1lsc the only FE with this problem? c1sus also does not have the DuoTone system set up correctly...
The last time this happened, the problem apparently fixed itself, so I still don't have any insight as to what is causing the problem in the first place. Maybe I'll try disabling c1oaf since that's the configuration we've been running in for a few weeks.
Independent of the problems the vertex machine has been having (I think, unless it's something happening over the shared memory network), I noticed on Friday that the ETMX watchdog was tripped. Today, once again, the ETMX watchdog tripped. There is no evidence of any abnormal seismic activity around that time, and anyway, none of the other watchdogs tripped. Attachment #1 shows that this happened ~8:38 am PT this morning. Attachment #2 shows the 2k sensor data around the time of the trip. If the latter is to be believed, there was a big impulse in the UL shadow sensor signal which may have triggered the trip. I'll squish cables and see if that helps - Steve and I did work at the EX electronics rack (1X9) on Friday, but this problem precedes our working there...
OK, how about this:
The question still remains of how to combine the fast and bias paths in this proposed scheme. I think the following approach works for prototyping at least:
In the longer term, perhaps the Satellite Box revamp can accommodate a bias voltage summation connector.
Bah! Too complex.
I have neglected many practical concerns. Some things that come to mind:
I spent most of today fighting various CDS errors.
Let's see how stable this configuration is. Onto some locking now...
Stability was short-lived it seems. When I came in this morning, all models on c1lsc were dead already, and now c1sus is also dead (Attachment #1). Moreover, MC1 shadow sensors failed for a brief period again this afternoon (Attachment #2). I'm going to wait for some CDS experts to take a look at this since any fix I effect seems to be short-lived. For the MC1 shadow sensors, I wonder if the Trillium box (and associated Sorensen) failure somehow damaged the MC1 shadow sensor/coil driver electronics.
I've left the c1lsc frontend shutdown for now, to see if c1sus and c1ioo can survive without any problems overnight. In parallel, we are going to try and debug the MC1 OSEM Sensor problem - the idea will be to disable the bias voltage to the OSEM LEDs, and see if the readback channels still go below zero, this would be a clear indication that the problem is in the readback transimpedance stage and not the LED. Per the schematic, this can be done by simply disconnecting the two D-sub connectors going to the vacuum flange (this is the configuration in which we usually use the sat box tester kit for example). Attachment #1 shows the current setup at the PD readout board end. The dark DC count (i.e. with the OSEM LEDs off) is ~150 cts, while the nominal level is ~1000 cts, so perhaps this is already indicative of something being broken but let's observe overnight.
Overnight, all models on c1sus and c1ioo seem to have had no stability issues, supporting the hypothesis that the timing issues stem from c1lsc. Moreover, the MC1 shadow sensor readouts showed no negative values over a ~12-hour period. I think we should just observe this for another day; in any case, I don't think there is any urgent IFO-related activity scheduled.
I am starting the c1x04 model (IOP) on c1lsc to see how it behaves overnight.
Well, there was apparently an immediate reaction - all the models on c1sus and c1ioo reported an ADC timeout and crashed. I'm going to reboot them and still have c1x04 IOP running, to see what happens.
[97544.431561] c1pem: ADC TIMEOUT 3 8703 63 8767
[97544.431574] c1mcs: ADC TIMEOUT 1 8703 63 8767
[97544.431576] c1sus: ADC TIMEOUT 1 8703 63 8767
[97544.454746] c1rfm: ADC TIMEOUT 0 9033 9 8841
As part of this slow but systematic debugging, I am turning on the c1lsc model overnight to see if the model crashes return.
Today while Rich Abbott was here, Koji and I had a brief discussion with him about the HV amplifier idea for the coil driver bias path. He gave us some useful tips, perhaps the most useful being a topology that he used and tested for an aLIGO ITM ESD driver, which we can adapt to our application. It uses a PA95 high voltage amplifier, which differs from the PA91 mainly in the output voltage range (up to 900V for the former, "only" 400V for the latter). He agrees with the overall design idea of
He also gave some useful suggestions like
I am going to work on making a prototype version of this box for 5 channels that we can test with ETMX. I have been told that the coupling from side coil to longitudinal motion is of the order of 1/30, in which case maybe we only need 4 channels.
For operating the SRC in the "Signal-Recycled" tuning, the SRC macroscopic length needs to be ~4.04m (compared to the current value of ~5.399m), assuming we don't do anything fancy like change the modulation frequencies and not transmit through the IMC. We're putting together a notebook with all the calculations, but today I was thinking about what the signal extraction path should be, specifically which chamber the SRM should be in. Just noting down the thoughts I had here while they're fresh in my head, all this has to be fleshed out, maybe I'm making this out to be more of a problem than it actually is.
The model seems to have run without issues overnight. Not completely related, but the MC1 shadow sensor signals also don't show any abnormal excursions to negative values in the last 48 hours. I'm thinking about re-connecting the satellite box (but preserving the breakout setup at 1X6 for a while longer) and re-locking the IMC. I'll also start c1ass on the c1lsc frontend. I would say that the other models on c1lsc (i.e. c1oaf, c1cal, c1daf) aren't really necessary for basic IFO operation.
A brief follow-up on this since we discussed this at the meeting yesterday: the attached DV screenshot shows the full 2k data for a period of 2 seconds starting just before the watchdog tripped. It is clear that the timescale of the glitch in the UL channel is much faster (~50 ms) compared to the (presumably mechanical) timescale seen in the other channels of ~250 ms, with the step also being much smaller (a few counts as opposed to the few thousand counts seen in the UL channel, and I guess 1 OSEM count ~ 1 um). All this supports the hypothesis that the problem is electrical and not mechanical (i.e. I think we can rule out the Acromag sending a glitchy signal to the coil and kicking the optic). The watchdog itself gets tripped because the tripping condition is the RMS of the shadow sensor outputs, which presumably exceeds the set threshold when UL glitches by a few thousand counts.
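For concreteness, a toy version of the trip condition as I understand it (the threshold, buffer handling, and any band-limiting are assumptions; the real watchdog logic lives in the SUS front-end model):

import numpy as np

def watchdog_tripped(sensor_counts, threshold=150.0):
    # Trip if the RMS of the mean-subtracted shadow sensor signal over the
    # buffer exceeds the threshold [counts]. A ~50 ms, few-thousand-count
    # glitch in UL easily dominates this RMS, even though the actual optic
    # motion seen by the other sensors is only a few counts.
    x = np.asarray(sensor_counts, dtype=float)
    rms = np.sqrt(np.mean((x - np.mean(x)) ** 2))
    return rms > threshold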
After this work of increasing the series resistance on ETMX, there have been numerous occasions where the insufficient misalignment of ETMX has caused problems in locking vertex cavities. Today, I modified the script (located at /opt/rtcds/caltech/c1/medm/MISC/ifoalign/AlignSoft.py) to avoid such problems. The way the misalign script works is to write an offset value to the "TO_COIL" filter bank (accessed via the "Output Filters" button on the Suspension master MEDM screen - not the most intuitive place to put an offset, but okay). So I just increased the value of this offset from 250 counts to 2500 counts (for ETMX only). I checked that the script works: now when both ETMs are misaligned, the AS55Q signal shows a clean Michelson-like sine wave as it fringes, instead of also having the arm cavity PDH fringes.
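For reference, the change amounts to writing a larger offset to the ETMX TO_COIL filter-module offset fields when a misalign is requested; schematically it is something like the following (the channel name pattern and pyepics usage here are my guesses for illustration, not lifted from AlignSoft.py):

import epics

MISALIGN_OFFSET = 2500      # was 250 counts; bumped to 2500 for ETMX only

# Hypothetical channel name for one element of the TO_COIL output matrix offset
chan = 'C1:SUS-ETMX_TO_COIL_1_1_OFFSET'

epics.caput(chan, MISALIGN_OFFSET)   # misalign
epics.caput(chan, 0)                 # restore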
Note that svn doesn't seem to work on the newly upgraded SL7 machines: svn status gives me the following output.
Is it safe to run 'svn upgrade'? Or is it time to migrate to git.ligo.org/40m/scripts?
For the first time after the whirlwind vent, I managed to lock the PRMI.
I don't have the energy to make a DRMI attempt tonight - but the signs are encouraging. I'd like to use the IFO in the next few days to try and recover DRMI locking. The main concern is that the optical path of the AS beam has changed by ~0.3 m, I estimate. So the demod phase for AS55 may need to be adjusted, but the change due to optical path length alone should be ~10 degrees, so DRMI locking with the old settings should still work. Perhaps we also want to scan the PRC and SRC with the phase information from the Trans/Refl transfer functions as well.
Don't want to jinx it, but the c1lsc FE models have been stable. Tomorrow, I'd like to re-enable c1cal, since it has some useful channels for NBing. Could c1daf/c1oaf which have significant amounts of custom C code be the culprits?
Can we use the leakage beam from MMT2 on the OMC table as the LO beam? I can't find the spec for this optic, but the leakage beam was clearly visible on an IR card even with the IMC locked with 100 mW input power so presumably there's enough light there, and this is a cavity transmission beam which presumably has some HOM content filtered out.
My current thought is to use the MC reflection, the beam that heads from MC1 to MCR1, as the LO beam.
Larry W said that some security issues were flagged on nodus. So I ran
on nodus. The exclude flag is because there were some conflicts related to that particular package. Hopefully this has fixed the problem. It's been a while since the last update, which was in January of this year.
In preparation for attempting some DRMI locking, I did the following:
Not related to this work, but I turned the Agilent NA off since we aren't using it immediately.
While working on the single arm alignment, I noticed that today, I was able to get the X arm transmission back to ~1.22, and the GTRX to 0.52. These are closer to the values I remember from prior to the vent. Running the dither alignment promptly degrades both the green and IR transmissions. Since the pianosa SL7 upgrade, I can't use the sensoray to capture images, but to me, the spot looks a little off-center in Yaw on ETMX in this configuration; I've tried to show this in the phone grab (Atm #2). Maybe indicative of clipping somewhere upstream of ITMX?
Anyways, I'm pushing onwards for now, something to check out in the daytime.
After tweaking the AS55 demod phase, SRM alignment, triggering settings, I got a few brief DRMI locks in tonight, I'm calling it a success (though this isn't really robust yet). The main things to do now are:
I think the main IFO characterization remaining to be done to determine the status of the IFO post vent is to measure the losses of the arm cavities. IMO, we will need to certainly fix the clipping at ETMY before we attempt some serious locking.
It looks like we can have a stable SRC of length 4.044 m without getting any new mirrors, so this is an option to consider in the short-term.
gautam 2:45pm: Koji pointed out that the G&H mirrors are coated for normal incidence, but looking at the measurement, it looks like the optic has T~75ppm at 45 degrees incidence, which is maybe still okay. Alternatively, we could use the -600m SR3 as the single folding mirror in the SRC, at the expense of slightly reduced mode-matching between the arm cavity and SRC.