Per the discussion this evening, barring objections, I will do the following tomorrow morning:
I finished the re-soldering work today, and have measured the coil driver noise pre-Mods and post-Mods. Analysis tomorrow. I am holding off on re-installing the board tonight as it is likely we will have to tune all the loops to make them work with the reduced range. So ETMX will remain de-commissioned until tomorrow.
I decided to take a quick look at the data. Changes made to the ETMX coil driver board:
I also took the chance to check the integrity of the LM6321 ICs. In the past, a large DC offset on the output pin of these has been indicative of a faulty IC. But I checked all the ICs with a DMM, and saw no anomalies.
Measurement conditions were: (i) the Fast input terminated to ground via 50 ohm, (ii) the Bias input shorted to ground. The SR785 was used with the G=100 Busby preamp (in which Steve installed new batteries today, as someone had left it on for who knows how long). The voltage measurement was made at the D-Sub connector on the front panel that would normally be connected to the Sat. Box, with the coil driver not connected to anything downstream.
Summary of results:
[Attachment #1] - Noise measurement out to 800 Hz. The noise only seems to agree with the LISO model above 300 Hz. Not sure if the low-frequency excess is real or a measurement artefact. Tomorrow, I plan to make an LPF pomona box to filter out the HF pickup and see if the low-frequency characteristics change at all. Need to think about what this corner freq. needs to be. In any case, such a device is probably required to do measurements inside the VEA.
[Attachment #2] - Noise measurement for full SR785 span. The 19.5 kHz harmonics are visible. I have a theory about the origin of these; I need to do a couple more tests to confirm, and will make a separate log.
[Attachment #3] - zip of the LISO file used for modeling the coil driver. I don't have the ASCII art in this, so I need to double check that I haven't connected any wrong nodes, but I think it's correct.
Measurements seem to be consistent with LISO model predictions.
*Note: Curves labelled "LISO model ..." are really the quadrature sum of the LISO prediction + Busby box noise.
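For clarity, the quadrature sum used for those "LISO model ..." curves is just the following (illustrative Python sketch; the ASD values here are made up, not the measured data):

```python
import numpy as np

def quad_sum(*asds):
    """Quadrature sum of uncorrelated noise ASDs."""
    return np.sqrt(sum(np.asarray(a)**2 for a in asds))

# Made-up ASDs in nV/rtHz at three frequency bins, for illustration only
liso_pred = np.array([6.0, 6.0, 6.1])
busby_box = np.array([3.0, 2.5, 2.0])
total = quad_sum(liso_pred, busby_box)  # what gets plotted as "LISO model ..."
```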
My main finding tonight is: With the increased series resistance (400 ohm ---> 2.25 kohm), LISO modeling tells me that even though the series resistance (Johnson noise) used to dominate the voltage noise at the output to the OSEM, the voltage noise of the LT1125 in the bias path now dominates. Since we are planning to re-design the entire bias path anyways, I am not too worried about this for the moment.
I will upload more details + photos + data + schematic + LISO model breakdown tomorrow to a DCC page.
gautam noon 21 June 2018: I was looking at the wrong LISO breakdown curves. So the input stage Op27 voltage noise used to dominate. Now the Bias path LT1125 voltage noise dominates. None of the conclusions are affected... I've uploaded the corrected plots and LISO file here now.
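For reference, the Johnson-noise arithmetic behind the 400 ohm vs 2.25 kohm comparison can be sketched in a few lines (room temperature assumed; this is just the standard sqrt(4kTR) / sqrt(4kT/R) scaling, not the full LISO model):

```python
import numpy as np

kB, T = 1.380649e-23, 298.0  # Boltzmann constant; assumed room temperature

def johnson_vnoise(R):
    """Johnson voltage noise ASD in V/rtHz."""
    return np.sqrt(4 * kB * T * R)

def johnson_inoise(R):
    """Johnson current noise ASD in A/rtHz."""
    return np.sqrt(4 * kB * T / R)

# Noise current drops from ~6.4 pA/rtHz (400 ohm) to ~2.7 pA/rtHz (2.25 kohm)
for R in (400, 2250):
    print(f"R = {R} ohm: {johnson_vnoise(R)*1e9:.2f} nV/rtHz, "
          f"{johnson_inoise(R)*1e12:.2f} pA/rtHz")
```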
Initial tests look promising. Local damping works and I even locked the X arm using POX, although I did it in a fake way by simply inserting a x5.625 (=2.25 kohm / 400 ohm) gain in the coil driver filter banks. I will now tune the individual loop gains to account for the reduced actuation range.
Now I have changed the loop gains for local damping loops, Oplev loops, and POX locking loop to account for the reduced actuation range. The dither alignment servo (X arm ASS) has not been re-commissioned yet...
We may have lost the UL magnet or LED.
I think if the magnet fell off, we would see high DC signal, and not 0 as we do now. I suspect satellite box or PD readout board/cabling. I am looking into this, tester box is connected to ITMY sat. box for now. I will restore the suspension later in the evening.
Suspension has now been restored. With a combination of a multimeter, an octopus cable, and the tester box, the problem is consistent with being in the readout board in 1X5/1X6, or in the cable routing the signals there from the sat. box.
For a series resistance of 4.5 kohm, we suffer from the noise-gain-amplified voltage noise of the Op27 (2 x 3.2 nV/rtHz), and the Johnson noise of the two 1 kohm input and feedback resistors. As a result, the current noise is ~2.7 pA/rtHz, instead of the 1.9 pA/rtHz we expect from just the Johnson noise of the series resistance. For the present EX coil driver configuration of 2.25 kohm, the Op27 voltage noise is actually the dominant noise source. Since we are modeling small amounts (<1 dB) of measurable squeezing, such factors are important, I think.
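As a sanity check on the 2.7 pA/rtHz figure, here is the arithmetic in Python (assuming a unity-gain inverting stage, so a noise gain of 2, and T ~ 298 K):

```python
import numpy as np

kB, T = 1.380649e-23, 298.0
en_op27 = 3.2e-9       # Op27 voltage noise, V/rtHz
noise_gain = 2.0       # for the assumed Rin = Rf = 1 kohm inverting stage
Rin = Rf = 1e3
Rs = 4.5e3             # series resistance to the coil

vj = lambda R: np.sqrt(4 * kB * T * R)   # Johnson voltage noise of R

# Voltage noise at the op-amp output: amplified Op27 noise plus the
# Johnson noise of the input and feedback resistors
v_out = np.sqrt((noise_gain * en_op27)**2 + vj(Rin)**2 + vj(Rf)**2)

# Total current noise: output voltage noise driven through Rs, combined
# in quadrature with the Johnson current noise of Rs itself
i_total = np.sqrt((v_out / Rs)**2 + (4 * kB * T / Rs))
print(f"{i_total*1e12:.1f} pA/rtHz")  # ~2.7, vs ~1.9 from Rs Johnson alone
```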
[Attachment #1] --- Sketch of the fast signal path in the coil driver board, with resistors labelled as in the following LISO model plots. Note that as long as the resistance of the coil itself << the series resistance of the coil driver fast and slow paths, we can just add their individual current noise contributions, hence why I have chosen to model only this section of the overall network.
[Attachment #2] --- Noise breakdown per LISO model with top 5 noises for choice of Rseries = 2.25 kohm. The Johnson noise contributions of Rin and Rf exactly overlap, making the color of the resulting line a bit confusing, due to the unfortunate order of the matplotlib default color cycler. I don't want to make a custom plot, so I am leaving it like this.
[Attachment #3] --- Noise breakdown per LISO model with top 5 noises for choice of Rseries = 4.5 kohm. Same comments about color of trace representing Johnson noise of Rin and Rf.
Possible mitigation strategies:
I've chosen to ignore the noise contribution of the high current buffer IC that is inside the feedback loop. Actually, it may be interesting to compare the noise measurements (on the electronics bench) of the circuit as drawn in Attachment #1, without and with the high current buffer, to see if there is any difference.
This study also informs about what level of electronics noise is tolerable from the De-Whitening stage (aim for ~factor of 5 below the Rseries Johnson noise).
Finally, in doing this model, I now understand the observation, reported in my previous elog, that the voltage noise of the coil driver apparently decreased after increasing the series resistance. This is due to the network formed by the fast and slow paths (during the measurement, the series resistance in the slow path forms a voltage divider to ground), and is consistent with LISO modeling. If we really want to measure the noise of the fast path alone, we will have to isolate it by removing the series resistance of the slow bias path.
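The loading effect can be sketched with a simple divider calculation (the bias-path resistance here is an assumption for illustration, not the actual board value):

```python
# With the coil disconnected, the bias-path series resistance R_slow shunts
# the measurement node to the (low-impedance) bias amplifier output, so the
# fast-path noise seen at the D-sub is attenuated by a divider factor.
R_fast = 2.25e3   # fast-path series resistance (post-mod)
R_slow = 10e3     # assumed bias-path series resistance, illustrative only

attenuation = R_slow / (R_fast + R_slow)
print(f"fast-path noise scaled by {attenuation:.2f} at the measurement point")
```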
Comment about LISO breakdown plots: for the OpAmp noises, the index "0" corresponds to the Voltage noise, "1" and "2" correspond to the current noise from the "+" and "-" inputs of the OpAmp respectively. In future plots, I'll re-parse these...
I wanted to investigate my coil driver noise measurement technique under more controlled circumstances, so I spent yesterday setting up various configurations on a breadboard in the control room. The overall topology was as sketched in Attachment #1 of the previous elog, except for #4 below. Summary of configurations tried (series resistance was 4.5k ohm in all cases):
Attachment #1: Picture of the breadboard setup.
Attachment #2: Noise measurements (input shorted to ground) with 1 Hz linewidth from DC to 4 kHz.
Attachment #3: Noise measurements for full SR785 span.
Attachment #4: Apparent coupling due to PSRR.
Attachment #5: Comparison of low frequency noise with and without the LM6321 part of the fast DAC path implemented.
All SR785 measurements were made with input range fixed at -42dBVpk, input AC coupled and "Floating", with a Hanning window.
For the upcoming vent, we'd like to rotate the SOS towers to correct for the large YAW bias voltages used for DC alignment of the ITMs and ETMs. We could then use a larger series resistance in the DC bias path, and hence, reduce the actuation noise on the TMs.
Today, I used the calibrated Oplev error signals to estimate the required angular correction. I disabled the Oplev loops and drove a ~0.1 Hz sine wave on the EPICS channel for the DC yaw bias. From the peak-to-peak Oplev error signal (which should be in urad) and the known pk-pk counts of the drive, I calibrated the slider counts to urad of yaw alignment. With this calibration, I know how much DC yaw actuation (in mrad) is being supplied by the DC bias. I also know the directions the ETMs need to be rotated; I want to double check the ITMs because of the multiple in-vacuum steering mirrors in the Oplev path. I will post a marked-up diagram later.
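The calibration arithmetic, with hypothetical numbers (the actual drive amplitudes and bias values are not reproduced here):

```python
# Illustrative values only - not the measured numbers
drive_pkpk_counts = 200.0   # pk-pk amplitude of the 0.1 Hz sine on the bias slider
oplev_pkpk_urad = 50.0      # resulting pk-pk Oplev error signal, urad

cal = oplev_pkpk_urad / drive_pkpk_counts   # urad of yaw per slider count

bias_offset_counts = 8000.0  # assumed DC yaw bias currently applied
dc_yaw_mrad = bias_offset_counts * cal / 1000.0
print(f"{cal:.2f} urad/count -> {dc_yaw_mrad:.1f} mrad of DC yaw actuation")
```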
Steve is going to come up with a strategy to realize this rotation - we would like to rotate the tower through an axis passing through the CoM of the suspended optic in the vertical direction. I want to test out whatever approach we come up with on the spare cage before touching the actual towers.
Here are the numbers. I've not posted any error analysis, but the way I'm thinking about it, we'd do some in air locking so that we have the cavity axis as a reference and we'd use some fine alignment adjust (with the DC bias voltages at 0) until we are happy with the DC alignment. Then hopefully things change by so little during the pumpdown that we only need small corrections with the bias voltages.
Oplev error signal readback
This bad connection is coming back
PRM watchdog was tripped around 7:15am PT this morning. I restored it.
For the heater setup on EY table, I EQ-stopped ETMY. Only the face EQ stops (3 on HR face, 2 on AR face) were engaged. The EY Oplev HeNe was also shutdown during this procedure.
Yesterday I inspected this BS oplev viewport. The heavy connector tube was shorting to the table, so it was moved back towards the chamber. The connection is temporarily made air-tight with Kapton tape.
The beam paths are well centered. The viewport is dusty on the inside.
The motivation was to improve the oplev noise.
When I came in this morning:
Checking the status of the slow machines, it looked like c1sus, c1aux, and c1iscaux needed reboots, which I did. Still, the PMC would not lock. So I did a burtrestore, and then the PMC locked. But there seemed to be waaaaay too much motion of MCREFL, so I checked the suspension. The shadow sensor EPICS channels are reporting ~10,000 cts, while they used to be ~1000 cts. No unusual red flags on the CDS side. Everything looked nominal when I briefly came in at 6:30pm PT yesterday; not sure if anything was done with the IFO last night.
Pending further investigation, I'm leaving all watchdogs shutdown and the PSL shutter closed.
A quick look at the Sorensens in 1X6 revealed that the +/- 20V DC power supplies were current overloaded (see Attachment #1). So I set those two units to zero until we figure out what's going on. Possibly something is shorted inside the ITMX satellite box and a fuse is blown somewhere. I'll look into it more once Steve is back.
[koji, steve, gautam]
We debugged this in the following way:
So for now, the power cable to the box is disconnected on the back end. We have to pull it out and debug it at some point.
Apart from this, megatron was un-sshable so I had to hard reboot it, and restart the MCautolocker, FSSslowPy and nds2 processes on it. I also restarted the modbusIOC processes for the PSL channels on c1auxex (for which the physical Acromag units sit in 1X5 and hence were affected by our work), mainly so that the FSS_RMTEMP channel worked again. Now, IMC autolocker is working fine, arms are locked (we can recover TRX and TRY~1.0), and everything seems to be back to a nominal state. Phew.
The trillium interface box was removed from the rack.
The problem was the incorrect use of under-spec TVS (Transient Voltage Suppression) diodes (~semiconductor fuses) in the protection circuit.
The installed TVS diodes had breakdown voltages lower than the supply voltages of +/-20V. This over-voltage eventually caused the catastrophic breakdown of one of the diodes.
I don't find any particular reason to have these diodes during the laboratory use of the interface. Therefore, I've removed the TVS diodes and left them unreplaced. The circuit was tested on the bench and returned to the rack. All the cables are hooked up, and now the BRLMs look as usual.
- The board version was found to be D1000749-v2
- There was an obvious sign of burning or thermal history around the components D17 and D14. The solder of the D17 was so brittle that just a finger touch was enough to remove the component.
- These D components are TVS (Transient Voltage Suppression) diodes manufactured by Littelfuse Inc. They are a sort of surge/overvoltage protector, protecting the rest of the circuit from being exposed to excess voltage. The specified component for D17/D14 was 5.0SMMDJ20A, with a reverse standoff voltage (~operating voltage) of 20V and a breakdown voltage of 22.20V (min) to 24.50V (max). However, the spec sheet says that the marking of the proper component should be "5BEW" rather than the "DEM" visible on the component. Some searching revealed that the installed component was an SMDJ15A, which has a breakdown voltage of 16.70V~18.50V. This spec is way too low compared to the supply voltage of +/-20V.
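The spec comparison in a few lines (voltages copied from above; the check is simply whether the minimum breakdown voltage sits above the supply the diode must stand off):

```python
supply_v = 20.0  # DC supply voltage the TVS diode must stand off

# (breakdown_min, breakdown_max) in volts, from the datasheets quoted above
parts = {
    "5.0SMMDJ20A (specified)": (22.20, 24.50),
    "SMDJ15A (installed)": (16.70, 18.50),
}

verdict = {name: bd_min > supply_v for name, (bd_min, bd_max) in parts.items()}
for name, ok in verdict.items():
    print(f"{name}: {'OK' if ok else 'UNDER-SPEC'}")
```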
The idea we are going with to push the coil driver noise contribution down is to simply increase the series resistance between the coil driver board output and the OSEM coil. But there are two paths, one for fast actuation and one that provides a DC current for global alignment. I think the simplest way to reduce the noise contribution of the latter, while preserving reasonable actuation range, is to implement a precision DC high-voltage source. A candidate that I pulled off an LT application note is shown in Attachment #1.
If all this seems reasonable, I'd like to prototype this circuit and test it with ETMX, which already has the high series resistance for the fast path. So I will ask Steve to order the OpAmp and transistors.
Bah! Too complex.
The wall StripTool indicated that the IMC wasn't too happy when I came in today. Specifically:
The last time this happened, it was due to the Sorensens not spitting out the correct voltages. This time, there were no indications on the Sorensens that anything was funky. So I just disabled the MCautolocker and figured I'd debug later in the evening.
However, around 5pm, the shadow sensor values looked nominal again, and when I re-enabled the local damping, the MC REFL spot suggested that the local damping was working just fine. I re-enabled the MCautolocker, MC re-locked almost immediately. To re-iterate, I did nothing to the electronics inside the VEA. Anyways, this enabled us to work on the X arm ASS (next elog).
Independent of the problems the vertex machine has been having (I think, unless it's something happening over the shared memory network), I noticed on Friday that the ETMX watchdog was tripped. Today, once again, the ETMX watchdog was tripped. There is no evidence of any abnormal seismic activity around that time, and in any case, none of the other watchdogs tripped. Attachment #1 shows that this happened at ~8:38am PT this morning. Attachment #2 shows the 2k sensor data around the time of the trip. If the latter is to be believed, there was a big impulse in the UL shadow sensor signal which may have triggered the trip. I'll squish cables and see if that helps - Steve and I did work at the EX electronics rack (1X9) on Friday, but this problem precedes our working there...
OK, how about this:
The question still remains of how to combine the fast and bias paths in this proposed scheme. I think the following approach works for prototyping at least:
In the longer term, perhaps the Satellite Box revamp can accommodate a bias voltage summation connector.
I have neglected many practical concerns. Some things that come to mind:
Today, while Rich Abbott was here, Koji and I had a brief discussion with him about the HV amplifier idea for the coil driver bias path. He gave us some useful tips, perhaps the most useful being a topology that he used and tested for an aLIGO ITM ESD driver, which we can adapt to our application. It uses a PA95 high voltage amplifier, which differs from the PA91 mainly in the output voltage range (up to 900V for the former, "only" 400V for the latter). He agrees with the overall design idea of
He also gave some useful suggestions like
I am going to work on making a prototype version of this box for 5 channels that we can test with ETMX. I have been told that the coupling from side coil to longitudinal motion is of the order of 1/30, in which case maybe we only need 4 channels.
A brief follow-up on this since we discussed this at the meeting yesterday: the attached DV screenshot shows the full 2k data for a period of 2 seconds starting just before the watchdog tripped. It is clear that the timescale of the glitch in the UL channel is much faster (~50 ms) compared to the (presumably mechanical) timescale seen in the other channels of ~250 ms, with the step also being much smaller (a few counts as opposed to the few thousand counts seen in the UL channel, and I guess 1 OSEM count ~ 1 um). All this supports the hypothesis that the problem is electrical and not mechanical (i.e. I think we can rule out the Acromag sending a glitchy signal to the coil and kicking the optic). The watchdog itself gets tripped because the tripping condition is the RMS of the shadow sensor outputs, which presumably exceeds the set threshold when UL glitches by a few thousand counts.
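The trip hypothesis can be sketched numerically (all amplitudes, timescales, and the threshold here are illustrative assumptions; the actual watchdog implementation may differ):

```python
import numpy as np

fs = 2048                      # 2k sensor data rate
t = np.arange(0, 2.0, 1 / fs)

# Synthetic signals mimicking the observed behavior: UL glitches by a few
# thousand counts over ~50 ms, while another channel shows a ~250 ms,
# few-count mechanical response.
ul = 3000 * np.exp(-((t - 1.0) / 0.05)**2)
ur = 3 * np.exp(-((t - 1.0) / 0.25)**2)

def running_rms(x, fs, window_s=0.5):
    """RMS of x over a sliding window of window_s seconds."""
    n = int(window_s * fs)
    return np.sqrt(np.convolve(x**2, np.ones(n) / n, mode="same"))

threshold = 300  # assumed watchdog RMS trip level, counts
tripped = running_rms(ul, fs).max() > threshold  # UL glitch alone exceeds it
```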
Here is an other big one
I took another pass at this. Here is what I have now:
Attachment #1: Composite amplifier design to suppress voltage noise of PA91 at low frequencies.
Attachment #2: Transfer function from input to output.
Attachment #3: Top 5 voltage noise contributions for this topology.
Attachment #4: Current noises for this topology, comparison to current noise from fast path and slow DAC noise.
Attachment #5: LISO file for this topology.
Looks like this will do the job. I'm going to run this by Rich and get his input on whether this will work (this design has a few differences from Rich's design), and also on how to best protect from HV incidents.
I had a very fruitful discussion with Rich about this circuit today. He agreed with the overall architecture, but made the following suggestions (Attachment #1 shows the circuit with these suggestions incorporated):
If all this sounds okay, I'd like to start making the PCB layout (with 5 such channels) so we can get a couple of trial boards and try this out in a couple of weeks. Per the current threat matrix and noises calculated, coil driver noise is still projected to be the main technical noise contribution in the 40m PonderSqueeze NB (more on this in a separate elog).
Glitch, small amplitude, 350 counts & no trip.
The second big glitch tripped the ETMX suspension. There were small earthquakes around the glitches. Its damping recovered.
All suspensions tripped. Their damping was restored. The MC is locked.
ITMX-UL & side magnets are stuck.
I freed ITMX and coarsely realigned the IFO using the OPLEVs. All the alignments were a bit off from overnight.
The IFO is still only able to lock in MICH mode currently, which was the situation before the earthquake. This morning I additionally tried restoring the burt state of the four machines that had been rebooted in the last week (c1iscaux, c1aux, c1psl, c1lsc) but that did not solve it.
The M3.4 Colton earthquake did not trip any suspensions.
I've been plugging away at Altium prototyping the high-voltage bias idea, this is meant to be a progress update.
I need to get footprints for some of the more uncommon parts (e.g. PA95) from Rich before actually laying this out on a PCB, but in the meantime, I'd like feedback on (but not restricted to) the following:
I also don't have a good idea of what the PCB layer structure (2 layers? 3 layers? or more?) should be for this kind of circuit, I'll try and get some input from Rich.
*Updated with current noise (Attachment #2) at the output for this topology of series resistance of 25 kohm in this path. Modeling was done (in LTspice) with a noiseless 25kohm resistor, and then I included the Johnson noise contribution of the 25k in quadrature. For this choice, we are below 1pA/rtHz from this path in the band we care about. I've also tried to estimate (Attachment #3) the contribution due to (assumed flat in ASD) ripple in the HV power supply (i.e. voltage rails of the PA95) to the output current noise, seems totally negligible for any reasonable power supply spec I've seen, switching or linear.
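The quadrature bookkeeping for the 25 kohm path looks like this (the noiseless-model current noise value below is an illustrative placeholder, not the actual LTspice result):

```python
import numpy as np

kB, T = 1.380649e-23, 298.0
R = 25e3

i_johnson = np.sqrt(4 * kB * T / R)   # Johnson current noise of the 25k, ~0.81 pA/rtHz

# Assumed output-current noise from the LTspice model with the 25k noiseless
i_model = 0.4e-12                     # illustrative value, A/rtHz

i_total = np.sqrt(i_model**2 + i_johnson**2)
print(f"{i_total*1e12:.2f} pA/rtHz")  # stays below 1 pA/rtHz for this path
```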
As part of the preparation for the replacement of c1susaux with Acromag, I inspected the coil-OSEM transfer function measurements for the vertex SUSs.
The TFs showed the typical f^-2 with the whitening on, except for ITMY UL (Attachment 1). Gautam told me that this has been a known issue for ~5 years.
We made a thorough inspection/replacement of the components and identified the mechanism of the problem.
It turned out that the inputs to the MAX333s are as listed below.
The switching voltage for UL is obviously incorrect. We thought this came from a broken BIO board and thus swapped the corresponding board, but the issue remained. There are 4 BIO boards in total on c1sus, so maybe we replaced the wrong board?
Initially, we thought that the BIO couldn't drive the 5 kOhm pull-up resistor from 15V to 0V (= 3mA of current), so I replaced the pull-up resistor with 30 kOhm. But this did not help. These 30Ks are left on the board.
[steve, rana, gautam]
Rana pointed out that the OSEM cabling, because of the lack of plastic shielding, is grounded directly to the table on which it is resting. A glass baking dish at the base of the seismic stack prevents electrical shorting to the chamber. However, there are some LEMO/BNC cables as well on the east side of the stack, whose BNC ends are just lying on the base of the stack. We should use this opportunity to think about whether anything needs to be done, and what the influence of this kind of grounding is (if any) on actuator noise.
Steve also pointed out that we should replace the rubber pads which the vacuum chamber is resting on (Attachment #1, not from this vent, but just to indicate what's what). These serve the purpose of relieving small amounts of strain the chamber may experience relative to the beam tube, thus helping preserve the vacuum joints b/w chamber and tube. But after (~20?) years of being under compression, Steve thinks that the rubber no longer has any elasticity, and so should be replaced.
[chub, bob, gautam]
We took the heavy door off the EY chamber at ~930am.
Waiting for the table to level off now. Plan for later today / tomorrow is as follows:
While restoring OSEMs on ETMY, I noticed that the open voltages for the UR and LL OSEMs had significantly (>30%) changed from their values from ~2 years ago. The fact that it only occurred in 2 coils seemed to rule out gradual wear and tear, so I looked up the trends from Nov 25 - Nov 28 (Sundance visited on Nov 26 which is when we removed the cage). Not surprisingly, these are the exact two OSEMs that show a decrease in sensor voltage when the OSEMs were pulled out. I suspect that when I placed them in their little Al foil boats, I shorted out some contacts on the rear (this is reminiscent of the problem we had on PRM in 2016). I hope the problem is with the current buffer IC in the satellite box and not the physical diode, I'll test with the tester box and evaluate the problem further.
Chamber work by Chub and gautam:
Y arm was locked at low power in air.
We are operating with 1/10th the input power we normally have, so we expect the IR transmission of the Y arm to max out at 1 when well aligned. However, it is hovering around 0.05 right now, and the dominant source of instability is the angular motion of ETMY due to the Oplev loop being non-functional. I am hesitant to do in-chamber work without an extra pair of eyes/hands around, so I'll defer that for tomorrow morning when Chub gets in. With the cavity axis well defined, I plan to align the green beam to this axis, and use the two to confirm that we are well clear of the Parabola.
* Paola, our vertex laptop, and indeed most of the laptops inside the VEA, are not ideal for working on this kind of alignment procedure; it would be good to set up some workstations on which we can easily interact with multiple MEDM screens.
Does anyone know what the purpose of the indicated optic in Attachment #1 is? Can we remove it? It will allow a little more space around the elliptical reflector...
I don't think it was used. It is not on the diagram either. You can remove it.
After diagnosis with the tester box, as I suspected, the fully open DC voltages on the two problematic channels, LL and UR, were restored once I replaced the LM6321 ICs in those two channel paths. However, I've been puzzled by the inability to turn on the Oplev loops on ETMY. Furthermore, the DC bias voltages required to get ETMY to line up with the cavity axis seemed excessively large, particularly since we seemed to have improved the table levelling.
I suspected that the problem with the OSEMs hasn't been fully resolved, so on Thursday night, I turned off the ETMY watchdog, kicked the optic, and let it ringdown. Then I looked at the time-series (Attachment #1) and spectra (Attachment #2) of the ringdowns. Clearly, the LL channel seems to saturate at the lower end at ~440 counts. Moreover, in the time domain, it looks like the other channels see the ringdown cleanly, but I don't see the various suspension eigenmodes in any of the sensor signals. I confirmed that all the magnets are still attached to the optic, and that the EQ stops are well clear of the optic, so I'm inclined to think that this behavior is due to an electrical fault rather than a mechanical one.
For now, I'll start by repeating the ringdown with a switched out Satellite Box (SRM) and see if that fixes the problem.
Short update on latest Satellite box woes.
What's more - I did some Sat box switcheroo, swapping the SRM and ETM boxes back and forth in combination with the tester box. In the process, I seem to have broken the SRM sat box - all the shadow sensors are reporting close to 0 volts, and this was confirmed to be an electronic problem as opposed to some magnet skullduggery using the tester box. Once we get to the bottom of the ETMY sat box, we will look at SRM. This is more or less the last thing to look at for this vent - once we are happy the cavity axis can be recovered reliably, we can freeze the position of the elliptical reflector and begin the F.C.ing.
While Chub is making new cables for the EY satellite box...
While the position of the reflector could possibly be optimized further, since we are already seeing a temperature gradient on the optic, I propose pushing on with other vent activities. I'm almost certain the current positioning places the optic closer to the second focus, and we already saw shifts of the HOM resonances with the old configuration, so I'd say we run with this and revisit if needed.
If Chub gives the Sat. Box the green flag, we will work on F.C.ing the mirrors in the evening, with the aim of closing up tomorrow/Friday.
All raw images in this elog have been uploaded to the 40m google photos.
In preparation for the FC cleaning, I did the following:
Tomorrow, I will start with the cleaning of ETMY HR. While the FC is drying, I will position ITMY at the edge of the IY table for cleaning (Chub will set up the mini-cleanroom at the IY table). The plan is to clean both HR surfaces and have the optics back in place by tomorrow evening. By my count, we have done everything listed in the IY and EY chambers. I'd like to minimize the time between cleaning and pumpdown, so if all goes well (Sat Box problems notwithstanding), we will check the table leveling on Friday morning, then put on the heavy doors and at least rough the main volume down to 1 torr on Friday.
The attached photo shows the two optics with FC applied.
My original plan was to attempt to close up tomorrow. However, we are still struggling with Satellite box issues. So rather than rush it, we will attempt to recover the Y arm cavity alignment on Monday, satellite box permitting. The main motivation is to reduce the deadtime between peeling off the F.C and starting the pumpdown. We will start working on recovering the cavity alignment once the Sat box issues are solved.
Since we may want to close up tomorrow, I did the following prep work:
Rather than try and rush and close up tomorrow, I propose spending the day tomorrow cleaning the peripheral areas of the optic, suspension cage, and chamber. Then on Thursday morning, we can replace the Y-arm optics, try and recover the cavity alignment, and then aim for a Thursday afternoon pumpdown. The main motivation is to reduce the time the optics spend in air after F.C. peeling and going to vacuum.
Procedure tomorrow [comments / suggestions welcome]:
All photos have been uploaded to google photos.
Squishing cables at the ITMX satellite box seems to have fixed the wandering ITM that I observed yesterday - the sooner we are rid of these evil connectors the better.
I had changed the input pointing of the green injection from EX to mark a "good" alignment of the cavity axis, so I used the green beam to try and recover the X arm alignment. After some tweaking of the ITM and ETM angle bias voltages, I was able to get good GTRX values [Attachment #1], and also see clear evidence of (admittedly weak) IR resonances in TRX [Attachment #2]. I can't see the reflection from ITMX on the AS camera, but I suspect this is because the ITMY cage is in the way. This will likely have to be redone tomorrow after setting the input pointing for the Y arm cavity axis, but hopefully things will converge faster and we can close up sooner. Closing the PSL shutter for now...
I also rebooted the unresponsive c1susaux to facilitate the alignment work tomorrow.
[koji, chub, jon, rana, gautam]
Full story tomorrow, but we went through most of the required pre close-up checks/tasks (i.e. both arms were locked, PRC and SRC cavity flashes were observed). Tomorrow, it remains to
The ETMY suspension chain needs to be re-characterized (neither the old settings, nor a +/- 1 gain setting worked well for us tonight), but this can be done once we are back under vacuum.