As of 8pm local time, the IFO seems to have equilibrated to atmospheric pressure (I don't hear the hiss of in-rushing air near 1X8, and P1a reports 760 torr). The pumpspool looks healthy, and there are no signs in the TP diagnostics channels that anything bad happened to the pumps. Chub is working on making the N2 setup more robust; we plan to take the EY door off at 9am tomorrow morning with Bob's help.
* I took this opportunity to follow the instructions on pg 29 of the manual and set the calibration for the SuperBee Pirani gauge to 760 torr so that it is in better agreement with our existing P1a Pirani gauge. The correction was ~8% (820 --> 760 torr).
[chub, bob, gautam]
We took the heavy door off the EY chamber at ~930am.
Waiting for the table to level off now. Plan for later today / tomorrow is as follows:
The VEA is now a laser hazard area as usual; several 1064 nm lasers in the lab have been turned back on. Apart from this...
While restoring OSEMs on ETMY, I noticed that the open voltages for the UR and LL OSEMs had significantly (>30%) changed from their values from ~2 years ago. The fact that it only occurred in 2 coils seemed to rule out gradual wear and tear, so I looked up the trends from Nov 25 - Nov 28 (Sundance visited on Nov 26 which is when we removed the cage). Not surprisingly, these are the exact two OSEMs that show a decrease in sensor voltage when the OSEMs were pulled out. I suspect that when I placed them in their little Al foil boats, I shorted out some contacts on the rear (this is reminiscent of the problem we had on PRM in 2016). I hope the problem is with the current buffer IC in the satellite box and not the physical diode, I'll test with the tester box and evaluate the problem further.
Chamber work by Chub and gautam:
Perhaps the ETMY Oplev HeNe is also giving up - the power has fallen by ~30% over 1 year (Attachment #2), nearly twice as much as ETMX but the RIN spectrum (Attachment #1, didn't even need to rotate it!) certainly seems suspicious. Some "nominal" RIN levels for HeNes can be found earlier in this thread. I can't close any of the EY Oplev loops in this condition. I'll double check to make sure I'm routing the right beam onto the QPD, but if the problem persists, I'll replace the HeNe. ITMX HeNe also looks to be near EOL.
Finally I realized what was killing the ETMY Oplev laser: the wrong power supply. It was driving the HeNe laser at 600 V above the recommended voltage. The 101T-2300Vdc power supply was replaced with a 101T-1700Vdc (Uniphase model 1201-1, sn 2712420).
The 1103P laser head, sn P947049, lived for 120 days; it was replaced by sn P964431. New laser output: 2.8 mW, quadrant sum: 19,750 counts.
Y arm was locked at low power in air.
We are operating with 1/10th the input power we normally have, so we expect the IR transmission of the Y arm to max out at 1 when well aligned. However, it is hovering around 0.05 right now, and the dominant source of instability is the angular motion of ETMY due to the Oplev loop being non-functional. I am hesitant to do in-chamber work without an extra pair of eyes/hands around, so I'll defer that for tomorrow morning when Chub gets in. With the cavity axis well defined, I plan to align the green beam to this axis, and use the two to confirm that we are well clear of the Parabola.
* Paola, our vertex laptop (and indeed, most of the laptops inside the VEA) is not ideal for working on this kind of alignment procedure; it would be good to set up some workstations on which we can easily interact with multiple MEDM screens.
I replaced the BS/PRM Oplev HeNe with one of the heads from the SP table where Steve was setting up the OL RIN/pointing noise experiment. The old one was dead. The new one outputs 3.2 mW of power, I've labelled it with this number, serial number and date of replacement. The beam comes out of the vacuum chamber for both the BS and PRM, and the RIN spectra (Attachment #1) look alright. The calibration into urad and loop gains possibly have to be tweaked. Since the beam comes out of vacuum, I say that we shouldn't open the BS/PRM chamber for this vent - we don't have a proper plan for the in-air layout yet, so we can add this to the list of to-dos for the next vent.
I think we are down to our last spare HeNe head in the lab - @Chub, please look into ordering some more, the ITMX HeNe is going to need replacement soon.
Nobody documented this, but here is the part number with mechanical drawings of the elliptical reflector installed at EY: Optiforms E180. Heater is from Induceramics, but I can't find the exact part which matches the dimensions of the heater we have (diameter = 3.8mm, length = 30mm), perhaps it's a custom part?
The geometry dictates that if we want the heater to be at one focus and the ETM to be at the other, the separation has to be 7.1 inches. It certainly wasn't arranged this way before. It seems unrealistic to do this without clipping the main beam, so I propose we leave sufficient clearance between the main beam and the reflector, and accept the reduced heating efficiency.
Thanks to Steve for digging this up from his secret stash.
Steve came by the lab today, and looked at the status of the upgraded vacuum system. He recommended pumping on the RGA volume, since it has not been pumped on for ~3 months on account of the vacuum upgrade. The procedure (so we may script this operation in the future) was:
CC4 pressure has been steadily falling. Steve recommends leaving things in this state over the weekend. He recommends also turning the RGA unit on so that the temperature rises and there is a bakeout of the RGA. The temperature may be read off manually using a probe attached to it.
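Since the above flags scripting this operation in the future, here is a minimal sketch of what an EPICS-based valve-sequencing helper could look like (using pyepics). The PV names, the open/close convention, and the gauge threshold are hypothetical placeholders; the actual RGA pumping sequence is deliberately not reproduced here.

```python
# Minimal sketch of a scripted valve operation via EPICS channel access
# (pyepics). The channel names below are hypothetical placeholders -- the
# real PV names and the RGA pumping sequence should come from the vac docs.
import time
from epics import caget, caput

def open_valve(pv, confirm_pv, timeout=10):
    """Request a valve to open and verify the readback within timeout."""
    caput(pv, 1)  # 1 = open request (convention assumed)
    t0 = time.time()
    while time.time() - t0 < timeout:
        if caget(confirm_pv) == 1:
            return True
        time.sleep(0.5)
    raise RuntimeError(f"{pv} did not confirm open within {timeout} s")

# Example: check a gauge reading before opening a (hypothetical) valve
if caget("C1:Vac-CC4_pressure") < 1e-4:      # hypothetical PV name
    open_valve("C1:Vac-VM3_open", "C1:Vac-VM3_status")
```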
Does anyone know what the purpose of the indicated optic in Attachment #1 is? Can we remove it? It will allow a little more space around the elliptical reflector...
I don't think it was used. It is not on the diagram either. You can remove it.
After diagnosis with the tester box, as I suspected, the fully open DC voltages on the two problematic channels, LL and UR, were restored once I replaced the LM6321 ICs in those two channel paths. However, I've been puzzled by the inability to turn on the Oplev loops on ETMY. Furthermore, the DC bias voltages required to get ETMY to line up with the cavity axis seemed excessively large, particularly since we seemed to have improved the table levelling.
I suspected that the problem with the OSEMs hadn't been fully resolved, so on Thursday night, I turned off the ETMY watchdog, kicked the optic, and let it ring down. Then I looked at the time-series (Attachment #1) and spectra (Attachment #2) of the ringdowns. Clearly, the LL channel saturates at the lower end at ~440 counts. Moreover, in the time domain, it looks like the other channels see the ringdown cleanly, but I don't see the various suspension eigenmodes in any of the sensor signals. I confirmed that all the magnets are still attached to the optic, and that the EQ stops are well clear of the optic, so I'm inclined to think that this behavior is due to an electrical fault rather than a mechanical one.
For now, I'll start by repeating the ringdown with a switched out Satellite Box (SRM) and see if that fixes the problem.
Short update on latest Satellite box woes.
What's more - I did some Sat box switcheroo, swapping the SRM and ETM boxes back and forth in combination with the tester box. In the process, I seem to have broken the SRM sat box - all the shadow sensors are reporting close to 0 volts, and this was confirmed to be an electronic problem as opposed to some magnet skullduggery using the tester box. Once we get to the bottom of the ETMY sat box, we will look at SRM. This is more or less the last thing to look at for this vent - once we are happy the cavity axis can be recovered reliably, we can freeze the position of the elliptical reflector and begin the F.C.ing.
The N2 ran out this weekend (again with no reminder email - I haven't found the time to set up the Python mailer yet). So all the valves Steve and I had opened were closed (rightly so; that's what the interlocks are supposed to do). Chub will post an elog about the new N2 valve setup in the Drill-press room, but we now have sufficient line pressure in the N2 line again. So Chub and I re-opened the valves to keep pumping on the RGA.
While Chub is making new cables for the EY satellite box...
While the position of the reflector could possibly be optimized further, since we are already seeing a temperature gradient on the optic, I propose pushing on with other vent activities. I'm almost certain the current positioning places the optic closer to the second focus, and we already saw shifts of the HOM resonances with the old configuration, so I'd say we run with this and revisit if needed.
If Chub gives the Sat. Box the green flag, we will work on F.C.ing the mirrors in the evening, with the aim of closing up tomorrow/Friday.
All raw images in this elog have been uploaded to the 40m google photos.
For the last week, I noticed that I was unable to turn the EY chamber illuminator on using the remote python scripts. This was turning out to be really annoying, having to turn the light on/off manually. Today, I looked into the problem and found that there is a conflict between the IP addresses of the EY Ethernet strip (which Chas assigned a static IP, but did not document in any detailed procedure) and the vertex area laptop, paola. The failure of the python control of the power strip coincided exactly with when Chub and I turned on paola for working at the IY chamber - but how was I supposed to know these events were correlated? I tried shutting down paola, power cycling the Ethernet power strip, and restarting the bind9 services on chiara, but remote control of the ethernet power strip remains elusive. I suspect reconfiguring the static IP for the Ethernet switch will require some serial port enabled device...
In preparation for the FC cleaning, I did the following:
Tomorrow, I will start with the cleaning of ETMY HR. While the FC is drying, I will position ITMY at the edge of the IY table for cleaning (Chub will set up the mini-cleanroom at the IY table). The plan is to clean both HR surfaces and have the optics back in place by tomorrow evening. By my count, we have done everything listed in the IY and EY chambers. I'd like to minimize the time between cleaning and pumpdown, so if all goes well (Sat Box problems notwithstanding), we will check the table leveling on Friday morning, put on the heavy doors, and at least rough the main volume down to 1 torr on Friday.
The attached photo shows the two optics with FC applied.
My original plan was to attempt to close up tomorrow. However, we are still struggling with Satellite box issues. So rather than rush it, we will attempt to recover the Y arm cavity alignment on Monday, satellite box permitting. The main motivation is to reduce the dead time between peeling off the F.C. and starting the pumpdown. We will start working on recovering the cavity alignment once the Sat box issues are solved.
I had taken Satellite box S/N 102, from the SRM suspension, down to the Y-end as part of debugging. However, at some point, I stopped getting readbacks from the shadow sensor PDs, even with the Sat. Box tester hooked up (so as to rule out anything funky with the actual OSEMs). Today evening, I did a more systematic investigation. Schematic with component references is here.
The question remains as to what caused this failure mode - I can't think of why that particular IC was damaged during the Satellite box swapping process - is this indicative of some problem elsewhere in the ETMY OSEM/coil driver electronics chain?
To avoid the annoying exercise of having to manually toggle the illuminators, I solved the IP conflict. I made a wiki page for the ethernet power strips, since the documentation was woeful (the way the power strips are mounted in the racks, you can't even see the manufacturer/model/make). All chamber illuminators can now be turned on/off by the MEDM scripts. Note that there is a web interface available too, which can be useful in case of python socket issues. The main lesson is: avoid using the "reset" button on the power strips, as it destroys the static IP config.
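For reference, here is a minimal sketch of how an outlet might be toggled through the strip's web interface from Python, should the socket interface act up. The IP address, URL scheme, and outlet numbering below are assumptions, not the strip's verified API - the wiki page mentioned above documents the real interface.

```python
# Minimal sketch of toggling an outlet via the power strip's web interface.
# The IP, URL format, and outlet numbers are hypothetical placeholders.
import urllib.request

STRIP_IP = "192.168.113.100"   # placeholder static IP

def set_outlet(outlet, state):
    """state: 'ON' or 'OFF' (command format assumed, not verified)."""
    url = f"http://{STRIP_IP}/outlet?{outlet}={state}"
    with urllib.request.urlopen(url, timeout=5) as resp:
        return resp.status == 200

set_outlet(3, "ON")   # e.g. the EY illuminator (outlet number assumed)
```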
Unrelated to this work: The EY laptop, asia, won't boot up anymore, with a "Fan Error" message being the red flag. I've temporarily recommissioned the vacuum rack laptop, belladonna, to be the EY machine for this vent. Can we get 3 netbooks that actually work and don't need to be tethered to a power strip for the VEA?
I reset the remote of this git repo to the 40m version instead of Jon's personal one, to ensure consistency between what's on the vacuum machine and in the git repo. There is now a N2 checker python mailer that will email the 40m list if all the tank pressures are below 600 PSI (>12 hours left for someone to react before the main N2 line pressure drops and the interlocks kick in). For now, the script just runs as a cron job every 3 hours, but perhaps we should integrate it with the interlock process?
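A minimal sketch of what such a checker/mailer cron job might look like; the PV names and the mailing-list address are placeholders (the real script lives in the vacuum git repo), and a local MTA is assumed on c1vac.

```python
# Minimal sketch of the N2 tank checker/mailer (cron job on c1vac).
# Channel names and the list address are hypothetical placeholders.
import smtplib
from email.message import EmailMessage
from epics import caget   # pyepics, assumed installed on c1vac

TANKS = ["C1:Vac-N2T1_pressure", "C1:Vac-N2T2_pressure"]  # placeholder PVs
THRESHOLD_PSI = 600.0

pressures = {pv: caget(pv) for pv in TANKS}
if all(p is not None and p < THRESHOLD_PSI for p in pressures.values()):
    msg = EmailMessage()
    msg["Subject"] = "N2 tanks below 600 PSI -- swap within ~12 hours"
    msg["From"] = "controls@c1vac"
    msg["To"] = "40m@example.com"          # placeholder list address
    msg.set_content("\n".join(f"{pv}: {p:.0f} PSI"
                              for pv, p in pressures.items()))
    with smtplib.SMTP("localhost") as s:   # assumes a local MTA
        s.send_message(msg)
```

A crontab line of the form `0 */3 * * * <python> <script>` gives the every-3-hours cadence mentioned above.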
All the python code running on c1vac is archived to the git repo:
The foam in the cable tray wall passage had been falling on the floor in little bite-sized pieces, so I investigated and found a fiber cable that had been chewed/clawed through. I didn't find any droppings anywhere in the 40m, but I decided to bait an un-set trap and see if we'd find activity around it. There has been none so far. If there is still none tomorrow, I will move the trap and keep looking for signs of rodentia. At the moment, the trap is in a box in front of the double doors at the north end of the control room. Next it will be placed in the IFO room, up in the cable tray.
gautam: the fiber that was damaged was the one from the LSC rack FiBox to the control room FiBox. So no DAFI action for a bit...
[chub, koji, gautam]
Attachment #1 shows the signal routing near the Satellite box. Somehow, the female 64 pin IDC connector that brings the signals from the coil driver board wasn't mating well with the male connector on the Satellite box front panel. This is a connector-specific problem - plugging the female end into one of the male connectors inside the Satellite box yielded signal continuity. The problem was resolved by re-making both connections - by driving the EPICS bias slider through its full range, we were able to see the full voltage swing at the DB connectors going to the flange.
This kind of flakiness could be all around the lab, and could be responsible for many of the suspension "mysteries". To re-iterate, the problem seems to be the way the female sockets of the connector mate with the male pins - while the actual crimp points may look secure, there may not be signal continuity.
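For the record, the slider-sweep continuity check described above is easy to script; a sketch follows, with the bias PV name and slider range assumed rather than taken from the actual procedure.

```python
# Sketch of the continuity check: sweep the bias slider through its full
# range while watching the voltage at the flange-side DB connector with a
# DMM/scope. PV name and range are hypothetical placeholders.
import time
from epics import caput

BIAS_PV = "C1:SUS-ETMY_PIT_COMM"   # hypothetical bias-slider PV
for v in range(-10, 11, 2):        # full slider range, volts (assumed)
    caput(BIAS_PV, v)
    time.sleep(2)                  # leave time to read the meter
caput(BIAS_PV, 0)                  # restore nominal
```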
Now that this problem is resolved, tomorrow we will recover the cavity alignment and possibly start a pumpdown.
Unrelated to this work - the spare satellite box (S/N #100), which had a note on it that said "low voltages", was tested. The "low voltages" referred to the OSEM shadow sensor voltages being low when the LED was completely unobscured. The reason was that the mod to increase the drive current to 25 mA had not yet been implemented on this unit. I added the appropriate 806 ohm resistors, and verified that the voltages were correct, so now we have a working spare. It is stored in the "photodiode" cabinet along the east arm, together with the tester boxes.
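A quick Ohm's-law sanity check on the 806 ohm value, assuming the mod sets the LED drive current as I = V_set/R; the ~20 V implied set voltage is an inference, not something read off the schematic.

```python
# Ohm's-law sanity check (assumes the LED current is set as I = V_set / R;
# the ~20 V reference implied below is an inference, not from the schematic)
R = 806.0    # ohms, resistor added for the drive-current mod
I = 25e-3    # amps, target LED drive current
print(f"Implied set voltage: {R * I:.1f} V")   # -> 20.2 V
```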
Since we may want to close up tomorrow, I did the following prep work:
Rather than try and rush and close up tomorrow, I propose spending the day tomorrow cleaning the peripheral areas of the optic, suspension cage, and chamber. Then on Thursday morning, we can replace the Y-arm optics, try and recover the cavity alignment, and then aim for a Thursday afternoon pumpdown. The main motivation is to reduce the time the optics spend in air after F.C. peeling and going to vacuum.
Procedure tomorrow [comments / suggestions welcome]:
All photos have been uploaded to google photos.
Squishing cables at the ITMX satellite box seems to have fixed the wandering ITM that I observed yesterday - the sooner we are rid of these evil connectors the better.
I had changed the input pointing of the green injection from EX to mark a "good" alignment of the cavity axis, so I used the green beam to try and recover the X arm alignment. After some tweaking of the ITM and ETM angle bias voltages, I was able to get good GTRX values [Attachment #1], and also see clear evidence of (admittedly weak) IR resonances in TRX [Attachment #2]. I can't see the reflection from ITMX on the AS camera, but I suspect this is because the ITMY cage is in the way. This will likely have to be redone tomorrow after setting the input pointing for the Y arm cavity axis, but hopefully things will converge faster and we can close up sooner. Closing the PSL shutter for now...
I also rebooted the unresponsive c1susaux to facilitate the alignment work tomorrow.
[koji, chub, jon, rana, gautam]
Full story tomorrow, but we went through most of the required pre close-up checks/tasks (i.e. both arms were locked, PRC and SRC cavity flashes were observed). Tomorrow, it remains to
The ETMY suspension chain needs to be re-characterized (neither the old settings, nor a +/- 1 gain setting worked well for us tonight), but this can be done once we are back under vacuum.
[Attachment #1]: ITMY HR face after cleaning. I determined this to be sufficiently clean and re-installed the optic.
[Attachment #2]: ETMY HR face after cleaning. This is what the HR face looks like after 3 rounds of First-Contact application. After the first round, we noticed some arc-shaped lines near the center of the optic's clear aperture. We were worried this was a scratch, but we now believe it to be First-Contact residue, because we were able to remove it after drag wiping with acetone and isopropanol. However, we mistrust the quality of the solvents used - they are not any special dehydrated kind, and we are looking into acquiring some dehydrated solvents for future cleaning efforts.
[Attachment #3]: Top view of ETMY cage meant to show increased clearance between the IFO axis and the elliptical reflector.
Many more photos (including table leveling checks) on the google-photos page for this vent. The estimated time between F.C. peeling and pumpdown is ~24 hours for ITMY and ~15 hours for ETMY, but for the former, the heavy doors were put on ~1 hour after the peeling.
The first task is to fix the damping of ETMY.
[jon, koji, gautam]
I'm leaving all suspension watchdogs tripped over the weekend as part of the suspension diagonalization campaign...
The pressure of the main volume increased from ~1 mtorr to 50 mtorr over the past 24 hours (86 ksec). This rate is about 1000x the number reported on Jan 10. Do we suspect a vacuum leak?
Overnight, the pressure increased from 247 uTorr to 264 uTorr over a period of 30000 seconds. Assuming an IFO volume of 33,000 liters, this corresponds to an average leak rate of ~20 uTorr L / s.
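The arithmetic behind that number, for checking (leak rate Q = V * dP/dt):

```python
# Worked version of the leak-rate estimate quoted above.
dP = (264 - 247) * 1e-6      # torr, overnight pressure rise
dt = 30000.0                 # s
V = 33000.0                  # liters, assumed IFO volume
Q = dP * V / dt              # torr*L/s
print(f"Leak rate ~ {Q*1e6:.0f} uTorr*L/s")   # -> ~19, i.e. ~20 uTorr*L/s
```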
I looked into this a bit today. Did a walkthrough of the lab, didn't hear any obvious hissing (makes sense, that presumably would signal a much larger leak rate).
Attachment #1: Data from the 30 ksec we had the main vol valved off on Jan 10, but from the gauges we have running right now (the CC gauges have not had their HV enabled yet so we don't have that readback).
Attachment #2: Data from ~150 ksec from Friday night till now.
Interpretation: The number quoted from Jan 10 is from the cold-cathode gauge (~20 utorr increase). In the same period, the Pirani gauge reports an increase of ~5 mtorr (=250x the number reported by the cold-cathode gauge). So which gauge do we trust more in this regime? Additionally, the rates at which the annuli pressures are increasing seem consistent between Jan 10 and now, at ~100 mtorr every 30 ksec.
I don't think this is conclusive, but at least the leak rates between Jan 10 and now don't seem that different for the annuli pressures. Moreover, for the Jan 10 pumpdown, we had had the IFO at low pressure for several days over the Christmas break, which presumably gave time for some outgassing that was then cleaned up by the TPs on Jan 10, whereas for this current pumpdown, we don't have that luxury.
Do we want to do a systematic leak check before resuming the pumpdown on Monday? The main differences in vacuum I can think of are
This entry by Steve says that the "expected" outgassing rate is 3-5 mtorr per day, which doesn't match either the current observation or that from Jan 10.
We can pump down (or vent) the annuli. If there is a leak between the main volume and the annuli, we will see the effect on the leak rate. If it is a leak through an outer O-ring, pumping down (or venting) the annuli should likewise temporarily decrease (or increase) the leak rate, I guess. If the leak rate does not depend on the pressure of the annuli, we can conclude that it is internal outgassing.
As planned, we valved off the main volume and the annuli from the turbo-pumps at ~730 PM PST. At this time, the main volume pressure was 30 uTorr. It started rising at a rate of ~200 uTorr/hr, which translates to ~5 mtorr/day - in the ballpark of what Steve said is "normal". However, the calibration of the Hornet gauge seems to be piecewise-linear (see Attachment #1), so we will have to observe overnight to get a better handle on this number (a correction sketch follows below).
We decided to vent the IY and EY chamber annular volumes, and check if this made any dramatic change in the rate of the main volume pressure increase, which would presumably signal a leak from the outside. However, we saw no such increase - so right now, the working hypothesis is still that the main volume pressure increase is being driven by outgassing of something inside the vacuum.
Let's leave things in this state overnight - V1 and V5 closed so that neither the main volume nor the annuli are being pumped, and get some baseline numbers for what the outgassing rate is.
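On the Hornet's piecewise-linear calibration noted above: if we tabulate a few reference points from the overnight data, readings can be corrected by simple interpolation. The breakpoints below are made-up placeholders, pending that data.

```python
# Piecewise-linear correction of gauge readings via interpolation.
# The calibration breakpoints below are placeholders, not measured values.
import numpy as np

raw = np.array([1e-5, 1e-4, 1e-3])      # gauge readings, torr (placeholders)
true = np.array([8e-6, 9e-5, 1.1e-3])   # reference pressures (placeholders)

def corrected(p_raw):
    """Map a raw Hornet reading onto the reference pressure scale."""
    return np.interp(p_raw, raw, true)
```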
I looked at the free-swinging sensor data from two nights ago, and am struggling with the interpretation.
[Attachment #1] - Fine resolution spectral densities of the 5 shadow sensor signals (y-axis assumes 1 ct ~ 1 um). The puzzling feature is that there are only 3 resonant peaks visible around the 1 Hz region, whereas we would expect 4 (PIT, YAW, POS and SIDE). AFAIK, Lydia looked into the ETMY suspension diagonalization last, in 2016. Compared to her plots (which are in the Euler basis while mine are in the OSEM basis), the ~0.73 Hz peak is nowhere to be seen. I also think the frequency resolution (<1 mHz) is good enough to resolve two closely spaced peaks (see the spectral-estimation sketch after the attachment list), so it looks like, for some reason (mechanical or otherwise), there are only 3 independent modes being sensed around 1 Hz.
[Attachment #2] - Koji arrived and we looked at some transfer functions to see if we could make sense of all this. During this investigation, we also think that the UL coil actuator electronics chain has some problem. This test was done by driving the individual coils and looking for the 1/f^2 pendulum transfer function shape in the Oplev error signals. The ~ 4dB difference between UR/LL and LR is due to a gain imbalance in the coil output filter bank, once we have solved the other problems, we can reset the individual coil balancing using this measurement technique.
[Attachment #3] - Downsampled time-series of the data used to make Attachment #1. The ringdown looks pretty clean, I don't see any evidence of any stuck magnets looking at these signals. The X-axis is in kilo-seconds.
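For completeness, a sketch of the kind of PSD estimate behind Attachment #1. The point about frequency resolution: df = fs/nperseg, so resolving peaks spaced by <1 mHz requires segments longer than ~1000 s. The sampling rate and segment length below are assumptions, and the placeholder data stands in for the actual ringdown time series.

```python
# Sketch of a fine-resolution spectral estimate of a free-swinging OSEM
# signal. fs and T_seg are assumed; 'data' is a placeholder for the real
# ringdown time series.
import numpy as np
from scipy import signal

fs = 256.0                        # Hz, assumed sensor sampling rate
T_seg = 2048.0                    # s per segment -> df ~ 0.5 mHz
data = np.random.randn(int(10 * T_seg * fs))   # placeholder time series

f, Pxx = signal.welch(data, fs=fs, nperseg=int(T_seg * fs))
asd = np.sqrt(Pxx)                # cts/rtHz; ~1 ct ~ 1 um per the note above
```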
We found that the POS and SIDE local damping loops do not result in instability building up. So one option is to use only Oplevs for angular control, while using shadow-sensor damping for POS and SIDE.
I guess we forgot to close V5, so we were indeed pumping on the ITMY and ETMY annuli; the other three were isolated and suggest a leak rate of ~200-300 mtorr/day, see Attachment #1 (consistent with my earlier post).
As for the main volume - according to CC1, the pressure saturates at ~250 uTorr and is stable, while the Pirani P1a reports ~100x that pressure. I guess the cold-cathode gauge is supposed to be more accurate at low pressures, but how well do we believe the calibration on either gauge? Either way, based on last night's test (see Attachment #2), we can set an upper limit of 12 mtorr/day. This is 2-3x the number Steve said is normal, but perhaps this is down to the fact that the outgassing from the main volume is higher immediately after a vent and in-chamber work. It is also a 5x lower rate of pressure increase than what was observed on Feb 2.
I am resuming the pumpdown with the turbo-pumps; let's see how long we take to get down to the nominal operating pressure of 8e-6 torr - it usually takes ~1 week. V1, VASV, VASE and VABS were opened at 1030am PST. Per Chub's request (see #14435), I ran RP1 and RP3 for ~30 seconds; he will check if the oil level has changed.
I added lubricating oil to roughing pumps RP1 and RP3 yesterday and this morning. Also, I found a nearly full 5 gallon jug of grade 19 oil in the lab. This should set us up for quite a while. If you need to add oil to the roughing pumps, use the oil in the quart bottle in the flammables cabinet. It is labeled as Leybold HE-175 Vacuum Pump Oil. This bottle is small enough to fill the pumps in close quarters.
Pumpdown looks healthy, so I'm leaving the TPs on overnight. At some point, we should probably get the RGA going again. I don't know that we have a "reference" RGA trace that we can compare the scan to; I should check with Steve. The high power (1 W) beam has not yet been sent into the vacuum; before that, we should probably add an interlock condition that closes the PSL shutter.
The Central Plant building will be undergoing seismic upgrades in the near future. The adjoining north wall along the Y arm will be the first to have this work done, from inside the Central Plant. Project manager Eugene Kim has explained the work to me and also noted our concerns. He assured me that the seismic noise from the construction will be minimized and we will always be contacted when the heaviest construction is to be done.
Tomorrow at 11am, I will bring Mr. Kim and a few others from the construction team to look at the wall from inside the lab. If you have any questions or concerns that you want to have addressed, please email them to me or contact Mr. Kim directly at x4860 or through email at email@example.com .
[chub, steve, gautam]
Steve came by the lab today. He advised us to turn the RGA on again, now that the main volume pressure is < 20 uTorr. I did this by running the RGAset.py script on c0rga - the temperature of the unit was 22C in the morning, after ~3 hours of the filament being turned on, the temperature has already risen to 34 C. Steve says this is normal. We also opened VM1 (I had to edit the interlocks.yaml to allow VM1 to open when CC1 < 20uTorr instead of 10uTorr), so that the RGA volume is exposed to the main volume. So the nightly scans should run now, Steve suggests ignoring the first few while the pumpdown is still reaching nominal pressure. Note that we probably want to migrate all the RGA stuff to the new c1vac machine.
Other notes from Steve:
The full 1 W is again being sent into the IMC. We have left the PBS+HWP combo installed as Rana pointed out that it is good to have polarization control after the PMC but before the EOM. The G&H mirror setup used to route a pickoff of the post-EOM beam along the east edge of the PSL table to the AUX laser beat setup was deemed too flaky and has been bypassed. Centering on the steering mirror and subsequently the IMC REFL photodiode was done using an IR viewer - this technique allows one to geometrically center the beam on the steering mirror and PD, to the resolution of the eye, whereas the voltage maximization technique using the monitor port and an o'scope doesn't allow the former. Nominal IMC transmission of ~15,000 counts has been recovered, and the IMC REFL level is also around 0.12, consistent with the pre-vent levels.
I did some tests of the electronics chain today.
I hypothesised a bad connection between the sat box output J1 and the flange connection cable. Indeed, measuring the OSEM inductance from the DSUB end at the coil-driver board, the UL coil pins showed no inductance reading on the LCR meter, whereas the other 4 coils showed numbers between 3.2-3.3 mH. Suspecting the satellite box, I swapped it out for the spare (S/N 100). This seemed to do the trick: all 5 coil channels read out ~3.3 mH on the LCR meter when measured from the coil driver board end. What's more, the damping behavior seemed more predictable - in fact, Rana found that all the loops were heavily overdamped. For our suspensions, we want something close to critical damping - overdamping imparts excess displacement noise to the optic, while underdamping doesn't suppress the resonances effectively. In past elogs, I've seen a directive to aim for Q~5 for the pendulum resonances, so when someone does a systematic investigation of the suspensions, this will be something to look out for. These flaky connectors are proving pretty troublesome; let's start testing out some prototype new Sat Boxes with a better connector solution. I think it's equally important to have a properly thought out monitoring connector scheme, so that we don't have to frequently plug/unplug connectors in the main electronics chain, which may lead to wear and tear.
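Some quick numbers behind the Q~5 target, assuming a ~1 Hz pendulum mode; the amplitude ringdown time constant is tau = 2Q/omega0 = Q/(pi*f0).

```python
# Ringdown time constant vs. Q for a ~1 Hz pendulum mode:
# tau = Q / (pi * f0). Q=5 damps in a couple of seconds; an undamped
# mode (Q of several hundred) rings for minutes, hence the local damping.
import numpy as np

f0 = 1.0    # Hz, typical SOS pendulum mode frequency (assumed)
for Q in [5, 50, 500]:
    tau = Q / (np.pi * f0)
    print(f"Q = {Q:4d}: tau ~ {tau:6.1f} s")
```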
The input and output matrices were reset to their "naive" values - unfortunately, two eigenmodes still seem to be degenerate to within 1 mHz, as can be seen from the below spectra (Attachment #1). Next step is to identify which modes these peaks actually correspond to, but if I can lock the arm cavities in a stable way and run the dither alignment, I may prioritize measurement of the loss. At least all the coils show the expected 1/f**2 response at the Oplev error point now. The coil output filter gains varied by ~ factor of 2 among the 4 coils, but after balancing the gains, they show identical responses in the Oplev - Attachment #2.
As it turns out, now ITMY has a tendency to get stuck. I found it MUCH more difficult to release the optic using the bias jiggling technique, it took me ~ 2 hours. Best to avoid c1susaux reboots, and if it has to be done, take precautions that were listed for ITMX - better yet, let's swap out the new Acromag chassis ASAP. I will do the arm locking tests tomorrow.
Several housekeeping tasks were carried out today in preparation for the Y-arm loss measurement.
Rich came by the 40m to photocopy some pages from Hobbs, and saw me working on the 60 Hz hunting. As I suspected, the problem was being generated in the D040060. This board receives the photodiode signal single-ended, but has a different power ground than the photodiode (even though the PD is plugged into a power strip that claims to come from 1Y4). The mechanism is not entirely clear - the presence of these 60 Hz features seemed to be dependent on the light level on the TRY photodiode (i.e. they were absent when the PSL shutter is closed, and were more prominent when TRY was 0.9 rather than 0.5) but the PD certainly wasn't saturated - the DC signal was only ~100 mV when viewed on a scope. In any case, Rich suggested the simplest test would be to ground the BNC shield bringing TRY to the rack, to the local ground on the board, which I did using a crocodile clip. This did the trick, the TRY signal RMS is now dominated by the ~1 Hz seismic-driven variation.
On a more pessimistic note - it looks like moving the elliptical reflector did not work, and the clipping in the Y arm persists. I am able to recover TRY~1 with the yaw offset on the ETM (which is still lower than the 1.06-1.07 Koji reported in Aug 2018, but I can believe that being down to the MC transmission being a few % lower, at 15000 cts rather than 15500), while the maximum I see without it is ~0.9. This is puzzling, because when the chamber was open, we saw that there was ~1.5" clearance between the edge of the reflector and the beam on an IR card. I suppose the input pointing could have been off by a small amount. So one of the primary vent objectives wasn't achieved... But I will push ahead with the loss measurement.
Since we changed the HeNe, I updated the calibration factors, and accepted the changes in the SDF.
One of the XT1111 units (XT1111a) in the new vacuum system has malfunctioned. So all valves are closed, PSL shutter is also closed, until this is resolved.
Pressure of the main volume seems to have stabilized - see Attachment #3, so it should be fine to leave the IFO in this state overnight.
The whole point of the upgrade was to move to a more reliable system - but it seems quite flaky already.
I sent Gautam instructions to first try stopping the modbus service, power cycling the Acromag chassis, then restarting the service. I've seen the Acromags go into an unresponsive state after a strong electrical transient or shorted signal wires, and the unit has to be power cycled to be reset.
If this doesn't resolve it, I'll come in tomorrow to help with the Acromag replacement. We have plenty of spares.
The pressure is still 2e-4 torr according to CC1 so I thought I'd give ASS debugging a go tonight. But the arm transmission signal isn't coming through to the LSC model from the end PDs - so a resurfacing of this problem. Rebooting the sender model, c1scy, did not fix the problem. Moreover, c1susaux is dead. The last time I rebooted it, ITMY got stuck so I'm not going to attempt a revival tonight.
The problem encountered with the vac controls was indeed resolved via the recommendation I posted yesterday. The Acromags had gone into a protective state (likely caused by an electrical transient in one of the signals) that could only be cleared by power cycling the units. After resetting the system, the main volume pressure dropped quickly and is now < 2e-5 torr, so normal operations can resume. For future reference, below is the procedure to safely reset these units from a trouble state.