While working on the vac controls today, I also took care of some of the remaining to-do items. Below is a summary of what was done, and what still remains.
The acromags are on the UPS. I suspect the transient came in on one of the signal lines. Chub tells me he unplugged one of the signal cables from the chassis around the time things died on Monday, although we couldn't reproduce the problem doing that again today.
In this situation it wasn't the software that died, but the acromag units themselves. I have an idea to detect future occurrences using a "blinker" signal: one acromag outputs a periodic signal which is directly sensed by another acromag. This can be implemented as another polling condition enforced by the interlock code.
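A minimal sketch of what such a polling condition could look like (the channel names, timing, and interlock action here are placeholders, not the actual interlock code):

```python
import time
from epics import caget, caput   # pyepics, as used by the rest of the vac controls

BLINKER_OUT = 'C1:VAC-BLINKER_OUT'   # hypothetical Acromag DAC channel
BLINKER_IN = 'C1:VAC-BLINKER_MON'    # hypothetical readback on a second Acromag

def blinker_alive(n_toggles=5, period=1.0):
    """Drive a square wave on one Acromag and check that the readback on the
    other Acromag follows. If the readback stops changing, assume the units
    have frozen and let the interlock take its usual protective action."""
    ok = True
    for i in range(n_toggles):
        caput(BLINKER_OUT, i % 2)
        time.sleep(period)
        if caget(BLINKER_IN) != i % 2:
            ok = False
    return ok

# inside the interlock polling loop, something like:
# if not blinker_alive():
#     trigger_interlock('acromag blinker lost')   # hypothetical existing handler
```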
If the acromags lock up whenever there is an electrical spike, shouldn't we have them on UPS to smooth out these ripples? And wasn't the idea to have some handshake/watchdog system to avoid silently dying computers?
A more permanent fix than a crocodile clip was implemented. Should probably look to do this for the X end unit as well.
I restarted c1scy, c1rfm (so both sender and receiver models were cycled) and power-cycled the c1iscey and c1sus machines. The TRY PD is certainly seeing light - it is just not getting piped over to c1rfm. dmesg doesn't give any clues. I'm out of ideas.
P.S. The new reality seems to be that getting ITMY stuck in the event of a c1susaux reboot is inevitable. As is the practice for ITMX, I tried slowly ramping the PIT and YAW biases to 0 - but in the process of ramping YAW to 0, the optic got stuck. I am ramping in steps of 0.1 (in units of the PIT/YAW sliders), waiting ~3 seconds between steps; I guess I can try ramping even more slowly.
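For reference, the slow ramp I'm doing is essentially the following (a sketch; the channel names, step size and delay are illustrative, not the exact values I used):

```python
import time
import numpy as np
from epics import caget, caput

def ramp_to_zero(channel, step=0.1, wait=3.0):
    """Ramp an alignment bias slider to zero in small steps, pausing between
    steps so the optic can follow without getting kicked into the stops."""
    val = caget(channel)
    if val == 0:
        return
    for v in np.arange(val, 0, -np.sign(val) * step):
        caput(channel, v)
        time.sleep(wait)
    caput(channel, 0.0)

# hypothetical channel names, just to illustrate the usage:
# ramp_to_zero('C1:SUS-ITMY_PIT_COMM', step=0.05, wait=5.0)
# ramp_to_zero('C1:SUS-ITMY_YAW_COMM', step=0.05, wait=5.0)
```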
Update: I power cycled the physical RFM switch. This necessitated reboot of all vertex FEs. But seems like things are back to normal now...
Note: to unstick ITMY, the best approach seems to be:
The pressure is still 2e-4 torr according to CC1 so I thought I'd give ASS debugging a go tonight. But the arm transmission signal isn't coming through to the LSC model from the end PDs - so a resurfacing of this problem. Rebooting the sender model, c1scy, did not fix the problem. Moreover, c1susaux is dead. The last time I rebooted it, ITMY got stuck so I'm not going to attempt a revival tonight.
The problem encountered with the vac controls was indeed resolved via the recommendation I posted yesterday. The Acromags had gone into a protective state (likely caused by an electrical transient in one of the signals) that could only be cleared by power cycling the units. After resetting the system, the main volume pressure dropped quickly and is now < 2e-5 torr, so normal operations can resume. For future reference, below is the procedure to safely reset these units from a trouble state.
From the measurements I have, the Y arm loss is estimated to be 58 +/- 12 ppm. The quoted values are the median (50th percentile) and the distance to the 25th and 75th quantiles. This is significantly worse than the ~25 ppm number Johannes had determined. The data quality is questionable, so I would want to get some better data and run it through this machinery and see what number that yields. I'll try and systematically fix the ASS tomorrow and give it another shot.
Model and analysis framework:
Johannes and I have cleaned up the equations used for this calculation - while we may make more edits, the v1 of the document lives here. The crux of it is that we would like to measure the ratio of the power reflected from the resonant cavity to the power reflected from just the ITM (ETM misaligned). This quantity can then be used to back out the round-trip loss in the resonant cavity, given the further model parameters, which are:
If we ignore the 3rd for a start, we can calculate the "expected" value of this ratio as a function of the round-trip loss, for some assumed uncertainties on the above-mentioned model parameters. This is shown in the top plot in Attachment #1, and while this was generated using emcee, it is consistent with the first-order uncertainty propagation based result I posted in my previous elog on this subject. The actual samples of the model parameters used to generate these curves are shown in the bottom plot. What this is telling us is that even if we have no measurement uncertainty on the ratio itself, the systematic uncertainties are of the order of 5 ppm, for the assumed variation in model parameters.
The same machinery can be run backwards - assuming we have multiple measurements of this ratio, we then also have a sample variance. The uncertainty on the sample variance estimator is also known, and serves to quantify the prior distribution on this parameter for our Monte-Carlo sampling. The parameter itself is required to quantify the likelihood of a given set of model parameters, given our measurement. For the measurements I did this week, I plugged in my best estimate of this sample variance and, assuming uncorrelated Gaussian uncertainties on the model parameters, backed out the posterior distributions.
For convenience, I separate the parameters into two groups - (i) All the model parameters excluding the RT loss, and (ii) the RT loss. Attachment #2 and Attachment #3 show the priors (orange) and posteriors (black) of these quantities.
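For concreteness, here is a stripped-down sketch of what the sampling step looks like (the numbers and the reflected-power-ratio expression are placeholders/simplifications; the real likelihood and priors are in the document linked above):

```python
import numpy as np
import emcee

# placeholder measurement: reflected-power ratio (locked / ITM only) and its sample std
ratio_meas, ratio_sigma = 0.990, 1e-3

def model_ratio(theta):
    """Simplified stand-in for the reflected-power-ratio expression in the note:
    over-coupled cavity, ETM transmission lumped in with the round-trip loss."""
    T1, T2, mm, loss = theta
    r_res = (T1 - (loss + T2)) / (T1 + (loss + T2))   # amplitude reflectivity on resonance
    return 1 - mm * (1 - r_res**2)

def log_prob(theta):
    T1, T2, mm, loss = theta
    if not (0 < loss < 500e-6 and 0 < mm <= 1):
        return -np.inf
    # Gaussian priors on the model parameters (illustrative widths), flat prior on the loss
    lp = -0.5 * (((T1 - 1.4e-2) / 1e-4)**2 + ((T2 - 14e-6) / 2e-6)**2 + ((mm - 0.95) / 0.05)**2)
    return lp - 0.5 * ((model_ratio(theta) - ratio_meas) / ratio_sigma)**2

ndim, nwalkers = 4, 32
p0 = np.array([1.4e-2, 14e-6, 0.95, 50e-6]) * (1 + 1e-3 * np.random.randn(nwalkers, ndim))
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, 5000)
loss_ppm = sampler.get_chain(discard=1000, flat=True)[:, -1] / 1e-6
print(np.percentile(loss_ppm, [25, 50, 75]))   # quote median and quartiles, as in the text
```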
I'm posting this so that the experts on MC analysis can correct me where I'm wrong.
I sent Gautam instructions to first try stopping the modbus service, power cycling the Acromag chassis, then restarting the service. I've seen the Acromags go into an unresponsive state after a strong electrical transient or shorted signal wires, and the unit has to be power cycled to be reset.
If this doesn't resolve it, I'll come in tomorrow to help with the Acromag replacement. We have plenty of spares.
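For the record, the reset sequence can be wrapped in a small script on c1vac along these lines (the systemd unit name is a guess; check the actual service name before using anything like this):

```python
import subprocess

MODBUS_SERVICE = 'modbusIOC.service'   # assumed name -- verify with `systemctl list-units`

def reset_acromags():
    """Stop the modbus IOC, wait for the Acromag chassis to be manually power
    cycled, then bring the IOC back up -- the sequence described above."""
    subprocess.run(['sudo', 'systemctl', 'stop', MODBUS_SERVICE], check=True)
    input('Modbus stopped. Power cycle the Acromag chassis now, then hit Enter...')
    subprocess.run(['sudo', 'systemctl', 'start', MODBUS_SERVICE], check=True)
    subprocess.run(['systemctl', 'status', '--no-pager', MODBUS_SERVICE])

if __name__ == '__main__':
    reset_acromags()
```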
One of the XT1111 units (XT1111a) in the new vacuum system has malfunctioned. So all valves are closed, PSL shutter is also closed, until this is resolved.
Pressure of the main volume seems to have stabilized - see Attachment #3, so it should be fine to leave the IFO in this state overnight.
The whole point of the upgrade was to move to a more reliable system - but seems quite flaky already.
Attachment #1 shows estimated systematic uncertainty contributions due to
In all the measurements so far, the ratio seems to be < 1, so this would seem to set a lower bound on the loss of ~35 ppm. The dominant source of systematic uncertainty is the 5% assumed fudge in the mode-matching.
Bottom line: I think we need to have other measurements and simultaneously analyse the data to get a more precise estimate of the loss.
There are still several data quality issues that can be improved. I think there is little point in reading too much into this until some of the problems outlined below are fixed and we get a better measurement.
As an interim fix, I'm going to try and use the Oplevs as a DC reference, and run the dither alignment from zero each time, as this prevents the runaway problem at least. Data run started at 11:20 pm.
Another arm loss measurement started at 6pm.
To measure the Y-arm loss, I decided to start with the classic reflectivity method. To prepare for this measurement, I did the following:
I'm running a measurement tonight, starting now (~1130PM), should be done in ~1hour, may need to do more data-quality improvements to get a realistic loss number, but I figured I'd give this a whirl.
I'm rather pleased with an initial look at the first align/misalign cycle - at least there is discernible contrast between the two states (Attachment #2). The data is normalized by MC transmission, and then sig.decimated by x512 (8**3).
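For the record, the normalization/decimation was along these lines (a sketch; the data fetching and channel names are omitted):

```python
from scipy import signal

# tr_y, mc_trans: numpy arrays with the arm transmission and MC transmission time series
norm = tr_y / mc_trans            # normalize out input power fluctuations
for _ in range(3):                # decimate by 8 three times = 512x overall
    norm = signal.decimate(norm, 8, zero_phase=True)
```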
Since we changed the HeNe, I updated the calibration factors, and accepted the changes in the SDF.
Rich came by the 40m to photocopy some pages from Hobbs, and saw me working on the 60 Hz hunting. As I suspected, the problem was being generated in the D040060. This board receives the photodiode signal single-ended, but has a different power ground than the photodiode (even though the PD is plugged into a power strip that claims to come from 1Y4). The mechanism is not entirely clear - the presence of these 60 Hz features seemed to be dependent on the light level on the TRY photodiode (i.e. they were absent when the PSL shutter is closed, and were more prominent when TRY was 0.9 rather than 0.5) but the PD certainly wasn't saturated - the DC signal was only ~100 mV when viewed on a scope. In any case, Rich suggested the simplest test would be to ground the BNC shield bringing TRY to the rack, to the local ground on the board, which I did using a crocodile clip. This did the trick, the TRY signal RMS is now dominated by the ~1 Hz seismic-driven variation.
On a more pessimistic note - it looks like moving the elliptical reflector did not work, and the clipping in the Y arm persists. I am able to recover TRY~1 with the yaw offset on the ETM (which is still lower than the 1.06-1.07 Koji reported in Aug 2018, but I can believe that being down to the MC transmission being a few % lower, at 15000 cts rather than 15500), while the maximum I see without it is ~0.9. This is puzzling, because when the chamber was open, we saw ~1.5" clearance between the edge of the reflector and the beam on an IR card. I suppose the input pointing could have been off by a small amount. So one of the primary vent objectives wasn't achieved... But I will push ahead with the loss measurement.
Several housekeeping tasks were carried out today in preparation for the Y-arm loss measurement.
[Attachment #1]: Computed spectral power transmissivity (according to my model) for the coating design at a few angles of incidence. Behavior lines up well with what FNO measured, although I get a transmission that is slightly lower than measured at 45 degrees. I suspect this is because of slight changes in the dispersion relation assumed and what was used for the coating in reality.
[Attachment #2]: Similar information as Attachment #1, but with the angle of incidence as the independent parameter in a continuous sweep.
Conclusion: The coating behaves in a way that is in reasonable agreement with our model. At 41.1 degrees, which is what the PR3 angle of incidence will be, T<50 ppm, which was what we specified. The larger range of angles was included because originally, we thought of using this optic as a substitute for SR3 as well. But I claim that for the shorter SRC (signal recycling as opposed to RSE), we will not end up using the new optic, but rather go for the G&H mirror. In any case, as Koji pointed out, ~50 ppm extra loss in the RC will not severely limit the recycling gain. Such large variation was not seen in the MC analysis because we only varied the angle of incidence by +/- 0.5 degrees about the nominal design value of 41.1 degrees.
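For anyone wanting to reproduce these curves, the underlying calculation is a standard thin-film characteristic-matrix computation; here is a minimal sketch for an idealized quarter-wave stack (the layer structure and indices are placeholders, not the actual FNO coating design):

```python
import numpy as np

def stack_T(wl, aoi_deg, n_layers, d_layers, n_in=1.0, n_sub=1.45, pol='s'):
    """Power transmissivity of a dielectric stack at wavelength wl [m] and angle
    of incidence aoi_deg [deg], via the characteristic (transfer) matrix method."""
    sin0 = n_in * np.sin(np.deg2rad(aoi_deg))
    eta = lambda n: n * np.sqrt(1 - (sin0 / n)**2 + 0j) if pol == 's' \
        else n / np.sqrt(1 - (sin0 / n)**2 + 0j)
    M = np.eye(2, dtype=complex)
    for n, d in zip(n_layers, d_layers):
        cos_t = np.sqrt(1 - (sin0 / n)**2 + 0j)
        delta = 2 * np.pi * n * d * cos_t / wl
        e = eta(n)
        M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / e],
                          [1j * e * np.sin(delta), np.cos(delta)]])
    B, C = M @ np.array([1, eta(n_sub)])
    return float(np.real(4 * eta(n_in) * np.real(eta(n_sub)) / abs(eta(n_in) * B + C)**2))

# toy 19-layer SiO2/Ta2O5 quarter-wave stack for 1064 nm (NOT the real design)
nH, nL, wl0 = 2.07, 1.45, 1064e-9
ns = [nH, nL] * 9 + [nH]
ds = [wl0 / (4 * n) for n in ns]
print(stack_T(1064e-9, 41.1, ns, ds, pol='s'))   # transmission at the PR3 angle of incidence
```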
As it turns out, now ITMY has a tendency to get stuck. I found it MUCH more difficult to release the optic using the bias jiggling technique, it took me ~ 2 hours. Best to avoid c1susaux reboots, and if it has to be done, take precautions that were listed for ITMX - better yet, let's swap out the new Acromag chassis ASAP. I will do the arm locking tests tomorrow.
5 PR3/SR3 optics from FiveNine Optics were delivered. The data sheets were scanned and uploaded to the following wiki page: https://wiki-40m.ligo.caltech.edu/Aux_Optics
They have been stored on the 3rd shelf from the top in the clean optics cabinet at the south (EX) end.
I did some tests of the electronics chain today.
My hypothesis was a bad connection between the sat box output J1 and the flange connection cable. Indeed, measuring the OSEM inductance from the DSUB end at the coil-driver board, the UL coil pins showed no inductance reading on the LCR meter, whereas the other 4 coils showed numbers between 3.2-3.3 mH. Suspecting the satellite box, I swapped it out for the spare (S/N 100). This seemed to do the trick: all 5 coil channels read out ~3.3 mH on the LCR meter when measured from the coil driver board end. What's more, the damping behavior seemed more predictable - in fact, Rana found that all the loops were heavily overdamped. For our suspensions, I guess we want the damping to be close to critical - overdamping imparts excess displacement noise to the optic, while underdamping leaves the optic ringing for too long - and in past elogs, I've seen a directive to aim for Q~5 for the pendulum resonances, so when someone does a systematic investigation of the suspensions, this will be something to look out for. These flaky connectors are proving pretty troublesome; let's start testing out some prototype new Sat Boxes with a better connector solution. I think it's equally important to have a properly thought out monitoring connector scheme, so that we don't have to frequently plug and unplug connectors in the main electronics chain, which may lead to wear and tear.
The input and output matrices were reset to their "naive" values - unfortunately, two eigenmodes still seem to be degenerate to within 1 mHz, as can be seen from the below spectra (Attachment #1). Next step is to identify which modes these peaks actually correspond to, but if I can lock the arm cavities in a stable way and run the dither alignment, I may prioritize measurement of the loss. At least all the coils show the expected 1/f**2 response at the Oplev error point now. The coil output filter gains varied by ~ factor of 2 among the 4 coils, but after balancing the gains, they show identical responses in the Oplev - Attachment #2.
The full 1 W is again being sent into the IMC. We have left the PBS+HWP combo installed as Rana pointed out that it is good to have polarization control after the PMC but before the EOM. The G&H mirror setup used to route a pickoff of the post-EOM beam along the east edge of the PSL table to the AUX laser beat setup was deemed too flaky and has been bypassed. Centering on the steering mirror and subsequently the IMC REFL photodiode was done using an IR viewer - this technique allows one to geometrically center the beam on the steering mirror and PD, to the resolution of the eye, whereas the voltage maximization technique using the monitor port and an o'scope doesn't allow the former. Nominal IMC transmission of ~15,000 counts has been recovered, and the IMC REFL level is also around 0.12, consistent with the pre-vent levels.
[chub, steve, gautam]
Steve came by the lab today. He advised us to turn the RGA on again, now that the main volume pressure is < 20 uTorr. I did this by running the RGAset.py script on c0rga - the temperature of the unit was 22 C in the morning; after ~3 hours of the filament being on, the temperature has already risen to 34 C. Steve says this is normal. We also opened VM1 (I had to edit interlocks.yaml to allow VM1 to open when CC1 < 20 uTorr instead of 10 uTorr), so that the RGA volume is exposed to the main volume. The nightly scans should run now; Steve suggests ignoring the first few while the pumpdown is still reaching nominal pressure. Note that we probably want to migrate all the RGA stuff to the new c1vac machine.
Other notes from Steve:
The Central Plant building will be undergoing seismic upgrades in the near future. The adjoining north wall along the Y arm will be the first to have this work done, from inside the Central Plant. Project manager Eugene Kim has explained the work to me and also noted our concerns. He assured me that the seismic noise from the construction will be minimized and we will always be contacted when the heaviest construction is to be done.
Tomorrow at 11am, I will bring Mr. Kim and a few others from the construction team to look at the wall from inside the lab. If you have any questions or concerns that you want to have addressed, please email them to me or contact Mr. Kim directly at x4860 or through email at email@example.com .
Pumpdown looks healthy, so I'm leaving the TPs on overnight. At some point, we should probably get the RGA going again. I don't know that we have a "reference" RGA trace that we can compare the scan to, should check with Steve. The high power (1 W) beam has not yet been sent into the vacuum, we should probably add the interlock condition that shuts off the PSL shutter before that.
I added lubricating oil to roughing pumps RP1 and RP3 yesterday and this morning. Also, I found a nearly full 5 gallon jug of grade 19 oil in the lab. This should set us up for quite a while. If you need to add oil to the roughing pumps, use the oil in the quart bottle in the flammables cabinet. It is labeled as Leybold HE-175 Vacuum Pump Oil. This bottle is small enough to fill the pumps in close quarters.
I guess we forgot to close V5, so we were indeed pumping on the ITMY and ETMY annuli, but the other three were isolated; they suggest a leak rate of ~200-300 mtorr/day, see Attachment #1 (consistent with my earlier post).
As for the main volume - according to CC1, the pressure saturates at ~250 uTorr and is stable, while the Pirani P1a reports ~100x that pressure. I guess the cold-cathode gauge is supposed to be more accurate at low pressures, but how well do we believe the calibration on either gauge? Either way, based on last night's test (see Attachment #2), we can set an upper limit of 12 mtorr/day. This is 2-3x the number Steve said is normal, but perhaps this is down to the fact that the outgassing from the main volume is higher immediately after a vent and in-chamber work. It is also a 5x lower rate of pressure increase than what was observed on Feb 2.
I am resuming the pumping down with the turbo-pumps; let's see how long we take to get down to the nominal operating pressure of 8e-6 torr, it usually takes ~1 week. V1, VASV, VASE and VABS were opened at 1030am PST. Per Chub's request (see #14435), I ran RP1 and RP3 for ~30 seconds, he will check if the oil level has changed.
Let's leave things in this state overnight - V1 and V5 closed so that neither the main volume nor the annuli are being pumped, and get some baseline numbers for what the outgassing rate is.
I looked at the free-swinging sensor data from two nights ago, and am struggling with the interpretation.
[Attachment #1] - Fine resolution spectral densities of the 5 shadow sensor signals (y-axis assumes 1ct ~1um). The puzzling feature is that there are only 3 resonant peaks visible around the 1 Hz region, whereas we would expect 4 (PIT, YAW, POS and SIDE). afaik, Lydia looked into the ETMY suspension diagonalization last, in 2016. Compared to her plots (which are in the Euler basis while mine are in the OSEM basis), the ~0.73 Hz peak is nowhere to be seen. I also think the frequency resolution (<1 mHz) is good enough to be able to resolve two closely spaced peaks, so it looks like due to some reason (mechanical or otherwise), there are only 3 independent modes being sensed around 1 Hz.
[Attachment #2] - Koji arrived and we looked at some transfer functions to see if we could make sense of all this. During this investigation, we also think that the UL coil actuator electronics chain has some problem. This test was done by driving the individual coils and looking for the 1/f^2 pendulum transfer function shape in the Oplev error signals. The ~ 4dB difference between UR/LL and LR is due to a gain imbalance in the coil output filter bank, once we have solved the other problems, we can reset the individual coil balancing using this measurement technique.
[Attachment #3] - Downsampled time-series of the data used to make Attachment #1. The ringdown looks pretty clean, I don't see any evidence of any stuck magnets looking at these signals. The X-axis is in kilo-seconds.
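As an aside, the fine-resolution spectra in Attachment #1 were computed along these lines (a sketch; the sampling rate, the frame fetching, and the 1 ct ~ 1 um scaling are assumptions):

```python
import numpy as np
from scipy import signal

def fine_asd(x, fs=2048.0, seg_sec=2048):
    """Amplitude spectral density of a free-swinging OSEM sensor signal.
    With 2048 s segments the bin width is ~0.5 mHz, which should be enough to
    resolve peaks spaced a few mHz apart near 1 Hz."""
    f, pxx = signal.welch(x, fs=fs, nperseg=int(seg_sec * fs), detrend='linear')
    return f, np.sqrt(pxx)     # ~um/rtHz with the rough counts-to-um calibration

# e.g., to count resonances near 1 Hz for one sensor time series `ul_counts`:
# f, a = fine_asd(ul_counts)
# band = (f > 0.5) & (f < 1.5)
# peaks, _ = signal.find_peaks(a[band], prominence=10 * np.median(a[band]))
```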
We found that the POS and SIDE local damping loops do not result in instability building up. So one option is to use only Oplevs for angular control, while using shadow-sensor damping for POS and SIDE.
As planned, we valved off the main volume and the annuli from the turbo-pumps at ~730 PM PST. At this time, the main volume pressure was 30 uTorr. It started rising at a rate of ~200 uTorr/hr, which translates to ~5 mtorr/day, which is in the ballpark of what Steve said is "normal". However, the calibration of the Hornet gauge seems to be piecewise-linear (see Attachment #1), so we will have to observe overnight to get a better handle on this number.
We decided to vent the IY and EY chamber annular volumes, and check if this made any dramatic change in the main volume pressure increase rate, which would presumably signal a leak from the outside. However, we saw no such increase - so right now, the working hypothesis is still that the main volume pressure increase is being driven by outgassing of something inside the vacuum envelope.
We can pump down (or vent) annuli. If this is the leak between the main volume and the annuli, we will be able to see the effect on the leak rate. If this is the leak of an outer o-ring, again pumping down (or venting) of the annuli should temporarily decrease (or increase) the leak rate..., I guess. If the leak rate is not dependent on the pressure of the annuli, we can conclude that it is internal outgassing.
I looked into this a bit today. Did a walkthrough of the lab, didn't hear any obvious hissing (makes sense, that presumably would signal a much larger leak rate).
Attachment #1: Data from the 30 ksec we had the main vol valved off on Jan 10, but from the gauges we have running right now (the CC gauges have not had their HV enabled yet so we don't have that readback).
Attachment #2: Data from ~150 ksec from Friday night till now.
Interpretation: The number quoted from Jan 10 is from the cold-cathode gauge (~20 uTorr increase). In the same period, the Pirani gauge reports an increase of ~5 mtorr (=250x the number reported by the cold-cathode gauge). So which gauge do we trust in this regime more? Additionally, the rates at which the annuli pressures are increasing seem consistent between Jan 10 and now, at ~100 mtorr every 30 ksec.
I don't think this is conclusive, but at least the leak rates between Jan 10 and now don't seem that different for the annuli pressures. Moreover, for the Jan 10 pumpdown, we had the IFO at low pressure for several days over the Christmas break, which presumably gave time for some outgassing which was cleaned up by the TPs on Jan 10, whereas for this current pumpdown, we don't have that luxury.
Do we want to do a systematic leak check before resuming the pumpdown on Monday? The main differences in vacuum I can think of are
This entry by Steve says that the "expected" outgassing rate is 3-5 mtorr per day, which doesn't match either the current observation or that from Jan 10.
The pressure of the main volume increased from ~1 mtorr to 50 mtorr over the past 24 hours (86 ksec). This rate is about x1000 the number reported on Jan 10. Do we suspect a vacuum leak?
Overnight, the pressure increased from 247 uTorr to 264 uTorr over a period of 30000 seconds. Assuming an IFO volume of 33,000 liters, this corresponds to an average leak rate of ~20 uTorr L / s.
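The arithmetic behind that number, for reference (the 33,000 liter volume is the assumed value quoted above):

```python
dP = (264 - 247) * 1e-6    # torr, overnight pressure rise
V = 33e3                   # liters, assumed IFO volume
dt = 30e3                  # seconds
print(dP * V / dt * 1e6)   # ~19 uTorr*L/s average leak/outgassing rate
```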
[jon, koji, gautam]
I'm leaving all suspension watchdogs tripped over the weekend as part of the suspension diagonalization campaign...
[Attachment #1]: ITMY HR face after cleaning. I determined this to be sufficiently clean and re-installed the optic.
[Attachment #2]: ETMY HR face after cleaning. This is what the HR face looks like after 3 rounds of First-Contact application. After the first round, we noticed some arc-shaped lines near the center of the optic's clear aperture. We were worried this was a scratch, but we now believe it to be First-Contact residue, because we were able to remove it after drag wiping with acetone and isopropanol. However, we mistrust the quality of the solvents used - they are not any special dehydrated kind, and we are looking into acquiring some dehydrated solvents for future cleaning efforts.
[Attachment #3]: Top view of ETMY cage meant to show increased clearance between the IFO axis and the elliptical reflector.
Many more photos (including table leveling checks) on the google-photos page for this vent. The estimated time between F.C. peeling and pumpdown is ~24 hours for ITMY and ~15 hours for ETMY, but for the former, the heavy doors were put on ~1 hour after the peeling.
The first task is to fix the damping of ETMY.
[chub, bob, gautam]
[koji, chub, jon, rana, gautam]
Full story tomorrow, but we went through most of the required pre close-up checks/tasks (i.e. both arms were locked, PRC and SRC cavity flashes were observed). Tomorrow, it remains to
The ETMY suspension chain needs to be re-characterized (neither the old settings, nor a +/- 1 gain setting worked well for us tonight), but this can be done once we are back under vacuum.
Squishing cables at the ITMX satellite box seems to have fixed the wandering ITM that I observed yesterday - the sooner we are rid of these evil connectors the better.
I had changed the input pointing of the green injection from EX to mark a "good" alignment of the cavity axis, so I used the green beam to try and recover the X arm alignment. After some tweaking of the ITM and ETM angle bias voltages, I was able to get good GTRX values [Attachment #1], and also see clear evidence of (admittedly weak) IR resonances in TRX [Attachment #2]. I can't see the reflection from ITMX on the AS camera, but I suspect this is because the ITMY cage is in the way. This will likely have to be redone tomorrow after setting the input pointing for the Y arm cavity axis, but hopefully things will converge faster and we can close up sooner. Closing the PSL shutter for now...
I also rebooted the unresponsive c1susaux to facilitate the alignment work tomorrow.
Procedure tomorrow [comments / suggestions welcome]:
All photos have been uploaded to google photos.
Since we may want to close up tomorrow, I did the following prep work:
Rather than rush to close up tomorrow, I propose spending the day tomorrow cleaning the peripheral areas of the optic, suspension cage, and chamber. Then on Thursday morning, we can replace the Y-arm optics, try to recover the cavity alignment, and then aim for a Thursday afternoon pumpdown. The main motivation is to reduce the time the optics spend in air between F.C. peeling and going to vacuum.
[chub, koji, gautam]
Attachment #1 shows the signal routing near the Satellite box. Somehow, the female 64 pin IDC connector that brings the signals from the coil driver board wasn't mating well with the male connector on the Satellite box front panel. This is a connector-specific problem - plugging the female end into one of the male connectors inside the Satellite box yielded signal continuity. The problem was resolved by re-making both connections - by driving the EPICS bias slider through its full range, we were able to see the full voltage swing at the DB connectors going to the flange.
This kind of flakiness could be all around the lab, and could be responsible for many of the suspension "mysteries". To re-iterate, the problem seems to be the way the female sockets of the connector mate with the male pins - while the actual crimping points may look secure, there may not be signal continuity.
Now that this problem is resolved, tomorrow we will recover the cavity alignment and possibly start a pumpdown.
Unrelated to this work - the spare satellite box (S/N #100), which had a note on it that said "low voltages", was tested. The "low voltages" referred to the OSEM shadow sensor voltages being low when the LED was completely unobscured. The reason was that the mod to increase the drive current to 25 mA had not yet been implemented on this unit. I added the appropriate 806 ohm resistors, and verified that the voltages were correct, so now we have a working spare. It is stored in the "photodiode" cabinet along the east arm, together with the tester boxes.
The foam in the cable tray wall passage had been falling on the floor in little bite-sized pieces, so I investigated and found a fiber cable that had been chewed/clawed through. I didn't find any droppings anywhere in the 40m, but I decided to bait an un-set trap and see if we'd find activity around it. There has been none so far. If there is still none tomorrow, I will move the trap and keep looking for signs of rodentia. At the moment, the trap is in a box in front of the double doors at the north end of the control room. Next it will be placed in the IFO room, up in the cable tray.
gautam: the fiber that was damaged was the one from the LSC rack FiBox to the control room FiBox. So no DAFI action for a bit...
I reset the remote of this git repo to the 40m version instead of Jon's personal one, to ensure consistency between what's on the vacuum machine and in the git repo. There is now a N2 checker python mailer that will email the 40m list if all the tank pressures are below 600 PSI (>12 hours left for someone to react before the main N2 line pressure drops and the interlocks kick in). For now, the script just runs as a cron job every 3 hours, but perhaps we should integrate it with the interlock process?
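Roughly, the checker does the following (a sketch; the channel names, email addresses and cron path are placeholders, not the actual script in the repo):

```python
import smtplib
from email.message import EmailMessage
from epics import caget

TANK_CHANS = ['C1:VAC-N2T1_PRESSURE', 'C1:VAC-N2T2_PRESSURE']   # hypothetical channel names
THRESHOLD_PSI = 600

def check_n2():
    pressures = {ch: caget(ch) for ch in TANK_CHANS}
    if all(p < THRESHOLD_PSI for p in pressures.values()):
        msg = EmailMessage()
        msg['Subject'] = 'N2 tank pressures low -- swap within ~12 hours'
        msg['From'] = 'c1vac@example.com'     # placeholder addresses
        msg['To'] = '40m-list@example.com'
        msg.set_content('\n'.join('%s: %.0f PSI' % (k, v) for k, v in pressures.items()))
        with smtplib.SMTP('localhost') as s:
            s.send_message(msg)

if __name__ == '__main__':
    check_n2()

# cron entry on c1vac, every 3 hours (path is illustrative):
# 0 */3 * * * python /path/to/N2check.py
```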
All the python code running on c1vac is archived to the git repo:
To avoid the annoying exercise of having to manually toggle the illuminators, I solved the IP conflict. I made a wiki page for the ethernet power strips since the documentation was woeful (the way the power strips are mounted in the racks, you can't even see the manufacturer/model/make). All chamber illuminators can now be turned on/off by the MEDM scripts. Note that there is a web interface available too, which can be useful in case of some python socket issues. The main lesson is: avoid using the "reset" button on the power strips, it destroys the static IP config.
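For completeness, the python side of the illuminator switching is just a socket exchange with the strip, something like this (a sketch; the IP, port and command syntax depend on the particular strip model, see the wiki page for the real details):

```python
import socket

STRIP_ADDR = ('192.168.113.xxx', 23)   # placeholder IP/port -- see the wiki page
OUTLET_ON = b'pset 3 1\r\n'            # placeholder command string for "turn outlet 3 on"

def send_cmd(cmd, addr=STRIP_ADDR, timeout=5):
    """Open a socket to the ethernet power strip, send one command, return the reply."""
    with socket.create_connection(addr, timeout=timeout) as s:
        s.sendall(cmd)
        return s.recv(4096)
```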
Unrelated to this work: The EY laptop, asia, won't boot up anymore, with a "Fan Error" message being the red flag. I've temporarily recommissioned the vacuum rack laptop, belladonna, to be the EY machine for this vent. Can we get 3 netbooks that actually work and don't need to be tethered to a power strip for the VEA?
I had taken Satellite box S/N 102, from the SRM suspension, down to the Y-end as part of debugging. However, at some point, I stopped getting readbacks from the shadow sensor PDs, even with the Sat. Box tester hooked up (so as to rule out anything funky with the actual OSEMs). Today evening, I did a more systematic investigation. Schematic with component references is here.
The question remains as to what caused this failure mode - I can't think of why that particular IC was damaged during the Satellite box swapping process - is this indicative of some problem elsewhere in the ETMY OSEM/coil driver electronics chain?
The attached photo shows the two optics with FC applied.
My original plan was to attempt to close up tomorrow. However, we are still struggling with Satellite box issues. So rather than rush it, we will attempt to recover the Y arm cavity alignment on Monday, satellite box permitting. The main motivation is to reduce the deadtime between peeling off the F.C and starting the pumpdown. We will start working on recovering the cavity alignment once the Sat box issues are solved.
In preparation for the FC cleaning, I did the following:
Tomorrow, I will start with the cleaning of ETMY HR. While the FC is drying, I will position ITMY at the edge of the IY table for cleaning (Chub will set up the mini-cleanroom at the IY table). The plan is to clean both HR surfaces and have the optics back in place by tomorrow evening. By my count, we have done everything on the list for the IY and EY chambers. I'd like to minimize the time between cleaning and pumpdown, so if all goes well (Sat Box problems notwithstanding), we will check the table leveling on Friday morning, then put on the heavy doors and at least rough the main volume down to 1 torr on Friday.
For the last week, I noticed that I was unable to turn the EY chamber illuminator on using the remote python scripts. This was turning out to be really annoying, having to turn the light on/off manually. Today, I looked into the problem and found that there is a conflict between the IP addresses of the EY ethernet strip (which Chas assigned a static IP but did not include detailed procedures for) and the vertex area laptop, paola. The failure of the python control of the power strip coincided exactly with when Chub and I turned on paola for working at the IY chamber - but how was I supposed to know these events were correlated? I tried shutting down paola, power cycling the ethernet power strip, and restarting the bind9 services on chiara, but remote control of the ethernet power strip remains elusive. I suspect reconfiguring the static IP for the ethernet switch will require some serial-port-enabled device...
While Chub is making new cables for the EY satellite box...
While the position of the reflector could possibly be optimized further, since we are already seeing a temperature gradient on the optic, I propose pushing on with other vent activities. I'm almost certain the current positioning places the optic closer to the second focus, and we already saw shifts of the HOM resonances with the old configuration, so I'd say we run with this and revisit if needed.
If Chub gives the Sat. Box the green flag, we will work on F.C.ing the mirrors in the evening, with the aim of closing up tomorrow/Friday.
All raw images in this elog have been uploaded to the 40m google photos.
The N2 ran out this weekend (again with no reminder email, but I haven't found the time to set up the Python mailer yet). So all the valves Steve and I had opened were closed (rightly so - that's what the interlocks are supposed to do). Chub will post an elog about the new N2 valve setup in the Drill-press room, but we now have sufficient line pressure in the N2 line again. So Chub and I re-opened the valves to keep pumping on the RGA.