[Gautam/Chub/Koji] ~ Mini discussion
Maintenance / Upgrade Items
(Priority high to low)
Oplev HeNe was replaced this afternoon. We did some HeNe shuffling:
Attachment #1 shows the RIN and Attachment #2 and #3 show the PIT and YAW TFs with the new HeNe.
The ITMX Oplev path is still not great - the ingoing beam is within 2 mm of clipping on a 2" lens used in the POX path, and there is a bunch of scattered red light everywhere. We should take the opportunity when the chamber is open to implement a better layout (it may be tricky to optimize without touching the two in-vacuum steering optics).
I'll ask Chub to replace it this afternoon.
The goal for this week is to test out the ALS system, so this is kind of a workable state since POX/POY locking is working. But the number of broken things is accumulating fast.
The goal was to characterise the new amplifier (AP1053). As practice, I characterised the old amplifier. This test is similar to that reported in Elog ID 13602.
We characterized the power splitter (Mini-Circuits ZAPD-2-252-S+). The schematic of the measurement setup is shown in Attachment #1. The network/spectrum/impedance analyzer (Agilent 4395A) was used in network analyzer mode for the characterisation; the RF output is enabled in this mode. We used another splitter (power splitter #1) to split the RF power such that one part goes to the network analyzer and the other part goes to the splitter under test (power splitter #2). The characterisation results and comparison with the data sheet values are shown in Attachments #2-4 (see also the sketch after the attachment list).
Attachment #2 : Comparison of total loss in port 1 and 2
Attachment #3 : Comparison of amplitude unbalance
Attachment #4 : Comparison of phase unbalance
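For reference, a minimal sketch of how the quantities in Attachments #2-4 can be computed from the analyzer sweeps; the file name and column layout are assumptions, not the actual 4395A export format.

import numpy as np

# Sweeps through each output port of splitter #2, saved as columns:
# freq [Hz], mag1 [dB], phase1 [deg], mag2 [dB], phase2 [deg].
f, m1, p1, m2, p2 = np.loadtxt('zapd_sweeps.txt', unpack=True)

total_loss1_db  = -m1        # total loss, port 1 (includes the ideal 3 dB split)
total_loss2_db  = -m2        # total loss, port 2
amp_unbal_db    = m1 - m2    # amplitude unbalance between the two output ports
phase_unbal_deg = p1 - p2    # phase unbalance between the two output ports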
Per the manual (pg. 12) of the NF 1611 photodiode, the "Input Noise Current" is 16 pA/rtHz. It also specifies that for "Linear Operation", the max input power is 1 mW, which at 1 um corresponds to a current shot noise of ~14 pA/rtHz. Therefore, even at the maximum linear operating power, the shot noise of the light sits just below the electronics noise.
Attachment #1: Here, I plot the expected voltage noise due to shot noise of the incident light, assuming 0.75 A/W for InGaAs and 700V/A transimpedance gain.
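For the record, a quick sketch of the arithmetic behind Attachment #1, using the numbers quoted above (the exact pA/rtHz figure depends on the assumed responsivity):

import numpy as np

e = 1.602e-19   # electron charge [C]
P_in = 1e-3     # max "Linear Operation" input power [W]
R = 0.75        # assumed InGaAs responsivity [A/W]
Z = 700.0       # transimpedance gain [V/A]

i_dc = R * P_in                 # DC photocurrent [A]
i_shot = np.sqrt(2 * e * i_dc)  # shot noise current [A/rtHz]
print(f"shot noise: {i_shot*1e12:.1f} pA/rtHz, i.e. {i_shot*Z*1e9:.1f} nV/rtHz at the output")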
Attachment #1 shows the schematic of the test setup. Signal generator (Marconi) was used to supply the RF input. We observed the IF output in the following three test conditions.
This test was done, and I determined the frequency discriminant (for an RF signal level of ~2 dBm); the measured value is given in Attachment #1.
Attachment #1: Measured and predicted value of the DFD discriminant for a few RF signal levels.
Attachment #2: Measured noise spectrum in the 1Y2 (LSC) electronics rack, calibrated to Hz/rtHz using the discriminant from Attachment #1.
I'm still waiting on some parts for the new BeatMouth before giving the whole system a whirl. In the meantime, I'll work on the EX and EY green setups, to try and improve the mode-matching and better characterize the expected suppressed frequency noise of the end NPROs - the goal here is to rule out the excess low-frequency noise that was seen in the ALS signals coming from unsuppressed frequency noise.
This Hanford alog may be of relevance as we are using the aLIGO AA chassis for the IR ALS channels. We aren't expecting any large amplitude high frequency signals for this application, but putting this here in case it's useful someday.
The schematic of the homodyne configuration is shown below.
Following is the list of components:
One set of fibers is now laid along the arm of the interferometer.
Fiber coupled (3 nos.)
Free space (2 nos.)
The restoration of the delay-line electronics is complete. The chassis has not been re-installed yet, I will put it back in tomorrow. I think the calculations and measurements are in good agreement.
Apart from restoring the transimpedance of the I/F amplifier, I also had to replace the two differential-sending AD8672s in the RF log detector circuit for both the LO and RF paths in the ALS-X board. I performed the same tests as last time on the electronics bench; results will be uploaded to the DCC page for the 40m version of the board. I think the board is performing as advertised, although there is some variation in the noise of the two pairs of I/Q readouts. Sticking with the notation of the HP application note for delay-line frequency discriminators, here are some numbers for our delay line system:
In conclusion: the ALS noise is very likely limited by ADC noise (~1 Hz/rtHz frequency noise for 5uV/rtHz ADC noise). We need some whitening. Why whiten the demodulated signal instead of directly incorporating the whitening into the I/F amplifier input stage? Because I couldn't find a design that satisfies all the following criteria (this was why my previous design was flawed):
So Rich suggested separating the transimpedance and whitening operations. The output noise of the differential outputs of the demodulator unit is <100 nV/rtHz at 100 Hz, so a whitening unit whose input-referred noise level is <100 nV/rtHz would preserve that noise floor. I'm going to see if there are any aLIGO whitening board spares - I don't think the existing whitening boards are a good candidate because of the large DC signal level.
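To make the requirement concrete, here is a back-of-the-envelope sketch; the discriminant is back-calculated from the ~1 Hz/rtHz and 5 uV/rtHz figures quoted above, so treat it as illustrative:

import numpy as np

adc_noise   = 5e-6    # ADC input noise [V/rtHz], quoted above
demod_noise = 100e-9  # demod differential output noise at 100 Hz [V/rtHz]
D = adc_noise / 1.0   # implied discriminant [V/Hz], from ~1 Hz/rtHz ADC-limited noise

gain = adc_noise / demod_noise  # whitening gain needed so the demod noise sets the floor
print(f"frequency noise floor without whitening: {adc_noise/D:.1f} Hz/rtHz")
print(f"in-band whitening gain needed: >~{gain:.0f}x ({20*np.log10(gain):.0f} dB)")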
Chub, Koji and I have been talking about Udit's re-design. Here are a few points that were raised. Chub/Koji can add to/correct where necessary. Summary is that this needs considerable work before we can order the parts for a prototype and characterize it. I think the requirements may be stated as:
Some problems with Udit's design as it stands:
IMC was not locked for the past several hours. Turned out MC autolocker was stuck, and I could not ssh into megatron because it was in some unresponsive state. I had to hard-reboot megatron, and once it came back up, I restarted the MCautolocker, FSS slow servo and nds2 processes. IMC re-locked immediately.
I was pulling long stretches of OSEM data from the NDS2 server (megatron) last night, I wonder if this flakiness is connected. Megatron is still running Ubuntu12.
PSL NPRO PZT voltage showed large low frequency (hour timescale) excursions on the control room StripTool trace, leading me to suspect the slow servo wasn't working as expected. Yesterday evening, I keyed the unresponsive c1psl crate at ~9 PM PST, and had to run the burtrestore to get the PMC locking working. I must have pressed the wrong button on burtgooey or something, because all the FSS_SLOW channels were reset to 0. What's more, their values were not being saved by the hourly burt-snap script, so I don't have any lookback on what these values were. There isn't any detailed record in the elog of the optimal values for these; the most recent reference I could find was Ki=0.1, Kp=Kd=0, which is what I've now set them to. The servo isn't running away, so I'm leaving things in this state; PID tuning can be done later.
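With Kp = Kd = 0, the slow servo is just a pure integrator on the PZT voltage. A minimal sketch of that loop (channel names and cadence are hypothetical, not those of the production script on megatron):

import time
import epics  # pyepics

KI = 0.1          # the only non-zero gain at present
SETPOINT = 0.0    # target for the NPRO PZT voltage monitor

integrator = 0.0
while True:
    err = epics.caget('C1:PSL-FSS_FAST_MON') - SETPOINT  # hypothetical channel
    integrator += KI * err
    epics.caput('C1:PSL-FSS_SLOW_OFFSET', integrator)    # hypothetical channel
    time.sleep(1.0)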
I also added the FSS Slow servo channels to the burt snapshot requirement file at /cvs/cds/caltech/target/c1psl/autoBurt.req, and confirmed that the snapshots are getting the channels from now onwards.
While looking at the req file, I saw a bunch of *_MOPA* channels and also several other currently unused channels. We would probably benefit from going through these and commenting out all the legacy channels, to minimize disk space wastage (though I guess we compress the snapshot files every few years anyway).
Reminder that this (unrelated) issue still needs to be looked into... Note also that the new vacuum system does not have burt snapshots set up (i.e. it is still trying to get the old channels from the c1vac1 and c1vac2 databases, which, while having significant overlap with the new system, should probably be set up correctly).
In my effort to understand what's going on with the suspensions, I've kicked all the suspensions and shut down the watchdogs at 1235366912. The PSL shutter is closed to avoid trying to lock to the swinging cavity. The primary aims are
All the tests I have done so far (looking at free-swinging data, resonant frequencies in the Oplev error signals, etc.) seem to suggest that the problem is mechanical rather than electrical. I'll do a quick check of the OSEM PD whitening unit in 1Y4 to be sure. But the fact that the same three peaks appear in the OSEM and Oplev spectra suggests to me that the problem is not electrical.
Watchdogs restored at 10 AM PST
The forthcoming Acromag c1susaux is supposed to use the backplane connectors of the sus euro card modules.
However, the backplane connectors of the vertex sus coil drivers were already used by the fast switches (dewhitening) of c1sus.
Our plan is to connect the Acromag cables to the upper connectors, while the switch channels are wired to the lower connector by soldering jumper wires between the upper and lower connectors on board.
To make the lower 96-pin DIN connector available for this, we needed a DIN 41612 (96-pin) shroud. Tyco Electronics 535074-2 is the correct component for this purpose. The shrouds have been installed on the backplane pins of the coil driver circuit D010001. The shroud can be mounted in either of two orientations (180 deg rotation); its direction was matched with the ones on the upper connectors.
To debug the issue of the suspected drifting TTs further, I temporarily hijacked CH0-CH8 of ADC1 in the c1lsc expansion chassis, and connected the "MON" outputs of the coil drivers (D010001) to them via some DB9 breakouts. The idea is to see if the problem is electrical. We should see some slow drift in the voltage to the TTs correlated with the spot walking off the IPPOS QPD. From the wiring diagram, it doesn't look like there is any monitoring (slow or fast) of the control voltages to the TT coils, this should be factored into the Acromag upgrade of c1iscaux/c1iscaux2. EPICS monitoring should be sufficient for this purpose so I didn't setup any new DQ channels, I'll just look at the EPICS from the IOP model.
Last year, I worked on the ALS delay line electronics, thinking that we were in danger of saturation. The analysis was incorrect. I find that for RF signal levels between -10 dBm and +15 dBm, assuming 3dB insertion loss due to components and 5 dB conversion loss in the mixer, there is no danger of saturation in the I/F part of the circuit.
The key is that the MOSFET mixer used in the demodulation circuit drives an I/F current and not voltage. The I-to-V conversion is done by a transimpedance amplifier and not a voltage amplifier. The confusion arose from interpreting the gain of the first stage of the I/F amplifier as 1 kohm/10 ohm = 100. The real figures of merit we have to look at are the current through, and voltage across, the transimpedance resistor. So I think we should revert to the old setup. This analysis is consistent with an actual test I did on the board, details of which may be found here.
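For a rough sense of the levels involved, a sketch of the dBm bookkeeping under the stated assumptions (3 dB insertion loss, 5 dB conversion loss); this is just the level budget, not the full transimpedance analysis:

import numpy as np

def dbm_to_vpk(p_dbm, z0=50.0):
    """Peak voltage of a sinusoid at power p_dbm into impedance z0."""
    return np.sqrt(2 * 1e-3 * 10**(p_dbm / 10) * z0)

for p_rf in (-10, 0, 15):
    p_if = p_rf - 3 - 5  # insertion + conversion losses, as assumed above
    print(f"RF {p_rf:+3d} dBm -> I/F {p_if:+3d} dBm, ~{1e3*dbm_to_vpk(p_if):.0f} mVpk into 50 ohm")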
We may still benefit from some whitening of the signal before digitization between 10-100 Hz; I need to check where in the signal chain is an appropriate place to put in some whitening, as there are some constraints on the circuit topology because of the MOSFET mixer.
One part of the circuit topology I'm still confused by is the choice of impedance-matching transformer at the RF-input of this demod board - why is a 75 ohm part used instead of a 50 ohm part? Isn't this going to actually result in an impedance mismatch given our RG405 cabling?
Update: Having pulled out the board, it looks like the input transformer is an ADT-1-1, and NOT an ADT1-1WT as labelled on the schematic. The former is indeed a 50ohm part. So it makes sense to me now.
Since we have the NF1611 fiber coupled PDs, I'm going to try reviving the X arm ALS to check out what the noise is after bypassing the suspect Menlo PDs we were using thus far. My re-analysis can be found in the attached zip of my ipynb (in PDF form).
I've suspected that the TTs are drifting significantly over the course of the last couple of days, because despite repeated alignment efforts, the AS beam spot has drifted off the center of the camera view. I tried looking at IPPOS, but found that there was no data. Looking at the table, the QPD was turned backwards, and the DAQ cable wasn't connected (neither at the PD end, nor at 1Y2, where instead, a cable labelled "Spare QPD" was plugged in). Fortunately, the beam was making it out of the vacuum. So as to have a quantitative diagnostic, I reconnected the QPD, turned it the right way round, and adjusted the steering onto it such that with the AS spot on the center of the CCD monitor, the beam is also centered on the QPD. The calibration is uncertain, but at least we will be able to see how much the spot drifts on the QPD over some days. Also, we only have 16 Hz readback of this stuff.
I leave it to Chub to take the high-res photo and update the wiki, which was last done in 2012.
Already, in the last ~1 hour, there has been considerable drift - see Attachment #2. The spot, which started at the center of the CCD monitor, has now nearly drifted off the top end. The ITMX and BS Oplev spots have been pretty constant over the same timescale, so it has to be the TTs?
In an earlier elog, I had claimed that the suspected clipping of the cavity axis in the Y arm was not solved even after shifting the heater. I now think that it is extremely unlikely that there is still clipping due to the heater. Nevertheless, the ASS system is not working well. Some notes:
We have to systematically re-commission the ASS system to get to the bottom of this.
I have swapped our martian router's WiFi security over to WPA2 (AES) from the previous, less-secure, system. Creds are in the secrets-40-red.
The old IBM laptop (Asia) has died from a fan error after 7 years. We have a new Lenovo IdeaPad 330 to replace it:
Install done. Touchpad not recognized by Linux - lots of forum posts about kernel patching... Arrgh!
To complete the story before moving on to ALS, I decided to measure the X arm loss. It is estimated to be 20 +/- 5 ppm. This is surprising to say the least, so I'm skeptical - the camera image of the ETMX spot when locked almost certainly looks brighter than in Oct 2016, but I don't have numerical proof. That said, I don't see any obvious red flags in the data quality/analysis yet. If true, this suggests that the "cleaning" of the Y arm optics actually did more harm than good, and if that's true, we should attempt to identify where in the procedure the problem lies - was it my usage of non-optical-grade solvents?
# Create an ext4 filesystem on the backup disk:
sudo mkfs -t ext4 /dev/sdb
# Clone the system disk onto the backup disk block-for-block
# (noerror: continue past read errors; sync: pad short blocks with zeros):
sudo dd if=/dev/sda of=/dev/sdb bs=64K conv=noerror,sync
controls@c1vac:~$ sudo dd if=/dev/sda of=/dev/sdb bs=64K conv=noerror,sync
[sudo] password for controls:
^C283422+0 records in
283422+0 records out
18574344192 bytes (19 GB) copied, 719.699 s, 25.8 MB/s
While working on the vac controls today, I also took care of some of the remaining to-do items. Below is a summary of what was done, and what still remains.
The acromags are on the UPS. I suspect the transient came in on one of the signal lines. Chub tells me he unplugged one of the signal cables from the chassis around the time things died on Monday, although we couldn't reproduce the problem doing that again today.
In this situation it wasn't the software that died, but the acromag units themselves. I have an idea to detect future occurrences using a "blinker" signal: one acromag outputs a periodic signal which is directly sensed by another acromag. This can be implemented as another polling condition enforced by the interlock code.
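A minimal sketch of the blinker idea (channel names are hypothetical): one Acromag DO toggles a line that a second Acromag DI senses, and the poller trips if the readback stops following the drive.

import time
import epics  # pyepics

DRIVE = 'C1:VAC-BLINK_OUT'  # hypothetical heartbeat output channel
SENSE = 'C1:VAC-BLINK_IN'   # hypothetical readback channel

state = 0
while True:
    state ^= 1
    epics.caput(DRIVE, state)
    time.sleep(1.0)  # allow the readback to update
    if epics.caget(SENSE) != state:
        # hand off to the interlock code here: close valves, send an alert
        raise RuntimeError('Acromag heartbeat lost')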
If the acromags lock up whenever there is an electrical spike, shouldn't we have them on UPS to smooth out these ripples? And wasn't the idea to have some handshake/watchdog system to avoid silently dying computers?
A more permanent fix than a crocodile clip was implemented. Should probably look to do this for the X end unit as well.
I restarted c1scy, c1rfm (so both sender and receiver models were cycled) and power-cycled the c1iscey and c1sus machines. The TRY PD is certainly seeing light - it is just not getting piped over to c1rfm. dmesg doesn't give any clues. I'm out of ideas.
P.S. The new reality seems to be that getting ITMY stuck in the event of a c1susaux reboot is inevitable. As is the practice for ITMX, I tried slowly ramping the PIT and YAW biases to 0 - but in the process of ramping YAW, the optic got stuck. I am ramping in steps of 0.1 (in units of the PIT/YAW sliders), waiting ~3 seconds between steps - I guess I can try ramping even more slowly (sketch below).
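For reference, a sketch of the ramp as I'm doing it (the channel name is hypothetical; 0.1 slider units per step, ~3 s dwell):

import time
import numpy as np
import epics  # pyepics

def ramp_to_zero(chan, step=0.1, dwell=3.0):
    """Slowly walk an alignment bias slider to zero."""
    val = epics.caget(chan)
    if val == 0.0:
        return
    for v in np.arange(val, 0.0, -np.sign(val) * step):
        epics.caput(chan, v)
        time.sleep(dwell)
    epics.caput(chan, 0.0)

ramp_to_zero('C1:SUS-ITMY_YAW_COMM')  # hypothetical channel name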
Update: I power cycled the physical RFM switch. This necessitated reboot of all vertex FEs. But seems like things are back to normal now...
Note: to unstick ITMY, seems like the best approach is:
The pressure is still 2e-4 torr according to CC1 so I thought I'd give ASS debugging a go tonight. But the arm transmission signal isn't coming through to the LSC model from the end PDs - so a resurfacing of this problem. Rebooting the sender model, c1scy, did not fix the problem. Moreover, c1susaux is dead. The last time I rebooted it, ITMY got stuck so I'm not going to attempt a revival tonight.
The problem encountered with the vac controls was indeed resolved via the recommendation I posted yesterday. The Acromags had gone into a protective state (likely caused by an electrical transient in one of the signals) that could only be cleared by power cycling the units. After resetting the system, the main volume pressure dropped quickly and is now < 2e-5 torr, so normal operations can resume. For future reference, below is the procedure to safely reset these units from a trouble state.
From the measurements I have, the Y arm loss is estimated to be 58 +/- 12 ppm. The quoted values are the median (50th percentile) and the distance to the 25th and 75th quantiles. This is significantly worse than the ~25 ppm number Johannes had determined. The data quality is questionable, so I would want to get some better data and run it through this machinery and see what number that yields. I'll try and systematically fix the ASS tomorrow and give it another shot.
Model and analysis framework:
Johannes and I have cleaned up the equations used for this calculation - while we may make more edits, the v1 of the document lives here. The crux of it is that we would like to measure the ratio of the power reflected from the arm with the cavity resonant to that with the ETM misaligned (i.e. reflection off just the ITM). This quantity can then be used to back out the round-trip loss in the resonant cavity, with further model parameters which are:
If we ignore the 3rd for a start, we can calculate the "expected" value of this ratio as a function of the round-trip loss, for some assumed uncertainties on the above-mentioned model parameters. This is shown in the top plot in Attachment #1, and while this was generated using emcee, it is consistent with the first-order uncertainty propagation based result I posted in my previous elog on this subject. The actual samples of the model parameters used to generate these curves are shown in the bottom plot. What this is telling us is that even if we have no measurement uncertainty on the ratio, the systematic uncertainties are of the order of 5 ppm, for the assumed variation in model parameters.
The same machinery can be run backwards - given multiple measurements of the ratio, we also have a sample variance. The uncertainty on the sample variance estimator is also known, and serves to quantify the prior distribution on this parameter for our Monte-Carlo sampling; the sample variance itself is required to quantify the likelihood of a given set of model parameters, given our measurement. Plugging in my best estimate of the sample variance from this week's measurements, and assuming uncorrelated gaussian uncertainties on the model parameters, I can back out the posterior distributions.
For convenience, I separate the parameters into two groups - (i) All the model parameters excluding the RT loss, and (ii) the RT loss. Attachment #2 and Attachment #3 show the priors (orange) and posteriors (black) of these quantities.
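For concreteness, a stripped-down sketch of the sampling setup; the reflectivity model, priors, and numbers below are placeholders rather than the ones in the writeup:

import numpy as np
import emcee

# Placeholder measurement: reflected-power ratio and its sample variance.
ratio_meas, var_meas = 0.95, 1e-4
mm_mu, mm_sig = 0.985, 0.005  # e.g. a gaussian prior on the mode-matching

def model_ratio(L_rt, mm):
    # stand-in for the cavity reflectivity model in the writeup
    return mm * (1.0 - 300.0 * L_rt) + (1.0 - mm)

def log_prob(theta):
    L_rt, mm = theta
    if not (0.0 < L_rt < 200e-6 and 0.0 < mm < 1.0):
        return -np.inf  # flat prior bounds on the RT loss
    lp = -0.5 * ((mm - mm_mu) / mm_sig)**2  # gaussian prior on mode-matching
    return lp - 0.5 * (model_ratio(L_rt, mm) - ratio_meas)**2 / var_meas

nwalkers = 32
p0 = np.column_stack([np.random.uniform(1e-6, 1e-4, nwalkers),
                      np.random.normal(mm_mu, mm_sig / 2, nwalkers)])
sampler = emcee.EnsembleSampler(nwalkers, 2, log_prob)
sampler.run_mcmc(p0, 5000)
posterior = sampler.get_chain(discard=1000, flat=True)  # columns: (loss, mode-matching)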
I'm posting these details so that the experts on MC analysis can correct me where I'm wrong.
I sent Gautam instructions to first try stopping the modbus service, power cycling the Acromag chassis, then restarting the service. I've seen the Acromags go into an unresponsive state after a strong electrical transient or shorted signal wires, and the unit has to be power cycled to be reset.
If this doesn't resolve it, I'll come in tomorrow to help with the Acromag replacement. We have plenty of spares.
One of the XT1111 units (XT1111a) in the new vacuum system has malfunctioned. So all valves are closed, PSL shutter is also closed, until this is resolved.
Pressure of the main volume seems to have stabilized - see Attachment #3, so it should be fine to leave the IFO in this state overnight.
The whole point of the upgrade was to move to a more reliable system - but it seems quite flaky already.
Attachment #1 shows estimated systematic uncertainty contributions due to
In all the measurements so far, the ratio seems to be < 1, so this would seem to set a lower bound on the loss of ~35 ppm. The dominant source of systematic uncertainty is the 5% assumed fudge in the mode-matching.
Bottom line: I think we need to have other measurements and simultaneously analyse the data to get a more precise estimate of the loss.
There are still several data quality issues that can be improved. I think there is little point in reading too much into this until some of the problems outlined below are fixed and we get a better measurement.
As an interim fix, I'm going to try and use the Oplevs as a DC reference, and run the dither alignment from zero each time, as this prevents the runaway problem at least. Data run started at 11:20 pm.
Another arm loss measurement started at 6pm.
To measure the Y-arm loss, I decided to start with the classic reflectivity method. To prepare for this measurement, I did the following:
I'm running a measurement tonight, starting now (~1130PM), should be done in ~1hour, may need to do more data-quality improvements to get a realistic loss number, but I figured I'd give this a whirl.
I'm rather pleased with an initial look at the first align/misalign cycle; at least there is discernible contrast between the two states - Attachment #2. The data is normalized by the MC transmission, and then decimated by 512x (three passes of scipy.signal.decimate with a factor of 8; see the sketch below).
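The decimation step, for reference (the file name is hypothetical; each pass uses a small factor, as scipy recommends, so 512x is done as three passes of 8):

import numpy as np
from scipy import signal

raw = np.load('armloss_segment.npy')  # hypothetical file: rows = [TRY, MC_TRANS]
ts = raw[0] / raw[1]                  # normalize TRY by the MC transmission
for _ in range(3):                    # 8**3 = 512x total decimation
    ts = signal.decimate(ts, 8, zero_phase=True)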
Since we changed the HeNe, I updated the calibration factors, and accepted the changes in the SDF.
Rich came by the 40m to photocopy some pages from Hobbs, and saw me working on the 60 Hz hunting. As I suspected, the problem was being generated in the D040060. This board receives the photodiode signal single-ended, but has a different power ground than the photodiode (even though the PD is plugged into a power strip that claims to come from 1Y4). The mechanism is not entirely clear - the presence of these 60 Hz features seemed to depend on the light level on the TRY photodiode (i.e. they were absent when the PSL shutter was closed, and more prominent when TRY was 0.9 rather than 0.5) - but the PD certainly wasn't saturated; the DC signal was only ~100 mV when viewed on a scope. In any case, Rich suggested the simplest test would be to ground the BNC shield bringing TRY to the rack to the local ground on the board, which I did using a crocodile clip. This did the trick - the TRY signal RMS is now dominated by the ~1 Hz seismic-driven variation.
On a more pessimistic note - it looks like moving the elliptical reflector did not work, and the clipping in the Y arm persists. I am able to recover TRY~1 with the yaw offset on the ETM (still lower than the 1.06-1.07 Koji reported in Aug 2018, but I can believe that being down to the MC transmission being a few % lower, at 15000 cts rather than 15500), while the maximum I see without it is ~0.9. This is puzzling, because when the chamber was open, we saw that there was ~1.5" clearance between the edge of the reflector and the beam on an IR card. I suppose the input pointing could have been off by a small amount. So one of the primary vent objectives wasn't achieved... But I will push ahead with the loss measurement.
Several housekeeping tasks were carried out today in preparation for the Y-arm loss measurement.
[Attachment #1]: Computed spectral power transmissivity (according to my model) for the coating design at a few angles of incidence. Behavior lines up well with what FNO measured, although I get a transmission that is slightly lower than measured at 45 degrees. I suspect this is because of slight changes in the dispersion relation assumed and what was used for the coating in reality.
[Attachment #2]: Similar information as Attachment #1, but with the angle of incidence as the independent parameter in a continuous sweep.
Conclusion: The coating behaves in a way that is in reasonable agreement with our model. At 41.1 degrees, which is what the PR3 angle of incidence will be, T<50 ppm, which was what we specified. The larger range of angles was included because originally, we thought of using this optic as a substitute for SR3 as well. But I claim that for the shorter SRC (signal recycling as opposed to RSE), we will not end up using the new optic, but rather go for the G&H mirror. In any case, as Koji pointed out, ~50 ppm extra loss in the RC will not severely limit the recycling gain. Such large variation was not seen in the MC analysis because we only varied the angle of incidence by +/- 0.5 degrees about the nominal design value of 41.1 degrees.
As it turns out, now ITMY has a tendency to get stuck. I found it MUCH more difficult to release the optic using the bias jiggling technique, it took me ~ 2 hours. Best to avoid c1susaux reboots, and if it has to be done, take precautions that were listed for ITMX - better yet, let's swap out the new Acromag chassis ASAP. I will do the arm locking tests tomorrow.
They have been stored on the 3rd shelf from the top in the clean optics cabinet at the south end.
5 PR3/SR3 optics from FiveNine Optics were delivered. The data sheets were scanned and uploaded to the following wiki page. https://wiki-40m.ligo.caltech.edu/Aux_Optics
I did some tests of the electronics chain today.
Hypothesising a bad connection between the sat box output J1 and the flange connection cable: indeed, measuring the OSEM inductance from the DSUB end at the coil-driver board, the UL coil pins showed no inductance reading on the LCR meter, whereas the other 4 coils showed numbers between 3.2-3.3 mH. Suspecting the satellite box, I swapped it out for the spare (S/N 100). This seemed to do the trick - all 5 coil channels read out ~3.3 mH on the LCR meter when measured from the coil driver board end. What's more, the damping behavior seemed more predictable - in fact, Rana found that all the loops were heavily overdamped. For our suspensions, we presumably want the loops close to critically damped - overdamping imparts excess displacement noise to the optic, while underdamping doesn't work either. In past elogs, I've seen a directive to aim for Q~5 for the pendulum resonances, so when someone does a systematic investigation of the suspensions, this will be something to look out for (see the sketch below). These flaky connectors are proving pretty troublesome - let's start testing some prototype new Sat Boxes with a better connector solution. I think it's equally important to have a properly thought-out monitoring connector scheme, so that we don't have to frequently plug/unplug connectors in the main electronics chain, which may lead to wear and tear.
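When someone does that systematic investigation, a quick way to check the damped Q of each mode is to fit the ringdown envelope after a kick. A sketch with synthetic placeholder data (in practice the envelope would come from demodulating the OSEM signal at the mode frequency):

import numpy as np
from scipy.optimize import curve_fit

F0 = 0.95  # assumed mode frequency [Hz]

def envelope(t, A, Q):
    # amplitude ringdown of a mode at F0 with quality factor Q
    return A * np.exp(-np.pi * F0 * t / Q)

t = np.arange(0.0, 600.0, 1.0)                                 # time after the kick [s]
data = envelope(t, 1.0, 5.0) + 0.01 * np.random.randn(t.size)  # placeholder data

(A_fit, Q_fit), _ = curve_fit(envelope, t, data, p0=[1.0, 10.0])
print(f"fitted Q ~ {Q_fit:.1f} (directive: aim for Q ~ 5)")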
The input and output matrices were reset to their "naive" values - unfortunately, two eigenmodes still seem to be degenerate to within 1 mHz, as can be seen from the below spectra (Attachment #1). Next step is to identify which modes these peaks actually correspond to, but if I can lock the arm cavities in a stable way and run the dither alignment, I may prioritize measurement of the loss. At least all the coils show the expected 1/f**2 response at the Oplev error point now. The coil output filter gains varied by ~ factor of 2 among the 4 coils, but after balancing the gains, they show identical responses in the Oplev - Attachment #2.
The full 1 W is again being sent into the IMC. We have left the PBS+HWP combo installed as Rana pointed out that it is good to have polarization control after the PMC but before the EOM. The G&H mirror setup used to route a pickoff of the post-EOM beam along the east edge of the PSL table to the AUX laser beat setup was deemed too flaky and has been bypassed. Centering on the steering mirror and subsequently the IMC REFL photodiode was done using an IR viewer - this technique allows one to geometrically center the beam on the steering mirror and PD, to the resolution of the eye, whereas the voltage maximization technique using the monitor port and an o'scope doesn't allow the former. Nominal IMC transmission of ~15,000 counts has been recovered, and the IMC REFL level is also around 0.12, consistent with the pre-vent levels.
[chub, steve, gautam]
Steve came by the lab today. He advised us to turn the RGA on again, now that the main volume pressure is < 20 uTorr. I did this by running the RGAset.py script on c0rga - the temperature of the unit was 22C in the morning, after ~3 hours of the filament being turned on, the temperature has already risen to 34 C. Steve says this is normal. We also opened VM1 (I had to edit the interlocks.yaml to allow VM1 to open when CC1 < 20uTorr instead of 10uTorr), so that the RGA volume is exposed to the main volume. So the nightly scans should run now, Steve suggests ignoring the first few while the pumpdown is still reaching nominal pressure. Note that we probably want to migrate all the RGA stuff to the new c1vac machine.
Other notes from Steve:
The Central Plant building will be undergoing seismic upgrades in the near future. The adjoining north wall along the Y arm will be the first to have this work done, from inside the Central Plant. Project manager Eugene Kim has explained the work to me and also noted our concerns. He assured me that the seismic noise from the construction will be minimized and we will always be contacted when the heaviest construction is to be done.
Tomorrow at 11am, I will bring Mr. Kim and a few others from the construction team to look at the wall from inside the lab. If you have any questions or concerns that you want to have addressed, please email them to me or contact Mr. Kim directly at x4860 or through email at firstname.lastname@example.org .
Pumpdown looks healthy, so I'm leaving the TPs on overnight. At some point, we should probably get the RGA going again. I don't know that we have a "reference" RGA trace that we can compare the scan to, should check with Steve. The high power (1 W) beam has not yet been sent into the vacuum, we should probably add the interlock condition that shuts off the PSL shutter before that.