I found the c1lsc machine to be completely unresponsive today. Looking at the trend of the state word, it happened sometime yesterday (Saturday). The usual reboot procedure did not work - I am not able to bring back any of the models on any of the machines; they all fail during the restart procedure. The logfile reads (for the c1ioo front end, but they all behave the same):
Not sure what is going on here, or what "Corrupted EPICS data" is supposed to mean. Thinking that something was messed up the last time the model was compiled, I tried recompiling the IOP model. But I'm not able to even compile the model; it fails with the error message
I suspect this is some kind of path problem - the EPICS_BASE bash variable is set to /cvs/cds/rtapps/epics-3.14.12.2_long/base on the FEs, while /cvs isn't even mounted on the FEs (nor do I think it should be). I think the correct path should be /opt/rtapps/epics-3.14.12.2_long/base. Why should this have changed?
I've shutdown all watchdogs until this is resolved.
As suspected, this was indeed a path problem. Johannes will elog about it later, but in short, it is related to some path variables being changed in order to try and streamline the EPICS processes on the new c1auxex machine (Acromag Era). It is confusing that futzing around with the slow computing system messes with the realtime system as well - aren't these supposed to be decoupled? Once the paths were restored by Johannes, everything compiled and restarted fine. We even have a beam on the AS camera, which was what triggered this whole thing.
Anyways, Attachment #1 shows the current status. I am puzzled by the red TIMING indicators on the c1x04 and c1x02 processes; they are absent from all the other processes. How can this be debugged further?
[jon, steve, gautam]
Some points which Jon will elaborate upon (and put photos of) in his detailed elog about this setup:
We are now in a state where the PLL can be locked remotely from the control room by tweaking the AUX laser temperature. Tomorrow, Keerthana will work on getting Craig's/Johannes' Digital Frequency Counter script working here; I think we can easily implement a PLL autolocker if we have some diagnostic that tells us whether the PLL is locked or not.
Steve informed me that there is an acoustic hum inside the PSL enclosure which wasn't there before. Indeed, it is at ~295Hz, and is from the Bench power supply used to power the ZFL500HLN amplifier. This will have to go...
Since we think we already know the stack mass to ~25% (i.e. 5000 +/- 1000 lbs), we decided to restore the ETMX stack. Procedure followed was:
I will upload the photos to the PICASA page and post the link here later.
In this case, we only need a mass estimate of the end chamber contents with an accuracy of ~25%. If we think we have that already, we don't need to keep doing the jacks-strain gauge adventure.
Since there have been various software/hardware activity going on (stack weighing, AUX laser PLL, computing timing errors etc etc), I decided to do a check on the state of the IFO.
As I suspected, when the SR560 is operated in 1 Hz, first order LPF mode, the (electronic) transfer function has a zero at ~5kHz (!!!).
This is what allowed the PLL to be locked with this setting with UGF of ~30kHz. On the evidence of Attachment #3, there is also some flattening of the electrical TF at low frequencies when the SR560 is driving the NPRO PZT. I'm pretty sure the flattening is not a data download error but since this issue needs further investigation anyway, I'm not reading too much into it. I fit the model with LISO but since we don't have low frequency (~1Hz) data, the fit isn't great, so I'm excluding it from the plots.
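To make the claim concrete, here is a minimal sketch (not the LISO fit itself) of a one-pole, one-zero model consistent with the description above - the 1 Hz corner is nominal, and the ~5 kHz zero is the feature inferred from the measurement:

# Minimal pole/zero model of the SR560 in its 1 Hz, 1st-order LPF setting.
# This is an illustrative sketch, NOT the LISO fit; the 5 kHz zero is the
# surprise feature seen in the measured electronic TF.
import numpy as np
import matplotlib.pyplot as plt

f = np.logspace(0, 5, 500)              # 1 Hz to 100 kHz
s = 2j * np.pi * f
f_pole, f_zero = 1.0, 5e3               # Hz

H = (1 + s / (2 * np.pi * f_zero)) / (1 + s / (2 * np.pi * f_pole))

fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True)
ax1.loglog(f, np.abs(H)); ax1.set_ylabel('Magnitude')
ax2.semilogx(f, np.angle(H, deg=True)); ax2.set_ylabel('Phase [deg]')
ax2.set_xlabel('Frequency [Hz]')
plt.show()

The zero flattens the 1/f rolloff above 5 kHz, which is what lets the loop hold a ~30 kHz UGF with this box in the path.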
We also did some PLL loop characterization. We decided that the higher output range of the LB1005 controller (compared to the SR560's 10 Vpp) makes it a better option for the PLL. The lock state can also be triggered remotely. It was locked with UGF ~60 kHz, PM ~45 deg.
We also measured the actuation coefficient of the NPRO laser PZT to be 4.89 +/- 0.02 MHz/V. Quoted error is (1-sigma) from the fit of the linear part of the measured transfer function to a single pole at DC with unknown gain. I used the "clean" part of the measurement that extends to lower frequencies for the fit, as can be seen from the residuals plot. Good to know that even though the LDs are dying, the PZT is still going strong :D.
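For the record, the fit described above amounts to fitting |H(f)| = K/f over the clean band. A sketch of how that goes (the data arrays are placeholders, and I'm glossing over the PLL readout calibration that converts K into MHz/V):

# Fit the "single pole at DC with unknown gain" model, |H(f)| = K/f, to the
# clean low-frequency part of the measured PZT TF. Data here are fake
# placeholders; the 1-sigma error comes from the fit covariance.
import numpy as np
from scipy.optimize import curve_fit

def pole_at_dc(f, K):
    return K / f

f_meas = np.logspace(1, 3, 50)                       # Hz, clean band only
mag_meas = 4.89e6 / f_meas * (1 + 4e-3 * np.random.randn(f_meas.size))

popt, pcov = curve_fit(pole_at_dc, f_meas, mag_meas, p0=[5e6])
K, K_err = popt[0], np.sqrt(pcov[0, 0])
print(f'PZT coefficient: {K/1e6:.2f} +/- {K_err/1e6:.2f} MHz/V')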
Remaining loop characterization (i.e. verification of correct scaling of in loop suppression with loop gain etc.) is left to Jon.
Some other remarks:
The EPICS process on the c1ioo front end had died mysteriously. As a result, the MC autolocker wasn't working, since the autolocker control variables are EPICS channels defined in the c1ioo model. I restarted the model, and now the MC autolocker works.
I'm working near 1X5 and there is an SR785 adjacent to the electronics rack with some cabling running along the floor. I plan to continue in the evening so please leave the setup as is.
During the course of this work, I noticed the +15V Sorensen in 1X6 has 6.8 A of current draw, while Steve's February 2018 label says the current draw is 8.6 A. Is this just a typo?
Steve: It was most likely my mistake. Tag is corrected to 6.8A
I'm still in the process of electronics characterization, so the SR785 is still hooked up. MC3 coil driver signal is broken out to measure the output voltage going to the coil (via Gainx100 SR560 Preamp), but MC is locked.
I setup a basic MEDM screen for remote control of the PLL.
The Slow control voltage slider allows the frequency of the laser to be moved around via the front panel slow control BNC.
The TTL signal slider provides 0/5V to allow triggering of the servo. Eventually this functionality will be transferred to the buttons (which do not work for now).
The screen can be accessed from the PSL dropdown menu in sitemap. We can make this better eventually, but this should suffice for initial setup.
In the IMC actuation chain, it looks like the MC1/MC3 de-whitening boards, and also all three MC optics' coil driver boards, are showing higher noise than expected from LISO modeling. One possible candidate is thick film resistors on the coil driver boards. The plan is to debug these further by pulling the board out of the Eurocrate and investigating on the electronics bench.
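As a sanity check on the thick film hypothesis, the thermal (Johnson) floor of a series resistor is sqrt(4*k_B*T*R); a quick calculation (resistor values illustrative):

# Johnson noise floor e_n = sqrt(4*k_B*T*R) for a few example resistances.
# Thick film parts add excess 1/f current noise on top of this thermal
# floor, which is why they are a suspect here.
import numpy as np

k_B = 1.380649e-23      # J/K
T = 298.0               # K
for R in [100.0, 1e3, 15e3]:          # ohm, illustrative values
    e_n = np.sqrt(4 * k_B * T * R)
    print(f'R = {R:8.0f} ohm -> {e_n*1e9:5.2f} nV/rtHz')

A 1 kohm resistor sits at ~4 nV/rtHz at room temperature, so anything well above that level points to either larger resistances in the signal path or non-thermal (e.g. 1/f) noise.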
Why bother? Mainly because I want to see how good the IR ALS noise is, and currently, the PSL frequency noise is causing the measurement to be worse than references taken from previous known good times.
Some time ago, Rana suggested to me that I should do this measurement more systematically.
I've now restored all the wiring at 1X6 to their state before this work.
I guess it's fine for now while we are still finalizing the setup at EX, but we should eventually line up the seismometer axes with the IFO axes. Is there a photo of the seismometer's orientation from before the heater can tests? If not, it's probably good to make some sort of markings on the granite slab / seismometer to allow easy lining up of these axes...
Since we've been hijacking channels like there is no tomorrow for the AUX-PLL setup, I'm documenting the channel names here. The next time c1psl requires a reboot, I'll rename these channels to something more sensible. To find the channel mapping, Koji suggested I use this. Has worked well for us so far... We've labelled all pairs of wires pulled out of the cross connects and insulation taped the stripped ends, in case we ever need to go back to the original config.
To mitigate integrator railing
I have pulled out MC1 coil driver board from its Eurocrate, so IMC is unavailable until further notice. Plans:
If there are no objections, I will execute Step #5 in the next couple of hours. I'm going to start with Steps 1-4.
This work is now complete. MC1 coil driver board has been reinstalled, local damping of MC1 restored, and IMC has been locked. Detailed report + photos to follow, but measurement of the noise (for one channel) on the electronics workbench shows a broadband noise level of 5 nV/rtHz around 100 Hz, which is lower than what was measured here and consistent with what we expect from LISO modeling (with the fast input terminated with 50 ohm and the slow input grounded).
In any case, if it is indeed true that the optic sees this current noise, the place to make the measurement is probably the Sat. Box. Who knows what the pickup is over the ~15m of cable from 1X6 to the optic.
All models on the c1lsc front end were dead. Looking at slow trend data, looks like this happened ~6hours ago. I rebooted c1lsc and now all models are back up and running to their "nominal state".
Details and discussion: (diagrams to follow)
I find this hard to believe.
As I see it, the possibilities are:
I guess #3 can be tested by varying the polarization content of one of the input beams through 90 degrees.
A couple of months ago, I took 21 measurements of the delay line transfer function. As shown in Attachment #2, the unwrapped phase is more consistent with a cable length closer to 45m rather than 50m (assuming speed of light is 0.75c in the cable, as the datasheet says it is).
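The length estimate follows from the fact that a pure delay has phi(f) = -2*pi*f*tau. A sketch of the extraction (the phase array below is a placeholder for the measured data):

# Cable length from the unwrapped phase of the delay line TF: a pure delay
# gives phi(f) = -2*pi*f*tau, so a linear fit of unwrapped phase vs
# frequency yields tau, and L = v*tau with v = 0.75c from the datasheet.
import numpy as np

c = 299792458.0          # m/s
v = 0.75 * c             # propagation speed in the cable (datasheet)

# placeholders for measured frequency [Hz] and unwrapped phase [rad]:
f = np.linspace(1e6, 100e6, 1000)
phase = -2 * np.pi * f * (45.0 / v)   # fake data for a 45 m cable

slope = np.polyfit(f, phase, 1)[0]    # rad/Hz
tau = -slope / (2 * np.pi)            # one-way delay, s
print(f'Delay {tau*1e9:.1f} ns -> length {v*tau:.1f} m')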
Attachment #1 shows the TF magnitude for the same measurements. There are some ripples consistent with reflections, so something in this system is not impedance matched. I believe I used the same power splitter to split the RF source between the delayed and undelayed paths for these TFs as is used in the current DFD setup to split the RF beatnote.
I had made some TF measurements of the delay line some time ago; I need to dig up the data and see what number that measurement yields.
Last night, Rana fact-checked my story about the coil driver noise measurement. Conclusions:
Note: All measurements were made with the fast input of the coil driver board terminated with 50ohms and bias input shorted to ground with a crocodile clip cable.
The first goal is to figure out where this pickup is happening, and if it is actually going to the optic. To this end, I will put a passive 100 kHz filter between the coil driver output and the preamp (Busby Box instead of SR560). By getting a clean measurement of the noise floor with the coil driver board in the Eurocrate (with the bias input driven), we can confirm that the optic isn't being buffeted by the excess coil driver noise. If we confirm that the excess noise is not a measurement artefact, we need to think about where the pickup is actually happening and come up with mitigation strategies.
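For the passive filter, the corner is just f_c = 1/(2*pi*R*C); for example (component values illustrative, not a chosen design):

# Corner-frequency arithmetic for the proposed passive filter between the
# coil driver output and the preamp.
import numpy as np

R = 1.6e3      # ohm (illustrative)
C = 1e-9       # F   (illustrative)
f_c = 1 / (2 * np.pi * R * C)
print(f'f_c = {f_c/1e3:.0f} kHz')   # ~100 kHz for these values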
RXA: good section EMI/RFI in Op Amp Applications handbook (2006) by Walt Jung. Also this page: http://www.electronicdesign.com/analog/what-was-noise
We noticed quite a strong burning smell in the office area and control room ~20mins ago. We did a round of the bake lab, 40m VEA and the perimeter of the CES building, and saw nothing burning. But the smell persists inside the office area/control room (although it may be getting less noticeable). There is a whining noise coming from the fan belt on top of the office area. Anyways, since nothing seems to be burning down, we are not investigating further.
Steve [ 10am 5-31 ] we should always check particle count in IFO room
Seems like as a result of my recent poking around at 1X5, MC3 is more glitchy than usual (I've noticed that the IMC lock duty cycle seems degraded since Tuesday). I'll try the usual cable squishing voodoo.
gautam 8.15pm: Glitches persisted despite my usual cable squishing. I've left PSL shutter closed and MC watchdog shutdown to see if the glitches persist. I'll restore the MC a little later in the eve.
Jon informed me that there are some EPICS channels that JoeB's camera server code looks for that don't exist. I thought Jigyasa and I had added everything last year, but that turned out not to be the case. I followed my instructions from here, which did the trick. While cleaning up, I also renamed the "*MC1" channels to "*ETMX", since that's where the camera now resides. New channels are:
C1:CAM-ETMX_ARCHIVE_INTERVAL (Archival interval in minutes)
C1:CAM-ETMX_ARCHIVE_RESET (Reset Archival interval in minutes)
C1:CAM-ETMX_CONFIG_FILE (Config file)
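A quick way to sanity-check the renamed channels from a control room machine, assuming pyepics is available (the value written here is arbitrary):

# Exercise the renamed camera channels; a sketch assuming pyepics.
from epics import caget, caput

caput('C1:CAM-ETMX_ARCHIVE_INTERVAL', 10)   # archive every 10 minutes (arbitrary)
print(caget('C1:CAM-ETMX_ARCHIVE_INTERVAL'))
print(caget('C1:CAM-ETMX_CONFIG_FILE', as_string=True))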
I wanted to recover the DRMI locking. Among other things, Jon mentioned that his mode spectroscopy can be done in the DRMI config. But I was foiled last night by a rogue waveplate in the AS beampath, and this evening, I noticed this problem resurfacing. Clearly, this is indicative of some issue in the analog whitening electronics, as the DC light level on the AS55 PD is consistent with previous measurements. Moreover, last time, the problem "fixed itself", so I don't know what exactly the problem was in the first place. I'll try doing the same test in the linked elog tomorrow. As a quick test, I cycled through the whitening gains (0-45dB) to see if it was some stuck ADC register, but that didn't fix the problem.
The problem seems to be with REFL55 only - I am able to lock the PRMI with the carrier resonant without any issues, and the error signal levels are consistent with what I remember them being while the PRMI is swinging around. AS55 lives on the same whitening board and doesn't seem to suffer from the same problems.
Decided to do the check tonight, but as Attachment #1 shows, no real red flags from the whitening gain side.
As it happened last time, the problem apparently fixed itself - somehow the act of disconnecting the cables and reconnecting them seems to solve the problem; need to think about this.
Anyway, DRMI was locked a few times tonight. I got in a good long stretch where I ran some sensing lines and collected some data, analysis tomorrow. I am going to center the vertex oplevs as an alignment reference for now. A major source of lockloss seems to be angular instability - see for example this video grab of POP:
Could be due to noise injection from the noisy PRM Oplev HeNe, or just TT mirror angular motion (I couldn't get the PRC angular FF going tonight).
I couldn't locate an appropriate heat sink for the driver, which is still in factory condition, but since the PSL AOM also runs at 80 MHz, I used that one instead.
We have the appropriate heatsink - I'd like to minimize interference with the main beam wherever possible.
For the PSL beat the AOM drive is not needed, and the power in the optical fiber should not exceed 100 mW, so the offset voltage to the AOM RF driver has to remain below 300 mV.
If damage to the fiber is a concern, I think it's better to use a PBS + waveplate to attenuate the power going into the fiber. When the AOM switching is hooked up to CDS, it's easy to imagine a wrong button being pressed or a wrong value being typed in.
It would probably also be good to have a pickoff monitor for the NPRO DC power so that we can confirm its health (in the short run, we can hijack a PSL Acromag channel for this purpose, as we now do for FSS_RMTEMP). I don't know that we need an EOM for the PLL, as in order to get that going, we probably need some fast electronics for the EOM path, like an FSS box.
STEVE: I ordered the right heatsink for the AOM driver after Koji pointed out that vertical fins are 20% more efficient. Why? Because hot air rises. It will be here in 3-4 days.
I was wondering why the PMC modulation sidebands are showing up on the control room analyzer with ~6dB difference in amplitude. Then I realized that it is reasonable for the cabling to have 6dB higher loss at 80 MHz compared to 20 MHz.
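The scaling argument, made explicit: coax attenuation is skin-effect dominated, so the loss in dB goes roughly as sqrt(f):

# Back-of-envelope check: if the cable run loses A dB at 20 MHz, skin
# effect (loss ~ sqrt(f)) predicts about 2*A dB at 80 MHz, i.e. a 6 dB
# difference for A ~ 6 dB. The 6 dB starting value is an assumption.
import numpy as np

A_20 = 6.0                               # dB loss at 20 MHz (assumed)
A_80 = A_20 * np.sqrt(80e6 / 20e6)       # sqrt(f) scaling
print(f'{A_80 - A_20:.1f} dB extra loss at 80 MHz')   # -> 6.0 dB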
We added the following channels to C0EDCU.ini and restarted the daqd processes. Channels seem to have been added successfully; we will check trend writing later today. Motivation is to have a long term record of annulus pressure (even though we are not currently pumping on the annulus).
plot next day
For some time now, I've been puzzled by the unreliability of the ASS_X dither alignment servo. Leaving the servo on, TRX often begins to decay to a lower value, and even after freezing the dither at the maximum TRX values, I can manually align the mirrors to increase TRX. We have suspected some kind of clipping in the TRX path that is responsible for this behaviour. Today I decided to investigate this a bit further. To have the arm locked and to inspect the beam, we have to change the locking trigger - TRX is what is normally used, but I misaligned the Y arm completely, and used AS110 as a trigger instead. There is some strangeness in the triggering topology, but this deserves a separate elog.
Once the arm was locked (and relocks using the AS110 trigger in the event of an unlock), I was able to trace the beampath on the EX table with an IR card. The TRX beam is rather large and weak, so it is hard to see, but as best as I can tell, the only real danger of clipping (or perhaps the beam is already clipped) is on the final steering mirror before the beam hits the (Thorlabs) PD. Steve/Pooja are working on getting a photo of this, and will upload it here shortly. Options to mitigate this:
The EX QPD has stopped working since the Acromag install. If it were working, we wouldn't have to rely on the alternate triggering with AS110 and could instead just use the QPD as TRX while we debug the Thorlabs PD path.
I thought that the "C1LSC_TRIG_MTRX" MEDM screen completely controls the triggering of LSC signals. But today, while trying to trigger the X-arm locking servo on AS110 instead of TRX, I found some strange behaviour. Summary of important points:
All very strange - not sure what's going on here. The simulink model diagram also didn't give me any clues. Needs further investigation.
FSS slow wasn't running, so the PSL PZT voltage was swinging around a lot. The reason was that c1psl was unresponsive. I keyed the crate, now it's okay. Now ITMX is stuck - Johannes just told me about an un-elogged c1susaux reboot. Seems that ITMX got stuck at ~4:30pm yesterday PT. After some shaking, the optic was loosened. Please follow the procedure in future: if you do a reboot, please elog it and verify that the optic didn't get stuck.
I opted for the quickest fix - I raised the height of the offending steering mirror using a 0.25" shim. In the long term, we can get a taller post machined. After raising the mirror height, I then checked the DC centering of the spot on the DC PD using a scope.
Looking at the performance of the X arm ASS, I no longer see the strange oscillatory behaviour I described in my previous post. Moreover, the TRX level was ~1 before raising the steering mirror - but it is now ~1.2. So we were certainly losing some power.
It isn't clear to me in the drawing where the Agilent is during this measurement. Over 40m of cabling, the loss of signal can be a few dB, and considering we don't have a whole lot of signal in the first place, it may be better to send the stronger RF signal (i.e. Marconi pickoff) over the long cable rather than the weak beat signal from the Transmission photodiode.
Given the various changes to the IFO config since last Thursday, when I was last able to lock the DRMI, I wanted to try once again tonight. However, I had no success. The alignment seems fine, judging by the mode flashes on the cameras. However, despite following the usual alignment procedures, I did not get a single lock tonight.
Perhaps we can use a flip mount on the BS that combines the PSL and AUX beams on the AS table, so we have the option of recovering the usual IFO config when we so desire - while Jon needs the SRC locked for his measurement, it would be nice to not have to figure out the correct demod phases etc each time there is a change in the optical setup of the AUX beam.
Unfortunately, this has happened (and seems like it will happen) enough times that I set up a script for rebooting the machine in a controlled way, hopefully it will negate the need to repeatedly go into the VEA and hard-reboot the machines. Script lives at /opt/rtcds/caltech/c1/scripts/cds/rebootC1LSC.sh. SVN committed. It worked well for me today. All applicable CDS indicator lights are now green again. Be aware that c1oaf will probably need to be restarted manually in order to make the DC light green. Also, this script won't help you if you try to unload a model on c1lsc and the FE crashes. It relies on c1lsc being ssh-able. The basic logic is:
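(In outline - the sketch below is my illustration of that sequence, assuming passwordless ssh to c1lsc and the rtcds model-control wrapper; it is not the literal contents of rebootC1LSC.sh, and the model names and sleep times are illustrative.)

# Illustrative sketch of the reboot sequence, NOT the actual
# rebootC1LSC.sh. Assumes passwordless ssh to c1lsc and the rtcds
# wrapper for starting/stopping models.
import subprocess, time

def ssh(host, cmd):
    subprocess.run(['ssh', host, cmd], check=True)

for model in ['c1oaf', 'c1ass', 'c1lsc', 'c1x04']:    # illustrative order
    ssh('c1lsc', f'rtcds stop {model}')               # unload models gracefully
ssh('c1lsc', 'sudo reboot')                           # reboot the frontend
time.sleep(120)                                       # wait for FE to come back
for model in ['c1x04', 'c1lsc', 'c1ass', 'c1oaf']:
    ssh('c1lsc', f'rtcds start {model}')
    time.sleep(10)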
Why is this happening so frequently now? Last few lines of error log:
I fixed it by running the reboot script.
Per this elog, we don't need any AIOut channels or Oplev channels. However, the latest wiring diagram I can find for the EX Acromag situation suggests that these channels are hooked up (physically). If this is true, there are 12 occupied ADC channels that we can free up for other purposes. Question for Johannes: Is this true? If so, Kira has plenty of channels available for her temperature control stuff...
As an aside, we found that the EPICS channel names for the TRX/TRY QPD gain stages are somewhat strangely named. Looking closely at the schematic (which has now been added to the 40m DCC tree; we can add our custom mods later), they do (somewhat) add up, but I think we should definitely rename them in a more systematic manner, and use an MEDM screen to indicate stuff like x4 or x20 or "Active" etc. BTW, the EX and EY QPDs have different settings. But at least the settings are changed synchronously for all four quadrants, unlike the WFS heads...
Unrelated: I had to key the c1iscaux and c1auxey crates.
I worked a bit on recovering the DRMI locking again tonight. I decided to shutter the AUX laser on the PSL table at least until I figured out the correct locking settings. As has become customary now, there was a cable in the AS beampath (leading from the AS55 DC monitor to nothing, through the enclosure side panel; it is visible in Attachment #3 in this elog) which I only found after 30mins of futility - please try and remove all unnecessary cables and leave the AS beampath in a usable state after working on the AS table! In the end, I got several short (~3min) stretches in tonight, but never long enough to do the loop characterization I wanted - probably wrong gains in one or more of the loops. In the last 30 minutes, the IMC has been frequently losing lock, so I am quitting for now. The AUX laser remains shuttered.
Steve mentioned two unlabelled optics were found at EX, relics from the Endtable upgrade.
These are now labelled and forked down on the SP table.
With Koji's help, I got repeatable and reliable DRMI locking going again tonight - this is with the AS path optics for the spectroscopy measurement in place, although the AUX laser remained shuttered tonight. Results + spectra tomorrow, but here's what I did:
As I have found before, it is significantly easier to get the locking going post 11pm - the wall Seis BLRMS don't look that much quieter at midnight compared to 10pm, but this might be a scaling issue. I'll do a quantitative assessment next time... Also, Foton takes 25-45 secs to save an updated filter (timed twice today).
Attachment #1 shows the measured PRCL loop shape. The blue line is meant to be the "expected" loop shape. While the measured loop shape tracks the expectation down to ~100 Hz, I cannot explain the shape below it. I am also not sure what to make of the fact that there is high coherence down to 10 Hz from IN2 to IN1, but no coherence between EXC/IN2. I confirmed that the low-frequency boost filters were ON during the measurement. I don't understand how a pendulum TF + the digital filters we used can account for the shape below 100 Hz.
gautam 11pm: After discussing with Koji, I conclude that the low frequency loop shape is consistent with the excitation amplitude being insufficient below 100 Hz. Coherence is good between In1/In2 because they are the same signal effectively - what we need is coherence between In1 and EXC, which isn't plotted. It is still strange that Coherence between In2/EXC is ZERO....
Measured loop TFs - PRCL is a big mystery. Used these to finalize loop gains.
I want to use the fiber-coupled laser from the PDFR system to characterize the response of the fiber-coupled PDs we use in the BeatMouth. The documentation is pretty good. For a first test, I did the following, in this order:
Seems like stuff is working as expected. I don't know what the correct setpoint for the TEC is, but once that is figured out, the 1x16 splitter should give me 250 uW from each output for 4mW input. This is well below any damage threshold of the Menlo PDs. Then the plan is to modulate the intensity of the diode laser using the Agilent, and measure the optoelectronic response of the PD in the usual way. I don't know if we have a Fiber coupled Reference Photodiode we can use in the way we use the NF1611 in the Jenne laser setup. If not, the main systematic measurement error will come from the power measurement using a Fiber Power Meter.
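The splitter arithmetic, for the record:

# Ideal, lossless 1x16 split of the 4 mW input: 250 uW per output port,
# well below the Menlo PD damage thresholds.
P_in = 4e-3      # W
N = 16           # output ports
print(f'{P_in / N * 1e6:.0f} uW per port')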
Neither of the Menlo FPD310 fiber coupled PDs in the beat mouth have an optoelectronic response (V/W) as advertised. This possibly indicates a damaged RF amplification stage inside the PD.
I have never been able to make the numbers work out for the amount of DC light I put on these PDs, and how much RF beat power I get out. Today, I decided to measure the PD response directly.
In the end, I decided that slightly modifying the Jenne laser setup was the way to go, instead of futzing around with the PDFR laser. These PDs have a switchable gain setting - for this measurement, both were set to the lower gain, such that the expected optoelectronic response is 409 V/W.
[Attachment #1] - Sketch of the experimental setup.
[Attachment #2] - Measured TF responses; the RF modulation was -20dBm for all curves. I varied the diode laser DC current a little to ensure I recovered identical transfer functions. Assumptions used in making these plots:
[Attachment #3] - Tarball of data + script used to make Attachment #2.
don't use IN_1/IN_2: recall pizza meeting from a few weeks back: use IN1/EXC + Al-Gebra
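For concreteness, a minimal sketch of the IN1/EXC approach (the time series below are toy placeholders; the real data would come from the DAQ):

# Estimate the loop TF from EXC-referenced cross-spectra instead of the
# raw IN2/IN1 ratio, keeping EXC-referenced coherence as the validity
# metric.
import numpy as np
from scipy import signal

fs = 16384                                   # Hz, assumed sampling rate
exc = np.random.randn(16 * fs)               # excitation (placeholder)
in1 = exc + 0.1 * np.random.randn(exc.size)  # test point 1 (placeholder)
in2 = -0.5 * in1                             # test point 2 (placeholder)

f, P_e1 = signal.csd(exc, in1, fs, nperseg=fs)
_, P_e2 = signal.csd(exc, in2, fs, nperseg=fs)
_, coh = signal.coherence(exc, in1, fs, nperseg=fs)

G = P_e2 / P_e1     # loop TF estimate: (EXC->IN2)/(EXC->IN1)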
Do we really have 2 free ADC channels at EX now? I was under the impression we had ZERO free, which is why we wanted to put a new ADC unit in. I think in the wiring diagram, the Vacuum gauge monitor channel, Seis Can Temp Sensor monitor, and Seis Can Heater channels are missing. It would also be good to have, in the wiring diagram, a mapping of which signals go to which I/O ports (Dsub, front panel BNC etc) on the 4U(?) box housing all the Acromags, this would be helpful in future debugging sessions.
Jon is doing some characterization of the AUX laser setup for which he wanted only the prompt retroreflection from the SRM on the AS table, so the PSL shutter is closed, and both ITMs and ETMs are misaligned. The prompt reflection from the SRM was getting clipped on something in vacuum - the ingoing beam looked pretty clean, but the reflection was totally clipped, as I think Johannes aligned the input beam with the SRM misaligned. So the input steering of the AUX laser beam into the vacuum, and also the steering onto AS110, were touched... Also, there were all manner of stray, undumped beams from the fiber on the AS table. Jon will post photos.
Before we began this work, we found that c1susaux was dead so we rebooted it.
I think this is because /cvs/cds is getting too big. lsblk reveals:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 465.8G 0 disk
├─sda1 8:1 0 446.9G 0 part /
├─sda2 8:2 0 1K 0 part
└─sda5 8:5 0 18.9G 0 part [SWAP]
sdb 8:16 0 2.7T 0 disk
└─sdb1 8:17 0 2T 0 part /home/cds
sr0 11:0 1 1024M 0 rom
sdc 8:32 0 1.8T 0 disk
└─sdc1 8:33 0 1.8T 0 part /media/40mBackup
sdd 8:48 0 1.8T 0 disk
└─sdd1 8:49 0 1.8T 0 part
I believe one of sdc or sdd is connected via SATA while the other is an external USB drive. Maybe we have to get bigger backup disks, but this may be a huge pain to set up as it will involve taking chiara down. Actually, now that I check the backup log, it seems like the backup is executing successfully - not sure if this is due to my unelogged mounting of sdc (using sudo mount /dev/sdc1 /media/40mBackup) last week, or if this is some LDAS backup. But in any case, it seems undesirable that sdb1 is larger than sdc1 or sdd1.
2018-06-06 07:00:01,086 INFO Updating backup image of /cvs/cds
2018-06-06 07:00:01,086 ERROR External drive not mounted!!!
2018-06-07 07:00:01,147 INFO Updating backup image of /cvs/cds
2018-06-07 07:00:01,147 ERROR External drive not mounted!!!
2018-06-08 07:00:01,244 INFO Updating backup image of /cvs/cds
2018-06-08 08:23:32,939 INFO Backup rsync job ran successfully, transferred 316870 files.
2018-06-09 07:00:01,465 INFO Updating backup image of /cvs/cds
2018-06-09 07:12:11,865 INFO Backup rsync job ran successfully, transferred 1926 files.
2018-06-10 07:00:01,842 INFO Updating backup image of /cvs/cds
2018-06-10 07:12:28,931 INFO Backup rsync job ran successfully, transferred 1656 files.
2018-06-11 07:00:01,294 INFO Updating backup image of /cvs/cds
2018-06-11 07:06:14,748 INFO Backup rsync job ran successfully, transferred 1664 files.
2018-06-12 07:00:02,081 INFO Updating backup image of /cvs/cds
2018-06-12 07:07:36,775 INFO Backup rsync job ran successfully, transferred 1870 files.
2018-06-13 07:00:02,194 INFO Updating backup image of /cvs/cds
2018-06-13 07:08:37,356 INFO Backup rsync job ran successfully, transferred 1818 files.
2018-06-14 07:00:01,753 INFO Updating backup image of /cvs/cds
2018-06-14 07:01:43,270 INFO Backup rsync job ran successfully, transferred 1744 files.
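The log messages suggest the backup script's logic is roughly: check the mount, then rsync. Something like the following (my illustration, not the actual script on chiara; paths follow the ones above):

# Sketch of the mount-check logic implied by the "External drive not
# mounted!!!" log lines: verify the backup target is a mountpoint before
# launching rsync.
import os, subprocess, logging

logging.basicConfig(level=logging.INFO)
SRC, DEST = '/cvs/cds', '/media/40mBackup'

logging.info('Updating backup image of %s', SRC)
if not os.path.ismount(DEST):
    logging.error('External drive not mounted!!!')
else:
    result = subprocess.run(['rsync', '-a', '--delete', SRC, DEST])
    if result.returncode == 0:
        logging.info('Backup rsync job ran successfully.')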
Local backup on chiara seems to have stopped working on Nov 19, 2017:
2017-11-18 07:00:01,504 INFO Updating backup image of /cvs/cds
2017-11-18 07:03:00,113 INFO Backup rsync job ran successfully, transferred 1954 files.
2017-11-19 07:00:02,564 INFO Updating backup image of /cvs/cds
2017-11-19 07:00:02,592 ERROR External drive not mounted!!!
I finally analyzed the sensing measurement I ran on Tuesday evening. Sensing responses for the DRMI DOFs seem consistent with what I measured in October 2017, although the relative phasing of the DOFs in the sensing PDs has changed significantly. For what it's worth, my Finesse simulation is here.
All optics have been re-aligned. Jon/Johannes will elog about the work today.
Using the numbers from the sensing measurement, I calibrated the measured in-loop MICH spectrum from Tuesday night into free-running displacement noise. For convenience, I used the noise-budgeting utilities to make this plot, but I omitted all the technical noise curves, as the coupling has probably changed and I did not measure these. The overall noise seems ~x3 higher everywhere than the best I had last year, but this is hardly surprising as I haven't optimized anything for low noise recently. To summarize:
I will do a more thorough characterization and add in the technical noises in the coming days. The dominant uncertainty in the sensing matrix measurement, and hence in this free-running noise spectrum, is that I haven't calibrated the actuators in a while.
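For reference, the calibration chain in sketch form (all numbers and arrays below are placeholders - the real inputs are the measured sensing response and loop TF):

# Convert the in-loop MICH error ASD to free-running displacement:
# counts -> volts -> meters via the sensing response, then undo the loop
# suppression with |1 + G(f)|. Everything here is a placeholder.
import numpy as np

f = np.logspace(1, 3, 200)                  # Hz
asd_err_cts = 1e-2 * np.ones_like(f)        # in-loop error ASD [cts/rtHz]
cts2V = 20.0 / 2**16                        # ADC volts/count (16-bit, +/-10 V)
sensing = 1e9                               # sensing response [V/m], placeholder
G = -30 / (1j * f)                          # toy 1/f loop, 30 Hz UGF

asd_free_m = asd_err_cts * cts2V / sensing * np.abs(1 + G)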