I also have a functional one on my desk, which has one of the wires repaired.
One is broken, two are ready to steer green, and three more are available in unknown condition.
I've been playing around with Evan's NB code trying to put together a noise budget for the data collected during the DRMI locks last week. Here is what I have so far.
Attachment #1: Sensing matrix measurement.
Attachment #2: MICH OLTF measurement vs model
Attachment #3: Noise budget
I think I did the various conversions/calibrations/loop algebra correctly, but I may have overlooked something. Now that the framework for doing this is somewhat set up, I will try and put together analogous NBs for PRCL and SRCL.
GV 22 August 2017: Attachment #4 is the summary of my demod board efficiency investigations, useful for converting sensing measurement numbers from cts/m to W/m.
I think the most important next two items to budget are the optical lever noise, and the coil driver noise. The coil driver noise is dominated at the moment by the DAC noise since we're operating with the dewhitening filters turned off.
In order to do so, I took a swept sine measurement with a few points between 50 Hz and 500 Hz. The transfer function between C1:LSC-MICH_OUT_DQ and the Oplev servo output point (e.g. C1:SUS-BS_OL_PIT_OUT etc.) was measured. I played around with the excitation amplitude until I got coherence > 0.9 for the TF measurement, while making sure I wasn't driving the Oplev error point so hard that side-lobes began to show up in the MICH control signal spectrum.
I was also looking at the Oplev servo shapes and noticed that they are different for the ITMs and the BS (Attachment #1). Specifically, for the ITM Oplevs, an "ELP15" is used to do the roll-off while an "ELP35" is employed in the BS servo (though an ELP35 also exists in the ITM Oplev filter banks). I got lost in an elog search for when these were tuned, but I guess the principles outlined in this elog still hold and can serve as a guideline for Oplev loop tweaking.
Coil driver noise estimation to follow
GV 10 May 12:30pm: I've uploaded another copy of the NB (Attachment #3) with the contributions from the ITMs and BS separated. Looks like below 100Hz, the BS coupling dominates, while the hump/plateau around 350Hz is coming from ITMX.
That's a good find.
I first shut down the SRM watchdog, and noted the cabling between these boards, the AI board, and the output to the Sat. Box. I also needed to shut down the MC2 watchdog, as I had to remove the DAC output to MC2 in order to remove the SRM de-whitening board from the rack. This connection has been restored, and the MC locks fine now.
I believe the ETMs and ITMs are different from the others.
I've added marked-up schematics + high-res photographs of the SRM coil driver board and dewhitening board to the 40m DCC Document tree (D1700217 and D1700218).
In the attached marked-up schematics, I've also added the proposed changes which Rana and I discussed earlier today. For the thick-film -> thin-film resistor switching, I will try and make a quick LISO model to see if we can get away with replacing just a few rather than re-stuff the whole board.
Another change I think should be made, but forgot to include in the markups: on the de-whitening board, we should probably replace the decoupling capacitors C41 and C52 with equivalent-value electrolytic caps (they are currently tantalum caps, which I think are susceptible to failing short).
I've made the LISO models for the dewhitening board and coil driver boards I pulled out.
Attached is a plot of the current noise in the current configuration (i.e. the de-whitening board just has a gain x3 stage, then propagated through the coil driver path), with the top 3 noise contributions: the op-amps (op3 and op5) are the LT1125s on the coil driver board in the bias path, while "R12" is the Johnson noise of the 1k input resistance to the OP27 in the signal path.
Assuming the OSEMs have an actuation gain of 0.016 N/A (so 0.064 N/A for 4 OSEMs), the current noise of ~1e-10 A/rtHz translates to a displacement noise of ~3e-15m/rtHz at ~100Hz (assuming a mirror mass of 0.25kg).
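For reference, a minimal sketch of the conversion chain used here, assuming a free-mass (1/f^2) response well above the pendulum resonance; the quoted displacement number is read off the full LISO current-noise curve, so feeding in a flat current noise here is only illustrative:

import numpy as np

def coil_current_to_displacement(i_n, f, G_osem=0.016, n_coils=4, m=0.25):
    # displacement noise [m/rtHz] from coil current noise i_n [A/rtHz] at
    # frequency f [Hz]: force = n_coils * G_osem * i_n, divided by m*(2*pi*f)^2
    return n_coils * G_osem * i_n / (m * (2 * np.pi * f)**2)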
I have NOT included the noise from the LM6321 current buffers as I couldn't find anything about their noise characteristics in the datasheet. LISO files used to generate this plot are attached.
I first set the bias sliders to 0 on the MEDM screen (after checking that the nominal values were stored), then shut down the watchdogs, and then pulled out the boards for inspection + photo-taking.
I've uploaded high-res photos + marked-up schematics to the same DCC page linked in the previous elog. I've noted the S/Ns of the ITM, BS and SRM boards on the page; I think it makes sense to collect everything on one page, and I guess eventually we will unify everything to one or two versions.
To take the photos, I tried to reproduce the "LED light painting" technique reported here. I mounted the Canon EOS Rebel T3i on a tripod, and used some A3 sheets of paper to make a white background against which the board to be photographed was placed. I also used the new Macro lens we recently got. I then played around with the aperture and exposure time till I got what I judged to be good photos. The room lights were turned off, and I used the LED on my phone to do the "painting", from ~a metre away. I think the photos have turned out pretty well, the component values are readable.
I got some hands-on experience using RF photodetectors and the Network Analyzer from Koji. There were newly purchased RF photodetectors from Electro-Optics Technology, Inc. These were InGaAs photodetectors with model nos. 120-10050-0001 (ET-3010) and 120-10056-0001 (ET-3040). The User Guide for the two detectors can be found here. This is the first time we have bought the ET-3010 model PD for the 40m lab. It has an operational bandwidth >1.5 GHz (not tested yet), much higher than other PDs of its kind. This can be used for detecting the output as we 'sweep' the laser frequency to take data on the optical cavities and the modes resonating inside the cavity. We tested out the ET-3040 model today and will test out the ET-3010 next week.
Tools and Machines Used:
We worked on the optical bench right in front of the main entrance to the lab. We put the cables, power cords, etc. back in their respective places. We used screws, posts, T's, I's, a multimeter, the Network/Spectrum Analyzer (along with its moving table), a lab computer, an oscilloscope, a power supply and the aforementioned PDs for our testing. We took these items from the stack of tools at the Y-arm and the variously labelled boxes placed near the X-arm. We moved the Network Analyzer (along with its bench) from near the Y-arm to our workplace.
I will include a rough schematic of the setup later.
We aligned the reference PD (High Speed Photoreceiver model 1611) and the test PD (ET-3040 in this case) to get optimal power output. We had set the pump current for the laser at 19.5 mA, which produced a power of 1.00 mW at the output of the fiber coupler. At the reference detector the measured voltage was about 1.8 V, and at the DUT it was about 15 mV. The DC transimpedance of the reference detector is 10 kOhm and its responsivity at 1064 nm is around 0.75 A/W. Using this we calculate the power at the reference detector to be 0.24 mW. The DC transimpedance of the DUT is 50 Ohm and its responsivity is about 0.9 A/W. This amounts to a power of about 0.33 mW. After measuring the DC voltages, we connected the laser input to the Network Analyzer and drove it with an RF signal at -10 dBm, with the frequency swept from 100 kHz to 500 MHz. The RF output from the Analyzer is coupled to the Reference Channel (CHR) of the analyzer via a 20 dB directional coupler. The AC output of the reference detector goes to Channel A (CHA) and the output from the DUT to Channel B (CHB). We took plots of the ratios between the reference detector, the DUT and the coupled reference for the transfer function and the phase. We found that the cut-off frequency for the ET-3040 model was at around 55 MHz (stated as >50 MHz in the data sheet). We have stored the data using the lab PC in the directory .../scripts/general/netgpibdata/data.
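As a cross-check, here is a minimal sketch of the DC power arithmetic above (power = DC voltage / (transimpedance x responsivity)), using the values quoted in this entry:

def incident_power(v_dc, z_dc, responsivity):
    # optical power [W] from DC voltage [V], DC transimpedance [Ohm], responsivity [A/W]
    return v_dc / (z_dc * responsivity)

print(incident_power(1.8, 10e3, 0.75))   # reference 1611: ~2.4e-4 W = 0.24 mW
print(incident_power(15e-3, 50.0, 0.9))  # ET-3040 DUT:    ~3.3e-4 W = 0.33 mW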
The bandwidth of the ET-3040 PD is as stated in the data sheet, >50 MHz.
These PDs have an internal power supply of 3V for ET-3040 and 6V for ET-3010. Do not leave these connected to any instruments after the experiments have been performed or else the batteries will get drained if there is any photocurrent on the PDs.
A similar procedure has to be followed in order to test the ET-3010 PD. I will be doing this tentatively on Monday.
I've spent the last week investigating various parts of the DAC -> OSEM coil signal chain in order to add these noises to the MICH NB. Here is what I have thus far.
I am attaching the text files with the data readings and parameter settings, along with the Bode plot of the data. I plotted these graphs using the matplotlib module with Python 2.7.
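For completeness, a minimal sketch of the plotting, assuming a three-column text file (frequency [Hz], magnitude [dB], phase [deg]); the file name is hypothetical:

import numpy as np
import matplotlib.pyplot as plt

f, mag_db, phase_deg = np.loadtxt("tf_data.txt", unpack=True)  # hypothetical file

fig, (ax_mag, ax_ph) = plt.subplots(2, 1, sharex=True)
ax_mag.semilogx(f, mag_db)
ax_mag.set_ylabel("Magnitude [dB]")
ax_ph.semilogx(f, phase_deg)
ax_ph.set_ylabel("Phase [deg]")
ax_ph.set_xlabel("Frequency [Hz]")
plt.show()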
In continuation of the previous (ET-3040 PD) test.
The ET-3010 PD requires fiber coupling for optimal use. I will try to test this model without the fiber coupler tomorrow and see whether it works or not.
I wanted to match a noise model to noise measurement for the coil-driver de-whitening boards. The main objectives were:
In continuation of the previous test conducted on the ET-3040 PD, I performed a similar test on the ET-3010 model. This model requires a fiber-coupled input for proper testing, but I tested it in free space without a fiber coupler, as the laser power was only 1.00 mW and there was not much danger of scattering of the laser beam. The data sheet can be found here.
The schematic (attached below) and the procedure are the same as last time. The pump current was set to 19.5 mA, giving a laser beam of power 1.00 mW at the fiber coupler output. The measured voltage for the reference detector was 1.8 V. For the DUT, the voltage is amplified using a low noise amplifier (model SR560) with a gain of 100. Without any laser incident on the DUT, the multimeter reads 120.6 mV. After aligning the laser onto the DUT, the multimeter reads 348.5 mV, i.e. the voltage at the DUT is 227.9/100 ~ 2.28 mV. The DC transimpedance of the reference detector is 10 kOhm and its responsivity at 1064 nm is around 0.75 A/W. Using this we calculate the power at the reference detector to be 0.24 mW. The DC transimpedance of the DUT is 50 Ohm and its responsivity is around 0.85 A/W. Using this we calculate the power at the DUT to be 0.054 mW. After this we connected the laser input to the Network Analyzer (AG4395A) and drove it with an RF signal at -10 dBm, with the frequency swept from 100 kHz to 500 MHz. The RF output from the Analyzer is coupled to the Reference Channel (CHR) of the analyzer via a 20 dB directional coupler. The AC output of the reference detector goes to Channel A (CHA) and the output from the DUT to Channel B (CHB). We took plots of the ratios between the reference detector, the DUT and the coupled reference for the transfer function and the phase. I stored the data under the directory .../scripts/general/netgpibdata/data. The Bode plot is attached below; from it we observe that the cut-off frequency for the ET-3010 model is at least 500 MHz (stated as >1.5 GHz in the data sheet).
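The DUT power arithmetic here differs from the ET-3040 case only by the SR560 gain and the dark-reading subtraction; a minimal sketch with the values from this entry:

def dut_power(v_light, v_dark, gain, z_dc, responsivity):
    # optical power [W] on the DUT from the amplified DC voltages [V]
    v_pd = (v_light - v_dark) / gain   # voltage at the PD output, before the SR560
    return v_pd / (z_dc * responsivity)

print(dut_power(0.3485, 0.1206, 100.0, 50.0, 0.85))  # ~5.4e-5 W = 0.054 mW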
The bandwidth of the ET-3010 PD is at least 500 MHz; it is stated in the data sheet as >1.5 GHz.
The ET-3010 PD has an internal power supply of 6V. Don't leave the PD connected to any instrument after the experimentation is done, or else the batteries will get drained if there is any photocurrent on the PD.
Calibrate the vertical axis in the Bode plots with transimpedance (Ohms) for the two PDs. Automate the procedure by writing a Python script to take multiple sets of readings from the Network Analyzer, and also plot the error bands.
This measurement has been troublesome - I was plagued by large 60Hz harmonics (see Attachment #1), the cause of which was unknown. I powered all electronics used in the measurement setup from the same power strip (one of the new surge-protecting ones Steve recently acquired for us), but the harmonics remained. Yesterday, Koji helped me troubleshoot this issue. We did various things; I'll try to list them here in the order we did them:
Today, I tried to repeat the measurement, with the newly made twisted ribbon cable, but the large 60Hz harmonics were back. Then I realized we had also disconnected the WiFi extender and GPIB box yesterday.
Turns out that connecting the Prologix box to the SR785 (even with no power) is the culprit! Disconnecting the Prologix box makes these harmonics go away. I was using the box labelled "Santuzza.martian" (192.168.113.109), but I double-checked with the box labelled "vanna.martian" (192.168.113.105, also a different DC power supply adapter for the box), the effect is the same. I checked various combinations like
but it looks like connecting the GPIB box to the analyzer is what causes the problem. This was reproducible on both SR785s in the lab. So to make this measurement, I had to do things the painful way - acquire the spectrum by manually pushing buttons with the GPIB box disconnected, then re-connect the box and download the data using SRmeasure --getdata. I don't fully understand what is going on, especially since if the input connector is directly terminated using a 50ohm BNC terminator, there are no harmonics, regardless of whether the GPIB box is connected or not. But it is worth keeping this problem in mind for future low-noise measurements. My elog searches did not reveal past reports of similar problems, has anyone seen something like this before?
It also looks like my previous measurement of the de-whitening board noises was plagued by the same problem (I took all those spectra with the GPIB boxes connected). I will repeat this measurement.
At the meeting this week, it was decided that
I also think it would be a good idea to up the 100-ohm resistors in the bias path on the ITM coil driver boards to 1 kohm wire-wound. Since the dominant noise on the coil-driver boards is from the voltage noise of the op-amps in the bias path, this would definitely be an improvement. Looking at the current values of the bias MEDM sliders, a 10x increase in the resistance for ITMX will not be possible (the yaw bias is ~-1.5V, so delivering the same bias current through 10x the resistance would need ~15V, outside the DAC range), but perhaps we can go for a 4x increase?
The plan is to then re-install the boards, and see if we can
We can then take a call on how much to up the series resistance in the DAC signal path.
Now that I have figured out the cause of the harmonics, I will also try and measure the combined electronics noise of de-whitening board + coil driver board and compare it to the model.
Using Alberto's paper LIGO-T10002-09-R titled "40m RF PDs Upgrade", I calibrated the vertical axis in the bode plots I had obtained for the two PDs ET-3010 and ET-3040.
I am not sure whether the values I have obtained are correct (i.e. whether the calibration is correct). Kindly review them.
EDIT: Attached the formula used to calculate the transimpedance for each data point, and the values of the other parameters.
EDIT 2: Updated the plots by changing the conversion for getting the ratio of the transfer functions from 10^(y/10) to 10^(y/20).
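For the record, a minimal sketch of that conversion: the analyzer magnitude y is in dB of a voltage ratio, so the linear ratio is 10^(y/20) (10^(y/10) would be a power ratio). The calibration form below is an assumption to be checked against the attached formula; z_ref, rho_ref, p_ref stand for the reference-channel transimpedance, responsivity and incident power, and similarly for the DUT:

def db_to_voltage_ratio(y_db):
    return 10.0**(y_db / 20.0)

def dut_transimpedance(y_db, z_ref, rho_ref, p_ref, rho_dut, p_dut):
    # scale the measured voltage ratio by the reference transimpedance and by
    # the ratio of photocurrents (responsivity x power) on the two PDs
    return db_to_voltage_ratio(y_db) * z_ref * (rho_ref * p_ref) / (rho_dut * p_dut)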
I've given Steve a list of the thin-film resistors we need to implement the changes discussed in the preceding elogs - but I figured it would be good to see if we can realize the projected improvement in MICH displacement noise just by fixing the BS Oplev loop shape and turning the existing whitening on. Before re-installing the boards, however, I did make a few changes:
Photos of all the boards were taken prior to re-installation, and have been uploaded to the 40m Google Photos page - I will update schematics + photos on the DCC page once other planned changes are implemented.
I also measured the transfer functions on the de-whitened signal paths on all the boards before re-installing them. I then fit everything using LISO, and updated the filter banks in Foton to match these measurements - the original filters were copied over from FM9 and FM10 to FM7 and FM8. The new filters are appended with the suffix "_0517", and live in FM9 and FM10 of the coil output filter banks. The measured TFs (for ITMs and BS) are summarized in Attachment #1, while Attachment #2 contains the data and LISO file used to do the fits (path to the .bod files in the .fil file will have to be changed appropriately). I used 2 complex pole pairs at ~10 Hz, two complex zero pairs at ~100Hz, real poles at ~15Hz and ~3kHz, and real zeros at ~100Hz and ~550Hz for the fits. The fits line up well with the measured data, and are close enough to the "expected" values (as calculated from component values) to be explained by tolerances on the installed components - I omit the plots here.
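For illustration, a minimal sketch of that fitted pole/zero shape as a zpk model in scipy; the Qs of the complex pairs and the overall gain are placeholders (the actual values are in the LISO fits of Attachment #2):

import numpy as np
import scipy.signal as sig

def complex_pair(f0, Q):
    # s-plane roots of a complex pair at f0 [Hz] with quality factor Q
    w0 = 2 * np.pi * f0
    re = -w0 / (2 * Q)
    im = w0 * np.sqrt(1 - 1 / (4 * Q**2))
    return [re + 1j * im, re - 1j * im]

zeros = complex_pair(100, 3) + complex_pair(100, 3) + [-2*np.pi*100, -2*np.pi*550]
poles = complex_pair(10, 3) + complex_pair(10, 3) + [-2*np.pi*15, -2*np.pi*3000]
f = np.logspace(0, 4, 500)
w, h = sig.freqs_zpk(zeros, poles, 1.0, worN=2*np.pi*f)  # TF over the fit band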
After re-installing the boards in the Eurocrate, restoring rough alignment, and updating the filter banks with the most recent measured values, I wanted to see if I could turn the whitening on for one of the optics (ITMY) smoothly before trying to do so in the full DRMI - switching off the "SimDW_0517" filter (FM9) should switch the signal path on the de-whitening board from bypass to de-whitened, and I had confirmed last week with an extender board that the voltage at the appropriate backplane connector pin does change as expected when the FM9 MEDM button is toggled (for both ITMs, BS and SRM). But today I was not able to engage this transition smoothly, the optic seems to be getting kicked around when I engage the whitening. I will need to investigate this further.
Unrelated to this work: the ETMY Oplev HeNe is dead (see Attachment #3). I thought we had just replaced this laser a couple of months ago - what is the expected lifetime of these? Perhaps the power supply at the Y-end is wonky and somehow damaging the HeNe heads?
I think the reason I am unable to engage the de-whitening is that the OL loop is injecting a ton of control noise - see Attachment #1. With the OL loop off (i.e. just local damping loops engaged for the ITMs), the RMS control signal at 100Hz is ~6 orders of magnitude (!) lower than with the OL loop on. So turning on the whitening was just railing the DAC I guess (since the whitening has something like 60dB gain at 100Hz).
The Oplev loops for the ITMs use an "Ellip15" low-pass filter to do the roll-off (2nd order Elliptic low pass filter with 15dB stopband atten and 2dB ripple). I confirmed that if I disable the OL loops, I was able to turn on the whitening for ITMY smoothly.
Now that the ETMY OL HeNe has been replaced, I restored alignment of the IFO. Both arms lock fine (I was also able to engage the ITMY Coil Driver whitening smoothly with the arm locked). However, something funny is going on with ASS - running the dither seems to inject huge offsets into the ITMY pit and yaw such that it almost immediately breaks the lock. This probably has to do with some EPICS values not being reset correctly since the recent slow-machine restarts (for instance, the c1iscaux restart caused all the LSC RFPD whitening gains to be reset to random values, I had to burt-restore the POX11 and POY11 values before I could get the arms to lock), I will have to investigate further.
GV edit 2pm 31 May: After talking to Koji at the meeting, I realized I did not specify what channel the attached spectra are for - it is C1:SUS-ITMY_ULCOIL_OUT.
We tried to debug the mysterious sudden failure of ASS - here is a summary of what we did tonight. These are just notes for now, so I don't forget tomorrow.
What are the problems/symptoms?
What are the (known) changes since the servos were last working?
Hypotheses plus checks (indented bullets) to test them:
For whatever reasons, it appears that dithering the cavity mirrors at frequencies with amplitudes that worked ~3 weeks ago is no longer giving us the correct error signals for dither alignment. We are out of ideas for tonight, TBC tomorrow...
Looks like there was a power glitch at around 10am today.
All frontends, FB, Megatron, and Optimus were offline. Chiara reports an uptime of 666 days, so it looks like its UPS works fine. PSL was tripped, probably the end lasers too (yet to check). Slow machines seem alright (they respond to ping, and I can also telnet into them).
Since all the frontends have to be re-started manually, I am taking this opportunity to investigate some cds issues like the lack of a dmesg log file on some of the frontends. So the IFO will be offline for sometime.
GV Jun 5 6pm: From my discussion with Jamie, I gather that the dmesg output is not written to file because our front-ends are diskless (this is also why the ring buffer, which is what we are reading from when running "dmesg", gets cleared periodically).
[Koji, Rana, Gautam]
The state this work was started in was as indicated in the previous elog - c1ioo wasn't ssh-able, but was responding to ping. We then did the following:
Why does ntpdate behave this way? And only on one of the frontends? And what is the remaining RFM error?
Koji then restarted the IMC autolocker and FSS slow processes on megatron. The IMC locked almost immediately. The MC2 transmon indicated a large shift in the spot position, and the PMC transmission is also pretty low (while the lab temperature equilibrates after the AC being off during peak daytime heat). So the MC transmission is ~14,500 counts, while we are used to more like 16,500 counts nowadays.
Re-alignment of the IFO remains to be done. I also did not restart the end lasers, or set up the Marconi with nominal params.
Attachment #3 - Status of the Master Timing Sequencer after various reboots and power cycling of front ends and associated electronics.
Attachment #4 - Warning lights on C1IOO
Now IFO work like fixing ASS can continue...
I tried all versions of power cycling and debugging this problem known to me, including those suggested in this thread and from a more recent time. I am leaving things as is for the night and will look into this more tomorrow. I've also shut down the ETMX watchdog for the time being. Looks like this has been down since 24 Jun 8am UTC.
I have spent my first few days as a SURF getting experience working with the Network/Spectrum Analyzer (AG 4395A). After an introduction to the 40m by Koji, I was tasked with using the AG4395A to measure the transfer function of several filters (for example, Mini-Circuits Low Pass Filter SLP-30). I am now familiar with configuring the AG 4395A, taking a single set of data using a command from one of the control computers, and plotting the dataset as a Bode plot (separate plots for magnitude and phase) using Python.
To experiment with plotting multiple datasets on a single Bode plot, I used a single dataset from the Network Analyzer using the SLP-30 filter and added random noise to create ten datasets to plot. I am attaching the resulting Bode plot, which has the ten generated sets of data plotted along with their average.
We discussed with Rana and Koji how to interpret this type of dataset from the Network Analyzer. Instead of considering the magnitude and phase as separate quantities, we should consider them together as a single complex number in the form H(f) = M exp(iπP/180), where M is the magnitude and P is the phase in degrees. We can then find the average value of the measured quantity in its complex number form (x + iy), as opposed to just taking the average of the magnitude and phase separately.
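A minimal sketch of that averaging, assuming each sweep is stored as magnitude and phase-in-degrees arrays:

import numpy as np

def complex_average(mags, phases_deg):
    # mags, phases_deg: arrays of shape (n_sweeps, n_freqs)
    H = mags * np.exp(1j * np.deg2rad(phases_deg))  # H(f) = M exp(i*pi*P/180)
    H_avg = H.mean(axis=0)                          # average as complex numbers
    return np.abs(H_avg), np.rad2deg(np.angle(H_avg))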
I tried a couple of things, but no fundamental improvement - the LED light on the timing board is still missing.
- The power supply cable to the timing board at c1iscex indicated +12.3V
- I swapped the timing fiber to the new one (orange) in the digital cabinet. It didn't help.
- I swapped the opto-electronic I/F for the timing fiber with the Y-end one. The X-end one worked at Y-end, and Y-end one didn't work at X-end.
- I suspected the timing board itself -> I brought a "spare" timing board from the digital cabinet and tried to swap the board. This didn't help.
- Bring the X-end fiber to C1SUS or C1IOO to see if the fiber is OK or not.
- We checked that the opto-electronic I/F is OK
- Try to swap the IO chassis with the Y-end one.
- If this helps, swap the timing board only, to see whether it is the problem or not.
There were a few more flaky things in the Expansion chassis - the IDE connectors don't have "keys" that fix the orientation they should go in, and the whole timing card assembly is kind of difficult and not exactly secure. But for now, things are back to normal it seems.
I attempted to re-lock the DRMI and try and realize some of the noise improvements we have identified. Summary elog, details to follow.
Basically after this point, I was unable to repeat stuff I did earlier in the evening just a couple of hours ago. The single arm locks catch quickly, and seem stable over the hour timescale, but when I run the X arm dither, the BS PITCH loop starts to oscillate at ~0.1 Hz. Moreover, I am unable to acquire the PRMI carrier lock. I must have changed a setting somewhere that I am not catching right now (although I've scripted most of these things for repeatability, so I am at a loss as to what I'm missing). The only change I can think of is that I changed the BS Oplev loop shape, but I went back into the filter file archives and restored these to their original configuration. Hopefully I'll have better luck figuring this out tomorrow.
I'm going to go squish cables and the usual sat. box voodoo, hopefully that settles it.
I am not attempting a full characterization tonight, but the important changes since the May locks are in the de-whitening boards and coil driver boards. I did not attempt to engage the coil-dewhitening, but the PD whitening works fine.
As a quick check, I tested the hypothesis that the BS OL loop A2L coupling dominates between ~10-50Hz. The attached control signal spectra [Attachment #2] supports this hypothesis. Now to actually change the loop shape.
I've centered Oplevs of all vertex optics, and also the beams on the REFL and AS PDs. The ITMs and BS have been repeatedly aligned since re-installing their respective coil driver electronics, but the SRM alignment needed some adjustment of the bias sliders.
Full characterization to follow. Some things to check:
Lesson learnt: Don't try and change too many things at once!
GV July 5 1130am: Looks like the MICH loop gain wasn't set correctly when I took the attached spectra, seems like the bump around 300Hz was caused by this. On later locks, this feature wasn't present.
Basically we use the arm cavities as the reference for the beam alignment. The incident beam is aligned such that the signal from the ITMY angle dither is minimized (at least at the dither freq).
This means that we have no capability to adjust the spot positions on the PRM, SRM, BS, and ITMX optics.
We are still able to minimize A2L by adding intentional asymmetry to the coil actuators.
I've been making NBs on my laptop, thought I would get the copy under version control up-to-date since I've been negligent in doing so.
The code resides in /ligo/svncommon/NoiseBudget, which as a whole is a git directory. For neatness, most of Evan's original code has been put into the sub-directory /ligo/svncommon/NoiseBudget/H1NB/, while my 40m-specific adaptations of it are in the sub-directory /ligo/svncommon/NoiseBudget/NB40. So to make a 40m noise budget, you would copy and edit the parameter file accordingly, and run python C1NB.py C1NB_2017_04_30.py for example. I've tested that it works in its current form. I had to install a font package in order to make the code run (with sudo apt-get install tex-gyre), and also had to comment out calls to GwPy (it kept throwing up an error related to the package "lal"; I opted against trying to debug this problem as I am using nds2 instead of GwPy to get the time series data anyway).
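For the nds2 route, a minimal sketch of the data fetch (the server name/port, GPS times and channel below are illustrative assumptions):

import nds2

conn = nds2.connection('nds40.ligo.caltech.edu', 31200)  # hypothetical server/port
bufs = conn.fetch(1177200000, 1177200064, ['C1:LSC-MICH_IN1_DQ'])
data = bufs[0].data                   # numpy array of the time series
fs = bufs[0].channel.sample_rate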
There are a few things I'd like to implement in the NB like sub-budgets, I will make a tagged commit once it is in a slightly neater state. But the existing infrastructure should allow making of NBs from the control room workstations now.
We spent some time trying to get the noise-budgeting code running today. I guess eventually we want this to be usable on the workstations so we cloned the git repo into /ligo/svncommon. The main objective was to see if we had all the dependencies for getting this code running already installed. The way Evan has set the code up is with a bunch of dictionaries for each of the noise curves we are interested in - so we just commented out everything that required real IFO data. We also commented out all the gwpy stuff, since (if I remember right) we want to be using nds2 to get the data.
Running the code with just the gwinc curves produces the plots it is supposed to, so it looks like we have all the dependencies required. It now remains to integrate actual IFO data; I will try to set up the infrastructure for this using the archived frame data from the 2016 DRFPMI locks.
About 2 weeks ago, I noticed some odd behaviour of the LSC TRY data stream. Its DC value seems to be drifting ~10x more than TRX. Both signals come from the transmission QPDs. At the time, we were dealing with various CDS FE issues but things have been stable on that end for the last two weeks, so I looked into this a bit more today. It seems like one particular channel is bad - Quadrant 4 of the ETMY TRANS QPD. Furthermore, there is a bump around 150Hz, and some features above 2kHz, that are only present for the ETMY channels and not the ETMX ones.
Since these spectra were taken with the PSL shutter closed and all the lab room lights off, it would suggest something is wrong in the electronics - to be investigated.
The drift in TRY can be as large as 0.3 (with 1.0 being the transmitted power in the single arm lock). This seems unusually large, indeed we trigger the arm LSC loops when TRY > 0.3. Attachment #2 shows the second trend of the TRX and TRY 16Hz EPICS channels for 1 day. In the last 12 hours or so, I had left the LSC master switch OFF, but the large drift of the DC value of TRY is clearly visible.
In the short term, we can use the high-gain THORLABS PD for TRY monitoring.
Indeed, the whole point of the high/low gain setup is to never use the QPDs for the single arm work. Only use the high gain Thorlabs PD and then the switchover code uses the QPD once the arm powers are >5.
I don't know how the operation procedure went so higgledy piggledy.
Attachment #1: State of CDS overview screen as of 9.30AM today morning when I came in.
Looks like there may have been a power glitch, although judging by the wall StripTool traces, if there was one, it happened more than 8 hours ago. FB is down atm, so I can't trend to find out when this happened.
All FEs and FB are unreachable from the control room workstations, but Megatron, Optimus and Chiara are all ssh-able. The latter reports an uptime of 704 days, so all seems okay with its UPS. Slow machines are all responding to ping as well as telnet.
Recovery process to begin now. Hopefully it isn't as complicated as the most recent effort [FAMOUS LAST WORDS]
I am unable to get FB to reboot to a working state. A hard reboot throws it into a loop of "Media Test Failure. Check Cable".
Jetstor RAID array is complaining about some power issues, the LCD display on the front reads "H/W Monitor", with the lower line cycling through "Power#1 Failed", "Power#2 Failed", and "UPS error". Going to 192.168.113.119 on a martian machine browser and looking at the "Hardware information" confirms that System Power #1 and #2 are "Failed", and that the UPS status is "AC power loss". So far I've been unable to find anything on the elog about how to handle this problem, I'll keep looking.
In fact, looks like this sort of problem has happened in the past. It seems one power supply failed back then, but now somehow two are down (but there is a third which is why the unit functions at all). The linked elog thread strongly advises against any sort of power cycling.
A bit more digging on the diagnostics page of the RAID array reveals that the two power supplies actually failed on Jun 2 2017 at 10:21:00. Not surprisingly, this was the date and approximate time of the last major power glitch we experienced. Apart from this, the only other error listed on the diagnostics page is "Reading Error" on "IDE CHANNEL 2", but these errors precede the power supply failure.
Perhaps the power supplies are not really damaged, and the unit is just in some funky state since the power glitch. After discussing with Jamie, I think it should be safe to power cycle the Jetstor RAID array once the FB machine has been powered down. Perhaps this will bring back one or both of the faulty power supplies. If not, we may have to get new ones.
The problem with FB may or may not be related to the state of the Jetstor RAID array. It is unclear to me at what point during the boot process we are getting stuck. It may be that because the RAID disk is in some funky state, the boot process is getting disrupted.
After a couple of minutes, the front LCD display seemed to indicate that it had finished running some internal checks. The messages indicating failure of power units, which was previously constantly displayed on the front LCD panel, was no longer seen. Going back to the control room and checking the web diagnostics page, everything seemed back to normal.
It's possible the fb BIOS got into a weird state. fb definitely has its own local boot disk (*not* diskless boot). Try to get to the BIOS during boot and make sure it's pointing to its local disk to boot from.
If that's not the problem, then it's also possible that fb's boot disk got fried in the power glitch. That would suck, since we'd have to rebuild the disk. If it does seem to be a problem with the boot disk then we can do some invasive poking to see if we can figure out what's up with the disk before rebuilding.
I think this is the boot disk failure. I put the spare 2.5 inch disk into the slot #1. The OK indicator of the disk became solid green almost immediately, and it was recognized on the BIOS in the boot section as "Hard Disk". On the contrary, the original disk in the slot #0 has the "OK" indicator kept flashing and the BIOS can't find the harddisk.
Jamie suggested verifying that the problem is indeed with the disk and not with the controller, so I tried switching the original boot disk to Slot #1 (from Slot #0 where it normally resides), but the same problem persists - the green "OK" indicator light keeps flashing even in Slot #1, which was verified to be a working slot using the spare 2.5 inch disk. So I think it is reasonable to conclude that the problem is with the boot disk itself.
The disk is a Seagate Savvio 10K.2 146GB disk. The datasheet doesn't explicitly suggest any recovery options, but Table 24 on page 54 suggests that a blinking LED means that the disk is "spinning up or spinning down". Is this indicative of any particular failure mode? Any ideas on how to go about recovery? Is it even possible to access the data on the disk if it doesn't spin up to the nominal operating speed?
If we have a SATA/USB adapter, we can test whether the disk is still responding. If it is, we can probably salvage the files.
Chiara used to have a 2.5" disk that is connected via USB3. As far as I know, we have remote and local backup scripts running (TBC), we can borrow the USB/SATA interface from Chiara.
If the disk is completely gone, we need to rebuild it, according to Jamie, and I don't know how to do that. (Don't we have any spare copy?)
Seems like the connector on this particular disk is of the SAS variety (and not SATA). I'll ask Steve to order a SAS to USB cable. In the meantime I'm going to see if the people at Downs have something we can borrow.