ID   Date   Author   Type   Category   Subject
  12972   Thu May 4 19:03:15 2017   gautam   Update   General   DRMI locking - preliminary MICH NB

Summary:

I've been playing around with Evan's NB code trying to put together a noise budget for the data collected during the DRMI locks last week. Here is what I have so far.

Attachment #1: Sensing matrix measurement.

  • This is basically to show that the MICH error signal is mostly in AS55Q.
  • The whitening gain used was 0dB, and the demod phase was -82 degrees.
  • The MICH sensing response was 5.31*10^8 V/m, where the volts are referred to the demod board output. The 40m wiki RFPD page for AS55 says the RF transimpedance is ~550ohms, and I measured that the demod board puts out 5.1V of IF signal (measured after the preamp, which is what goes to the ADC) for 1V of RF signal at the PD input. Using these numbers, and assuming a PD responsivity of 0.8 A/W at 1064nm, the sensing response is 2.37*10^5 W/m (a short sketch of this conversion follows this list). I don't have a feel yet for whether this is a reasonable number, but it is something to compare against what my Finesse model tells me to expect, for example.
  • Actuator calibration used to arrive at these numbers was taken from this elog
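For reference, here is a minimal sketch of the cts/V/W bookkeeping above, using only the numbers quoted in this entry (the responsivity and transimpedance are the nominal values, so this is just a consistency check, not the actual NB code):

# Conversion of the measured MICH sensing response from V/m (demod board output)
# to W/m at the photodiode, using the numbers quoted above.
sensing_V_per_m = 5.31e8   # measured MICH sensing response [V/m]
transimpedance  = 550.0    # AS55 RF transimpedance [ohm] (40m wiki value)
demod_gain      = 5.1      # IF volts out per RF volt in, measured after the preamp [V/V]
responsivity    = 0.8      # assumed PD responsivity at 1064nm [A/W]

# Divide out the electronics chain to refer the sensing response to optical power
sensing_W_per_m = sensing_V_per_m / (demod_gain * transimpedance * responsivity)
print("Sensing response: %.2e W/m" % sensing_W_per_m)   # ~2.4e5 W/m, as quoted above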

Attachment #2: MICH OLTF measurement vs model

  • In order to build the MICH OLTF model, I used MATLAB to put together the following transfer functions:
    • BS pendulum
    • Digital servo filters from LSC_MICH
    • Violin mode filters 
    • Analog/Digital AA and AI filters. For the digital AA/AI filters, I took the coefficients from /opt/rtcds/rtscore/release/src/fe/controller.c
  • The loop measurement was taken with digital filter modules FM1, FM2, FM3, FM7, FM9 engaged. 
  • In order to fit the model to the measurement, I tried finding the best-fit values for an overall loop gain and delay. 
  • The agreement between model and measurement isn't stellar, but I decided to push ahead for a first attempt. This loop TF was used to convert various noises into displacement noise for plotting.
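The MATLAB model itself isn't attached; for the record, a bare-bones Python sketch of how the pieces above get multiplied together is shown below (the pendulum parameters, overall gain and delay here are placeholders, not the fitted values):

import numpy as np

f = np.logspace(1, 3, 500)          # 10 Hz - 1 kHz
s = 2j * np.pi * f

# BS pendulum: a simple f0 ~ 1 Hz, Q ~ 5 suspension response (placeholder values)
f0, Q = 1.0, 5.0
w0 = 2 * np.pi * f0
pendulum = w0**2 / (s**2 + w0 * s / Q + w0**2)

# The digital servo (LSC_MICH FM1/2/3/7/9), violin filters and AA/AI filters enter as
# frequency responses built from the Foton / controller.c coefficients; set to 1 here.
digital_filters = np.ones_like(s)

# The two fit parameters: an overall loop gain and a pure delay
gain  = 1e5                          # placeholder
delay = 3 / 16384.0                  # e.g. a few 16k clock cycles (assumption)
oltf  = gain * pendulum * digital_filters * np.exp(-s * delay)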

Attachment #3: Noise budget

  • It took me a while to get Evan's code going; the main changes I made were to use nds2 to grab data instead of GWPy, and to replace reading in .txt files with importing .mat files. This is a work in progress.
  • Noises plotted:
    • Measured - I took the in loop error signal and estimated the free-running displacement noise with the model OLTF, and calibrated it into metres using the sensing response measurement. This looks consistent with what was measured back in Dec 2015.
    • Shot noise - I used the measured DC power incident on the PD (13mW), the RF transimpedance of 550 V/A, and the V/m calibration factor mentioned above to calculate this (labelled "Quantum Noise"); a rough sketch of this calibration follows this list.
    • Dark noise - measured with PSL shutter closed.
    • Seismic noise, thermal noise, gas noise - calculated with GWINC
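For completeness, here is a rough sketch of the loop algebra and shot-noise calibration described above (numbers as quoted in this entry; the exact demod-chain factors used in the actual NB code may differ):

import numpy as np

e            = 1.602e-19   # electron charge [C]
P_dc         = 13e-3       # measured DC power on AS55 [W]
responsivity = 0.8         # assumed [A/W]
Z_rf         = 550.0       # RF transimpedance [V/A]
demod_gain   = 5.1         # IF volts per RF volt (measured after the preamp)
V_per_m      = 5.31e8      # MICH sensing response [V/m]

# "Measured" trace: in-loop error signal ASD -> free-running displacement,
# using the modeled OLTF G(f) and the sensing response.
def free_running_disp(err_asd_V, G):
    return err_asd_V * np.abs(1 + G) / V_per_m          # [m/rtHz]

# "Quantum Noise" trace: shot noise of the DC photocurrent, referred to displacement
i_shot = np.sqrt(2 * e * responsivity * P_dc)           # [A/rtHz]
x_shot = i_shot * Z_rf * demod_gain / V_per_m           # [m/rtHz], roughly 3e-16
print("Shot noise: %.1e m/rtHz" % x_shot)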

I think I did the various conversions/calibrations/loop algebra correctly, but I may have overlooked something. Now that the framework for doing this is somewhat set up, I will try and put together analogous NBs for PRCL and SRCL. 

GV 22 August 2017: Attachment #4 is the summary of my demod board efficiency investigations, useful for converting sensing measurement numbers from cts/m to W/m.

  12974   Fri May 5 10:13:02 2017   ericq   Update   General   MICH NB questions
Is suspension thermal noise missing? I take it "Thermal" refers just to thermal things going on in the optic, since I don't see any peaks at the bounce/roll modes as I would expect from suspension thermal noise.

What goes into the GWINC calculation of seismic noise? Does it include real 40m ground motion data and our seismic stacks?

I'm surprised to see such a sharp corner in the "Dark Noise" trace, did you apply the OLG correction to a measured dark noise ASD? (The OLG correction only needs to be applied to the in-lock error signals to recover open loop behavior, there is no closed loop when you're measuring the dark noise so nothing to correct for.)
  12975   Fri May 5 12:10:53 2017   gautam   Update   General   MICH NB questions

Quote:
Is suspension thermal noise missing? I take it "Thermal" refers just to thermal things going on in the optic, since I don't see any peaks at the bounce/roll modes as I would expect from suspension thermal noise.

What goes into the GWINC calculation of seismic noise? Does it include real 40m ground motion data and our seismic stacks?

I'm surprised to see such a sharp corner in the "Dark Noise" trace, did you apply the OLG correction to a measured dark noise ASD? (The OLG correction only needs to be applied to the in-lock error signals to recover open loop behavior, there is no closed loop when you're measuring the dark noise so nothing to correct for.)


I've included the suspension thermal noise in the "Thermal" trace, but I guess the GWINC file I've been using to generate this trace only computes the thermal noise for the displacement DoF. I think this paper has the formulas to account for the bounce/roll contributions; I will look into including these.

For the seismic noise, I've just been using the seis40.mat file from the 40m SVN. I think it includes a model of our stacks, but I did not re-calculate anything with current seismometer spectra. In the NB I updated yesterday, however, I think I was off by a factor of sqrt(3) as I had only included the seismic noise from 1 suspended optic. I've corrected this in the attached plot.

For the dark noise, you are right, I had it grouped in the wrong dictionary in the code so it was applying the OLG inversion. I've fixed this in the attached plot.
  12976   Sat May 6 21:52:11 2017   rana   Update   General   MICH NB questions

I think the most important next two items to budget are the optical lever noise, and the coil driver noise. The coil driver noise is dominated at the moment by the DAC noise since we're operating with the dewhitening filters turned off.

  12979   Wed May 10 01:56:06 2017   gautam   Update   General   MICH NB - OL coupling

Last night, I tried to estimate the contribution of OL feedback signal to the MICH length error signal.

In order to do so, I took a swept sine measurement with a few points between 50 Hz and 500 Hz. The transfer function between C1:LSC-MICH_OUT_DQ and the Oplev servo output point (e.g. C1:SUS-BS_OL_PIT_OUT etc.) was measured. I played around with the excitation amplitude till I got coherence > 0.9 for the TF measurement, while making sure I wasn't driving the Oplev error point so hard that side-lobes began to show up in the MICH control signal spectrum.

The Oplev control signal is not DQ-ed, so I locked the DRMI again and downloaded the 16k data "live" for a ~5min stretch using cdsutils.getdata on the workstation. The Oplev error point is DQ-ed at 2k, but I found that the excitation amplitude needed for good SNR at the error point drove the servo to its limiter value of 2000cts - so I decided to use the control signal instead. Knowing the transfer function from the Oplev *_OUT* channel to C1:LSC-MICH_IN1_DQ, I backed out the coupling. The transfer function was only measured between 50 Hz and 500 Hz, and no extrapolation was done, so the estimate is only really valid in this range, which looks like the region where it matters anyway (see Attachment #2; contributions from the ITMX, ITMY and BS PIT and YAW servos are added in quadrature).
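The projection itself is simple bookkeeping - something along these lines (channel names and the measured TF arrays are placeholders for the actual data; the result is in MICH error-signal counts, which then gets calibrated to metres with the sensing response as before):

import numpy as np

def project_oplev_coupling(ol_ctrl_asds, measured_tfs):
    """ol_ctrl_asds, measured_tfs: dicts keyed by DoF (e.g. 'BS_PIT'), each an array
    on a common frequency vector between 50 Hz and 500 Hz (no extrapolation).
    Returns the quadrature sum of |TF| x ASD over all DoFs [MICH_IN1 cts/rtHz]."""
    contributions = [np.abs(measured_tfs[k]) * ol_ctrl_asds[k] for k in ol_ctrl_asds]
    return np.sqrt(np.sum(np.square(contributions), axis=0))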

I was also looking at the Oplev servo shapes and noticed that they are different for the ITMs and the BS (Attachment #1). Specifically, for the ITM Oplevs, an "ELP15" is used to do the roll-off while an "ELP35" is employed in the BS servo (though an ELP35 also exists in the ITM Oplev filter banks). I got lost in an elog search for when these were tuned, but I guess the principles outlined in this elog still hold and can serve as a guideline for Oplev loop tweaking.

Coil driver noise estimation to follow

Quote:

I think the most important next two items to budget are the optical lever noise, and the coil driver noise. The coil driver noise is dominated at the moment by the DAC noise since we're operating with the dewhitening filters turned off.

GV 10 May 12:30pm: I've uploaded another copy of the NB (Attachment #3) with the contributions from the ITMs and BS separated. Looks like below 100Hz, the BS coupling dominates, while the hump/plateau around 350Hz is coming from ITMX.

  12981   Wed May 10 16:53:38 2017   rana   Update   General   MICH NB - OL coupling

That's a good find.

  1. The OL control signal can be gotten from the DQ-ed error signal. You just need to multiply it by the digital filters and the gain. The state of the filters and the gain can be gotten using MATLAB tools like getFotonFilt.m. For Python, ChrisW wrote a tool called foton.py which is in the GDS SVN. You should ask him for it. It requires access to some ROOT libraries to run. (A minimal sketch of this reconstruction follows this list.)
  2. We should have sub budgets for everything like OL and thermal, etc. They should be automatically produced each time you run the main budget and should be separate pages in the same PDF file. Jamie / Chris may have something going along these lines so check to see if they are already on it.
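As a sketch of item 1 (the actual coefficient export is done with getFotonFilt.m or foton.py as noted above; the SOS array and gain below are placeholders):

import numpy as np
from scipy.signal import sosfilt

fs  = 2048.0                           # the OL error point is DQ-ed at 2k
err = np.random.randn(int(60 * fs))    # placeholder for the downloaded error time series

sos  = np.array([[1.0, 0.0, 0.0, 1.0, 0.0, 0.0]])   # placeholder second-order sections
gain = 1.0                                          # filter-module gain

# Reconstructed OL control signal = gain x (digital filters applied to the error signal)
ctrl = gain * sosfilt(sos, err)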
  12983   Wed May 10 17:17:05 2017   gautam   Update   General   DAC / Coil Driver noise

Suspension Actuator noise:

There are 3 main sources of electronics noise which come in through the coil driver:

  1. Voltage noise of the coil driver.
    1. The input referred noise is ~5 nV/rHz, so not a big issue.
    2. The Johnson noise of the output resistor which is in series with the coil is sqrt(4*k*T*R) ~ 3 nV/rHz (see the sketch after this list). We probably want to increase this resistor from 200 to 1000 Ohms once Gautam convinces us that we don't need that range for lock acquisition.
  2. Voltage noise of the dewhitening board.
    1. In order to reduce DAC noise, we have a "dewhitening" filter which provides some low passing. There is an "antiDW" filter in the digital part which is the inverse of this, so that when they are both turned on, the result is that the main signal path has a flat transfer function, but the DAC noise gets attenuated.
    2. In particular, ours have 2 second order filters (each with 2 poles at 15 Hz and 2 zeros at 100 Hz).
    3. We also have a passive pole:zero network at the output which has z=130, 530 Hz and p = 14, 3185 Hz.
    4. The dewhitening board has an overall gain of 3 at DC to account for our old DACs having a range of +/-5 V and our coil drivers having +/- 15 V power supplies. We should get rid of this gain of 3.
    5. The dewhitening board (and probably the coil driver) use thick film resistors and so their noise is much worse than expected at low frequencies.
  3. DAC voltage noise. 
    1. The General Standards 16-bit DACs have a noise of ~5 uV/rHz.
  4. The satellite box is passive and not a significant source of noise; it's just a flaky construction and so it's problematic.
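As a quick numerical illustration of the Johnson-noise and dewhitening numbers above (room temperature assumed; the gain of 3 and the passive pole:zero network are left out for brevity, and the resistor values are the ones mentioned at various points in this thread):

import numpy as np

k_B, T = 1.381e-23, 295.0

# Johnson noise of the series resistor, sqrt(4*k*T*R)
for R in (100.0, 200.0, 400.0, 1000.0):
    print("R = %4d ohm: %.1f nV/rtHz" % (R, np.sqrt(4 * k_B * T * R) * 1e9))

# Attenuation of the DAC noise by the dewhitening filter: two second-order stages,
# each with 2 poles at 15 Hz and 2 zeros at 100 Hz
def dewhite_mag(f, n_stages=2):
    s = 2j * np.pi * f
    stage = (1 + s / (2 * np.pi * 100.0))**2 / (1 + s / (2 * np.pi * 15.0))**2
    return abs(stage)**n_stages

dac_noise = 5e-6   # ~5 uV/rtHz for the General Standards 16-bit DACs
print("DAC noise at 100 Hz after dewhitening: %.1e V/rtHz" % (dac_noise * dewhite_mag(100.0)))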
  12984   Wed May 10 17:46:44 2017   gautam   Update   General   DAC / Coil Driver noise - SRM coil driver + dewhite board removed

I've removed the SOS coil driver (D010001-B, S/N B151, labelled "SRM") + Universal Dewhitening Board (D000183 Rev C, S/N B5172, labelled "B5") combo for SRM from 1X4, for photo taking + inspection.

I first shut down the SRM watchdog and noted the cabling between these boards, the AI board, and the output to the Sat. Box. I also needed to shut down the MC2 watchdog, as I had to remove the DAC output to MC2 in order to remove the SRM dewhitening board from the rack. This connection has been restored, and the MC locks fine now.

 

  12985   Thu May 11 09:45:46 2017   rana   Update   General   DAC / Coil Driver noise - SRM coil driver + dewhite board removed

I believe the ETMs and ITMs are different from the others.

  12986   Thu May 11 18:59:22 2017   gautam   Update   General   SRM coil driver + dewhite board initial survey

I've added marked-up schematics + high-res photographs of the SRM coil driver board and dewhitening board to the 40m DCC Document tree (D1700217 and D1700218). 

In the attached marked-up schematics, I've also added the proposed changes which Rana and I discussed earlier today. For the thick-film -> thin-film resistor switching, I will try and make a quick LISO model to see if we can get away with replacing just a few rather than re-stuffing the whole board.

Since I have the board out, should I implement some of these changes (like AD797 removal) before sticking it back in and pulling out one of the ITM boards? I need to look at the locking transients and current digital limit-values for the various DoFs before deciding on what is an appropriate value for the output resistance in series with the coil.

Another change I think should be made, but forgot to include on the markups: on the dewhitening board, we should probably replace the decoupling capacitors C41 and C52 with equivalent-value electrolytic caps (they are currently tantalum caps, which I think are susceptible to failing by shorting input to output).

  12987   Fri May 12 01:36:04 2017   gautam   Update   General   SRM coil driver + dewhite board LISO modeling

I've made the LISO models for the dewhitening board and coil driver boards I pulled out.

Attached is a plot of the current noise in the current configuration (i.e. the dewhitening board just has a gain x3 stage, and the signal then propagates through the coil driver path), with the top 3 noise contributions: the op-amps (op3 and op5) are the LT1125s on the coil driver board in the bias path, while "R12" is the Johnson noise of the 1k input resistance to the OP27 in the signal path.

Assuming the OSEMs have an actuation gain of 0.016 N/A (so 0.064 N/A for 4 OSEMs), the current noise of ~1e-10 A/rtHz translates to a displacement noise of ~3e-15m/rtHz at ~100Hz (assuming a mirror mass of 0.25kg). 

I have NOT included the noise from the LM6321 current buffers as I couldn't find anything about their noise characteristics in the datasheet. LISO files used to generate this plot are attached.

Quote:

I've added marked-up schematics + high-res photographs of the SRM coil driver board and dewhitening board to the 40m DCC Document tree (D1700217 and D1700218). 

In the attached marked-up schematics, I've also added the proposed changes which Rana and I discussed earlier today. For the thick-film -> thin-film resistor switching, I will try and make a quick LISO model to see if we can get away with replacing just a few rather than re-stuff the whole board.

Since I have the board out, should I implement some of these changes (like AD797 removal) before sticking it back in and pulling out one of the ITM boards? I need to look at the locking transients and current digital limit-values for the various DoFs before deciding on what is an appropriate value for the output resistance in series with the coil.

Another change I think should be made, but I forgot to include on the markups: On the dewhitening board, we should probably replace the decoupling capacitors C41 and C52 with equivalent value electrolytic caps (they are currently tantalum caps which I think are susceptible to fail by shorting input to output).

 

  12988   Fri May 12 12:34:55 2017   gautam   Update   General   ITM and BS coil driver + dewhite board pulled out

I first set the bias sliders to 0 on the MEDM screen (after checking that the nominal values were stored), then shut down the watchdogs, and then pulled out the boards for inspection + photo-taking.

  12990   Fri May 12 18:50:08 2017   gautam   Update   General   ITM and BS coil driver + dewhite board pulled out

I've uploaded high-res photos + marked-up schematics to the same DCC page linked in the previous elog. I've noted the S/Ns of the ITM, BS and SRM boards on the page; I think it makes sense to collect everything on one page, and I guess eventually we will unify everything to one or two versions.

To take the photos, I tried to reproduce the "LED light painting" technique reported here. I mounted the Canon EOS Rebel T3i on a tripod, and used some A3 sheets of paper to make a white background against which the board to be photographed was placed. I also used the new macro lens we recently got. I then played around with the aperture and exposure time till I got what I judged to be good photos. The room lights were turned off, and I used the LED on my phone to do the "painting" from about a metre away. I think the photos have turned out pretty well; the component values are readable.

Quote:

I first set the bias sliders to 0 on the MEDM screen (after checking that the nominal values were stored), then shut down the watchdogs, and then pulled out the boards for inspection + photo-taking.

 

  12999   Fri May 19 19:18:53 2017   Kaustubh   Summary   General   Testing of the new Photo Detectors ET-3010 and ET-3040

Motivation:

I got some hands-on experience from Koji on using RF photodetectors and the Network Analyzer. There were newly purchased RF photodetectors from Electro-Optics Technology, Inc. These are InGaAs photodetectors with model nos. 120-10050-0001 (ET-3010) and 120-10056-0001 (ET-3040). The User Guide for the two detectors can be found here. This is the first time we have bought the ET-3010 model PD for the 40m lab. It has an operating bandwidth >1.5GHz (not tested yet), much higher than other PDs of its kind. This can be used for detecting the output as we 'sweep' the laser frequency to get data on the optical cavities and the resonating modes inside the cavity. We tested the ET-3040 model today and will test the ET-3010 next week.

Tools and Machines Used:

We worked on the optical bench right in front of the main entrance to the lab. We put the cables, power cords, etc. in their respective places. We used screws, poles, T's, I's, a multimeter, the Network/Spectrum Analyzer (along with its moving table), a lab computer, an oscilloscope, a power supply and the aforementioned PDs for our testing. We took these items from the stack of tools at the Y-arm and the various labelled boxes placed near the X-arm. We moved the Network Analyzer (along with the bench) from near the Y-arm to our workplace.

Procedure:

I will include a rough schematic of the setup later.

We aligned the reference PD (High Speed Photoreceiver model 1611) and the test PD (ET-3040 in this case) to get optimal power output. We had set the pump current for the laser to 19.5mA, which produced a power of 1.00mW at the output of the fiber coupler. At the reference detector the measured voltage was about 1.8V, and at the DUT it was about 15mV. The DC transimpedance of the reference detector is 10kOhm and its responsivity at 1064 nm is around 0.75A/W. Using this, we calculate the power at the reference detector to be 0.24mW. The DC transimpedance of the DUT is 50Ohm and its responsivity about 0.9A/W. This amounts to a power of about 0.33mW. After measuring the DC voltages, we connected the laser input to the Network Analyzer and gave it an RF signal at -10dBm, with the modulation frequency swept from 100 kHz to 500 MHz. The RF output from the Analyzer is coupled to the Reference Channel (CHR) of the analyzer via a 20dB directional coupler. The AC output of the reference detector is given to Channel A (CHA) and the output from the DUT is given to Channel B (CHB). We got plots of the ratios between the reference detector, DUT and the coupled reference for the Transfer Function and the Phase. We found that the cut-off frequency for the ET-3040 model was around 55 MHz (stated as >50MHz in the data sheet). We have stored the data using the lab PC in the directory .../scripts/general/netgpibdata/data.
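The DC power bookkeeping above is just Ohm's law plus the responsivity; a minimal check (values as quoted, responsivities being the nominal data-sheet numbers):

# P = V / (DC transimpedance) / responsivity
V_ref, Z_ref, R_ref = 1.8, 10e3, 0.75     # reference PD (1611): [V], [ohm], [A/W]
V_dut, Z_dut, R_dut = 15e-3, 50.0, 0.9    # ET-3040: [V], [ohm], [A/W]

P_ref = V_ref / Z_ref / R_ref             # ~0.24 mW
P_dut = V_dut / Z_dut / R_dut             # ~0.33 mW
print("P_ref = %.2f mW, P_dut = %.2f mW" % (P_ref * 1e3, P_dut * 1e3))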

Result:

The bandwidth of the ET-3040 PD is as stated in the data sheet, >50 MHz.

Precaution:

These PDs have an internal power supply of 3V for ET-3040 and 6V for ET-3010. Do not leave these connected to any instruments after the experiments have been performed or else the batteries will get drained if there is any photocurrent on the PDs.

To Do:

A similar procedure has to be followed in order to test the ET-3010 PD. I will be doing this tentatively on Monday.

  13003   Mon May 22 13:37:01 2017   gautam   Update   General   DAC noise estimate

Summary:

I've spent the last week investigating various parts of the DAC -> OSEM coil signal chain in order to add these noises to the MICH NB. Here is what I have thus far.

Current situation:

  • Coils are operated with no DAC whitening
  • So we expect the DAC noise will dominate any contribution from the electronics noise of the analog De-Whitening and Coil Driver boards
  • There is a factor of 3 gain in the analog De-Whitening board

DAC noise measurement:

  • I essentially followed the prescription in G1401335 and G1401399
  • So far, I only measured one DAC channel (ITMX UL)
  • The noise shaping filter in the above documents was adapted for this measurement. The noise used was uniform between DC and 1kHz for this test.
  • For the >50Hz bandstops, I used 1 complex pole pair at 5Hz, and 1 complex zero pair at 50Hz to level off the noise.
  • For <50Hz bandstops, I used 1 complex pole pair at 1Hz and 1 complex zero pair at 5Hz to push the RMS to lower frequencies
  • I set the amplitude ("gain" = 10,000 in awggui) to roughly match the Vpp when the ITM local damping loops are on - this is ~300mVpp (measured with a scope). 
  • The elliptic bandstops were 6th order, with 50dB stopband attenuation.
  • The SR785 input auto-ranging was disabled to allow a fair comparison of the various bandstops - this was fixed to -20 dBVpk for all measurements, and the SR785 noise floor shown is also for this value of the input range. Input was also AC coupled, and since I was using the front-panel LEMO for this test, the signal was effectively single-ended (but the ground of the SR785 was set to "floating" in order to get the differential signal from the DAC) 
  • Attachment #1 shows the results of this measurement - I've subtracted the SR785 noise from the other curves. The noise model was motivated by G1401399, but I use an f^-1/2 model rather than an f^-1 model. It seems to fit the measurement alright (though the "fit" is just done by eye and not by systematic optimization of the parameters of the model function).
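For the record, the by-eye model is of the form below (a flat floor with an f^-1/2 rise at low frequency; the floor level and corner frequency here are placeholders, not the values used in Attachment #1):

import numpy as np

def dac_noise_model(f, n_flat=1e-6, f_corner=30.0):
    """Single-channel DAC voltage noise model [V/rtHz]; f in Hz."""
    return n_flat * np.sqrt(1.0 + f_corner / f)

f = np.logspace(0, 3, 300)
model = dac_noise_model(f)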

Noise budget:

  • I then tried to translate this result into the noise budget
  • The noises for the 4 face coils are added in quadrature, and then the contribution from 3 optics (2 ITMs and BS) are added in quadrature
  • To calibrate into metres, I converted the DAC noise spectral density into cts/rtHz, and used the numbers from this elog. I thought I had missed out on the factor of 3 gain in the de-white board, but the cts-to-meters number from the referenced elog already takes into account this factor.
  • Just to be clear, the black line for DAC noise in Attachment #2 is computed from the single-channel measurement of Attachment #1 according to the following relation: n_DAC (m/rtHz) = n_1ch (V/rtHz) x (2^15/20) (cts/V) x G_act x 2 x sqrt(6), where G_act is the coil transfer function from the referenced elog, taken as 5nm/f^2 on average for the 2 ITMs and BS, the factor of 2 comes from adding the noise from 4 coils in quadrature, and the factor of sqrt(6) comes from adding the noise from 3 optics in quadrature (and since the BS has 4 times the noise of the ITMs). A sketch of this conversion follows this list.
  • Using the 0.016N/A number for each coil gave me an answer that was off by more than an order of magnitude - I am not sure what to make of this. But since the other curves in the NB are made using numbers from the referenced elog, I think the answer I get isn't too crazy...
  • Attachment #2 shows the noise budget in its current form, with DAC noise added. Except for the 30-70Hz region, it looks like the measured noise is accounted for.
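A sketch of the conversion in the relation above (G_act is the average 5nm/f^2 per-count actuator gain from the referenced elog; the cts/V factor is taken as written in the relation):

import numpy as np

def dac_to_mich_disp(n_1ch_V, f):
    """n_1ch_V: single-channel DAC voltage noise [V/rtHz]; f: frequency [Hz]."""
    cts_per_V = 2**15 / 20.0          # cts/V factor as used in the relation above
    G_act     = 5e-9 / f**2           # average coil actuation gain [m/ct]
    # x2 for 4 coils in quadrature, x sqrt(6) for the 3 optics (BS counted at 4x)
    return n_1ch_V * cts_per_V * G_act * 2 * np.sqrt(6)

f = np.logspace(1, 3, 200)
n_dac_disp = dac_to_mich_disp(1e-6, f)   # e.g. for a flat 1 uV/rtHz DAC noise (placeholder)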

Comments:

  • I have made a number of assumptions:
    • All DAC channels have similar noise levels
    • Tried to account for asymmetry between BS and ITMs (BS has 100 ohm resistance in series with the coil driver while the ITMs have 400 ohms) but the individual noises haven't been measured yet
    • This noise estimate holds for the BS, which is the MICH actuator (I didn't attempt to simulate the in-lock MICH control signal and then measure the DAC noise)
  • But this seems sensible as a first estimate
  • The dmesg logs for C1SUS don't tell me what DACs we are using, but I believe they are 16-bit DACs (I'll have to restart the machine to make sure)
  • In the NB, the flattening out of some curves beyond 1kHz is just an artefact of the fact that I don't have data to interpolate in that region, and isn't physical.
  • I had a brief chat with ChrisW who told me that the modified EEPROM/Auto-Cal procedure was only required for 18-bit DACs. So if it is true that our DACs are 16-bit, then he advised that apart from the DAC noise measurement above, the next most important thing to be characterized is the quantization noise (by subtracting the calculated digital control signal from the actual analog signal sent to the coils in lock)
  • More details of my coil driver electronics investigations to follow...
  13005   Mon May 22 18:20:27 2017   Kaustubh   Summary   General   Testing of the new Photo Detectors ET-3010 and ET-3040

I am adding the text files with the data readings and parameter settings, along with the Bode plot of the data. I plotted these graphs using the matplotlib module with Python 2.7.

Quote:

Motivation:

I got some hands-on experience from Koji on using RF photodetectors and the Network Analyzer. There were newly purchased RF photodetectors from Electro-Optics Technology, Inc. These are InGaAs photodetectors with model nos. 120-10050-0001 (ET-3010) and 120-10056-0001 (ET-3040). The User Guide for the two detectors can be found here. This is the first time we have bought the ET-3010 model PD for the 40m lab. It has an operating bandwidth >1.5GHz (not tested yet), much higher than other PDs of its kind. This can be used for detecting the output as we 'sweep' the laser frequency to get data on the optical cavities and the resonating modes inside the cavity. We tested the ET-3040 model today and will test the ET-3010 next week...

 

  13009   Tue May 23 18:09:18 2017   Kaustubh   Configuration   General   Testing ET-3010 PD

In continuation of the previous (ET-3040 PD) test.

The ET-3010 PD needs to be fiber coupled for optimal use. I will try to test this model without a fiber coupler tomorrow and see whether it works or not.

  13010   Tue May 23 22:58:23 2017   gautam   Update   General   De-Whitening board noises

Summary:

I wanted to match a noise model to noise measurement for the coil-driver de-whitening boards. The main objectives were:

  1. Make sure the various poles/zeros of the Bi-Quad stages and the output stage were as expected from the schematics
  2. Figure out which components are dominating the noise contribution, so that these can be prioritized while swapping out the existing thick-film resistors on the board for lower noise thin-film ones
  3. Compare the noise performance of the existing configuration, which uses an LT1128 op-amp (max output current ~20mA) to drive the input of the coil-driver board, with that when we use a TLE2027 (max output current ~50mA) instead. This last change is motivated by the fact that an earlier noise-simulation suggested that the Johnson noise of the 1kohm input resistor on the coil driver board was one of the major noise contributors in the de-whitening board + coil driver board signal chain. Since the TLE2027 can drive an output current of up to 300mA, we could reduce the input impedance of the coil-driver board to mitigate this noise source to some extent. 

Measurement:

  • The back-plane pin controlling the MAX333A that determines whether de-whitening is engaged or not (P1A) was pulled to ground (by means of one of the new extender boards given to us by Ben Abbott). So two de-whitening stages were engaged for subsequent tests.
  • I first measured the transfer function of the signal path with whitening engaged, and then fit my LISO model to the measurement to tweak the values of the various components. This fitted file is what I used for subsequent noise analysis. 
  • ​For the noise measurement, I shorted the input of the de-whitening board (10-pin IDE connector) directly to ground.
  • I then measured the voltage noise at the front-panel SMA connector with the SR785
  • The measurements were only done for 1 channel (CH1, which is the UL coil) for 4 de-whitening boards (2 ITMs, BS, and SRM). The 2 ITM boards are basically identical, and the BS and SRM boards are similar. Here, only results for the board labelled "ITMX" are presented.
  • For this board, I also measured the output voltage noise when the LT1128 was replaced with a TLE2027 (SOIC package, soldered onto a SOIC-to-DIP adaptor). Steve has found (ordered?) some DIP variants of this IC, so we can compare its noise performance when we get it.

Results:

  • Attachment #1 shows the modeled and measured noises, which are in fairly good agreement.
  • The transfer function measurement/fitting (not attached) also suggests that the poles/zeros in the signal path are where we expect as per the schematic. I had already verified the various resistances, but now we can be confident that the capacitance values on the schematic are also correct. 
  • The LT1128 and TLE2027 show pretty much identical noise performance.
  • The SR785 noise floor was low enough to allow this measurement without any pre-amp in between. 
  • I have identified 3 resistors from the LISO model that dominate the noise (all 3 are in the Bi-Quad stages), which should be the first to be replaced. 
  • There are some pretty large 60 Hz harmonics visible. I thought I was careful enough avoiding any ground loops in the measurement, and I have gotten some more tips from Koji about how to better set up the measurement. This was a real problem when trying to characterize the Coil Driver noise.

Next steps:

  • I have data from the other 3 boards I pulled out, to be updated shortly.
  • The last piece (?) in this puzzle is the coil driver noise - this needs to be modeled and measured.
  • Once the coil driver board has been characterized, we need to decide what changes to make to these boards. Some things that come to mind at the moment:
    • Replace critical resistors (from noise-performance point of view) with low noise thin film ones.
    • Remove the "fast analog" path on the coil driver boards - these have potentiometers in series with the coil, which we should remove since we are not using this path anyways.
    • Remove all AD797s from both de-whitening and coil driver boards - these are mostly employed as monitor points that go to the backplane connector, which we don't use, and so can be removed.
    • Increase the series resistor at the output of the coil driver (currently, these are either 100ohm or 400ohm depending on the optic/channel). I need to double check the limits on the various LSC servos to make sure we can live with the reduced range we will have if we up these resistances to 1 kohm (which serves to reduce the current noise to the coils, which is ultimately what matters).
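To make the last point concrete, a toy comparison of the current noise to the coil for a few series resistances (the 20 nV/rtHz driver output voltage noise assumed here is just a round number, not a measured value):

import numpy as np

k_B, T = 1.381e-23, 295.0
e_n = 20e-9                                  # assumed driver output voltage noise [V/rtHz]

for R in (100.0, 400.0, 1000.0):
    i_driver  = e_n / R                      # driver voltage noise seen as coil current noise
    i_johnson = np.sqrt(4 * k_B * T / R)     # Johnson current noise of the series resistor
    print("R = %4d ohm: driver %.1e A/rtHz, Johnson %.1e A/rtHz" % (R, i_driver, i_johnson))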
  13011   Wed May 24 18:19:15 2017   Kaustubh   Update   General   ET-3010 PD Test

Summary:

In continuation of the previous test conducted on the ET-3040 PD, I performed a similar test on the ET-3010 model. This model requires a fiber-coupled input for proper testing, but I tested it in free space without a fiber coupler, as the laser power was only 1.00 mW and there was not much danger of scattering of the laser beam. The data sheet can be found here.

Procedure:

The schematic (attached below) and the procedure are the same as last time. The pump current was set to 19.5 mA, giving us a laser beam of power 1.00mW at the fiber coupler output. The measured voltage for the reference detector was 1.8V. For the DUT, the voltage is amplified using a low noise amplifier (model SR-560) with a gain of 100. Without any laser incident on the DUT, the multimeter reads 120.6 mV. After aligning the laser onto the DUT, the multimeter reads 348.5 mV, i.e. the voltage for the DUT is 227.9/100 ~ 2.28 mV. The DC transimpedance of the reference detector is 10kOhm and its responsivity at 1064 nm is around 0.75 A/W. Using this, we calculate the power at the reference detector to be 0.24 mW. The DC transimpedance of the DUT is 50Ohm and its responsivity around 0.85 A/W. Using this, we calculate the power at the DUT to be 0.054 mW. After this, we connect the laser input to the Network Analyzer (AG4395A) and give it an RF signal at -10dBm, with the modulation frequency swept from 100 kHz to 500 MHz. The RF output from the Analyzer is coupled to the Reference Channel (CHR) of the analyzer via a 20dB directional coupler. The AC output of the reference detector is given to Channel A (CHA) and the output from the DUT is given to Channel B (CHB). We got plots of the ratios between the reference detector, DUT and the coupled reference for the Transfer Function and the Phase. I stored the data under the directory .../scripts/general/netgpibdata/data. The Bode plot is attached below; from it we observe that the cut-off frequency for the ET-3010 model is at least 500 MHz (stated as >1.5 GHz in the data sheet).

Result:

The bandwidth of the ET-3010 PD is at least 500 MHz; the data sheet states >1.5 GHz.

Precaution:

The ET-3010 PD has an internal power supply of 6V. Don't leave the PD connected to any instrument after the experimentation is done or else the batteries will get drained if there is any photocurrent on the PDs.

To Do:

Calibrate the vertical axis in the Bode plot with transimpedance (Ohms) for the two PDs. Automate the procedure by writing a Python script to take multiple sets of readings from the Network Analyzer and also plot the error bands.

  13015   Thu May 25 19:27:29 2017   gautam   Update   General   Coil driver board noises

[Koji, Gautam]

Summary: 

  • Attachment #1 shows the measured/modeled noise of the coil driver board (labelled ITMX). 
  • The measurement was made with the "TEST" input (which is what the DAC drives) connected to ground via a 50ohm terminator, and the "BIAS" input grounded.
  • The model tells us to expect a noise of the order of 5nV/rtHz - this is comparable to (or below) the input noise of the SR785, and even the SR560. So this measurement only serves to place an upper bound on the coil driver board noise.
  • There is some excess noise below 40Hz, would be interesting to see if this disappears with swapping out thick-film resistors for thin film ones.
  • The LISO model says that the dominant contribution is from the voltage and input current noise of the two op-amps (LT1125) in the bias LP filter path. 
  • But if we can indeed realize this noise level of ~10-20nV/rtHz, we are already at the ~10^-17m/rtHz displacement noise for MICH at about 200Hz. I suspect there are other noises that will prevent us from realizing this performance in displacement noise.

Details:

This measurement has been troublesome - I was plagued by large 60Hz harmonics (see Attachment #1), the cause of which was unknown. I powered all the electronics used in the measurement setup from the same power strip (one of the new surge-protecting ones Steve recently acquired for us), but the harmonics remained. Yesterday, Koji helped me troubleshoot the issue. We did various things; I list them here in the order we did them:

  1. Double check that all electronics were indeed being powered from the same power strip - OK, but harmonics remained present.
  2. Tried using a different DC power supply - no effect.
  3. Checked the signal with an oscilloscope - got no additional insight.
  4. I was using a DB25 breakout board + pomona minigrabbers to measure the output signal and pipe it to the SR785. Koji suggested using twisted ribbon wire + a soldered BNC connector (recycled from some used ones lying around the lab). The idea was to minimize stray radiation pickup. We also disconnected the WiFi extender and GPIB box from the analyzer and disconnected these from the power - this finally had the desired effect: the large harmonics vanished.

Today, I tried to repeat the measurement, with the newly made twisted ribbon cable, but the large 60Hz harmonics were back. Then I realized we had also disconnected the WiFi extender and GPIB box yesterday.

Turns out that connecting the Prologix box to the SR785 (even with no power) is the culprit! Disconnecting the Prologix box makes these harmonics go away. I was using the box labelled "Santuzza.martian" (192.168.113.109), but I double-checked with the box labelled "vanna.martian" (192.168.113.105, also a different DC power supply adapter for the box), the effect is the same. I checked various combinations like 

  • GPIB box connected but not powered
  • GPIB box connected with no network cable

but it looks like connecting the GPIB box to the analyzer is what causes the problem. This was reproducible on both SR785s in the lab. So to make this measurement, I had to do things the painful way - acquire the spectrum by manually pushing buttons with the GPIB box disconnected, then re-connect the box and download the data using SRmeasure --getdata. I don't fully understand what is going on, especially since if the input connector is directly terminated using a 50ohm BNC terminator, there are no harmonics, regardless of whether the GPIB box is connected or not. But it is worth keeping this problem in mind for future low-noise measurements. My elog searches did not reveal past reports of similar problems, has anyone seen something like this before?

It also looks like my previous measurement of the de-whitening board noises was plagued by the same problem (I took all those spectra with the GPIB boxes connected). I will repeat this measurement.

Next steps:

At the meeting this week, it was decided that

  • All AD797s would be removed from de-whitening boards and also coil-driver boards (as they are unused).
  • Thick film resistors with the most dominant noise contributions to be replaced with thin-film ones.
  • Gain of 3 on de-whitening board to be changed to gain of 1.

I also think it would be a good idea to up the 100-ohm resistors in the bias path on the ITM coil driver boards to 1kohm wire-wound. Since the dominant noise on the coil-driver boards is from the voltage noise of the Op-Amps in the bias path, this would definitely be an improvement. Looking at the current values of the bias MEDM sliders, a 10x increase in the resistance for ITMX will not be possible (the yaw bias is ~-1.5V), but perhaps we can go for a 4x increase?

The plan is to then re-install the boards, and see if we can 

  1. Turn on the whitening successfully (I checked with an extender board that the switching of the whitening stages works - turning OFF the "simDW" filter in the coil driver filter banks enables the analog de-whitening).
  2. Realize the promised improvement in MICH displacement noise with the existing whitening configuration.

We can then take a call on how much to up the series resistance in the DAC signal path. 

Now that I have figured out the cause of the harmonics, I will also try and measure the combined electronics noise of de-whitening board + coil driver board and compare it to the model.

Quote:
  • The last piece (?) in this puzzle is the coil driver noise - this needs to be modeled and measured.

 

  13016   Sat May 27 10:26:28 2017   Kaustubh   Update   General   Transimpedance Calibration

Using Alberto's paper LIGO-T10002-09-R, titled "40m RF PDs Upgrade", I calibrated the vertical axis in the Bode plots I had obtained for the two PDs, ET-3010 and ET-3040.

I am not sure whether the values I have obtained are correct or not (i.e. whether the calibration is correct or not). Kindly review them.

EDIT: Attached the formula used to calculate the transimpedance for each data point and the values of the other parameters.

EDIT 2: Updated the plots by changing the conversion for getting the ratio of the transfer functions from 10^(y/10) to 10^(y/20).
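For reference, one plausible version of the bookkeeping (using the 10^(y/20) conversion from EDIT 2 and the reference-PD numbers from the test entries above; I am not claiming this is exactly the formula in the attachment):

def transimpedance_dut(y_dB, P_ref, P_dut, Z_ref=10e3, R_ref=0.75, R_dut=0.9):
    """y_dB: measured CHB/CHA magnitude in dB; P_ref, P_dut: DC powers on the two PDs [W].
    Z_ref: reference PD transimpedance [ohm]; R_ref, R_dut: responsivities [A/W]."""
    ratio = 10.0 ** (y_dB / 20.0)        # dB -> linear *voltage* ratio (10^(y/10) would be a power ratio)
    return ratio * Z_ref * R_ref * P_ref / (R_dut * P_dut)   # DUT transimpedance [ohm]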

  13017   Mon May 29 16:47:38 2017   gautam   Update   General   Coil driver boards reinstalled

Yesterday, I reinstalled the de-whitening boards + coil driver boards into their respective Eurocrate slots, and reconnected the cabling. I then roughly re-aligned the ITMs using the green beams. 

I've given Steve a list of the thin-film resistors we need to implement the changes discussed in the preceding elogs - but I figured it would be good to see if we can realize the projected improvement in MICH displacement noise just by fixing the BS Oplev loop shape and turning the existing whitening on. Before re-installing the boards, however, I did make a few changes:

  • Removed the gain of x3 on all the signal paths on the De-Whitening boards, and made them gain x1. For the De-Whitened path, this was done by changing the feedback resistor in the final op-amp (OP27) from 7.5kohm to 2.49kOhm, while for the bypass path, the feedback resistor in the LT1125 stages were changed from 3.01kohm to 1kohm. 
  • To recap - this gain of x3 was originally implemented because the DACs were +/- 5V, while the coil driver electronics had supply voltage of +/- 15V. Now, our DACs are +/- 10V, and even though the supply voltage to the coil driver boards is +/- 15V, in reality, the op-amps saturate at around 12V, so we aren't really losing much in terms of range.
  • I also modified the de-whitening path in the BS de-whitening board to mimic the configuration on the ITM de-whitening boards. Mainly, this involved replacing the final stage AD797 with an OP27, and also implementing the passive pole-zero network at the output of the de-whitened path. I couldn't find capacitors similar to those used on the ITM de-whitening boards, so I used WIMA capacitors.
  • The SRM de-whitening path was not touched for now.
  • On all the boards, I replaced any AD797s that were being used with OP27s, and simply removed AD797s that were in DAQ paths.
  • I removed all the potentiometers on all the boards (FAST analog path on the coil driver boards, and some offset trim Pots on the BS and SRM de-whitening boards for the AD797s, which were also removed).
  • For one signal path on the coil driver board (ITMX ch1), I replaced all of the resistors with thin-film ones and re-measured the noise. However, the excess noise in the measurement below ~40Hz (relative to the model) remained.

Photos of all the boards were taken prior to re-installation, and have been uploaded to the 40m Google Photos page - I will update schematics + photos on the DCC page once other planned changes are implemented.

I also measured the transfer functions on the de-whitened signal paths on all the boards before re-installing them. I then fit everything using LISO, and updated the filter banks in Foton to match these measurements - the original filters were copied over from FM9 and FM10 to FM7 and FM8. The new filters are appended with the suffix "_0517", and live in FM9 and FM10 of the coil output filter banks. The measured TFs (for ITMs and BS) are summarized in Attachment #1, while Attachment #2 contains the data and LISO file used to do the fits (path to the .bod files in the .fil file will have to be changed appropriately). I used 2 complex pole pairs at ~10 Hz, two complex zero pairs at ~100Hz, real poles at ~15Hz and ~3kHz, and real zeros at ~100Hz and ~550Hz for the fits. The fits line up well with the measured data, and are close enough to the "expected" values (as calculated from component values) to be explained by tolerances on the installed components - I omit the plots here. 
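For anyone wanting to reproduce the fitted shape without LISO, the pole/zero layout quoted above can be put together as a zpk model, e.g. as below (the Qs of the complex pairs and the overall gain here are assumptions, not the fitted values):

import numpy as np
from scipy import signal

def cpair(f0, Q):
    """s-plane roots of a complex pole/zero pair at f0 [Hz] with quality factor Q."""
    w0 = 2 * np.pi * f0
    re = -w0 / (2 * Q)
    im = w0 * np.sqrt(1 - 1 / (4 * Q**2))
    return [re + 1j * im, re - 1j * im]

zeros = cpair(100.0, 2.0) + cpair(100.0, 2.0) + [-2 * np.pi * 100.0, -2 * np.pi * 550.0]
poles = cpair(10.0, 2.0) + cpair(10.0, 2.0) + [-2 * np.pi * 15.0, -2 * np.pi * 3000.0]
sys = signal.ZerosPolesGain(zeros, poles, 1.0)

f = np.logspace(0, 4, 500)
w, mag_dB, phase_deg = signal.bode(sys, w=2 * np.pi * f)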

After re-installing the boards in the Eurocrate, restoring rough alignment, and updating the filter banks with the most recent measured values, I wanted to see if I could turn the whitening on for one of the optics (ITMY) smoothly before trying to do so in the full DRMI - switching off the "SimDW_0517" filter (FM9) should switch the signal path on the de-whitening board from bypass to de-whitened, and I had confirmed last week with an extender board that the voltage at the appropriate backplane connector pin does change as expected when the FM9 MEDM button is toggled (for both ITMs, BS and SRM). But today I was not able to engage this transition smoothly, the optic seems to be getting kicked around when I engage the whitening. I will need to investigate this further. 


Unrelated to this work: the ETMY Oplev HeNe is dead (see Attachment #3). I thought we had just replaced this laser a couple of months ago - what is the expected lifetime of these? Perhaps the power supply at the Y-end is wonky and somehow damaging the HeNe heads?

  13019   Tue May 30 16:02:59 2017   gautam   Update   General   Coil driver boards reinstalled

I think the reason I am unable to engage the de-whitening is that the OL loop is injecting a ton of control noise - see Attachment #1. With the OL loop off (i.e. just local damping loops engaged for the ITMs), the RMS control signal at 100Hz is ~6 orders of magnitude (!) lower than with the OL loop on. So turning on the whitening was just railing the DAC I guess (since the whitening has something like 60dB gain at 100Hz).

The Oplev loops for the ITMs use an "Ellip15" low-pass filter to do the roll-off (2nd order Elliptic low pass filter with 15dB stopband atten and 2dB ripple). I confirmed that if I disable the OL loops, I was able to turn on the whitening for ITMY smoothly.

Now that the ETMY OL HeNe has been replaced, I restored alignment of the IFO. Both arms lock fine (I was also able to engage the ITMY Coil Driver whitening smoothly with the arm locked). However, something funny is going on with ASS - running the dither seems to inject huge offsets into the ITMY pit and yaw such that it almost immediately breaks the lock. This probably has to do with some EPICS values not being reset correctly since the recent slow-machine restarts (for instance, the c1iscaux restart caused all the LSC RFPD whitening gains to be reset to random values, I had to burt-restore the POX11 and POY11 values before I could get the arms to lock), I will have to investigate further.

GV edit 2pm 31 May: After talking to Koji at the meeting, I realized I did not specify what channel the attached spectra are for - it is  C1:SUS-ITMY_ULCOIL_OUT.

Quote:
 

But today I was not able to engage this transition smoothly, the optic seems to be getting kicked around when I engage the whitening. I will need to investigate this further. 


Unrelated to this work: the ETMY Oplev HeNe is dead (see Attachment #3). I thought we had just replaced this laser a couple of months ago - what is the expected lifetime of these? Perhaps the power supply at the Y-end is wonky and somehow damaging the HeNe heads?

 

  13026   Thu Jun 1 00:10:15 2017   gautam   Update   General   Coil driver boards reinstalled

[Koji, Gautam]

We tried to debug the mysterious sudden failure of ASS - here is a summary of what we did tonight. These are just notes for now, so I don't forget tomorrow.

What are the problems/symptoms?

  • After re-installing the coil driver electronics, the ASS loops do not appear to converge - one or more loops seem to run away to the point we lose the lock.
  • For the Y-arm dithers, the previously nominal ITM PIT and YAW oscillator amplitudes (of ~1000cts each) now appear far too large (the fuzz on the Y arm transmission increases by x3 as viewed on StripTool).
  • The convergence problem exists for the X arm alignment servos too.

What are the (known) changes since the servos were last working?

  • The gain of x3 on the de-whitening boards for ITMX, ITMY, BS and SRM has been replaced with gain x1. But I had measurements of all transfer functions (De-White board input to De-White Board outputs) before and after this change, so I compensated by adding a filter of gain ~x3 to all the coil filter banks for these optics (the exact value was the ratio of the DC gains of the transfer functions before/after).
  • The ETMY Oplev has been replaced. I walked over to the endtable and there doesn't seem to be any obvious clipping of either the Oplev beam or the IR transmission.

Hypotheses plus checks (indented bullets) to test them:

  1. The actuation on the ITMs are ~x10 times stronger now (for reasons unknown).
    • I locked the Y-arm and drove a line in the channels C1:SUS-ETMY_LSC_EXC and C1:SUS-ITMY_LSC_EXC at ~100Hz and ~30Hz (one optic at one frequency at a time), and looked at the response in the LSC control signal. The peaks at both frequencies for the ITMs and ETMs were within a factor of ~2. Seems reasonable.
    • We further checked by driving lines in C1:SUS-ETMY_ASCPIT_EXC and C1:SUS-ITMY_ASCPIT_EXC (and also the corresponding YAW channels), and looked at peak heights at the drive frequencies in the OL control signal spectra - the peak heights matched up well in both the ITM and ETM spectra (the drive was in the same number of counts).

      So it doesn't look like there is any strange actuation imbalance between the ITM and ETM as a result of the recent electronics work, which makes sense as the other control loops acting on the suspensions (local damping, Oplevs, etc.) seem to work fine. 
  2. The way the dither servo is set up for the Y-arm, the tip-tilts are used to set the input axis to the cavity axis, while actuation to the ITM and ETM takes care of the spot centering. The problem lies with one of these subsystems.
    • We tried disabling the ASS servo inputs to all the spot-centering loops - but even with just actuation on the TTs, the arm transmission isn't maximized.
    • We tried the other combination - disable actuation path to TTs, leave those to ITM and ETM on - same result, but the divergence is much faster (lock lost within a couple of seconds, and large offsets appear in the ETM_PIT_L / ETM_YAW_L error signals).
    • Tried turning on loops one at a time - but still the arm transmission isn't maximized.
  3. Something is funny with the IR transmon QPD / ETMY Oplev.
    • I quickly measured Oplev PIT and YAW OLTFs, they seem normal with upper UGFs around 5Hz and phase margins of ~30 degrees.
    • We had no success using either of the two available Transmon QPDs
    • Looking at the QPD quadrants, the alignment isn't stellar but we get roughly the same number of counts on all quadrants, and the spot isn't drastically misaligned in either PIT or YAW.

For whatever reasons, it appears that dithering the cavity mirrors at frequencies with amplitudes that worked ~3 weeks ago is no longer giving us the correct error signals for dither alignment. We are out of ideas for tonight, TBC tomorrow...

 

  13034   Fri Jun 2 12:32:16 2017   gautam   Update   General   Power glitch

Looks like there was a power glitch at around 10am today.

All frontends, FB, Megatron and Optimus were offline. Chiara reports an uptime of 666 days, so it looks like its UPS works fine. The PSL was tripped, probably the end lasers too (yet to check). Slow machines seem alright (they respond to ping, and I can also telnet into them).

Since all the frontends have to be re-started manually, I am taking this opportunity to investigate some cds issues like the lack of a dmesg log file on some of the frontends. So the IFO will be offline for sometime.

  13035   Fri Jun 2 16:02:34 2017   gautam   Update   General   Power glitch

Today's recovery seems to be a lot more complicated than usual.

  • The vertex area of the lab is pretty warm - I think the ACs are not running. The wall switch-box (see Attachment #1) shows some red lights which I'm pretty sure are usually green. I pressed the push-buttons above the red lights; hopefully this fixed the AC and the lab will cool down soon.
  • Related to the above - C1IOO has a bunch of warning orange indicator lights ON that suggest it is feeling the heat. Not sure if that is why, but I am unable to bring any of the C1IOO models back online - the rtcds compilation just fails, after which I am unable to ssh back into the machine as well.
  • C1SUS was problematic as well. I found that the expansion chassis was not powered. Fortunately, this was fixed by simply switching to the one free socket on the power strip that powers a bunch of stuff on 1X4 - this brought the expansion chassis back alive, and after a soft reboot of c1sus, I was able to get these models up and running. Fortunately, none of the electronics seem to have been damaged. Perhaps it is time for surge-protecting power strips inside the lab area as well (if they aren't already)? 
  • I was unable to resolve the dmesg problem alluded to earlier. Looking through some forums, I gather that the output of dmesg should be written to a file in /var/log/. But no such file exists on any of our 5 front-ends (though it does on Megatron, for example). So is this way of setting up the front-end machines deliberate? Why does this matter? Because it seems that the buffer which we see when we simply run "dmesg" on the console gets periodically cleared. So some time back, when I was trying to verify that the installed DACs are indeed 16-bit DACs by looking at dmesg, running "dmesg | head" showed a first line that was written well after the last reboot of the machine. Anyway, this probably isn't a big deal, and I also verified during the model recompilation that all our DACs are indeed 16-bit.
  • I was also trying to set up the Upstart processes on megatron such that the MC autolocker and FSS slow control scripts start up automatically when the machine is rebooted. But since C1IOO isn't co-operating, I wasn't able to get very far on this front either...

So current status is that all front-end models except those hosted on C1IOO are back up and running. Further recovery efforts in progress.  

GV Jun 5 6pm: From my discussion with jamie, I gather that the fact that the dmesg output is not written to file is because our front-ends are diskless (this is also why the ring buffer, which is what we are reading from when running "dmesg", gets cleared periodically)

 

Quote:

Looks like there was a power glitch at around 10am today.

All frontends, FB, Megatron, Optimus were offline. Chiara reports an uptime of 666 days so looks like its UPS works fine. PSL was tripped, probably the end lasers too (yet to check). Slow machines seem alright (Responds to ping, and I can also telnet into them).

Since all the frontends have to be re-started manually, I am taking this opportunity to investigate some cds issues like the lack of a dmesg log file on some of the frontends. So the IFO will be offline for sometime.

 

  13036   Fri Jun 2 22:01:52 2017   gautam   Update   General   Power glitch - recovery

[Koji, Rana, Gautam]

Attachment #1 - CDS status at the end of todays efforts. There is one red indicator light showing an RFM error which couldn't be fixed by running "global diag reset" or "mxstream restart" scripts, but getting to this point was a journey so we decided to call it for today.


The state this work was started in was as indicated in the previous elog - c1ioo wasn't ssh-able, but was responding to ping. We then did the following:

  1. Killed all models on all four other front ends other than c1ioo. 
  2. Hard reboot for c1ioo - at this point, we could ssh into c1ioo. With all other models killed, we restarted the c1ioo models one by one. They all came online smoothly.
  3. We then set about restarting the models on the other machines.
    • We started with the IOP models, and then restarted the others one by one
    • We then tried running "global diag reset", "mxstream restart" and "telnet fb 8087 -> shutdown" to get rid of all the red indicator fields on the CDS overview screen.
    • All models came back online, but the models on c1sus indicated a DC (data concentrator?) error. 
  4. After a few minutes, I noticed that all the models on c1iscex had stalled
    • dmesg pointed to a synchronization error when trying to initialize the ADC
    • The field that normally pulses at ~1pps on the CDS overview MEDM screen when the models are running normally was stuck
    • Repeated attempts to restart the models kept throwing up the same error in dmesg 
    • We even tried killing all models on all other frontends and restarting just those on c1iscex as detailed earlier in this elog for c1ioo - to no avail.
    • A walk to the end station to do a hard reboot of c1iscex revealed that both green indicator lights on the slave timing card in the expansion chassis were OFF.
    • The corresponding lights on the Master Timing Sequencer (which supplies the synchronization signal to all the front ends via optical fiber) were also off.
    • Some time ago, Eric and I had noticed a similar problem. Back then, we simply switched the connection on the Master Timing Sequencer to the one unused available port, which fixed the problem. This time, switching the fiber connection on the Master Timing Sequencer had no effect.
    • Power cycling the Master Timing Sequencer had no effect
    • However, switching the optical fiber connections going to the X and Y ends led to the green LED on the suspect port on the Master Timing Sequencer (originally the X end fiber was plugged in here) turning back ON when the Y end fiber was plugged in.
    • This suggested a problem with the slave timing card, and not the master. 
  5. Koji and I then did the following at the X-end electronics rack:
    • Shut down c1iscex, toggled the switches in the front and back of the expansion chassis
    • Disconnected AC power from the rear of c1iscex as well as the expansion chassis. This meant all LEDs in the expansion chassis went off, except a single one labelled "+5AUX" on the PCB - to make this go off, we had to disconnect a jumper on the PCB (see Attachment #2), and then toggle the power switches on the front and back of the expansion chassis (with the AC power still disconnected). Finally all lights were off.
    • Confident we had completely cut all power to the board, we then started re-connecting AC power. First we re-started the expansion chassis, and then re-booted c1iscex.
    • The lights on the slave timing card came on (including the one that pulses at ~1pps, which indicates normal operation)!
  6. Then we went back to the control room, and essentially repeated bullet points 2 and 3, but starting with c1iscex instead of c1ioo.
  7. The last twist in this tale was that though all the models came back online, the DC errors on c1sus models persisted. No amount of "mxstream restart", "global diag reset", or restarting fb would make these go away.
  8. Eventually, Koji noticed that there was a large discrepancy in the gpstimes indicated in c1x02 (the IOP model on c1sus), compared to all the other IOP models (even though the PDT displayed was correct). There were also a large number of IRIG-B errors indicated on the same c1x02 status screen, and the "TIM" indicator in the status word was red.
  9. It turns out that running ntpdate before restarting all the models somehow doesn't sync the GPS time - this was what was causing the DC errors. 
  10. So we did a hard reboot of c1sus (and for good measure, repeated the bullet points of 5 above on c1sus and its expansion chassis). Then, we tried starting the c1x02 model without running ntpdate first (on startup, there is an 8-hour mismatch between the actual time in Pasadena and the system time - the system time is 8 hours behind, so it isn't simply synced to UTC or any other real timezone).
    • Model started up smoothly
    • But there was still a 1 second discrepancy between the gpstime on c1x02 and all the other IOPs (and the 8 hour discrepancy between displayed PDT and actual time in Pasadena)
    • So we tried running ntpdate after starting c1x02 - this finally fixed the problem, gpstime and PDT on c1x02 agreed with the other frontends and the actual time in Pasadena.
    • However, the models on c1lsc and c1ioo crashed
    • So we restarted the IOPs on both these machines, and then the rest of the models.
  11. Finally, we ran "mxstream restart", "global diag reset", and restarted fb, to make the CDS overview screen look like it does now.

Why does ntpdate behave this way? And only on one of the frontends? And what is the remaining RFM error? 
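For future recoveries, a quick way to spot this kind of timing mismatch without clicking through every GDS_TP screen would be to poll the GPS-time readback of each IOP over EPICS and compare. The sketch below is only an illustration, not what we ran tonight; the channel-name pattern and DCUID assignments are hypothetical placeholders that should be checked against the actual GDS_TP screens.

# Hedged sketch: compare the GPS time reported by each IOP model.
# Channel names below are hypothetical placeholders (check the GDS_TP screens).
from epics import caget

IOP_GPS_CHANNELS = {
    'c1x01': 'C1:FEC-20_TIME_DIAG',   # hypothetical DCUID/record names
    'c1x02': 'C1:FEC-21_TIME_DIAG',
    'c1x03': 'C1:FEC-22_TIME_DIAG',
    'c1x04': 'C1:FEC-23_TIME_DIAG',
    'c1x05': 'C1:FEC-24_TIME_DIAG',
}

# Read all the GPS readbacks back-to-back and print them side by side.
# On a healthy system they should agree to within the second or so it takes
# to read them all; the 1 second offset (or an 8 hour system-time offset)
# described above would stand out immediately.
for name, chan in sorted(IOP_GPS_CHANNELS.items()):
    print('%-8s %s' % (name, caget(chan)))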

Koji then restarted the IMC autolocker and FSS slow processes on megatron. The IMC locked almost immediately. The MC2 transmon indicated a large shift in the spot position, and the PMC transmission is also pretty low (while the lab temperature equilibrates after the AC being off during peak daytime heat). So the MC transmission is ~14,500 counts, while we are used to more like 16,500 counts nowadays.

Re-alignment of the IFO remains to be done. I also did not restart the end lasers, or set up the Marconi with nominal params. 

Attachment #3 - Status of the Master Timing Sequencer after various reboots and power cycling of front ends and associated electronics.

Attachment #4 - Warning lights on C1IOO

Quote:

Today's recovery seems to be a lot more complicated than usual.

So current status is that all front-end models except those hosted on C1IOO are back up and running. Further recovery efforts in progress.  

 

  13038   Sun Jun 4 15:59:50 2017 gautamUpdateGeneralPower glitch - recovery

I think the CDS status is back to normal.

  • Bit 2 of the C1RFM status word was red, indicating something was wrong with "GE FANUC RFM Card 0".
  • You would think the RFM errors occur in pairs, in C1RFM and in some other model - but in this case, the only red light was on c1rfm.
  • While trying to re-align the IFO, I noticed that the TRY time series flatlined at 0 even though I could see flashes on the TRANSMON camera.
  • Quick trip to the Y-End with an oscilloscope confirmed that there was nothing wrong with the PD.
  • I crawled through some elogs, but didn't really find any instructions on how to fix this problem - the couple of references I did find to similar problems reported red indicator lights occurring in pairs on two or more models, and the problem was then fixed by restarting said models.
  • So on a hunch, I restarted all models on c1iscey (no hard or soft reboot of the FE was required)
  • This fixed the problem
  • I also had to start the monit process manually on some of the FEs like c1sus. 

Now IFO work like fixing ASS can continue...

  13079   Sun Jun 25 22:30:57 2017 gautamUpdateGeneralc1iscex timing troubles

I saw that the CDS overview screen indicated problems with c1iscex (also, ETMX was erratic). I took a closer look and thought it might be a timing issue - a walk to the X-end confirmed this: the 1pps status light on the timing slave card was no longer blinking. 

I tried all the versions of power cycling and debugging this problem known to me, including those suggested in this thread and in a more recent one. I am leaving things as is for the night and will look into this more tomorrow. I've also shut down the ETMX watchdog for the time being. Looks like this has been down since 8am UTC on Jun 24.

  13080   Mon Jun 26 09:39:15 2017 NaomiSummaryGeneralMeasure transfer functions of Mini-Circuits filters

I have spent my first few days as a SURF getting experience working with the Network/Spectrum Analyzer (AG 4395A). After an introduction to the 40m by Koji, I was tasked with using the AG4395A to measure the transfer function of several filters (for example, Mini-Circuits Low Pass Filter SLP-30). I am now familiar with configuring the AG 4395A, taking a single set of data using a command from one of the control computers, and plotting the dataset as a Bode plot (separate plots for magnitude and phase) using Python.

To Do:

  • Use AGmeasure to take multiple datasets with a single command.
  • Plot multiple datasets for each filter on a single Bode plot and perform some statistical analysis. 

To experiment with plotting multiple datasets on a single Bode plot, I used a single dataset from the Network Analyzer using the SLP-30 filter and added random noise to create ten datasets to plot. I am attaching the resulting Bode plot, which has the ten generated sets of data plotted along with their average.

We discussed with Rana and Koji how to interpret this type of dataset from the Network Analyzer. Instead of considering the magnitude and phase as separate quantities, we should consider them together as a single complex number in the form H(f) = M exp(iπP/180), where M is the magnitude and P is the phase in degrees. We can then find the average value of the measured quantity in its complex number form (x + iy), as opposed to just taking the average of the magnitude and phase separately.  
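As an illustration of this averaging, here is a minimal Python sketch; the file names and the column layout assumed below are made up for the example, so the actual AGmeasure output format should be checked before using it.

import numpy as np

def load_tf(fname):
    # Assumed columns: frequency [Hz], magnitude (linear), phase [degrees]
    f, mag, phase_deg = np.loadtxt(fname, unpack=True)
    # Combine magnitude and phase into one complex number per frequency point
    return f, mag * np.exp(1j * np.pi * phase_deg / 180.0)

# Hypothetical file names for the ten SLP-30 datasets
fnames = ['SLP30_%02d.txt' % i for i in range(10)]
tfs = [load_tf(fname)[1] for fname in fnames]

# Average in the complex plane (x + iy), not magnitude and phase separately
H_avg = np.mean(tfs, axis=0)
mag_avg = np.abs(H_avg)
phase_avg_deg = np.angle(H_avg, deg=True)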

  13081   Mon Jun 26 22:01:08 2017 KojiUpdateGeneralc1iscex timing troubles

I tried a couple of things, but there was no fundamental improvement - the LED on the timing board is still not lighting up.

- The power supply cable to the timing board at c1iscex indicated +12.3V

- I swapped the timing fiber to the new one (orange) in the digital cabinet. It didn't help.

- I swapped the opto-electronic I/F for the timing fiber with the Y-end one. The X-end one worked at Y-end, and Y-end one didn't work at X-end.

- I suspected the timing board itself -> I brought a "spare" timing board from the digital cabinet and tried to swap the board. This didn't help.

 

Some ideas:

- Bring the X-end fiber to C1SUS or C1IOO to see if the fiber is OK or not.

- We already checked that the opto-electronic I/F is OK

- Try to swap the IO chassis with the Y-end one.

- If this helps, swap only the timing board to see whether the board is the problem

  13085   Wed Jun 28 20:15:46 2017 gautamUpdateGeneralc1iscex timing troubles

[Koji, gautam]

Here is a summary of what we did today to fix the timing issue on c1iscex. The power supply to the timing card in the X end expansion chassis was to blame.

  1. We prepared the Y-end expansion chassis for transport to the X end. To do so, we disconnected the following from the expansion chassis
    • Cables going to the ADC/DAC adaptor boards
    • Dolphin connector
    • BIO connector
    • RFM fiber
    • Timing fiber
  2. We then carried the expansion chassis to the X end electronics rack. There we repeated the above steps for the X-end expansion chassis
  3. We swapped the X and Y end expansion chassis in the X end electronics rack. Powering the unit, we immediately saw the green lights on the front of the timing card turn on, suggesting that the Y-end expansion chassis works fine at the X end as well (as it should). To further confirm that all was well, we were able to successfully start all the RT models on c1iscex without running into any timing issues.
  4. Next, we decided to verify if the spare timing card is functional. So we swapped out the timing card in the expansion chassis brought over to the X end from the Y end with the spare. In this test too, all worked as expected. So at this stage, we concluded that
    • There was nothing wrong with the fiber bringing the timing signal to the X end
    • The Y-end expansion chassis works fine
    • The spare timing card works fine.
  5. Then we decided to try the original X-end expansion chassis timing card in the Y-end expansion chassis. This test too was successful - so there was nothing wrong with any of the timing cards!
  6. Next, we decided to power the X-end expansion chassis with its original timing card, which had just been verified to work fine. Surprisingly, the indicator lights on the timing card did not turn on.
  7. The timing card has 3 external connections
    • A 40 pin IDE connector
    • Power
    • Fiber carrying the timing signal
  8. We went back to the Y-end expansion chassis, and checked that the indicator lights on the timing card turned on even when the 40 pin IDE connector was left unconnected (so the timing card just gets power and the timing signal).
  9. We concluded that the power supply in the X end expansion chassis was to blame. Indeed, when Koji jiggled the connector around a little, the indicator lights came on!
  10. The connection was diagnosed to be somewhat flaky - it employs the screw-in variety of terminal blocks, and one of the connections was quite loose - Koji was able to pull the cable out of the slot applying a little pressure.
  11. I replaced the cabling (swapped the wires for thicker gauge, more flexible variety), and re-tightened the terminal block screws. The connection was reasonably secure even when I applied some force. A quick test verified that the timing card was functional when the unit was powered.
  12. We then replaced the X and Y-end expansion chassis (complete with their original timing cards, so the spare is back in the CDS cabinet), in the racks. The models started up again without complaint, and the CDS overview screen is now in a good state [Attachment #1]. The arms are locked and aligned for maximum transmission now.
  13. There was some additional difficulty in getting the 40-pin IDE connector in on the Y-end expansion chassis. It looked like we had bent some of the pins on the timing board while pulling this cable out, but Koji was able to fix this with a screwdriver. Care should be taken when disconnecting this cable in the future!

There were a few more flaky things in the expansion chassis - the IDE connectors don't have "keys" that fix the orientation they should go in, and the whole timing card assembly is kind of awkward to mount and not exactly secure. But for now, things are back to normal it seems.

Wouldn't it be nice if this fix also eliminates the mystery ETMX glitching problem? After all, seems like this flaky power supply has been a problem for a number of years. Let's keep an eye out.

  13088   Fri Jun 30 02:13:23 2017 gautamUpdateGeneralDRMI locking attempt

Summary:

I attempted to re-lock the DRMI and try and realize some of the noise improvements we have identified. Summary elog, details to follow.

  1. Locked arms, ran ASS, centered OLs on ITMs and BS on their respective QPDs.
  2. Looked into changing the BS Oplev loop shape to match that of the ITMs - it looks like the analog electronics that take the QPD signals in for the BS Oplev is a little different, the 800Hz poles are absent. But I thought I had managed to do this successfully in that the error signal suppression improved and it didn't look like the performance of the modified loop was worse anywhere except possibly at the stack resonance of ~3Hz --- see Attachment #1 (will be rotated later). The TRX spectra before and after this modification also didn't raise any red flags.
  3. Re-aligned PRM - went to the AS table and centered beam on all REFL PDs
  4. Locked PRMI on carrier, ran MICH and AS dither alignment. PRC angular feedforward also seemed to work well.
  5. Re-aligned SRM, looked for DRMI locks - there was a brief lock of a couple of seconds, but after this, the BS behaviour changed dramatically.

Basically, after this point I was unable to repeat things I had done just a couple of hours earlier in the evening. The single arm locks catch quickly and seem stable on the hour timescale, but when I run the X arm dither, the BS PITCH loop starts to oscillate at ~0.1 Hz. Moreover, I am unable to acquire the PRMI carrier lock. I must have changed a setting somewhere that I am not catching right now (although I've scripted most of these things for repeatability, so I am at a loss as to what I'm missing). The only change I can think of is that I changed the BS Oplev loop shape, but I went back into the filter file archives and restored it to its original configuration. Hopefully I'll have better luck figuring this out tomorrow.

  13090   Fri Jun 30 11:50:17 2017 gautamUpdateGeneralDRMI locking attempt

Seems like the problem is actually with ITMX - the attached DV plots are for ITMX with just the local damping loops on (no OLs); LR seems to be the suspect.

I'm going to go squish cables and do the usual sat. box voodoo; hopefully that settles it.

Quote:

Summary:

I attempted to re-lock the DRMI and try and realize some of the noise improvements we have identified. Summary elog, details to follow.

  1. Locked arms, ran ASS, centered OLs on ITMs and BS on their respective QPDs.
  2. Looked into changing the BS Oplev loop shape to match that of the ITMs - it looks like the analog electronics that take the QPD signals in for the BS Oplev is a little different, the 800Hz poles are absent. But I thought I had managed to do this successfully in that the error signal suppression improved and it didn't look like the performance of the modified loop was worse anywhere except possibly at the stack resonance of ~3Hz --- see Attachment #1 (will be rotated later). The TRX spectra before and after this modification also didn't raise any red flags.
  3. Re-aligned PRM - went to the AS table and centered beam on all REFL PDs
  4. Locked PRMI on carrier, ran MICH and AS dither alignment. PRC angular feedforward also seemed to work well.
  5. Re-aligned SRM, looked for DRMI locks - there was a brief lock of a couple of seconds, but after this, the BS behaviour changed dramatically.

Basically, after this point I was unable to repeat things I had done just a couple of hours earlier in the evening. The single arm locks catch quickly and seem stable on the hour timescale, but when I run the X arm dither, the BS PITCH loop starts to oscillate at ~0.1 Hz. Moreover, I am unable to acquire the PRMI carrier lock. I must have changed a setting somewhere that I am not catching right now (although I've scripted most of these things for repeatability, so I am at a loss as to what I'm missing). The only change I can think of is that I changed the BS Oplev loop shape, but I went back into the filter file archives and restored it to its original configuration. Hopefully I'll have better luck figuring this out tomorrow.

 

  13093   Fri Jun 30 22:28:27 2017 gautamUpdateGeneralDRMI re-locked

Summary:

Reverted to old settings and tried to reproduce the DRMI lock with settings as close as possible to those used in May this year. Tonight, I was successful in getting a couple of ~10 min DRMI 1f locks. Now I can go ahead and try to reduce the noise.

I am not attempting a full characterization tonight, but the important changes since the May locks are in the de-whitening boards and coil driver boards. I did not attempt to engage the coil-dewhitening, but the PD whitening works fine.

As a quick check, I tested the hypothesis that the BS OL loop A2L coupling dominates between ~10-50Hz. The attached control signal spectra [Attachment #2] supports this hypothesis. Now to actually change the loop shape.

I've centered Oplevs of all vertex optics, and also the beams on the REFL and AS PDs. The ITMs and BS have been repeatedly aligned since re-installing their respective coil driver electronics, but the SRM alignment needed some adjustment of the bias sliders.

Full characterization to follow. Some things to check:

  • Investigate and fix the suspect X-arm ASS loop
  • Is there too much power on the AS110 PD post Oct2016 vent? Is the PD saturating?

Lesson learnt: Don't try and change too many things at once!

GV July 5 1130am: Looks like the MICH loop gain wasn't set correctly when I took the attached spectra; it seems the bump around 300Hz was caused by this. On later locks, this feature wasn't present.

  13094   Sat Jul 1 14:27:00 2017 KojiUpdateGeneralDRMI re-locked

Basically we use the arm cavities as the reference for the beam alignment. The incident beam is aligned such that the ITMY angle dither is minimized (at least at the dither freq).
This means that we have no way to adjust the spot positions on the PRM, SRM, BS, and ITMX optics.

We are still able to minimize A2L by adding intentional asymmetry to the coil actuators.
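As a rough illustration of what "intentional asymmetry" could look like in practice, here is a hedged sketch. The coil ordering, sign conventions, and the idea of tuning the asymmetry parameters by dithering the optic and minimizing the A2L line are assumptions of the sketch, not a record of the actual procedure.

import numpy as np

def coil_gains(p=0.0, y=0.0):
    # Multiplicative gains for the UL, UR, LR, LL coils.
    # p and y are small dimensionless asymmetry parameters; p = y = 0 gives
    # balanced coils. Positive p boosts the top coils relative to the bottom
    # ones, positive y the left coils (sign conventions are assumptions here).
    return np.array([
        (1 + p) * (1 + y),  # UL
        (1 + p) * (1 - y),  # UR
        (1 - p) * (1 - y),  # LR
        (1 - p) * (1 + y),  # LL
    ])

# Example: step the pitch asymmetry, write the gains to the coil output
# filter modules (not shown), dither the optic, and record the A2L line
# height for each step; keep the value of p that minimizes it.
for p in np.linspace(-0.05, 0.05, 11):
    print(p, coil_gains(p=p))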

  13097   Wed Jul 5 19:10:36 2017 gautamUpdateGeneralNB code checkout - updated

I've been making NBs on my laptop, so I thought I would get the copy under version control up to date, since I've been negligent in doing so.

The code resides in /ligo/svncommon/NoiseBudget, which as a whole is a git repository. For neatness, most of Evan's original code has been put into the sub-directory /ligo/svncommon/NoiseBudget/H1NB/, while my 40m-specific adaptations of it are in the sub-directory /ligo/svncommon/NoiseBudget/NB40. So to make a 40m noise budget, you would clone the repository, edit the parameter file accordingly, and run, for example, python C1NB.py C1NB_2017_04_30.py. I've tested that it works in its current form. I had to install a font package in order to make the code run (with sudo apt-get install tex-gyre), and also had to comment out calls to GWPy (it kept throwing up an error related to the package "lal"; I opted against trying to debug this problem as I am using nds2 instead of GWPy to get the time series data anyway).
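For reference, here is a minimal sketch of how a time series can be pulled with the nds2 client, which is the route the NB code now takes instead of GWPy. The server name, port, channel, and GPS times below are placeholders, not the values baked into the NB code.

import nds2

conn = nds2.connection('nds40.ligo.caltech.edu', 31200)  # placeholder server/port
gps_start = 1177736002                                   # placeholder GPS start time
gps_stop = gps_start + 300                               # 5 minutes of data

buffers = conn.fetch(gps_start, gps_stop, ['C1:LSC-MICH_IN1_DQ'])
data = buffers[0].data                  # numpy array of samples
fs = buffers[0].channel.sample_rate     # sampling rate in Hz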

There are a few things I'd like to implement in the NB, like sub-budgets; I will make a tagged commit once it is in a slightly neater state. But the existing infrastructure should allow making NBs from the control room workstations now.

Quote:

[evan, gautam]

We spent some time trying to get the noise-budgeting code running today. I guess eventually we want this to be usable on the workstations so we cloned the git repo into /ligo/svncommon. The main objective was to see if we had all the dependencies for getting this code running already installed. The way Evan has set the code up is with a bunch of dictionaries for each of the noise curves we are interested in - so we just commented out everything that required real IFO data. We also commented out all the gwpy stuff, since (if I remember right) we want to be using nds2 to get the data. 

Running the code with just the gwinc curves produces the plots it is supposed to, so it looks like we have all the dependencies required. It now remains to integrate actual IFO data, I will try and set up the infrastructure for this using the archived frame data from the 2016 DRFPMI locks..

 

  13101   Sat Jul 8 17:09:50 2017 gautamUpdateGeneralETMY TRANS QPD anomaly

About 2 weeks ago, I noticed some odd behaviour of the LSC TRY data stream. Its DC value seems to be drifting ~10x more than TRX. Both signals come from the transmission QPDs. At the time, we were dealing with various CDS FE issues but things have been stable on that end for the last two weeks, so I looked into this a bit more today. It seems like one particular channel is bad - Quadrant 4 of the ETMY TRANS QPD. Furthermore, there is a bump around 150Hz, and some features above 2kHz, that are only present for the ETMY channels and not the ETMX ones.

Since these spectra were taken with the PSL shutter closed and all the lab room lights off, it would suggest something is wrong in the electronics - to be investigated.

The drift in TRY can be as large as 0.3 (with 1.0 being the transmitted power in the single arm lock). This seems unusually large - indeed, we trigger the arm LSC loops when TRY > 0.3. Attachment #2 shows the second trend of the TRX and TRY 16Hz EPICS channels for 1 day. In the last 12 hours or so, I had left the LSC master switch OFF, but the large drift of the DC value of TRY is clearly visible.
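A hedged sketch of how one could pull the same kind of day-long trend with the nds2 client is below; the server, channel names, and trend-channel syntax are assumptions to be checked against what the DAQ actually serves.

import nds2
import numpy as np

conn = nds2.connection('nds40.ligo.caltech.edu', 31200)   # placeholder server/port
gps_stop = 1183600000                                     # placeholder GPS time
gps_start = gps_stop - 24 * 3600                          # one day of minute trends

chans = ['C1:LSC-TRX_OUT16.mean,m-trend', 'C1:LSC-TRY_OUT16.mean,m-trend']
for buf in conn.fetch(gps_start, gps_stop, chans):
    # Peak-to-peak drift of the minute-trend mean over the day
    print('%s drift (p-p): %.3f' % (buf.channel.name, np.ptp(buf.data)))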

In the short term, we can use the high-gain THORLABS PD for TRY monitoring.

  13102   Sun Jul 9 08:58:07 2017 ranaUpdateGeneralETMY TRANS QPD anomaly

Indeed, the whole point of the high/low gain setup is to never use the QPDs for the single arm work. Only use the high-gain Thorlabs PD; the switchover code then uses the QPD once the arm powers are >5.

I don't know how the operation procedure went so higgledy piggledy.
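For what it's worth, the switchover amounts to something like the sketch below. This is only an illustration of the idea; the threshold and the assumption that both readbacks are normalized to single-arm transmission = 1 come from this thread plus guesswork, not from the actual RT code.

def arm_transmission(qpd_sum, thorlabs_pd, threshold=5.0):
    # Below threshold (single-arm work): trust the high-gain Thorlabs PD.
    # Above threshold (high power build-up): switch to the QPD, which does
    # not saturate at high transmitted power.
    if qpd_sum > threshold:
        return qpd_sum
    return thorlabs_pd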

  13103   Mon Jul 10 09:49:02 2017 gautamUpdateGeneralAll FEs down

Attachment #1: State of CDS overview screen as of 9.30AM today morning when I came in.

Looks like there may have been a power glitch, although judging by the wall StripTool traces, if there was one, it happened more than 8 hours ago. FB is down at the moment, so I can't trend data to find out when this happened.

All FEs and FB are unreachable from the control room workstations, but Megatron, Optimus and Chiara are all ssh-able. The latter reports an uptime of 704 days, so all seems okay with its UPS. Slow machines are all responding to ping as well as telnet.

Recovery process to begin now. Hopefully it isn't as complicated as the most recent effort [FAMOUS LAST WORDS].

  13104   Mon Jul 10 11:20:20 2017 gautamUpdateGeneralAll FEs down

I am unable to get FB to reboot to a working state. A hard reboot throws it into a loop of "Media Test Failure. Check Cable".

Jetstor RAID array is complaining about some power issues, the LCD display on the front reads "H/W Monitor", with the lower line cycling through "Power#1 Failed", "Power#2 Failed", and "UPS error". Going to 192.168.113.119 on a martian machine browser and looking at the "Hardware information" confirms that System Power #1 and #2 are "Failed", and that the UPS status is "AC power loss". So far I've been unable to find anything on the elog about how to handle this problem, I'll keep looking.


In fact, looks like this sort of problem has happened in the past. It seems one power supply failed back then, but now somehow two are down (but there is a third which is why the unit functions at all). The linked elog thread strongly advises against any sort of power cycling. 

  13106   Mon Jul 10 17:46:26 2017 gautamUpdateGeneralAll FEs down

A bit more digging on the diagnostics page of the RAID array reveals that the two power supplies actually failed on Jun 2 2017 at 10:21:00. Not surprisingly, this was the date and approximate time of the last major power glitch we experienced. Apart from this, the only other error listed on the diagnostics page is "Reading Error" on "IDE CHANNEL 2", but these errors precede the power supply failure.

Perhaps the power supplies are not really damaged, and they are just in some funky state since the power glitch. After discussing with Jamie, I think it should be safe to power cycle the Jetstor RAID array once the FB machine has been powered down. Perhaps this will bring back one or both of the faulty power supplies. If not, we may have to get new ones. 

The problem with FB may or may not be related to the state of the Jetstor RAID array. It is unclear to me at what point during the boot process we are getting stuck. It may be that because the RAID disk is in some funky state, the boot process is getting disrupted.

Quote:

I am unable to get FB to reboot to a working state. A hard reboot throws it into a loop of "Media Test Failure. Check Cable".

Jetstor RAID array is complaining about some power issues, the LCD display on the front reads "H/W Monitor", with the lower line cycling through "Power#1 Failed", "Power#2 Failed", and "UPS error". Going to 192.168.113.119 on a martian machine browser and looking at the "Hardware information" confirms that System Power #1 and #2 are "Failed", and that the UPS status is "AC power loss". So far I've been unable to find anything on the elog about how to handle this problem, I'll keep looking.


In fact, looks like this sort of problem has happened in the past. It seems one power supply failed back then, but now somehow two are down (but there is a third which is why the unit functions at all). The linked elog thread strongly advises against any sort of power cycling. 

 

  13107   Mon Jul 10 19:15:21 2017 gautamUpdateGeneralAll FEs down

The Jetstor RAID array is back in its nominal state now, according to the web diagnostics page. I did the following:

  1. Powered down the FB machine - to avoid messing around with the RAID array while the disks are potentially mounted.
  2. Turned off all power switches on the back of the Jetstor unit - there were 4 of them, all of them were toggled to the "0" position.
  3. Disconnected all power cords from the back of the Jetstor unit - there were 3 of them.
  4. Reconnected the power cords, turned the power switches back on to their "1" position.

After a couple of minutes, the front LCD display seemed to indicate that it had finished running some internal checks. The messages indicating failure of the power units, which were previously constantly displayed on the front LCD panel, were no longer seen. Going back to the control room and checking the web diagnostics page, everything seemed back to normal.

However, FB still will not boot up. The error is identical to that discussed in this thread by Intel. It seems FB is having trouble finding its boot disk. I was under the impression that only the FE machines were diskless, and that FB had its own local boot disk - in which case I don't know why this error is showing up. According to the linked thread, it could also be a problem with the network card/cable, but I saw both lights on the network switch port FB is connected to turn green when I powered the machine on, so this seems unlikely. I tried following the steps listed in the linked thread but got nowhere, and I don't know enough about how FB is supposed to boot up, so I am leaving things in this state now. 

  13108   Mon Jul 10 21:03:48 2017 jamieUpdateGeneralAll FEs down

 

Quote:
 

However, FB still will not boot up. The error is identical to that discussed in this thread by Intel. It seems FB is having trouble finding its boot disk. I was under the impression that only the FE machines were diskless, and that FB had its own local boot disk - in which case I don't know why this error is showing up. According to the linked thread, it could also be a problem with the network card/cable, but I saw both lights on the network switch port FB is connected to turn green when I powered the machine on, so this seems unlikely. I tried following the steps listed in the linked thread but got nowhere, and I don't know enough about how FB is supposed to boot up, so I am leaving things in this state now. 

It's possible the fb BIOS got into a weird state.  fb definitely has its own local boot disk (*not* diskless boot).  Try to get to the BIOS during boot and make sure it's pointing to its local disk to boot from.

If that's not the problem, then it's also possible that fb's boot disk got fried in the power glitch.  That would suck, since we'd have to rebuild the disk.  If it does seem to be a problem with the boot disk then we can do some invasive poking to see if we can figure out what's up with the disk before rebuilding.

  13110   Mon Jul 10 22:07:35 2017 KojiUpdateGeneralAll FEs down

I think this is a boot disk failure. I put the spare 2.5 inch disk into slot #1. The OK indicator of the disk became solid green almost immediately, and it was recognized by the BIOS in the boot section as "Hard Disk". In contrast, the "OK" indicator of the original disk in slot #0 keeps flashing, and the BIOS can't find the hard disk.

 

  13111   Tue Jul 11 15:03:55 2017 gautamUpdateGeneralAll FEs down

Jamie suggested verifying that the problem is indeed with the disk and not with the controller, so I tried switching the original boot disk to Slot #1 (from Slot #0 where it normally resides), but the same problem persists - the green "OK" indicator light keeps flashing even in Slot #1, which was verified to be a working slot using the spare 2.5 inch disk. So I think it is reasonable to conclude that the problem is with the boot disk itself.

The disk is a Seagate Savvio 10K.2 146GB disk. The datasheet doesn't explicitly suggest any recovery options. But Table 24 on page 54 suggests that a blinking LED means that the disk is "spinning up or spinning down". Is this indicative of any particular failure mode? Any ideas on how to go about recovery? Is it even possible to access the data on the disk if it doesn't spin up to the nominal operating speed?

Quote:

I think this is a boot disk failure. I put the spare 2.5 inch disk into slot #1. The OK indicator of the disk became solid green almost immediately, and it was recognized by the BIOS in the boot section as "Hard Disk". In contrast, the "OK" indicator of the original disk in slot #0 keeps flashing, and the BIOS can't find the hard disk.

 

 

  13112   Tue Jul 11 15:12:57 2017 KojiUpdateGeneralAll FEs down

If we have a SATA/USB adapter, we can test whether the disk is still responding. If it is, we can probably salvage the files.
Chiara used to have a 2.5" disk connected via USB3. As far as I know, we have remote and local backup scripts running (TBC), so we can borrow the USB/SATA interface from Chiara.

If the disk is completely gone, we need to rebuild the disk according to Jamie, and I don't know how to do it. (Don't we have any spare copy?)

  13113   Wed Jul 12 10:21:07 2017 gautamUpdateGeneralAll FEs down

Seems like the connector on this particular disk is of the SAS variety (and not SATA). I'll ask Steve to order a SAS to USB cable. In the meantime I'm going to see if the people at Downs have something we can borrow.

Quote:

If we have a SATA/USB adapter, we can test whether the disk is still responding. If it is, we can probably salvage the files.
Chiara used to have a 2.5" disk connected via USB3. As far as I know, we have remote and local backup scripts running (TBC), so we can borrow the USB/SATA interface from Chiara.

If the disk is completely gone, we need to rebuild the disk according to Jamie, and I don't know how to do it. (Don't we have any spare copy?)

 

  13114   Wed Jul 12 14:46:09 2017 gautamUpdateGeneralAll FEs down

I couldn't find an external docking setup for this SAS disk; it seems like we need an actual controller in order to interface with it. Mike Pedraza in Downs had such a unit, so I took the disk over to him, but he wasn't able to interface with it in any way that would allow us to get the data out. He wants to try switching out the logic board, for which we need an identical disk. We have only one such spare at the 40m that I could locate, but it is not clear to me whether it has any important data on it. It has "hda RTLinux" written on its front panel with a sharpie. Mike thinks we can back this up to another disk before trying anything, but he is going to try locating a spare in Downs first. If he is unsuccessful, I will take the spare from the 40m to him tomorrow, first to be backed up, and then for swapping out the logic board.

Chatting with Jamie and Koji, it looks like the options we have are:

  1. Get the data from the old disk, copy it to a working one, and try and revert the original FB machine to its last working state. This assumes we can somehow transfer all the data from the old disk to a working one.
  2. Prepare a fresh boot disk, load the old FB daqd code (which is backed up on Chiara) onto it, and try to get that working. But Jamie isn't very optimistic about this working, because of possible conflicts between the code and any current OS we would install.
  3. Get FB1 working. Jamie is looking into this right now.
Quote:

Seems like the connector on this particular disk is of the SAS variety (and not SATA). I'll ask Steve to order a SAS to USB cable. In the meantime I'm going to see if the people at Downs have something we can borrow.

 

 

  13115   Wed Jul 12 14:52:32 2017 jamieUpdateGeneralAll FEs down

I just want to mention that the situation is actually much more dire than we originally thought.  The diskless NFS root filesystem for all the front-ends was on that fb disk.  If we can't recover it, we'll have to rebuild the front end OS as well.

As of right now none of the front ends are accessible, since obviously their root filesystem has disappeared.
