  13003   Mon May 22 13:37:01 2017 gautamUpdateGeneralDAC noise estimate

Summary:

I've spent the last week investigating various parts of the DAC -> OSEM coil signal chain in order to add these noises to the MICH NB. Here is what I have thus far.

Current situation:

  • Coils are operated with no DAC whitening
  • So we expect the DAC noise will dominate any contribution from the electronics noise of the analog De-Whitening and Coil Driver boards
  • There is a factor of 3 gain in the analog De-Whitening board

DAC noise measurement:

  • I essentially followed the prescription in G1401335 and G1401399
  • So far, I only measured one DAC channel (ITMX UL)
  • The noise shaping filter in the above documents was adapted for this measurement. The noise used was uniform between DC and 1kHz for this test.
  • For the >50Hz bandstops, I used 1 complex pole pair at 5Hz, and 1 complex zero pair at 50Hz to level off the noise.
  • For <50Hz bandstops, I used 1 complex pole pair at 1Hz and 1 complex zero pair at 5Hz to push the RMS to lower frequencies
  • I set the amplitude ("gain" = 10,000 in awggui) to roughly match the Vpp when the ITM local damping loops are on - this is ~300mVpp (measured with a scope). 
  • The elliptic bandstops were 6th order, with 50dB stopband attenuation.
  • The SR785 input auto-ranging was disabled to allow a fair comparison of the various bandstops - this was fixed to -20 dBVpk for all measurements, and the SR785 noise floor shown is also for this value of the input range. Input was also AC coupled, and since I was using the front-panel LEMO for this test, the signal was effectively single-ended (but the ground of the SR785 was set to "floating" in order to get the differential signal from the DAC) 
  • Attachment #1 shows the results of this measurement - I've subtracted the SR785 noise from the other curves. The noise model was motivated by G1401399, but I use an f^-1/2 model rather than an f^-1 model. It seems to fit the measurement alright (though the "fit" is just done by eye and not by systematic optimization of the parameters of the model function).
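
For reference, here is a minimal sketch of the kind of eyeball-fit model described above, assuming a flat floor with an f^-1/2 rise below a corner frequency (the parameter values are placeholders, not the actual fitted numbers):

    import numpy as np
    import matplotlib.pyplot as plt

    # Placeholder eyeball-fit parameters (not the measured values)
    n_flat = 700e-9   # V/rtHz, high-frequency floor
    f_c = 50.0        # Hz, corner below which the noise rises as f^-1/2

    f = np.logspace(0, 3, 500)                 # 1 Hz to 1 kHz
    n_model = n_flat * np.sqrt(1.0 + f_c / f)  # f^-1/2 rise below f_c, flat above

    plt.loglog(f, n_model, label='model: n_flat * sqrt(1 + f_c/f)')
    plt.xlabel('Frequency [Hz]')
    plt.ylabel('Voltage noise [V/rtHz]')
    plt.legend()
    plt.show()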

Noise budget:

  • I then tried to translate this result into the noise budget
  • The noises for the 4 face coils are added in quadrature, and then the contribution from 3 optics (2 ITMs and BS) are added in quadrature
  • To calibrate into metres, I converted the DAC noise spectral density into cts/rtHz, and used the numbers from this elog. I thought I had missed out on the factor of 3 gain in the de-white board, but the cts-to-meters number from the referenced elog already takes into account this factor.
  • Just to be clear, the black line for DAC noise in Attachment #2 is computed from the single-channel measurement of Attachment #1 according to the following relation: n_{\mathrm{DAC}}\,(\mathrm{m}/\sqrt{\mathrm{Hz}}) = n_{\mathrm{1ch}}\,(\mathrm{V}/\sqrt{\mathrm{Hz}}) \times (2^{15}/20)\,(\mathrm{cts/V}) \times G_{\mathrm{act}} \times 2 \times \sqrt{6}, where G_act is the coil transfer function from the referenced elog, taken as 5nm/f^2 on average for the 2 ITMs and BS, the factor of 2 comes from adding the noise from 4 coils in quadrature, and the factor of sqrt(6) comes from adding the noise from 3 optics in quadrature (given that the BS has 4 times the noise of the ITMs). A quick numerical check of this combination is sketched after this list.
  • Using the 0.016N/A number for each coil gave me an answer that was off by more than an order of magnitude - I am not sure what to make of this. But since the other curves in the NB are made using numbers from the referenced elog, I think the answer I get isn't too crazy...
  • Attachment #2 shows the noise budget in its current form, with DAC noise added. Except for the 30-70Hz region, it looks like the measured noise is accounted for.
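
As a rough numerical check of the combination in the relation above (the single-channel noise level plugged in here is a placeholder, and G_act is taken as the 5 nm/f^2-per-count average quoted above):

    import numpy as np

    f = 100.0                   # Hz, example frequency
    n_1ch = 1e-6                # V/rtHz, placeholder single-channel DAC noise
    cts_per_V = 2**15 / 20.0    # counts per volt, as in the relation above
    G_act = 5e-9 / f**2         # m per count, average ITM/BS actuation gain from the referenced elog

    # 4 face coils per optic -> sqrt(4) = 2; 3 optics with the BS 4x noisier -> sqrt(1 + 1 + 4) = sqrt(6)
    n_DAC = n_1ch * cts_per_V * G_act * 2 * np.sqrt(6)
    print("DAC noise at %g Hz: %.2e m/rtHz" % (f, n_DAC))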

Comments:

  • I have made a number of assumptions:
    • All DAC channels have similar noise levels
    • Tried to account for asymmetry between BS and ITMs (BS has 100 ohm resistance in series with the coil driver while the ITMs have 400 ohms) but the individual noises haven't been measured yet
    • This noise estimate holds for the BS, which is the MICH actuator (I didn't attempt to simulate the in-lock MICH control signal and then measure the DAC noise)
  • But this seems sensible as a first estimate
  • The dmesg logs for C1SUS don't tell me what DACs we are using, but I believe they are 16-bit DACs (I'll have to restart the machine to make sure)
  • In the NB, the flattening out of some curves beyond 1kHz is just an artefact of the fact that I don't have data to interpolate in that region, and isn't physical.
  • I had a brief chat with ChrisW who told me that the modified EEPROM/Auto-Cal procedure was only required for 18-bit DACs. So if it is true that our DACs are 16-bit, then he advised that apart from the DAC noise measurement above, the next most important thing to be characterized is the quantization noise (by subtracting the calculated digital control signal from the actual analog signal sent to the coils in lock)
  • More details of my coil driver electronics investigations to follow...
  13005   Mon May 22 18:20:27 2017 KaustubhSummaryGeneralTesting of the new Photo Detectors ET-3010 and ET-3040

I am adding the text files with the data readings and parameter settings, along with the Bode Plot of the data. I plotted these graphs using the matplotlib module with Python 2.7.

Quote:

Motivation:

I got some hands-on experience using RF photodetectors and the Network Analyzer from Koji. There were newly purchased RF photodetectors from Electro-Optics Technology, Inc. These are InGaAs photodetectors with model nos. 120-10050-0001 (ET-3010) and 120-10056-0001 (ET-3040). The User Guide for the two detectors can be found here. This is the first time we bought the ET-3010 model PD for the 40m lab. It has an operation bandwidth >1.5 GHz (not tested yet), much higher than other PDs of its kind. This can be used for detecting the output as we 'sweep' the laser frequency to get data on the optical cavities and the resonating modes inside the cavity. We just tested out the ET-3040 model today but will test out the ET-3010 next week...

 

  13009   Tue May 23 18:09:18 2017 KaustubhConfigurationGeneralTesting ET-3010 PD

In continuation of the previous (ET-3040 PD) test.

The ET-3010 PD needs to be fiber coupled for optimal use. I will try to test this model without the fiber coupling tomorrow and see whether it works or not.

  13010   Tue May 23 22:58:23 2017 gautamUpdateGeneralDe-Whitening board noises

Summary:

I wanted to match a noise model to noise measurement for the coil-driver de-whitening boards. The main objectives were:

  1. Make sure the various poles/zeros of the Bi-Quad stages and the output stage were as expected from the schematics
  2. Figure out which components are dominating the noise contribution, so that these can be prioritized while swapping out the existing thick-film resistors on the board for lower noise thin-film ones
  3. Compare the noise performance of the existing configuration, which uses an LT1128 op-amp (max output current ~20mA) to drive the input of the coil-driver board, with that when we use a TLE2027 (max output current ~50mA) instead. This last change is motivated by the fact that an earlier noise-simulation suggested that the Johnson noise of the 1kohm input resistor on the coil driver board was one of the major noise contributors in the de-whitening board + coil driver board signal chain. Since the TLE2027 can drive an output current of up to 300mA, we could reduce the input impedance of the coil-driver board to mitigate this noise source to some extent. 

Measurement:

  • The back-plane pin controlling the MAX333A that determines whether de-whitening is engaged or not (P1A) was pulled to ground (by means of one of the new extender boards given to us by Ben Abbott). So two de-whitening stages were engaged for subsequent tests.
  • I first measured the transfer function of the signal path with whitening engaged, and then fit my LISO model to the measurement to tweak the values of the various components. This fitted file is what I used for subsequent noise analysis. 
  • ​For the noise measurement, I shorted the input of the de-whitening board (10-pin IDE connector) directly to ground.
  • I then measured the voltage noise at the front-panel SMA connector with the SR785
  • The measurements were only done for 1 channel (CH1, which is the UL coil) for 4 de-whitening boards (2 ITMs, BS, and SRM). The 2 ITM boards are basically identical, and the BS and SRM boards are similar. Here, only results for the board labelled "ITMX" are presented.
  • For this board, I also measured the output voltage noise when the LT1128 was replaced with a TLE2027 (SOIC package, soldered onto a SOIC-to-DIP adaptor). Steve has found (ordered?) some DIP variants of this IC, so we can compare its noise performance when we get it.

Results:

  • Attachment #1 shows the modeled and measured noises, which are in fairly good agreement.
  • The transfer function measurement/fitting (not attached) also suggests that the poles/zeros in the signal path are where we expect as per the schematic. I had already verified the various resistances, but now we can be confident that the capacitance values on the schematic are also correct. 
  • The LT1128 and TLE2027 show pretty much identical noise performance.
  • The SR785 noise floor was low enough to allow this measurement without any pre-amp in between. 
  • I have identified 3 resistors from the LISO model that dominate the noise (all 3 are in the Bi-Quad stages), which should be the first to be replaced. 
  • There are some pretty large 60 Hz harmonics visible. I thought I was careful enough avoiding any ground loops in the measurement, and I have gotten some more tips from Koji about how to better set up the measurement. This was a real problem when trying to characterize the Coil Driver noise.

Next steps:

  • I have data from the other 3 boards I pulled out, to be updated shortly.
  • The last piece (?) in this puzzle is the coil driver noise - this needs to be modeled and measured.
  • Once the coil driver board has been characterized, we need to decide what changes to make to these boards. Some things that come to mind at the moment:
    • Replace critical resistors (from noise-performance point of view) with low noise thin film ones.
    • Remove the "fast analog" path on the coil driver boards - these have potentiometers in series with the coil, which we should remove since we are not using this path anyways.
    • Remove all AD797s from both de-whitening and coil driver boards - these are mostly employed as monitor points that go to the backplane connector, which we don't use, and so can be removed.
    • Increase the series resistor at the output of the coil driver (currently, these are either 100ohm or 400ohm depending on the optic/channel). I need to double check the limits on the various LSC servos to make sure we can live with the reduced range we will have if we up these resistances to 1 kohm (which serves to reduce the current noise to the coils, which is ultimately what matters).
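
For a rough sense of why increasing the series resistance reduces the current noise to the coils, here is a sketch of just the Johnson current noise sqrt(4kT/R) of the series resistor for a few candidate values (this neglects the op-amp and DAC contributions discussed above):

    import numpy as np

    k_B = 1.380649e-23   # J/K
    T = 298.0            # K, room temperature

    for R in [100.0, 400.0, 1000.0]:
        i_n = np.sqrt(4 * k_B * T / R)   # A/rtHz, Johnson current noise of the series resistor
        print("R = %6.0f ohm -> i_n = %.1f pA/rtHz" % (R, i_n * 1e12))
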
  13011   Wed May 24 18:19:15 2017 KaustubhUpdateGeneralET-3010 PD Test

Summary:

In continuation of the previous test conducted on the ET-3040 PD, I performed a similar test on the ET-3010 model. This model requires a fiber-coupled input for proper testing, but I tested it in free space without the fiber coupling, as the laser power was only 1.00 mW and there was not much danger from scattering of the laser beam. The Data Sheet can be found here

Procedure:

The schematic (attached below) and the procedure are the same as the previous time. The pump current was set to 19.5 mA, giving a laser beam of power 1.00 mW at the fiber couple output. The measured voltage for the reference detector was 1.8 V. For the DUT, the voltage is amplified using a low noise amplifier (model SR-560) with a gain of 100. Without any laser incidence on the DUT, the multimeter reads 120.6 mV. After aligning the laser with the DUT, the multimeter reads 348.5 mV, i.e. the voltage due to the laser at the DUT is (348.5 - 120.6)/100 = 227.9/100 ~ 2.28 mV. The DC transimpedance of the reference detector is 10 kOhm and its responsivity at 1064 nm is around 0.75 A/W. Using this, we calculate the power at the reference detector to be 0.24 mW. The DC transimpedance for the DUT is 50 Ohm and the responsivity is around 0.85 A/W. Using this, we calculate the power at the DUT to be 0.054 mW. After this, we connect the laser input to the Network Analyzer (AG4395A) and drive it with an RF signal at -10 dBm, swept from 100 kHz to 500 MHz. The RF output from the Analyzer is coupled to the Reference Channel (CHR) of the analyzer via a 20 dB directional coupler. The AC output of the reference detector goes to Channel A (CHA) and the output from the DUT goes to Channel B (CHB). We got plots of the ratios between the reference detector, DUT and the coupled reference for the Transfer Function and the Phase. I stored the data under the directory .../scripts/general/netgpibdata/data. The Bode Plot is attached below; from it we see that the cut-off frequency for the ET-3010 model is at least above 500 MHz (stated as >1.5 GHz in the data sheet).
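
The incident powers quoted above follow from P = V_DC / (Z_DC * responsivity); a minimal check with the numbers from this entry:

    def incident_power(v_dc, transimpedance, responsivity):
        """Optical power [W] from DC voltage [V], DC transimpedance [ohm] and responsivity [A/W]."""
        return v_dc / (transimpedance * responsivity)

    # Reference detector: 1.8 V, 10 kOhm, ~0.75 A/W at 1064 nm
    print(incident_power(1.8, 10e3, 0.75))      # ~2.4e-4 W = 0.24 mW

    # DUT (ET-3010): 2.28 mV (after removing the SR-560 gain of 100), 50 Ohm, ~0.85 A/W
    print(incident_power(2.28e-3, 50.0, 0.85))  # ~5.4e-5 W = 0.054 mW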

Result:

The bandwidth of the ET-3010 PD is at least 500 MHz; the data sheet states it as >1.5 GHz.

Precaution:

The ET-3010 PD has an internal 6 V power supply (batteries). Don't leave the PD connected to any instrument after the experiment is done, or the batteries will get drained if there is any photocurrent on the PD.

To Do:

Calibrate the vertical axis in the Bode Plot to transimpedance (Ohms) for the two PDs. Automate the procedure by making a Python script that takes multiple sets of readings from the Network Analyzer, and also plot the error bands (a starting-point sketch is given below).
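
A possible starting point for the averaging/error-band part, assuming each sweep is saved as a text file with frequency, magnitude (dB) and phase (deg) columns (the file pattern below is hypothetical):

    import glob
    import numpy as np
    import matplotlib.pyplot as plt

    # Hypothetical file pattern; each file has columns: frequency [Hz], magnitude [dB], phase [deg]
    sweeps = [np.loadtxt(fname) for fname in sorted(glob.glob('ET3010_sweep_*.txt'))]

    freq = sweeps[0][:, 0]
    mags = np.array([s[:, 1] for s in sweeps])   # dB

    mag_mean = mags.mean(axis=0)
    mag_std = mags.std(axis=0)

    plt.semilogx(freq, mag_mean, label='mean of %d sweeps' % len(sweeps))
    plt.fill_between(freq, mag_mean - mag_std, mag_mean + mag_std, alpha=0.3, label='+/- 1 sigma')
    plt.xlabel('Frequency [Hz]')
    plt.ylabel('Magnitude [dB]')
    plt.legend()
    plt.show()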

  13015   Thu May 25 19:27:29 2017 gautamUpdateGeneralCoil driver board noises

[Koji, Gautam]

Summary: 

  • Attachment #1 shows the measured/modeled noise of the coil driver board (labelled ITMX). 
  • Measurement was made with "TEST" input (which is what the DAC drives) is connected to ground via 50ohm terminator, and "BIAS" input grounded.
  • The model tells us to expect a noise of the order of 5nV/rtHz - this is comparable to (or below) the input noise of the SR785, and even the SR560. So this measurement only serves to place an upper bound on the coil driver board noise.
  • There is some excess noise below 40Hz, would be interesting to see if this disappears with swapping out thick-film resistors for thin film ones.
  • The LISO model says that the dominant contribution is from the voltage and input current noise of the two op-amps (LT1125) in the bias LP filter path. 
  • But if we can indeed realize this noise level of ~10-20nV/rtHz, we are already at the ~10^-17m/rtHz displacement noise for MICH at about 200Hz. I suspect there are other noises that will prevent us from realizing this performance in displacement noise.

Details:

This measurement has been troublesome - I was plagued by large 60Hz harmonics (see Attachment #1), the cause of which was unknown. I powered all electronics used in the measurement setup from the same power strip (one of the new surge-protecting ones Steve recently acquired for us), but the harmonics remained. Yesterday, Koji helped me troubleshoot this issue. We did various things; I list them here in the order we did them:

  1. Double check that all electronics were indeed being powered from the same power strip - OK, but harmonics remained present.
  2. Tried using a different DC power supply - no effect.
  3. Checked the signal with an oscilloscope - got no additional insight.
  4. I was using a DB25 breakout board + pomona minigrabbers to measure the output signal and pipe it to the SR785. Koji suggested using twisted ribbon wire + a soldered BNC connector (recycled from some used ones lying around the lab). The idea was to minimize stray radiation pickup. We also disconnected the WiFi extender and GPIB box from the analyzer and also disconnected these from the power - this finally had the desired effect, the large harmonics vanished. 

Today, I tried to repeat the measurement, with the newly made twisted ribbon cable, but the large 60Hz harmonics were back. Then I realized we had also disconnected the WiFi extender and GPIB box yesterday.

Turns out that connecting the Prologix box to the SR785 (even with no power) is the culprit! Disconnecting the Prologix box makes these harmonics go away. I was using the box labelled "Santuzza.martian" (192.168.113.109), but I double-checked with the box labelled "vanna.martian" (192.168.113.105, also a different DC power supply adapter for the box), the effect is the same. I checked various combinations like 

  • GPIB box connected but not powered
  • GPIB box connected with no network cable

but it looks like connecting the GPIB box to the analyzer is what causes the problem. This was reproducible on both SR785s in the lab. So to make this measurement, I had to do things the painful way - acquire the spectrum by manually pushing buttons with the GPIB box disconnected, then re-connect the box and download the data using SRmeasure --getdata. I don't fully understand what is going on, especially since if the input connector is directly terminated using a 50ohm BNC terminator, there are no harmonics, regardless of whether the GPIB box is connected or not. But it is worth keeping this problem in mind for future low-noise measurements. My elog searches did not reveal past reports of similar problems, has anyone seen something like this before?

It also looks like my previous measurement of the de-whitening board noises was plagued by the same problem (I took all those spectra with the GPIB boxes connected). I will repeat this measurement.

Next steps:

At the meeting this week, it was decided that

  • All AD797s would be removed from de-whitening boards and also coil-driver boards (as they are unused).
  • Thick film resistors with the most dominant noise contributions to be replaced with thin-film ones.
  • Gain of 3 on de-whitening board to be changed to gain of 1.

I also think it would be a good idea to up the 100-ohm resistors in the bias path on the ITM coil driver boards to 1kohm wire-wound. Since the dominant noise on the coil-driver boards is from the voltage noise of the Op-Amps in the bias path, this would definitely be an improvement. Looking at the current values of the bias MEDM sliders, a 10x increase in the resistance for ITMX will not be possible (the yaw bias is ~-1.5V), but perhaps we can go for a 4x increase?
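
A quick range check behind the 4x vs 10x statement, assuming the required drive voltage scales with the series resistance for a fixed coil current and that the op-amps saturate around +/-12 V (both assumptions, as discussed in earlier entries):

    V_bias = 1.5   # V, magnitude of the current ITMX yaw bias drive
    V_sat = 12.0   # V, approximate op-amp saturation (assumption)

    for factor in [4, 10]:
        V_needed = V_bias * factor   # same coil current through a resistor 'factor' times larger
        status = "OK" if V_needed < V_sat else "exceeds saturation"
        print("x%d: need ~%.1f V -> %s" % (factor, V_needed, status))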

The plan is to then re-install the boards, and see if we can 

  1. Turn on the whitening successfully (I checked with an extender board that the switching of the whitening stages works - turning OFF the "simDW" filter in the coil driver filter banks enables the analog de-whitening).
  2. Realize the promised improvement in MICH displacement noise with the existing whitening configuration.

We can then take a call on how much to up the series resistance in the DAC signal path. 

Now that I have figured out the cause of the harmonics, I will also try and measure the combined electronics noise of de-whitening board + coil driver board and compare it to the model.

Quote:
  • The last piece (?) in this puzzle is the coil driver noise - this needs to be modeled and measured.

 

  13016   Sat May 27 10:26:28 2017 KaustubhUpdateGeneralTransimpedance Calibration

Using Alberto's paper LIGO-T10002-09-R titled "40m RF PDs Upgrade", I calibrated the vertical axis in the bode plots I had obtained for the two PDs ET-3010 and ET-3040.

I am not sure whether the values I have obtained (i.e. the calibration) are correct. Kindly review them.

EDIT: Attached the formula used to calculate the transimpedance for each data point and the values of the other parameters.

EDIT 2: Updated the plots by changing the conversion for getting the ratio of the transfer functions from 10^(y/10) to 10^(y/20).
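
For reference, the analyzer magnitude in dB is of an amplitude (transfer function) ratio, so the linear ratio is 10^(y/20); 10^(y/10) would only apply to a power ratio:

    y_dB = 6.0
    amplitude_ratio = 10 ** (y_dB / 20.0)   # ~2.0, appropriate for a transfer function magnitude
    power_ratio = 10 ** (y_dB / 10.0)       # ~4.0, appropriate only for a power ratio
    print(amplitude_ratio, power_ratio)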

  13017   Mon May 29 16:47:38 2017 gautamUpdateGeneralCoil driver boards reinstalled

Yesterday, I reinstalled the de-whitening boards + coil driver boards into their respective Eurocrate slots, and reconnected the cabling. I then roughly re-aligned the ITMs using the green beams. 

I've given Steve a list of the thin-film resistors we need to implement the changes discussed in the preceding elogs - but I figured it would be good to see if we can realize the projected improvement in MICH displacement noise just by fixing the BS Oplev loop shape and turning the existing whitening on. Before re-installing them however, I did make a few changes:

  • Removed the gain of x3 on all the signal paths on the De-Whitening boards, and made them gain x1. For the De-Whitened path, this was done by changing the feedback resistor in the final op-amp (OP27) from 7.5kohm to 2.49kOhm, while for the bypass path, the feedback resistor in the LT1125 stages were changed from 3.01kohm to 1kohm. 
  • To recap - this gain of x3 was originally implemented because the DACs were +/- 5V, while the coil driver electronics had supply voltage of +/- 15V. Now, our DACs are +/- 10V, and even though the supply voltage to the coil driver boards is +/- 15V, in reality, the op-amps saturate at around 12V, so we aren't really losing much in terms of range.
  • I also modified the de-whitening path in the BS de-whitening board to mimic the configuration on the ITM de-whitening boards. Mainly, this involved replacing the final stage AD797 with an OP27, and also implementing the passive pole-zero network at the output of the de-whitened path. I couldn't find capacitors similar to those used on the ITM de-whitening boards, so I used WIMA capacitors.
  • The SRM de-whitening path was not touched for now.
  • On all the boards, I replaced any AD797s that were being used with OP27s, and simply removed AD797s that were in DAQ paths.
  • I removed all the potentiometers on all the boards (FAST analog path on the coil driver boards, and some offset trim Pots on the BS and SRM de-whitening boards for the AD797s, which were also removed).
  • For one signal path on the coil driver board (ITMX ch1), I replaced all of the resistors with thin-film ones and re-measured the noise. However, the excess noise in the measurement below ~40Hz (relative to the model) remained.

Photos of all the boards were taken prior to re-installation, and have been uploaded to the 40m Google Photos page - I will update schematics + photos on the DCC page once other planned changes are implemented.

I also measured the transfer functions on the de-whitened signal paths on all the boards before re-installing them. I then fit everything using LISO, and updated the filter banks in Foton to match these measurements - the original filters were copied over from FM9 and FM10 to FM7 and FM8. The new filters are appended with the suffix "_0517", and live in FM9 and FM10 of the coil output filter banks. The measured TFs (for ITMs and BS) are summarized in Attachment #1, while Attachment #2 contains the data and LISO file used to do the fits (path to the .bod files in the .fil file will have to be changed appropriately). I used 2 complex pole pairs at ~10 Hz, two complex zero pairs at ~100Hz, real poles at ~15Hz and ~3kHz, and real zeros at ~100Hz and ~550Hz for the fits. The fits line up well with the measured data, and are close enough to the "expected" values (as calculated from component values) to be explained by tolerances on the installed components - I omit the plots here. 
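
For illustration, here is a sketch of how a transfer function with that pole/zero structure can be written down and evaluated (the frequencies, Qs and overall gain below are placeholders, not the fitted values):

    import numpy as np
    from scipy import signal

    def cpair(f0, Q):
        """Complex-conjugate s-plane root pair at frequency f0 [Hz] with quality factor Q."""
        w0 = 2 * np.pi * f0
        re = -w0 / (2 * Q)
        im = w0 * np.sqrt(1 - 1 / (4 * Q ** 2))
        return [re + 1j * im, re - 1j * im]

    def rroot(f0):
        """Real s-plane root at frequency f0 [Hz]."""
        return [-2 * np.pi * f0]

    # Placeholder pole/zero locations mimicking the fit structure described above
    zeros = cpair(100, 2) + cpair(100, 2) + rroot(100) + rroot(550)
    poles = cpair(10, 2) + cpair(10, 2) + rroot(15) + rroot(3000)
    k = 1.0   # overall gain, placeholder

    f = np.logspace(0, 4, 1000)
    w, H = signal.freqs_zpk(zeros, poles, k, worN=2 * np.pi * f)
    mag_dB = 20 * np.log10(np.abs(H))
    phase_deg = np.rad2deg(np.angle(H))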

After re-installing the boards in the Eurocrate, restoring rough alignment, and updating the filter banks with the most recent measured values, I wanted to see if I could turn the whitening on for one of the optics (ITMY) smoothly before trying to do so in the full DRMI - switching off the "SimDW_0517" filter (FM9) should switch the signal path on the de-whitening board from bypass to de-whitened, and I had confirmed last week with an extender board that the voltage at the appropriate backplane connector pin does change as expected when the FM9 MEDM button is toggled (for both ITMs, BS and SRM). But today I was not able to engage this transition smoothly, the optic seems to be getting kicked around when I engage the whitening. I will need to investigate this further. 


Unrelated to this work: the ETMY Oplev HeNe is dead (see Attachment #3). I thought we had just replaced this laser a couple of months ago - what is the expected lifetime of these? Perhaps the power supply at the Y-end is wonky and somehow damaging the HeNe heads?

  13019   Tue May 30 16:02:59 2017 gautamUpdateGeneralCoil driver boards reinstalled

I think the reason I am unable to engage the de-whitening is that the OL loop is injecting a ton of control noise - see Attachment #1. With the OL loop off (i.e. just local damping loops engaged for the ITMs), the RMS control signal at 100Hz is ~6 orders of magnitude (!) lower than with the OL loop on. So turning on the whitening was just railing the DAC I guess (since the whitening has something like 60dB gain at 100Hz).

The Oplev loops for the ITMs use an "Ellip15" low-pass filter to do the roll-off (2nd order Elliptic low pass filter with 15dB stopband atten and 2dB ripple). I confirmed that if I disable the OL loops, I was able to turn on the whitening for ITMY smoothly.
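
For reference, a filter with those specifications can be reproduced with scipy; the corner frequency and sample rate below are placeholders, since the actual Ellip15 corner isn't quoted here:

    from scipy import signal

    fs = 16384.0      # Hz, assumed model rate (placeholder)
    f_corner = 15.0   # Hz, placeholder corner frequency
    # 2nd-order elliptic low-pass: 2 dB passband ripple, 15 dB stopband attenuation
    sos = signal.ellip(2, 2, 15, f_corner, btype='low', fs=fs, output='sos')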

Now that the ETMY OL HeNe has been replaced, I restored alignment of the IFO. Both arms lock fine (I was also able to engage the ITMY Coil Driver whitening smoothly with the arm locked). However, something funny is going on with ASS - running the dither seems to inject huge offsets into the ITMY pit and yaw such that it almost immediately breaks the lock. This probably has to do with some EPICS values not being reset correctly since the recent slow-machine restarts (for instance, the c1iscaux restart caused all the LSC RFPD whitening gains to be reset to random values, I had to burt-restore the POX11 and POY11 values before I could get the arms to lock), I will have to investigate further.

GV edit 2pm 31 May: After talking to Koji at the meeting, I realized I did not specify what channel the attached spectra are for - it is  C1:SUS-ITMY_ULCOIL_OUT.

Quote:
 

But today I was not able to engage this transition smoothly, the optic seems to be getting kicked around when I engage the whitening. I will need to investigate this further. 


Unrelated to this work: the ETMY Oplev HeNe is dead (see Attachment #3). I thought we had just replaced this laser a couple of months ago - what is the expected lifetime of these? Perhaps the power supply at the Y-end is wonky and somehow damaging the HeNe heads?

 

  13026   Thu Jun 1 00:10:15 2017 gautamUpdateGeneralCoil driver boards reinstalled

[Koji, Gautam]

We tried to debug the mysterious sudden failure of ASS - here is a summary of what we did tonight. These are just notes for now, so I don't forget tomorrow.

What are the problems/symptoms?

  • After re-installing the coil driver electronics, the ASS loops do not appear to converge - one or more loops seem to run away to the point we lose the lock.
  • For the Y-arm dithers, the previously nominal ITM PIT and YAW oscillator amplitudes (of ~1000cts each) now appears far too large (the fuzz on the Y arm transmission increases by x3 as viewed on StripTool).
  • The convergence problem exists for the X arm alignment servos too.

What are the (known) changes since the servos were last working?

  • The gain of x3 on the de-whitening boards for ITMX, ITMY, BS and SRM has been replaced with gain x1. But I had measurements for all transfer functions (De-White board input to De-White Board outputs) before and after this change, so I compensated by adding a filter of gain ~x3 to all the coil filter banks for these optics (the exact value was the ratio of the DC gain of the transfer functions before/after).
  • The ETMY Oplev has been replaced. I walked over to the endtable and there doesn't seem to be any obvious clipping of either the Oplev beam or the IR transmission.

Hypotheses plus checks (indented bullets) to test them:

  1. The actuation on the ITMs is ~10x stronger now (for reasons unknown).
    • I locked the Y-arm and drove a line in the channels C1:SUS-ETMY_LSC_EXC and C1:SUS-ITMY_LSC_EXC at ~100Hz and ~30Hz, (one optic at one frequency at a time), and looked at the response in the LSC control signal. The peaks at both frequencies for the ITMs and ETMs were within a factor of ~2. Seems reasonable.
    • We further checked by driving lines in C1:SUS-ETMY_ASCPIT_EXC and C1:SUS-ITMY_ASCPIT_EXC (and also the corresponding YAW channels), and looked at peak heights at the drive frequencies in the OL control signal spectra - the peak heights matched up well in both the ITM and ETM spectra (the drive was in the same number of counts).

      So it doesn't look like there is any strange actuation imbalance between the ITM and ETM as a result of the recent electronics work, which makes sense as the other control loops acting on the suspensions (local damping, Oplevs etc seem to work fine). 
  2. The way the dither servo is set up for the Y-arm, the tip-tilts are used to set the input axis to the cavity axis, while actuation to the ITM and ETM takes care of the spot centering. The problem lies with one of these subsystems.
    • We tried disabling the ASS servo inputs to all the spot-centering loops - but even with just actuation on the TTs, the arm transmission isn't maximized.
    • We tried the other combination - disable actuation path to the TTs, leave those to the ITM and ETM on - same result, but the divergence is much faster (lock lost within a couple of seconds, and large offsets appear in the ETM_PIT_L / ETM_YAW_L error signals).
    • Tried turning on loops one at a time - but still the arm transmission isn't maximized.
  3. Something is funny with the IR transmon QPD / ETMY Oplev.
    • I quickly measured Oplev PIT and YAW OLTFs, they seem normal with upper UGFs around 5Hz and phase margins of ~30 degrees.
    • We had no success using either of the two available Transmon QPDs
    • Looking at the QPD quadrants, the alignment isn't stellar but we get roughly the same number of counts on all quadrants, and the spot isn't drastically misaligned in either PIT or YAW.

For whatever reasons, it appears that dithering the cavity mirrors at frequencies with amplitudes that worked ~3 weeks ago is no longer giving us the correct error signals for dither alignment. We are out of ideas for tonight, TBC tomorrow...

 

  13034   Fri Jun 2 12:32:16 2017 gautamUpdateGeneralPower glitch

Looks like there was a power glitch at around 10am today.

All frontends, FB, Megatron, Optimus were offline. Chiara reports an uptime of 666 days, so it looks like its UPS works fine. PSL was tripped, probably the end lasers too (yet to check). Slow machines seem alright (they respond to ping, and I can also telnet into them).

Since all the frontends have to be re-started manually, I am taking this opportunity to investigate some cds issues like the lack of a dmesg log file on some of the frontends. So the IFO will be offline for sometime.

  13035   Fri Jun 2 16:02:34 2017 gautamUpdateGeneralPower glitch

Today's recovery seems to be a lot more complicated than usual.

  • The vertex area of the lab is pretty warm - I think the ACs are not running. The wall switch-box (see Attachment #1) shows some red lights which I'm pretty sure are usually green. I pressed the push-buttons above the red light, hopefully this fixed the AC and the lab cools down soon.
  • Related to the above - C1IOO has a bunch of warning orange indicator lights ON that suggest it is feeling the heat. Not sure if that is why, but I am unable to bring any of the C1IOO models back online - the rtcds compilation just fails, after which I am unable to ssh back into the machine as well.
  • C1SUS was problematic as well. I found that the expansion chassis was not powered. Fortunately, this was fixed by simply switching to the one free socket on the power strip that powers a bunch of stuff on 1X4 - this brought the expansion chassis back alive, and after a soft reboot of c1sus, I was able to get these models up and running. Fortunately, none of the electronics seem to have been damaged. Perhaps it is time for surge-protecting power strips inside the lab area as well (if they aren't already)? 
  • I was unable to successfully resolve the dmesg problem alluded to earlier. Looking through some forums, I gather that the output of dmesg should be written to a file in /var/log/. But no such file exists on any of our 5 front-ends (though it does on Megatron, for example). So is this way of setting up the front end machines deliberate? Why does this matter? Because it seems that the buffer which we see when we simply run "dmesg" on the console gets periodically cleared. So sometime back, when I was trying to verify that the installed DACs are indeed 16-bit DACs by looking at dmesg, running "dmesg | head" showed a first line that was written well after the last reboot of the machine. Anyway, this probably isn't a big deal, and I also verified during the model recompilation that all our DACs are indeed 16-bit.
  • I was also trying to set up the Upstart processes on megatron such that the MC autolocker and FSS slow control scripts start up automatically when the machine is rebooted. But since C1IOO isn't co-operating, I wasn't able to get very far on this front either...

So current status is that all front-end models except those hosted on C1IOO are back up and running. Further recovery efforts in progress.  

GV Jun 5 6pm: From my discussion with jamie, I gather that the fact that the dmesg output is not written to file is because our front-ends are diskless (this is also why the ring buffer, which is what we are reading from when running "dmesg", gets cleared periodically)

 

Quote:

Looks like there was a power glitch at around 10am today.

All frontends, FB, Megatron, Optimus were offline. Chiara reports an uptime of 666 days, so it looks like its UPS works fine. PSL was tripped, probably the end lasers too (yet to check). Slow machines seem alright (they respond to ping, and I can also telnet into them).

Since all the frontends have to be re-started manually, I am taking this opportunity to investigate some cds issues like the lack of a dmesg log file on some of the frontends. So the IFO will be offline for sometime.

 

  13036   Fri Jun 2 22:01:52 2017 gautamUpdateGeneralPower glitch - recovery

[Koji, Rana, Gautam]

Attachment #1 - CDS status at the end of today's efforts. There is one red indicator light showing an RFM error which couldn't be fixed by running the "global diag reset" or "mxstream restart" scripts, but getting to this point was a journey so we decided to call it for today.


The state this work was started in was as indicated in the previous elog - c1ioo wasn't ssh-able, but was responding to ping. We then did the following:

  1. Killed all models on all four other front ends other than c1ioo. 
  2. Hard reboot for c1ioo - at this point, we could ssh into c1ioo. With all other models killed, we restarted the c1ioo models one by one. They all came online smoothly.
  3. We then set about restarting the models on the other machines.
    • We started with the IOP models, and then restarted the others one by one
    • We then tried running "global diag reset", "mxstream restart" and "telnet fb 8087 -> shutdown" to get rid of all the red indicator fields on the CDS overview screen.
    • All models came back online, but the models on c1sus indicated a DC (data concentrator?) error. 
  4. After a few minutes, I noticed that all the models on c1iscex had stalled
    • dmesg pointed to a synchronization error when trying to initialize the ADC
    • The field that normally pulses at ~1pps on the CDS overview MEDM screen when the models are running normally was stuck
    • Repeated attempts to restart the models kept throwing up the same error in dmesg 
    • We even tried killing all models on all other frontends and restarting just those on c1iscex as detailed earlier in this elog for c1ioo - to no avail.
    • A walk to the end station to do a hard reboot of c1iscex revealed that both green indicator lights on the slave timing card in the expansion chassis were OFF.
    • The corresponding lights on the Master Timing Sequencer (which supplies the synchronization signal to all the front ends via optical fiber) were also off.
    • Sometime ago, Eric and I had noticed a similar problem. Back then, we simply switched the connection on the Master Timing Sequencer to the one unused available port, this fixed the problem. This time, switching the fiber connection on the Master Timing Sequencer had no effect.
    • Power cycling the Master Timing Sequencer had no effect
    • However, switching the optical fiber connections going to the X and Y ends led to the green LED on the suspect port on the Master Timing Sequencer (originally the X end fiber was plugged in here) turning back ON when the Y end fiber was plugged in.
    • This suggested a problem with the slave timing card, and not the master. 
  5. Koji and I then did the following at the X-end electronics rack:
    • Shut down c1iscex, toggled the switches on the front and back of the expansion chassis
    • Disconnected AC power from the rear of c1iscex as well as the expansion chassis. This meant all LEDs in the expansion chassis went off, except a single one labelled "+5AUX" on the PCB - to make this go off, we had to disconnect a jumper on the PCB (see Attachment #2), and then toggle the power switches on the front and back of the expansion chassis (with the AC power still disconnected). Finally all lights were off.
    • Confident we had completely cut all power to the board, we then started re-connecting AC power. First we re-started the expansion chassis, and then re-booted c1iscex.
    • The lights on the slave timing card came on (including the one that pulses at ~1pps, which indicates normal operation)!
  6. Then we went back to the control room, and essentially repeated bullet points 2 and 3, but starting with c1iscex instead of c1ioo.
  7. The last twist in this tale was that though all the models came back online, the DC errors on c1sus models persisted. No amount of "mxstream restart", "global diag reset", or restarting fb would make these go away.
  8. Eventually, Koji noticed that there was a large discrepancy in the gpstimes indicated in c1x02 (the IOP model on c1sus) compared to all the other IOP models (even though the PDT displayed was correct). There were also a large number of IRIG-B errors indicated on the same c1x02 status screen, and the "TIM" indicator in the status word was red.
  9. Turns out, running ntpdate before restarting all the models somehow doesn't sync the gps time - so this was what was causing the DC errors. 
  10. So we did a hard reboot of c1sus (and for good measure, repeated the bullet points of 5 above on c1sus and its expansion chassis). Then, we tried starting the c1x02 model without running ntpdate first (on startup, there is an 8 hour mismatch between the actual time in Pasadena and the system time - but system time is 8 hours behind, so it isn't even somehow syncing to UTC or any other real timezone?)
    • Model started up smoothly
    • But there was still a 1 second discrepancy between the gpstime on c1x02 and all the other IOPs (and the 8 hour discrepancy between displayed PDT and actual time in Pasadena)
    • So we tried running ntpdate after starting c1x02 - this finally fixed the problem, gpstime and PDT on c1x02 agreed with the other frontends and the actual time in Pasadena.
    • However, the models on c1lsc and c1ioo crashed
    • So we restarted the IOPs on both these machines, and then the rest of the models.
  11. Finally, we ran "mxstream restart", "global diag reset", and restarted fb, to make the CDS overview screen look like it does now.

Why does ntpdate behave this way? And only on one of the frontends? And what is the remaining RFM error? 

Koji then restarted the IMC autolocker and FSS slow processes on megatron. The IMC locked almost immediately. The MC2 transmon indicated a large shift in the spot position, and the PMC transmission is also pretty low (while the lab temperature equilibrates after the AC being off during peak daytime heat). So the MC transmission is ~14500 counts, while we are used to more like 16,500 counts nowadays.

Re-alignment of the IFO remains to be done. I also did not restart the end lasers, or set up the Marconi with nominal params. 

Attachment #3 - Status of the Master Timing Sequencer after various reboots and power cycling of front ends and associated electronics.

Attachment #4 - Warning lights on C1IOO

Quote:

Today's recovery seems to be a lot more complicated than usual.

So current status is that all front-end models except those hosted on C1IOO are back up and running. Further recovery efforts in progress.  

 

  13038   Sun Jun 4 15:59:50 2017 gautamUpdateGeneralPower glitch - recovery

I think the CDS status is back to normal.

  • Bit 2 of the C1RFM status word was red, indicating something was wrong with "GE FANUC RFM Card 0".
  • You would think the RFM errors occur in pairs, in C1RFM and in some other model - but in this case, the only red light was on c1rfm.
  • While trying to re-align the IFO, I noticed that the TRY time series flatlined at 0 even though I could see flashes on the TRANSMON camera.
  • Quick trip to the Y-End with an oscilloscope confirmed that there was nothing wrong with the PD.
  • I crawled through some elogs, but didn't really find any instructions on how to fix this problem - the couple of references I did find to similar problems reported red indicator lights occurring in pairs on two or more models, and the problem was then fixed by restarting said models.
  • So on a hunch, I restarted all models on c1iscey (no hard or soft reboot of the FE was required)
  • This fixed the problem
  • I also had to start the monit process manually on some of the FEs like c1sus. 

Now IFO work like fixing ASS can continue...

  13079   Sun Jun 25 22:30:57 2017 gautamUpdateGeneralc1iscex timing troubles

I saw that the CDS overview screen indicated problems with c1iscex (also ETMX was erratic). I took a closer look and thought it might be a timing issue - a walk to the X-end confirmed this, the 1pps status light on the timing slave card was no longer blinking. 

I tried all versions of power cycling and debugging this problem known to me, including those suggested in this thread and from a more recent time. I am leaving things as is for the night, and will look into this more tomorrow. I've also shut down the ETMX watchdog for the time being. Looks like this has been down since 24 Jun 8am UTC.

  13080   Mon Jun 26 09:39:15 2017 NaomiSummaryGeneralMeasure transfer functions of Mini-Circuits filters

I have spent my first few days as a SURF getting experience working with the Network/Spectrum Analyzer (AG 4395A). After an introduction to the 40m by Koji, I was tasked with using the AG4395A to measure the transfer function of several filters (for example, Mini-Circuits Low Pass Filter SLP-30). I am now familiar with configuring the AG 4395A, taking a single set of data using a command from one of the control computers, and plotting the dataset as a Bode plot (separate plots for magnitude and phase) using Python.

To Do:

  • Use AGmeasure to take multiple datasets with a single command.
  • Plot multiple datasets for each filter on a single Bode plot and perform some statistical analysis. 

To experiment with plotting multiple datasets on a single Bode plot, I used a single dataset from the Network Analyzer using the SLP-30 filter and added random noise to create ten datasets to plot. I am attaching the resulting Bode plot, which has the ten generated sets of data plotted along with their average.

We discussed with Rana and Koji how to interpret this type of dataset from the Network Analyzer. Instead of considering the magnitude and phase as separate quantities, we should consider them together as a single complex number in the form H(f) = M exp(iπP/180), where M is the magnitude and P is the phase in degrees. We can then find the average value of the measured quantity in its complex number form (x + iy), as opposed to just taking the average of the magnitude and phase separately.  
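
A minimal sketch of that complex-averaging step, with M the magnitude and P the phase in degrees as returned by the analyzer:

    import numpy as np

    def complex_average(mags, phases_deg):
        """Average N sweeps as complex numbers H = M * exp(i*pi*P/180).

        mags, phases_deg: arrays of shape (n_sweeps, n_points).
        Returns the averaged magnitude and phase [deg].
        """
        H = mags * np.exp(1j * np.deg2rad(phases_deg))
        H_avg = H.mean(axis=0)
        return np.abs(H_avg), np.rad2deg(np.angle(H_avg))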

  13081   Mon Jun 26 22:01:08 2017 KojiUpdateGeneralc1iscex timing troubles

I tried a couple of things, but no fundamental improvement - the LED on the timing board is still not lighting up.

- The power supply cable to the timing board at c1iscex indicated +12.3V

- I swapped the timing fiber to the new one (orange) in the digital cabinet. It didn't help.

- I swapped the opto-electronic I/F for the timing fiber with the Y-end one. The X-end one worked at Y-end, and Y-end one didn't work at X-end.

- I suspected the timing board itself -> I brought a "spare" timing board from the digital cabinet and tried to swap the board. This didn't help.

 

Some ideas:

- Bring the X-end fiber to C1SUS or C1IOO to see if the fiber is OK or not.

- We checked the opto-electronic I/F is OK

- Try to swap the IO chassis with the Y-end one.

- If this helps, swap the timing board only to see this is the problem or not.

  13085   Wed Jun 28 20:15:46 2017 gautamUpdateGeneralc1iscex timing troubles

[Koji, gautam]

Here is a summary of what we did today to fix the timing issue on c1iscex. The power supply to the timing card in the X end expansion chassis was to blame.

  1. We prepared the Y-end expansion chassis for transport to the X end. To do so, we disconnected the following from the expansion chassis
    • Cables going to the ADC/DAC adaptor boards
    • Dolphin connector
    • BIO connector
    • RFM fiber
    • Timing fiber
  2. We then carried the expansion chassis to the X end electronics rack. There we repeated the above steps for the X-end expansion chassis
  3. We swapped the X and Y end expansion chassis in the X end electronics rack. Powering the unit, we immediately saw the green lights on the front of the timing card turn on, suggesting that the Y-end expansion chassis works fine at the X end as well (as it should). To further confirm that all was well, we were able to successfully start all the RT models on c1iscex without running into any timing issues.
  4. Next, we decided to verify if the spare timing card is functional. So we swapped out the timing card in the expansion chassis brought over to the X end from the Y end with the spare. In this test too, all worked as expected. So at this stage, we concluded that
    • There was nothing wrong with the fiber bringing the timing signal to the X end
    • The Y-end expansion chassis works fine
    • The spare timing card works fine.
  5. Then we decided to try the original X-end expansion chassis timing card in the Y-end expansion chassis. This test too was successful - so there was nothing wrong with any of the timing cards!
  6. Next, we decided to power the X-end expansion chassis with its original timing card, which was just verified to work fine. Surprisingly, the indicator lights on the timing card did not turn on.
  7. The timing card has 3 external connections
    • A 40 pin IDE connector
    • Power
    • Fiber carrying the timing signal
  8. We went back to the Y-end expansion chassis, and checked that the indicator lights on the timing card turned on even when the 40 pin IDE connector was left unconnected (so the timing card just gets power and the timing signal).
  9. We concluded that the power supply in the X end expansion chassis was to blame. Indeed, when Koji jiggled the connector around a little, the indicator lights came on!
  10. The connection was diagnosed to be somewhat flaky - it employs the screw-in variety of terminal blocks, and one of the connections was quite loose - Koji was able to pull the cable out of the slot applying a little pressure.
  11. I replaced the cabling (swapped the wires for thicker gauge, more flexible variety), and re-tightened the terminal block screws. The connection was reasonably secure even when I applied some force. A quick test verified that the timing card was functional when the unit was powered.
  12. We then replaced the X and Y-end expansion chassis (complete with their original timing cards, so the spare is back in the CDS cabinet), in the racks. The models started up again without complaint, and the CDS overview screen is now in a good state [Attachment #1]. The arms are locked and aligned for maximum transmission now.
  13. There was some additional difficulty in getting the 40-pin IDE connector in on the Y-end expansion chassis. Looked like we had bent some of the pins on the timing board while pulling this cable out. But Koji was able to fix this with a screw driver. Care should be taken when disconnecting this cable in the future!

There were a few more flaky things in the Expansion chassis - the IDE connectors don't have "keys" that fix the orientation they should go in, and the whole timing card assembly is kind of difficult and not exactly secure. But for now, things are back to normal it seems.

Wouldn't it be nice if this fix also eliminates the mystery ETMX glitching problem? After all, seems like this flaky power supply has been a problem for a number of years. Let's keep an eye out.

  13088   Fri Jun 30 02:13:23 2017 gautamUpdateGeneralDRMI locking attempt

Summary:

I attempted to re-lock the DRMI and try and realize some of the noise improvements we have identified. Summary elog, details to follow.

  1. Locked arms, ran ASS, centered OLs on ITMs and BS on their respective QPDs.
  2. Looked into changing the BS Oplev loop shape to match that of the ITMs - it looks like the analog electronics that take the QPD signals in for the BS Oplev is a little different, the 800Hz poles are absent. But I thought I had managed to do this successfully in that the error signal suppression improved and it didn't look like the performance of the modified loop was worse anywhere except possibly at the stack resonance of ~3Hz --- see Attachment #1 (will be rotated later). The TRX spectra before and after this modification also didn't raise any red flags.
  3. Re-aligned PRM - went to the AS table and centered beam on all REFL PDs
  4. Locked PRMI on carrier, ran MICH and AS dither alignment. PRC angular feedforward also seemed to work well.
  5. Re-aligned SRM, looked for DRMI locks - there was a brief lock of a couple of seconds, but after this, the BS behaviour changed dramatically.

Basically after this point, I was unable to repeat stuff I did earlier in the evening just a couple of hours ago. The single arm locks catch quickly, and seem stable over the hour timescale, but when I run the X arm dither, the BS PITCH loop starts to oscillate at ~0.1 Hz. Moreover, I am unable to acquire the PRMI carrier lock. I must have changed a setting somewhere that I am not catching right now (although I've scripted most of these things for repeatability, so I am at a loss as to what I'm missing). The only change I can think of is that I changed the BS Oplev loop shape. But I went back into the filter file archives and restored these to their original configuration. Hopefully I'll have better luck figuring this out tomorrow.

  13090   Fri Jun 30 11:50:17 2017 gautamUpdateGeneralDRMI locking attempt

Seems like the problem is actually with ITMX - the attached DV plots are for ITMX with just local damping loops on (no OLs), LR seems to be suspect.

I'm going to go squish cables and the usual sat. box voodoo, hopefully that settles it.

Quote:

Summary:

I attempted to re-lock the DRMI and try and realize some of the noise improvements we have identified. Summary elog, details to follow.

  1. Locked arms, ran ASS, centered OLs on ITMs and BS on their respective QPDs.
  2. Looked into changing the BS Oplev loop shape to match that of the ITMs - it looks like the analog electronics that take the QPD signals in for the BS Oplev is a little different, the 800Hz poles are absent. But I thought I had managed to do this successfully in that the error signal suppression improved and it didn't look like the performance of the modified loop was worse anywhere except possibly at the stack resonance of ~3Hz --- see Attachment #1 (will be rotated later). The TRX spectra before and after this modification also didn't raise any red flags.
  3. Re-aligned PRM - went to the AS table and centered beam on all REFL PDs
  4. Locked PRMI on carrier, ran MICH and AS dither alignment. PRC angular feedforward also seemed to work well.
  5. Re-aligned SRM, looked for DRMI locks - there was a brief lock of a couple of seconds, but after this, the BS behaviour changed dramatically.

Basically after this point, I was unable to repeat stuff I did earlier in the evening just a couple of hours ago. The single arm locks catch quickly, and seem stable over the hour timescale, but when I run the X arm dither, the BS PITCH loop starts to oscillate at ~0.1 Hz. Moreover, I am unable to acquire the PRMI carrier lock. I must have changed a setting somewhere that I am not catching right now (although I've scripted most of these things for repeatability, so I am at a loss as to what I'm missing). The only change I can think of is that I changed the BS Oplev loop shape. But I went back into the filter file archives and restored these to their original configuration. Hopefully I'll have better luck figuring this out tomorrow.

 

  13093   Fri Jun 30 22:28:27 2017 gautamUpdateGeneralDRMI re-locked

Summary:

Reverted to old settings, and tried to reproduce the DRMI lock with settings as close as possible to those used in May this year. Tonight, I was successful in getting a couple of ~10 min DRMI 1f locks. Now I can go ahead and try to reduce the noise.

I am not attempting a full characterization tonight, but the important changes since the May locks are in the de-whitening boards and coil driver boards. I did not attempt to engage the coil-dewhitening, but the PD whitening works fine.

As a quick check, I tested the hypothesis that the BS OL loop A2L coupling dominates between ~10-50Hz. The attached control signal spectra [Attachment #2] supports this hypothesis. Now to actually change the loop shape.

I've centered Oplevs of all vertex optics, and also the beams on the REFL and AS PDs. The ITMs and BS have been repeatedly aligned since re-installing their respective coil driver electronics, but the SRM alignment needed some adjustment of the bias sliders.

Full characterization to follow. Some things to check:

  • Investigate and fix the suspect X-arm ASS loop
  • Is there too much power on the AS110 PD post Oct2016 vent? Is the PD saturating?

Lesson learnt: Don't try and change too many things at once!

GV July 5 1130am: Looks like the MICH loop gain wasn't set correctly when I took the attached spectra, seems like the bump around 300Hz was caused by this. On later locks, this feature wasn't present.

  13094   Sat Jul 1 14:27:00 2017 KojiUpdateGeneralDRMI re-locked

Basically we use the arm cavities as the reference for the beam alignment. The incident beam is aligned such that the ITMY angle dither is minimized (at least at the dither frequency).
This means that we have no capability to adjust the spot positions on the PRM, SRM, BS, and ITMX optics.

We are still able to minimize A2L by adding intentional asymmetry to the coil actuators.

  13097   Wed Jul 5 19:10:36 2017 gautamUpdateGeneralNB code checkout - updated

I've been making NBs on my laptop, thought I would get the copy under version control up-to-date since I've been negligent in doing so.

The code resides in /ligo/svncommon/NoiseBudget, which as a whole is a git directory. For neatness, most of Evan's original code has been put into the sub-directory  /ligo/svncommon/NoiseBudget/H1NB/, while my 40m NB specific adaptations of them are in the sub-directory /ligo/svncommon/NoiseBudget/NB40. So to make a 40m noise budget, you would have to clone and edit the parameter file accordingly, and run python C1NB.py C1NB_2017_04_30.py for example. I've tested that it works in its current form. I had to install a font package in order to make the code run (with sudo apt-get install tex-gyre ), and also had to comment out calls to GwPy (it kept throwing up an error related to the package "lal", I opted against trying to debug this problem as I am using nds2 instead of GwPy to get the time series data anyways).

There are a few things I'd like to implement in the NB like sub-budgets, I will make a tagged commit once it is in a slightly neater state. But the existing infrastructure should allow making of NBs from the control room workstations now.

Quote:

[evan, gautam]

We spent some time trying to get the noise-budgeting code running today. I guess eventually we want this to be usable on the workstations so we cloned the git repo into /ligo/svncommon. The main objective was to see if we had all the dependencies for getting this code running already installed. The way Evan has set the code up is with a bunch of dictionaries for each of the noise curves we are interested in - so we just commented out everything that required real IFO data. We also commented out all the gwpy stuff, since (if I remember right) we want to be using nds2 to get the data. 

Running the code with just the gwinc curves produces the plots it is supposed to, so it looks like we have all the dependencies required. It now remains to integrate actual IFO data, I will try and set up the infrastructure for this using the archived frame data from the 2016 DRFPMI locks..

 

  13101   Sat Jul 8 17:09:50 2017 gautamUpdateGeneralETMY TRANS QPD anomaly

About 2 weeks ago, I noticed some odd behaviour of the LSC TRY data stream. Its DC value seems to be drifting ~10x more than TRX. Both signals come from the transmission QPDs. At the time, we were dealing with various CDS FE issues but things have been stable on that end for the last two weeks, so I looked into this a bit more today. It seems like one particular channel is bad - Quadrant 4 of the ETMY TRANS QPD. Furthermore, there is a bump around 150Hz, and some features above 2kHz, that are only present for the ETMY channels and not the ETMX ones.

Since these spectra were taken with the PSL shutter closed and all the lab room lights off, it would suggest something is wrong in the electronics - to be investigated.

The drift in TRY can be as large as 0.3 (with 1.0 being the transmitted power in the single arm lock). This seems unusually large; indeed, we trigger the arm LSC loops when TRY > 0.3. Attachment #2 shows the second trend of the TRX and TRY 16Hz EPICS channels for 1 day. In the last 12 hours or so, I had left the LSC master switch OFF, but the large drift of the DC value of TRY is clearly visible.

In the short term, we can use the high-gain THORLABS PD for TRY monitoring.

  13102   Sun Jul 9 08:58:07 2017 ranaUpdateGeneralETMY TRANS QPD anomaly

Indeed, the whole point of the high/low gain setup is to never use the QPDs for the single arm work. Only use the high gain Thorlabs PD and then the switchover code uses the QPD once the arm powers are >5.

I don't know how the operation procedure went so higgledy piggledy.
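In pseudo-code, the intended handover is just a threshold switch (an illustration only, not the actual switchover script):

# illustration of the intended logic - not the real switchover code
def arm_transmission(thorlabs_pd, qpd_sum, arm_power, threshold=5.0):
    # low power: trust the high-gain Thorlabs PD; above the threshold, use the QPD sum
    return qpd_sum if arm_power > threshold else thorlabs_pd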

  13103   Mon Jul 10 09:49:02 2017 gautamUpdateGeneralAll FEs down

Attachment #1: State of CDS overview screen as of 9.30AM today morning when I came in.

Looks like there may have been a power glitch, although judging by the wall StripTool traces, if there was one, it happened more than 8 hours ago. FB is down at the moment, so I can't trend to find out when this happened.

All FEs and FB are unreachable from the control room workstations, but Megatron, Optimus and Chiara are all ssh-able. The latter reports an uptime of 704 days, so all seems okay with its UPS. Slow machines are all responding to ping as well as telnet.

Recovery process to begin now. Hopefully it isn't as complicated as the most recent effort [FAMOUS LAST WORDS].

  13104   Mon Jul 10 11:20:20 2017 gautamUpdateGeneralAll FEs down

I am unable to get FB to reboot to a working state. A hard reboot throws it into a loop of "Media Test Failure. Check Cable".

Jetstor RAID array is complaining about some power issues, the LCD display on the front reads "H/W Monitor", with the lower line cycling through "Power#1 Failed", "Power#2 Failed", and "UPS error". Going to 192.168.113.119 on a martian machine browser and looking at the "Hardware information" confirms that System Power #1 and #2 are "Failed", and that the UPS status is "AC power loss". So far I've been unable to find anything on the elog about how to handle this problem, I'll keep looking.


In fact, looks like this sort of problem has happened in the past. It seems one power supply failed back then, but now somehow two are down (but there is a third which is why the unit functions at all). The linked elog thread strongly advises against any sort of power cycling. 

  13106   Mon Jul 10 17:46:26 2017 gautamUpdateGeneralAll FEs down

A bit more digging on the diagnostics page of the RAID array reveals that the two power supplies actually failed on Jun 2 2017 at 10:21:00. Not surprisingly, this was the date and approximate time of the last major power glitch we experienced. Apart from this, the only other error listed on the diagnostics page is "Reading Error" on "IDE CHANNEL 2", but these errors precede the power supply failure.

Perhaps the power supplies are not really damaged, and the unit has just been in some funky state since the power glitch. After discussing with Jamie, I think it should be safe to power cycle the Jetstor RAID array once the FB machine has been powered down. Perhaps this will bring back one/both of the faulty power supplies. If not, we may have to get new ones. 

The problem with FB may or may not be related to the state of the Jetstor RAID array. It is unclear to me at what point during the boot process we are getting stuck. It may be that because the RAID disk is in some funky state, the boot process is getting disrupted.

Quote:

I am unable to get FB to reboot to a working state. A hard reboot throws it into a loop of "Media Test Failure. Check Cable".

Jetstor RAID array is complaining about some power issues, the LCD display on the front reads "H/W Monitor", with the lower line cycling through "Power#1 Failed", "Power#2 Failed", and "UPS error". Going to 192.168.113.119 on a martian machine browser and looking at the "Hardware information" confirms that System Power #1 and #2 are "Failed", and that the UPS status is "AC power loss". So far I've been unable to find anything on the elog about how to handle this problem, I'll keep looking.


In fact, looks like this sort of problem has happened in the past. It seems one power supply failed back then, but now somehow two are down (but there is a third which is why the unit functions at all). The linked elog thread strongly advises against any sort of power cycling. 

 

  13107   Mon Jul 10 19:15:21 2017 gautamUpdateGeneralAll FEs down

The Jetstor RAID array is back in its nominal state now, according to the web diagnostics page. I did the following:

  1. Powered down the FB machine - to avoid messing around with the RAID array while the disks are potentially mounted.
  2. Turned off all power switches on the back of the Jetstor unit - there were 4 of them, all of them were toggled to the "0" position.
  3. Disconnected all power cords from the back of the Jetstor unit - there were 3 of them.
  4. Reconnected the power cords, turned the power switches back on to their "1" position.

After a couple of minutes, the front LCD display seemed to indicate that it had finished running some internal checks. The messages indicating failure of the power units, which were previously constantly displayed on the front LCD panel, were no longer seen. Going back to the control room and checking the web diagnostics page, everything seemed back to normal.

However, FB still will not boot up. The error is identical to that discussed in this thread by Intel. It seems FB is having trouble finding its boot disk. I was under the impression that only the FE machines were diskless, and that FB had its own local boot disk - in which case I don't know why this error is showing up. According to the linked thread, it could also be a problem with the network card/cable, but I saw both lights on the network switch port FB is connected to turn green when I powered the machine on, so this seems unlikely. I tried following the steps listed in the linked thread but got nowhere, and I don't know enough about how FB is supposed to boot up, so I am leaving things in this state now. 

  13108   Mon Jul 10 21:03:48 2017 jamieUpdateGeneralAll FEs down

 

Quote:
 

However, FB still will not boot up. The error is identical to that discussed in this thread by Intel. It seems FB is having trouble finding its boot disk. I was under the impression that only the FE machines were diskless, and that FB had its own local boot disk - in which case I don't know why this error is showing up. According to the linked thread, it could also be a problem with the network card/cable, but I saw both lights on the network switch port FB is connected to turn green when I powered the machine on, so this seems unlikely. I tried following the steps listed in the linked thread but got nowhere, and I don't know enough about how FB is supposed to boot up, so I am leaving things in this state now. 

It's possible the fb BIOS got into a weird state.  fb definitely has its own local boot disk (*not* diskless boot).  Try to get to the BIOS during boot and make sure it's pointing to its local disk to boot from.

If that's not the problem, then it's also possible that fb's boot disk got fried in the power glitch.  That would suck, since we'd have to rebuild the disk.  If it does seem to be a problem with the boot disk then we can do some invasive poking to see if we can figure out what's up with the disk before rebuilding.

  13110   Mon Jul 10 22:07:35 2017 KojiUpdateGeneralAll FEs down

I think this is a boot disk failure. I put the spare 2.5 inch disk into slot #1. Its OK indicator became solid green almost immediately, and it was recognized in the boot section of the BIOS as "Hard Disk". In contrast, the OK indicator of the original disk in slot #0 keeps flashing and the BIOS can't find the hard disk.

 

  13111   Tue Jul 11 15:03:55 2017 gautamUpdateGeneralAll FEs down

Jamie suggested verifying that the problem is indeed with the disk and not with the controller, so I tried switching the original boot disk to Slot #1 (from Slot #0 where it normally resides), but the same problem persists - the green "OK" indicator light keeps flashing even in Slot #1, which was verified to be a working slot using the spare 2.5 inch disk. So I think it is reasonable to conclude that the problem is with the boot disk itself.

The disk is a Seagate Savvio 10K.2 146GB disk. The datasheet doesn't explicitly suggest any recovery options. But Table 24 on page 54 suggests that a blinking LED means that the disk is "spinning up or spinning down". Is this indicative of any particular failure mode? Any ideas on how to go about recovery? Is it even possible to access the data on the disk if it doesn't spin up to the nominal operating speed?

Quote:

I think this is a boot disk failure. I put the spare 2.5 inch disk into slot #1. Its OK indicator became solid green almost immediately, and it was recognized in the boot section of the BIOS as "Hard Disk". In contrast, the OK indicator of the original disk in slot #0 keeps flashing and the BIOS can't find the hard disk.

 

 

  13112   Tue Jul 11 15:12:57 2017 KojiUpdateGeneralAll FEs down

If we have a SATA/USB adapter, we can test whether the disk is still responding. If it is still responding, we can probably salvage the files.
Chiara used to have a 2.5" disk connected via USB3. As far as I know, we have remote and local backup scripts running (TBC), so we can borrow the USB/SATA interface from Chiara.

If the disk is completely gone, we need to rebuild it according to Jamie, and I don't know how to do that. (Don't we have any spare copy?)

  13113   Wed Jul 12 10:21:07 2017 gautamUpdateGeneralAll FEs down

Seems like the connector on this particular disk is of the SAS variety (and not SATA). I'll ask Steve to order a SAS to USB cable. In the meantime I'm going to see if the people at Downs have something we can borrow.

Quote:

If we have a SATA/USB adapter, we can test whether the disk is still responding. If it is still responding, we can probably salvage the files.
Chiara used to have a 2.5" disk connected via USB3. As far as I know, we have remote and local backup scripts running (TBC), so we can borrow the USB/SATA interface from Chiara.

If the disk is completely gone, we need to rebuild it according to Jamie, and I don't know how to do that. (Don't we have any spare copy?)

 

  13114   Wed Jul 12 14:46:09 2017 gautamUpdateGeneralAll FEs down

I couldn't find an external docking setup for this SAS disk, seems like we need an actual controller in order to interface with it. Mike Pedraza in Downs had such a unit, so I took the disk over to him, but he wasn't able to interface with it in any way that allows us to get the data out. He wants to try switching out the logic board, for which we need an identical disk. We have only one such spare at the 40m that I could locate, but it is not clear to me whether this has any important data on it or not. It has "hda RTLinux" written on its front panel with a sharpie. Mike thinks we can back this up to another disk before trying anything, but he is going to try locating a spare in Downs first. If he is unsuccessful, I will take the spare from the 40m to him tomorrow, first to be backed up, and then for swapping out the logic board.

Chatting with Jamie and Koji, it looks like the options we have are:

  1. Get the data from the old disk, copy it to a working one, and try and revert the original FB machine to its last working state. This assumes we can somehow transfer all the data from the old disk to a working one.
  2. Prepare a fresh boot disk, load the old FB daqd code (which is backed up on Chiara) onto it, and try and get that working. But Jamie isn't very optimistic of this working, because of possible conflicts between the code and any current OS we would install.
  3. Get FB1 working. Jamie is looking into this right now.
Quote:

Seems like the connector on this particular disk is of the SAS variety (and not SATA). I'll ask Steve to order a SAS to USB cable. In the meantime I'm going to see if the people at Downs have something we can borrow.

 

 

  13115   Wed Jul 12 14:52:32 2017 jamieUpdateGeneralAll FEs down

I just want to mention that the situation is actually much more dire than we originally thought.  The diskless NFS root filesystem for all the front-ends was on that fb disk.  If we can't recover it we'll have to rebuild the front end OS as well.

As of right now none of the front ends are accessible, since obviously their root filesystem has disappeared.

  13117   Fri Jul 14 17:47:03 2017 gautamUpdateGeneralDisks from LLO have arrived

[jamie, gautam]

Today morning, the disks from LLO arrived. Jamie and I have been trying to get things back up and running, but have not had much success today. Here is a summary of what we tried.

Keith Thorne sent us two disks: one has the daqd code and the second is the boot disk for the FE machines. Since Jamie managed to successfully compile the daqd code on FB1 yesterday, we decided to try the following: mount the boot disk KT sent us (using a SATA/USB adapter) on /mnt on FB1, get the FEs booted up, and restart the RT models. 

Quote:

I just want to mention that the situation is actually much more dire than we originally thought.  The diskless NFS root filesystem for all the front-ends was on that fb disk.  If we can't recover it we'll have to rebuild the front end OS as well.

As of right now none of the front ends are accessible, since obviously their root filesystem has disappeared.

While on FB1, Jamie realized he actually had a copy of the /diskless/root directory, which is the NFS filesystem for the FEs, on FB1. So we decided to try and boot some of the FEs with this (instead of starting from scratch with the disks KT sent us). The way things were set up, the FEs were querying the FB machine as the DHCP server. But today, we followed the instructions here to have the FEs get their IP address from chiara instead. We also added the line 

/diskless/root *(sync,rw,no_root_squash,no_all_squash,no_subtree_check)

to /etc/exports, followed by exportfs -ra on FB1, at which point the FE machine we were testing (c1lsc) was able to boot up. 

However, it looks like the NFS filesystem isn't being mounted correctly, for reasons unknown. We commented out some of the rtcds related lines in /etc/rc.local because they were causing a whole bunch of errors at boot (the lines that were touched have been tagged with today's date).


So in summary, the status as of now is:

  1. Front-end machines are able to boot
  2. There seems to be some problem during the boot process, leading to the NFS file system not being correctly mounted. The closest related thing I could find from an elog search is this entry, but I think we are facing a different problem.
  3. We wanted to see if we could start the realtime models (but without daqd for now), but we weren't even able to get that far today.

We will resume recovery efforts on Monday.

  13124   Wed Jul 19 00:59:47 2017 gautamUpdateGeneralFINESSE model of DRMI (no arms)

Summary:

I've been working on improving the 40m FINESSE model I set up sometime last year (where the goal was to model various RC folding mirror scenarios). Specifically, I wanted to get the locking feature of FINESSE working, and also simulate the DRMI (no arms) configuration, which is what I have been working on locking the real IFO to. This elog is a summary of what I have from the last few days of working on this.

Model details:

  • No IMC included for now.
  • Core optics R and T from the 40m wiki page.
  • Cavity lengths are the "ideal" ones - see the attached ipynb for the values used.
  • RF modulation depths from here. But for now, the relative phase between f1 and f2 at the EOM is set to 0.
  • I've not included flipped folding mirrors - instead, I put a loss of 0.5% on PR3 and SR3 in the model to account for the AR surface of these optics being inside the RCs. 
  • I've made the AR surfaces of all optics FINESSE "beamsplitters" - there was some discussion on the FINESSE mattermost channel about how not doing this can lead to slightly inaccurate results, so I've tried to be more careful in this respect.
  • I'm using "maxtem 1" in my FINESSE file, which means TEM_mn modes up to (m+n=1) are taken into account - setting this to 0 makes it a plane wave model. This parameter can significantly increase the computational time. 

Model validation:

  • As a first check, I made the PRM and SRM transparent, and used the in-built routines in FINESSE to mode-match the input beam to the arm cavities.
  • I then scanned one arm cavity about a resonance, and compared the transmission profile to the analytical FP cavity expression - agreement was good.
  • Next, I wanted to get a sensing matrix for the DRMI (no arms) configuration (see attached ipynb notebook).
    • First, I make the ETMs in the model transparent
    • I started with the phases for the BS, PRM and SRM set to their "naive" values of 0, 0 and 90 (for the standard DRMI configuration)
    • I then scanned these optics around, used various PDs to look at the points where appropriate circulating fields reached their maximum values, and updated the phase of the optic with these values.
    • Next, I set the demod phase of various RFPDs such that the PDH error signal is entirely in one quadrature. I use the RFPDs in pairs, with demod phases separated by 90 degrees. I arbitrarily set the demod phase of the Q phase PD as 90 + phase of I phase PD. I also tried to mimic the RFPD-IFO DoF pairing that we use for the actual IFO - so for example, PRCL is controlled by REFL11_I.
    • Confident that I was close enough to the ideal operating point, I then fed the error signals from these RFPDs to the "lock" routine in FINESSE. The manual recommends setting the locking loop gain to 1/optical gain, which is what I did.
    • The tunings for the BS and RMs in the attached kat file are the result of this tuning.
    • For the actual sensing matrix, I moved each of PRM, BS and SRM +/-5 degrees (~15nm) around each resonance. I then computed the numerical derivative around the zero crossing of each RFPD signal, and then plotted all of this in some RADAR plots - see Attachment #1. (A stripped-down sketch of this slope computation follows this list.)
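As a toy illustration of the slope method, here is a single Fabry-Perot cavity version done with pykat (the mirror parameters, modulation depth and demod phases are placeholders - this is not the DRMI kat file from Attachment #3):

import numpy as np
from pykat import finesse

kat = finesse.kat()
kat.parse("""
l laser 1 0 n0
s s0 0.1 n0 n1
mod eom 11M 0.1 1 pm n1 n2
s s1 1 n2 n3
m m1 0.99 0.01 0 n3 n4
s scav 1 n4 n5
m m2 0.9999 1e-4 0 n5 n6
pd1 refl_I 11M 0 n3*
pd1 refl_Q 11M 90 n3*
xaxis m1 phi lin -5 5 400
yaxis re
""")
out = kat.run()

phi = out.x                                # mirror tuning in degrees
x = phi / 360.0 * 1064e-9                  # 360 deg of tuning = one wavelength of motion
i0 = np.argmin(np.abs(phi))                # sample closest to the zero crossing

def slope(sig):                            # central difference around resonance [W/m]
    return (sig[i0 + 1] - sig[i0 - 1]) / (x[i0 + 1] - x[i0 - 1])

# quadrature sum of the I/Q slopes, so the (untuned) demod phase doesn't matter here
print("sensing element ~ %.2e W/m" % np.hypot(slope(out["refl_I"]), slope(out["refl_Q"])))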

Explanation of Attachments and Discussion:

  • Attachment #1 - Computed sensing matrix from this model. Compare to an actual measurement, for example here - the relative angles between the sensing matrix elements don't exactly line up with what is measured. EQ suggested today that I should look into tuning the relative phase between the RF frequencies at the EOM. Nevertheless, I tried comparing the magnitudes of the MICH sensing element in AS55 Q - the model tells me that it should be ~7.8*10^5 W/m. In this elog, I measured it to be 2.37*10^5 W/m. On the AS table, there is a 50-50 BS splitting the light between the AS55 and AS110 photodiodes which is not accounted for in the model. Factoring this in, along with the fact that there are 6 in-vacuum steering mirrors (assume 98% reflectivity for these), 3 in-air steering mirrors, and the window, the sensing matrix element from the model starts to be in the same ballpark as the measurement, at ~3*10^5 W/m (a quick numerical check of this is sketched after this list). So the model isn't giving completely crazy results.
  • Attachment #2 - Example of the signals at various RFPDs in response to sweeping the PRM around its resonance. To be compared with actual IFO data. Teal lines are the "I" phase, and orange lines are "Q" phase.
  • Attachment #3 - FINESSE kat file and the IPython notebook I used to make these plots. 
  • Next steps
    • More validation against measurements from the actual IFO.
    • Try and resolve differences between modeled and measured sensing matrices.
    • Get locking working with full IFO - there was a discussion on the mattermost thread about sequential/parallel locking some time ago, I need to dig that up to see what is the right way to get this going. Probably the DRMI operating point will also change, because of the complex reflectivities of the arm cavities seen by the RF sidebands (this effect is not present in the current configuration where I've made the ETMs transparent).
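The quick numerical check of the AS55_Q MICH sensing element referred to above is just a product of loss factors (the in-air mirror and window numbers are assumptions, chosen to be similar to the 98% assumed for the in-vacuum mirrors):

model = 7.8e5              # W/m, MICH element in AS55_Q from the FINESSE model
bs_split = 0.5             # 50-50 BS between the AS55 and AS110 PDs
in_vac = 0.98**6           # 6 in-vacuum steering mirrors at 98%
in_air = 0.98**3           # 3 in-air steering mirrors (assumed 98% as well)
window = 0.98              # vacuum window (assumed)
print(model * bs_split * in_vac * in_air * window)   # ~3e5 W/m, cf. the measured 2.37e5 W/m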

GV Edit: EQ pointed out that my method of taking the slope of the error signal to compute the sensing element isn't the most robust - it relies on choosing points to compute the slope that are close enough to the zero crossing and also well within the linear region of the error signal. Instead, FINESSE allows this computation to be done as we do in the real IFO - apply an excitation at a given frequency to an optic and look at the twice-demodulated output of the relevant RFPD (e.g. for the PRCL sensing element in the 1f DRMI configuration, drive PRM and demodulate REFL11 at 11MHz and at the drive frequency). Attachment #4 is the sensing matrix recomputed in this way - in this case, it produces almost identical results as the slope method, but I think the double-demod technique is better in that you don't have to worry about selecting points for computing the slope etc. 

 

  13148   Fri Jul 28 16:47:16 2017 gautamUpdateGeneralPSL StripTool flatlined

About 3.5 hours ago, all the PSL wall StripTool traces "flatlined", as happens when we had the EPICS freezes in the past - except that all these traces were flat for more than 3 hours. I checked that the c1psl slow machine responded to ping, and I could also telnet into it. I tried opening the StripTool on pianosa and all the traces were responsive. So I simply re-started the PSL StripTool on zita. All traces look responsive now.

  13150   Sat Jul 29 14:05:19 2017 gautamUpdateGeneralPSL StripTool flatlined

The PMC was unlocked when I came in ~10mins ago. The wall StripTool traces suggest it has been this way for > 8hours. I was unable to get the PMC to re-lock by using the PMC MEDM screen. The c1psl slow machine responded to ping, and I could also telnet into it. But despite burt-restoring c1psl, I could not get the PMC to lock. So I re-started c1psl by keying the crate, and then burt-restored the EPICS values again. This seems to have done the trick. Both the PMC and IMC are now locked.


Unrelated to this work: It looks like some/all of the FE models were re-started. The x3 gain on the coil outputs of the 2 ITMs and BS, which I had manually engaged when I re-aligned the IFO on Monday, was off, and in general, the IMC and IFO alignment seems much worse now than it was yesterday. I will do the re-alignment later as I'm not planning to use the IFO today.

  13151   Sat Jul 29 16:24:55 2017 jamieUpdateGeneralPSL StripTool flatlined
Quote:
Unrelated to this work: It looks like some/all of the FE models were re-started. The x3 gain on the coil outputs of the 2 ITMs and BS, which I had manually engaged when I re-aligned the IFO on Monday, was off, and in general, the IMC and IFO alignment seems much worse now than it was yesterday. I will do the re-alignment later as I'm not planning to use the IFO today.

This was me.  I restarted the front ends when I was getting the MX streams working yesterday.  I'll try to be more conscientious about logging front end restarts.

  13167   Fri Aug 4 18:25:15 2017 gautamUpdateGeneralBilinear noise coupling

[Nikhil, gautam]

We repeated the test that EricQ detailed here today. We have downloaded ~10min of data (between GPS times 11885925523 - 11885926117), and Nikhil will analyze it.

  13169   Mon Aug 7 16:00:41 2017 ranaUpdateGeneralBilinear noise coupling

These are not the angular test parameters that we're looking for:

 recall that what we want is low-frequency beam spot variation, with the injected feedback limited to a small high-frequency band.

e.g. only inject noise at 40-50 Hz, and not so loud that it can be found at 2x the injected frequency.

It should NOT be the case that the high frequency injected noise be dominating the RMS.

The coupling should be ~1e-3; some combination of beam spot mis-centering and beam spot motion.
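A minimal sketch of how such a band-limited excitation could be generated (the filter order, ripple, sample rate and amplitude below are arbitrary placeholders, not a prescription):

import numpy as np
from scipy import signal

fs = 2048                                   # Hz, assumed excitation sample rate
# elliptic bandpass confining the injection to 40-50 Hz
sos = signal.ellip(4, 1, 40, [40 / (fs / 2), 50 / (fs / 2)], btype='bandpass', output='sos')
excitation = signal.sosfilt(sos, np.random.randn(60 * fs))
excitation *= 1e-3 / np.std(excitation)     # keep it quiet enough that no 2f product shows up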

  13170   Mon Aug 7 22:50:57 2017 KojiUpdateGeneralNew wifi router for the GC network installed

I have replaced the old 11n wifi router (CISCO / Linksys) for the GC network with a new one with 11ac technology.

The new one is a tri-band wifi router. Thus it has one 2.4GHz (11n) SSID and two 5GHz (11ac) SSIDs. All of these have been set to be hidden. Just come to the 40m and find the necessary info for the connection.

Note that the user id / password for the admin tool have been changed from the default values.

  13181   Thu Aug 10 09:10:55 2017 steveUpdateGeneraldataviewer is recovering

It can now look back at 7 days of trends. There are still no vacuum channels. I can bring back the channels through the restore directory, but there is no data.

  13208   Tue Aug 15 08:22:16 2017 SteveUpdateGeneralprojector light bulb replaced

The BL-FS300C-PH-LE bulb was replaced after 2,904 hrs. It did not explode this time. The 4-month lifetime is actually pretty good, given that this is a cheap $73 bulb. The best (highest-priced) warranty available is 5 months.

 

PS: future option - bulbless laser projector

HITACHI LP-WU3500 PROJECTOR	$2,549.00	
DLP, WUXGA, 3500 LUMENS, HDMI, 20K HR LASER, 5YR WARRANTY, 1.36-2.34:1 LENS, 24/7 DUTY CYCLE
45x72 IMAGE FROM 8' TO 13'10" LENS TO SCREEN, AND AT 10' APPROX. 154FL OF BRIGHTNESS ON A 1.0 GAIN SCREEN
http://www.projectorcentral.com/pdf/projector_spec_9722.pdf

This would give you everything you are requesting, plus a lamp-less design and 5yr warranty.  Ground shipping would be free anywhere in the lower 48, and we would not charge sales tax on orders billing/shipping outside of AZ.  If you have any questions or if you would like to order... just let me know!
  13221   Wed Aug 16 20:01:03 2017 gautamUpdateGeneralSUS model ASC input weirdness

I'm not sure if this has something to do with the model restarts / new RCG, but while I was re-enabling the MC watchdogs, I noticed the RMS sensor voltage channels on ITMX hovering around ~100mV, even though local damping was on (in which configuration I would expect <1mV if everything is working normally).  I was confused by this behaviour, and after staring at the ITMX suspension screen for a while, I noticed that the inputs to the "ASCP" and "ASCY" servos were "-nan", and the outputs were 10^20 cts (see Attachment #1).

Digging a little deeper, I found that the same problem existed on ITMY, ETMX, ETMY, PRM (but not BS or SRM) - reasons unknown for now.

I have to check where this signal is coming from, but for now I just turned the "ASC Input" switch off. More investigation to be done, but in the meantime, ASS dither alignment may not be possible.

After consulting with Jamie, I have just disabled all outputs to the suspensions other than local damping loop outputs. I need to figure out how to get this configuration into the safe.snap file such that until we are sure of what is going on, the models start up in this safer configuration.

GV edit 28 Oct 0026: Seems like this problem is seen at the sites as well. I wonder if the problem is related.

  13228   Fri Aug 18 21:58:35 2017 gautamUpdateGeneralSUS model ASC input weirdness

I spent some time today trying to debug this issue.

Jamie and I had opened up the c1sus frontend to try and replace the RFM card before we realized that the problem was in the RCG code generator. During this process, we had disconnected all of the back-panel cabling to this machine (2 ethernet cables, dolphin cable, and RFM cables/fibers). I thought I may have accidentally returned the cables to the wrong positions - but all the status indicator lights indicate that everything is working as it should, and I also confirmed that the cabling is as it is in the pictures of the rack on the wiki page.

Looking at the SimuLink model diagram (see Attachment #1 for example), it looks like (at least some of) these channels are actually on the dolphin network, and not the RFM network (with which we were experiencing problems). This suggests that the problem is something deeper, although I did see nans in some of the ETMX ASC channels as well, which are piped over the RFM network. Even more puzzling is that the ASC MEDM screen (Attachment #3) and the SimuLink diagram (Attachment #2) suggest that there is an output matrix between the input signals and the output angular control signals to the suspensions. As Attachment #4 shows, the rows corresponding to ITMX PIT and YAW are zero (I confirmed using z read <matrixElement>). Attachment #3 shows that the output of all the servo banks except CARM_YAW is zero, but CARM_YAW has no matrix element going to the ITMs (also confirmed with z read <servoOutputChannel>). So 0 x 0 should be 0, but for some reason the model doesn't give this output?

GV Edit: As EricQ just pointed out to me, nan x 0 is still nan, which probably explains the whole issue. Poking a little further, it seems like this is an SDF issue - the SDF table isn't able to catch differences for this hold output channel.
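This is easy to check at a Python prompt (illustration only):

import math
print(math.nan * 0.0)   # -> nan: a held nan upstream survives a zero output matrix element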


As I was writing this elog, I noticed that, as mentioned above, the CARM_YAW output was "nan". When I restart the model (thankfully this didn't crash c1lsc!), it seems to default to this state. Opening up the filter module, I saw that the "hold output" was enabled.

Toggling that switch made the nans in all the SUS ASC channels disappear. Mysterious.

All the points above stand - CARM_YAW output shouldn't have been going anywhere as per the output matrix, but it seems to have been responsible? Seems like a bug in any case if a model restarts with a field as "nan".

Anyways the problem seems to have been resolved so I'm going to try locking and dither aligning the arms now.

Rolf mentioned that a simple update could fix several of the CDS issues we are facing (e.g. inability to open up testpoints), but he didn't seem to have any insight into this particular issue. Jamie will try and recompile all the models and then we have to see if that fixes the remaining problems.

Quote:
 

I have to check where this signal is coming from, but for now I just turned the "ASC Input" switch off. More investigation to be done, but in the meantime, ASS dither alignment may not be possible.

After consulting with Jamie, I have just disabled all outputs to the suspensions other than local damping loop outputs. I need to figure out how to get this configuration into the safe.snap file such that until we are sure of what is going on, the models start up in this safer configuration.

 

  13235   Mon Aug 21 20:11:25 2017 johannesSummaryGeneralLoss measurements plan

There are three methods we (will soon) have available to evaluate the round-trip dissipative losses in the arms that do not suffer from the ITM loss dominance:

  • DC reflection method:
    • Compare reflected light levels from [ITM only] vs [arm cavity on resonance]
  • Basler CCDs:
    • Infer large (or small) angle scatter loss with calibrated CCDs
  • Reflection ringdowns:
    • Need AS port light injection, principle is similar to DC method but better (?)

DCREFL

The DC method comparing reflectivities has been used in the past and is relatively easy to do. After the recent vacuum troubles, the first step should be to re-perform these measurements as CDS permits (this needs some ASS functionality and of course the MC to behave). It wouldn't hurt to know the parameters this depends on - i.e. the mode overlap and modulation depths - with better certainty. Maybe the SURF scripts for mode spectroscopy can be applied?
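For reference, a minimal numerical sketch of how the round-trip loss follows from the measured reflected-power ratio (the transmissions and the "measured" ratio below are placeholders, not real 40m numbers):

import numpy as np
from scipy.optimize import brentq

T_ITM, T_ETM = 0.014, 14e-6            # assumed nominal transmissions
r1, r2 = np.sqrt(1 - T_ITM), np.sqrt(1 - T_ETM)

def refl_ratio(L_rt):
    # P_refl(arm on resonance) / P_refl(ITM only), folding the round-trip loss into one factor
    a = np.sqrt(1 - L_rt)
    r_res = (r1 - r2 * a) / (1 - r1 * r2 * a)
    return r_res**2 / r1**2

measured_ratio = 0.997                 # hypothetical DC measurement
loss = brentq(lambda L: refl_ratio(L) - measured_ratio, 1e-7, 1e-3)
print("round-trip loss ~ %.0f ppm" % (loss * 1e6))   # a few tens of ppm for these numbers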

CCDs

With the new CCD cameras calibrated, we can determine pre-vent the magnitude of the large-angle scatter loss (assuming isotropic scatter) of ETMX and possibly ETMY. Can we look past ETMX/ETMY from the viewports? Then we can probably also look at the small-angle scatter of ITMX and ITMY. If not, once we open one of the chambers there's the option of installing mirrors as close as possible to the main beam path. The easiest is probably to look at ITMX, since there is plenty of space in the XEND chamber, and the camera is already installed.
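A rough sketch of the isotropic-scatter arithmetic (all numbers below are placeholders; the calibrated CCD provides the collected power in watts):

import numpy as np

P_img = 5e-9                          # W collected by the camera lens (hypothetical)
d = 1.0                               # m, optic-to-lens distance (hypothetical)
A_lens = np.pi * (0.0254 / 2)**2      # 1 inch aperture (hypothetical)
Omega = A_lens / d**2                 # solid angle subtended by the lens
P_circ = 100.0                        # W circulating in the arm (hypothetical)

P_scatter = P_img * 4 * np.pi / Omega   # isotropic-scatter assumption from above
print("scatter loss ~ %.1f ppm per bounce" % (P_scatter / P_circ * 1e6))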

ASPORT

This requires a lot of up-front work. We decided to use the spare 200mW NPRO. It will be placed on the PSL table and injected into an optical fiber, which terminates on the AS table. The beam, which is free-space again at that point, needs to be sort-of mode-matched into the SRC ("sort-of" because of the mode spectroscopy). We want to be able to phase-lock this secondary beam to the PSL with at least a couple of kHz of bandwidth, and also to completely extinguish the beam on time-scales of a few microseconds. We will likely need to purchase a few components beyond what we can salvage from other labs; I'm still going through the inventory and will know more soon (more detailed post to follow). We need to settle on the polarization we want to send in from the back.

 

Tentative Schedule (aggressive)

Week Aug 21 - Aug 27:

  • Update mode-overlap estimates
  • Obtain current DC refl estimates
  • Spatial profile of auxiliary NPRO
  • Fiber setup concept; purchasing
  • CCD software prep work

Week Aug 28 - Sep 3:

  • Re-evaluate modulation indices if necessary
  • Optical beat AS Port Auxiliary Laser (ASAL) - PSL
  • PLL setup
  • CCD large angle prep work

Week Sep 4 - Sep 10:

  • PLL CDS integration
  • Amplitude-modulation preparation
  • CCD large angles

Week Sep 11 - Sep 17:

  • Fiber-injection
  • AS table preliminary mode-matching
  • Faraday setup
  • CCD small angle prep work

Week Sep 18 - Sep 24:

  • ASAL amplitude switching
  • CCD small angles

Week Sep 25 - Oct 1:

  • AS port ringdowns

 

  13236   Mon Aug 21 21:26:41 2017 gautamSummaryGeneralLoss measurements plan

In case you want to use it, I had profiled the Lightwave NPRO sometime back, and we were even using it as the AUX X laser for a short period of time. 

As for using the AS laser for mode spectroscopy: don't we want to match the beam into the cavity as best as possible, and then use some technique to disturb the input mode (like the dental tooth scraper technique from Chris Mueller's thesis)? 

Johannes and I did an arm scan of the X arm today (arm controlled with ALS, monitoring IR transmission) - only 2 IR FSRs were scanned, but there should be sufficient information in there to extract the modulation depth and mode matching - can we use Kaustubh's/Naomi's code? The Y arm ALS needs to be touched up, so I don't have a Y arm scan yet. Note that to get a good arm scan measurement, the High Gain Thorlabs PD should be used as the transmission PD.
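To first order (same cavity buildup for carrier and sidebands, small modulation depth), the extraction amounts to peak-height ratios - a sketch with hypothetical peak heights:

import numpy as np

P_c = 1.00       # TEM00 carrier peak height (normalized, hypothetical)
P_sb = 2.5e-3    # height of one first-order 11 MHz sideband peak (hypothetical)
P_hom = 0.02     # summed height of the carrier HOM peaks (hypothetical)

m = 2 * np.sqrt(P_sb / P_c)            # P_sb/P_c ~ J1(m)^2/J0(m)^2 ~ (m/2)^2 for small m
mode_matching = P_c / (P_c + P_hom)
print("mod. depth ~ %.2f, mode matching ~ %.1f%%" % (m, 100 * mode_matching))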

Quote:
 

Week Aug 21 - Aug 27:

  • Update mode-overlap estimates
  • Obtain current DC refl estimates
  • Spatial profile of auxiliary NPRO
  • Fiber setup concept; purchasing
  • CCD software prep work

 
