  40m Log, Page 236 of 346
ID   Date   Author   Type   Category   Subject
  15668   Tue Nov 10 11:59:37 2020   gautam   Update   VAC   Stuck RV2

I've uploaded some more photos here. I believe the problem is a worn out thread where the main rotary handle attaches to the shaft that operates the valve.

This morning, I changed the valve config such that TP2 backs TP1 and that combo continues to pump on the main volume through the partially open RV2. TP3 was reconfigured to pump the annuli - initially, I backed it with the AUX drypump but since the load has decreased now, I am turning the AUX drypump off. At some point, if we want to try it, we can try pumping the main volume via the RGA line using TP2/TP3 and see if that allows us to get to a lower pressure, but for now, I think this is a suitable configuration to continue the IFO work.

There was a suggestion at the meeting that the saturation of the main volume pressure at 1 mtorr could be due to a leak - to test, I closed V1 for ~5 hours and saw the pressure increase by 1.5 mtorr, which is in line with our estimates from the past. So I think we can discount that possibility.
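As a cross-check of that estimate, the implied gas load is a two-line calculation; the main-volume figure below is an assumed round number for illustration, not a measured value:

```python
# Estimate the gas load implied by the V1-closed pressure rise.
# NOTE: V_main is an assumed placeholder volume, not the actual 40m figure.
V_main = 33_000          # main vacuum volume [liters] -- assumption
dP = 1.5e-3              # observed pressure rise [torr]
dt = 5 * 3600            # time with V1 closed [s]

rise_rate = dP / dt                  # [torr/s]
gas_load = rise_rate * V_main        # [torr*L/s], leak + outgassing combined

print(f"rise rate: {rise_rate:.2e} torr/s")
print(f"gas load : {gas_load:.2e} torr*L/s")
```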

Attachment 1: damagedThread.001.jpeg
Attachment 2: IFOstatus.png
Attachment 3: P1a_leakTest.png
  15669   Tue Nov 10 12:41:23 2020   gautam   Update   IOO   1W > IMC

Looking back through the elog, 1mtorr is the pressure at which it is deemed safe to send the full power beam into the IMC. After replacing the HR mirror in the MCREFL path with a 10% reflective BS, I just cranked the power back up. IMC is locked. With the increased exposure on the MC2T camera, lots of new scattered light has become visible.

Attachment 1: CD93A725-FB5C-4F67-BB2E-D122205114B0.jpeg
  15670   Tue Nov 10 14:30:06 2020   gautam   Update   IOO   WFS2 broken

While proceeding with the interferometer recovery, I noticed that there appeared to be no light on WFS2. I confirmed on the AP table that the beam was indeed hitting the QPD, but the DC quadrants are all returning 0. Looking back, it appears that the failure happened on Monday 26 October at ~6pm local time. For now, I hand-aligned the IMC and centered the beams on the WFS1 and MC2T QPDs - MCT is ~15000 cts and MC REFL DC is ~0.1, all consistent with the best numbers I've been able to obtain in the past. I don't think the servo will work with one sensor missing without some retuning of the output matrix.

It would appear that both the DC and RF outputs of WFS2 are affected - I dithered the MC2 optic in pitch (with the WFS loop disabled) at 3.33 Hz; the transmission and WFS1 sensors see the dither but not WFS2. It could be that I'm just not well centered on the PD, but by eye, I am, so it would appear that the problem is present in both the DC and RF signal paths. I am not going into the PD head debugging today.

Quote:

Looking back through the elog, 1mtorr is the pressure at which it is deemed safe to send the full power beam into the IMC. After replacing the HR mirror in the MCREFL path with a 10% reflective BS, I just cranked the power back up. IMC is locked. With the increased exposure on the MC2T camera, lots of new scattered light has become visible.

Attachment 1: WFS2broken.png
Attachment 2: WFS2broken_RF.png
  15672   Tue Nov 10 17:46:06 2020   gautam   Update   General   IFO recovery

Summary:

  1. Recovery was complicated by RFM failure on c1iscey - see Attachment #1. This is becoming uncomfortably frequent. As a result, the ETMY suspension wasn't being damped properly. Needed a reboot of c1iscey machine and a restart of the c1rfm model to fix.
  2. POX/POY locking was restored. Arm alignment was tuned using the dither alignment system.
  3. AS beam was centered on its CCD (I put a total of ND=1.0 filters back on the CCD). Note that the power in the AS beam is ~4x what it was given we have removed the in-vacuum pickoff to the OMC.
  4. Green beams were aligned to the arm cavities. See Attachment #2. Both green cameras were adjusted on the PSL table to have the beam be ~centered on them.
  5. ALS noise is far too high for locking, needs debugging. See Attachment #3.
  6. AS beam was aligned onto the AS55 photodiode. With the PRM aligned, the REFL beam was centered on the various REFL photodiodes. The PRMI (resonant carrier) could be locked, see Attachment #4.

I want to test out an AS port WFS now that I have all the parts in hand - I guess the Michelson / PRMI will suffice until I make the ALS noise good again, and anyways, there is much assembly work to be done. Overnight, I'm repeating the suspension eigenmode measurement.

Attachment 1: RFMerrs.png
Attachment 2: IFOrecovery.png
Attachment 3: ALS_ool.pdf
Attachment 4: PRMIcarr.png
  15673   Thu Nov 12 14:26:35 2020   gautam   Update   General   ETMY suspension eigenmodes

The results from the ringdown are attached - in summary:

  • The peak positions have shifted <50 mHz from their in-air locations, so that's good I guess
  • The fitted Qs of the POS and SIDE eigenmodes are ~500, but those for PIT and YAW are only ~200
  • The fitting might be sub-optimal due to spurious sideband lobes around the peaks themselves - I didn't go too deep into investigating this, especially since the damping seems to work okay for now
  • There is up to a factor of 5 variation in the response at the eigenfrequencies in the various sensors - this seems rather large
  • The condition number of the matrix that would diagonalize the sensing is a scarcely believable 240, but this is unsurprising given the large variation in the response in the different sensors. Unclear what the implications are - I'm not messing with the input matrix for now
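As a quick sanity check on the fitted Qs, the corresponding 1/e amplitude ringdown times follow from tau = Q/(pi*f0); the eigenfrequency below is a placeholder for illustration, not the fitted value:

```python
import numpy as np

f0 = 0.98          # eigenmode frequency [Hz] -- placeholder, not the fitted value
for Q in (200, 500):
    tau = Q / (np.pi * f0)   # 1/e amplitude ringdown time [s]
    print(f"Q = {Q}: tau = {tau:.0f} s")
```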
Attachment 1: ETMY.tar.bz2
  15674   Thu Nov 12 14:31:27 2020   gautam   Update   Electronics   SR560s in need of repair/battery replacement

I had to go through five SR560s in the lab yesterday evening to find one that had the expected 4 nV/rtHz input noise and worked on battery power. To confirm that the batteries were charged, I left 4 of them plugged in overnight. Today, I confirmed that the little indicator light on the back is in "Maintain" and not "Charge". However, when I unplug the power cord, they immediately turn off.

One of the units has a large DC output offset voltage even when the input is terminated (though it is not present with the input itself set to "GND" rather than DC/AC). Do we want to send this in for repair? Can we replace the batteries ourselves?

Attachment 1: IMG_8947.jpg
  15675   Thu Nov 12 14:55:35 2020   gautam   Update   Electronics   More systematic noise characterization

Summary:

I now think the excess noise in this circuit could be coming from the KEPCO switching power supply (in fact, it turns out the supplies are linear, and spec'd for a voltage ripple at the level of <0.002% of the output - this is pretty good I think, hard to find much better).

Details:

All component references are w.r.t. the schematic. For this test, I decided to stuff a fresh channel on the board, with new components, just to rule out some funky behavior of the channel I had already stuffed. I decoupled the HV amplifier stage and the Acromag DAC noise filtering stages by leaving R3 open. Then, I shorted the non-inverting input of the PA95 (i.e. TP3) to GND, with a jumper cable. Then I measured the noise at TP5, using the AC coupling Pomona box (although in principle there is no need for this, as the DC voltage should be zero, but I opted to use it just in case). The characteristic bump in the spectra at ~100 Hz - 1 kHz was still evident, see the bottom row of Attachment #1. The expected voltage noise in this configuration, according to my SPICE model, is ~10 nV/rtHz, see the analysis note.

As a second test, I decided to measure the voltage noise of the power supply - there isn't a convenient monitor point on the circuit to directly probe the +/- HV supply rails (I didn't want any exposed HV conductors on the PCB) - so I measured the voltage noise at the 3-pin connector supplying power to the 2U chassis (i.e. the circuit itself was disconnected for this measurement, I'm measuring the noise of the supply itself). The output is supposedly differential - so I used the SR785 input "Float" mode, and used the Pomona AC coupling box once again to block the large DC voltage and avoid damage to the SR785. The results are summarized in the top row of Attachment #1.

The shape of the spectra suggests to me that the power supply noise is polluting the output noise - Koji suggested measuring the coherence between the channels; I'll try and do this in a safe way, but I'm hesitant to use hacky clips for the High Voltage. The PA95 datasheet says nothing about its PSRR, and it seems the SPICE model doesn't include it either. It would seem that a PSRR of <60 dB at 100 Hz would explain the excess noise seen in the output. Typically, for other Op-Amps, the PSRR falls off as 1/f. The CMRR (which is distinct from the PSRR) is spec'd at 98 dB at DC, and for other OpAmps, I've seen that the CMRR is typically higher than the PSRR. I'm trying to make a case here that it's not unreasonable if the PA95 has a PSRR <= 60 dB @ 100 Hz.

So what are the possible coupling mechanisms and how can we mitigate it?

  1. Use better power supply - I'm not sure how this spec of 10-50 uV/rtHz from the power supply lines up in the general scheme of things, is this already very good? Or can a linear power supply deliver better performance? Assuming the PSRR at 100 Hz is 60 dB and falls off as 1/f, we'd need a supply that is ~10x quieter at all frequencies if this is indeed the mechanism.
  2. Better grounding? To deliver the bipolar voltage rails, I used two unipolar supplies. The outputs are supposedly floating, so I connected the "-" input of the +300 V supply to the "+" input of the -300 V supply. I think this is the right thing to do, but maybe this is somehow polluting the measurement?
  3. Additional bypass capacitors? I use 0.1 uF, 700V DC ceramic capacitors as bypass capacitors close to the leads of the PA95, as is recommended in the datasheet. Can adding a 10uF capacitor in parallel provide better filtering? I'm not sure if one with compatible footprint and voltage rating is readily available, I'll look around.

What do the analog electronics experts think? I may be completely off the rails and imagining things here.
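To put rough numbers on the PSRR mechanism proposed above (the 60 dB figure is the assumed worst case from the text, not a datasheet spec):

```python
import numpy as np

psrr_dB = 60.0                           # assumed PA95 PSRR at 100 Hz (not a spec)
v_supply = np.array([10e-6, 50e-6])      # measured supply noise range [V/rtHz]

v_out = v_supply / 10**(psrr_dB / 20)    # supply noise leaking to the output
print(v_out * 1e9)   # -> roughly 10-50 nV/rtHz, at or above the ~10 nV/rtHz SPICE floor
```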


Update 2130: I measured the coherence between the positive supply rail and the output, under the same conditions (i.e. HV stage isolated, input shorted to ground). See Attachment #2 - the coherence does mirror the "bump" seen in the output voltage noise - but the coherence is only 0.1 even with 100 averages, suggesting the coupling is not directly linear - anyways, I think it's worth it to try adding some extra decoupling, I'm sourcing the HV 10uF capacitors now.

Attachment 1: powerSupplyNoise.pdf
Attachment 2: coherence.pdf
  15678   Mon Nov 16 16:00:19 2020   gautam   Update   Equipment loan   LB1005 --> Cryo lab

Shruti picked it up @4pm.

  15681   Wed Nov 18 17:51:50 2020   gautam   Update   VAC   Agilent pressure gauge controller delivered

It is stored along with the cables that arrived a few weeks ago, awaiting the gauges which are now expected next week sometime.

  15682   Wed Nov 18 22:49:06 2020   gautam   Update   ASC   Some thoughts about AS WFS electronics

Where do we want to install the interface and readout electronics for the AS port WFS? Options are:

  • 1Y1 / 1Y3  (i.e. adjacent to the LSC rack) - advantage is that 55 MHz RF signal is readily available for demodulation. But space is limited (1Y2, where the RF signal is, is too full so at the very least, we'd have to run a short cable to an adjacent rack), and we'd have a whole bunch of IPC channels between c1lsc and c1ioo models.
  • 1X1/1X2. There's much more space and we can directly digitize into the c1ioo model, but we'd have to route the 55 MHz signal back to this rack (kind of lame since the signal generation is happening here). I'm leaning towards this option though - thinking we can just open up the freq generation box and take a pickoff of the 55 MHz signal...

There isn't much difference in terms of cable length that will be required - I believe the AS WFS is going to go on the AP table even in the new optical layout and not on the ITMY in-air oplev table? 

The project requires a large number of new electronics modules. Here is a short update and some questions I had:

  1. WFS head and housing. Need to finalize the RF transimpedance gain (i.e. the LC resonant part), and also decide which notches we want to stuff. Rich's advice was to not stuff any more than is absolutely necessary, so perhaps we can have at first just the 2f notch and add others as we deem necessary once we look at the spectrum with the interferometer locked. Need to also figure out a neat connector solution to get the signals from the SMP connectors on the circuit board to the housing - I'm thinking of using Front-Panel-Express to design a little patch board that we can use for this purpose, I'll post a more detailed note about the design once I have it.
  2. WFS interface board + soft-start board (the latter provides a smooth ramp up of the PD bias voltage). These go in a chassis, the assembly is almost complete, just waiting on the soft-start board from JLCPCB. One question is how to power this board - Sorensens or linear? If we choose to install in 1X1/1X2, I guess Sorensen is the only option, unless we have a couple of linear power supplies lying around spare.
  3. Demod board (quad chassis). Assembly is almost complete, need to install the 4 way RF splitter, some insulating shoulder washers (to ensure the RF ground is isolated from the chassis), and better nuts for the D-sub connectors. A related question is how we want to supply the electrical LO signal for demodulation. The "nominal" level each demod board wants is 10 dBm. This signal will be sourced inside the chassis from a 4-way RF splitter (~7 dB insertion loss). So we'd need 17 dBm going into the splitter. This is a little too high for a compact amplifier like the ZHL-500-HLN to drive (1 dB compression point is 16 dBm), and the signal level available at the LSC rack is only ~2 dBm. So do we want a beefy amplifier outside the chassis amplifying the signal to this level? Or do we want to use the ZHL-500-HLN, and amplify the signal to, say, 13 dBm, and drive each board with ~6 dBm LO? The Peregrine mixers on these boards (PE4140) are supposed to be pretty forgiving in terms of the LO level they want... In either case, I think we should avoid having an amplifier also inside the chassis; it is rather full in there with 4 demod boards, regulator board, all the cabling, and an RF splitter. It may be that heat dissipation becomes an issue if we stick an RF amplifier in there too...
  4. Whitening chassis. Waiting for front panels to arrive, PCBs and interface board are in hand, stuffed and ready to go. A question here is how we want to control the whitening - it's going to be rather difficult to have fast switchable whitening. I think we can just fix the whitening state. Another option would be to control the whitening using Acromag BIO channels.
  5. AI chassis - will go between whitening and ADC.
  6. Large number of cables to interconnect all the above pieces. I've asked Chub to order the usual "Deluxe" shielded Dsub cables, and we will get some long SMA-SMA cables to transmit the RF signals from head to demod board from Pasternack (or similar), do we need to use Heliax or the Times Microwave alternative for this purpose? What about the LO signal? Do we want to use any special cable to route it from the LSC rack to the IOO rack, if we end up going that way? 

Approximately half of the assembly of the various electronics is now complete. The basic electrical testing of the interface chassis and demod chassis are also done (i.e. they get power, the LEDs light up, and are stable for a few minutes). Detailed noise and TF characterization will have to be done.
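The LO power budget discussed in item 3 can be checked with simple dB arithmetic (all numbers are taken from the text above; nothing here is a measurement):

```python
lo_per_board_dBm = 10.0      # nominal LO each demod board wants
splitter_loss_dB = 7.0       # 4-way splitter insertion loss
p1dB_zhl500hln = 16.0        # ZHL-500-HLN 1 dB compression point

required_dBm = lo_per_board_dBm + splitter_loss_dB
print(required_dBm)                     # 17.0 dBm needed at the splitter input
print(required_dBm <= p1dB_zhl500hln)   # False: above the amplifier's P1dB

# The alternative from the text: drive the splitter at 13 dBm -> ~6 dBm per board
print(13.0 - splitter_loss_dB)          # 6.0
```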

  15683   Sun Nov 22 21:09:37 2020   gautam   Update   ASC   Planned mods for WFS head

Attachment #1 - Proposed mods for 40m RF freqs. 

  • I followed Rich's suggestion of choosing an inductor that has Z~100 ohms at the frequency of interest.
  • The capacitor is then chosen to have the correct resonant frequency.
  • Voltronics trim caps are used for fine tuning the resonances. 2 variants are used, one with a range of 4-20 pF, and a Q of 500 per spec, while the other has a range of 8-40 pF, and a Q of 200 per spec.
  • In the table, the first capacitance is the fixed one, and the second is the variable one. We're not close to the rail for the variable caps.
  • For the first trials, I think we can try by not populating all of the notches - just the 2f notch. We can then add notches if deemed necessary. Probably these notches are more important for a REFL/POP port WFS.
  • One thing I noticed is that the aLIGO WFS use ceramic capacitors for the LC reactances. I haven't checked if there is any penalty we are paying in terms of Q of the capacitor. Anyways, I'm not going to redesign the PCB, and maybe ceramic is the only option in the 0805 package size?
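The inductor/capacitor selection rule in the first two bullets amounts to a two-line calculation; for the 55 MHz resonance, roughly 290 nH and 29 pF come out (illustrative only, not the actual stuffed values):

```python
import numpy as np

f = 55e6                          # resonant frequency of interest [Hz]
Z = 100.0                         # target inductor impedance at f, per Rich's rule

L = Z / (2 * np.pi * f)           # inductance with |Z_L| = 100 ohm at f
C = 1 / ((2 * np.pi * f)**2 * L)  # capacitance that resonates with L at f

print(f"L = {L*1e9:.0f} nH, C = {C*1e12:.0f} pF")
```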

Attachment #2 - Modelled TFs for the case where all the notches are stuffed, and where only the 2f notch is stuffed.

  • The model uses realistic composite models for the inductors from coilcraft, but the capacitors are idealized parts.
  • I also found the library part for LMH6624, so this should be a bit closer to the actual circuit than Rich's models which subbed in the MAX4107 in place.
  • The dashed vertical lines indicate some frequencies of interest.
  • Approx 1 kohm transimpedance is realized at 55 MHz. I don't have the W/rad number for the sensitivity at the AS port, but my guess is this will be just fine.
  • If the 44 MHz and 66 MHz notches are stuffed, then there is some interaction with the 55 MHz notch, which lowers the transimpedance gain somewhat. So if we decide to stuff those notches, we should do a more careful investigation into whether this is problematic.

Attachment #3 - Modelled noise for the case where all the notches are stuffed, and where only the 2f notch is stuffed.

  • Initially, I found the (modelled) noise level to be rather higher than expected. It persisted despite making the resistors in the model noiseless. Turns out there is some leakage from the "Test Input" path. Some documents in the DCC suggest that there should be an "RF Relay" that allows one to isolate this path, but afaik, the aLIGO WFS does not have this feature. So maybe what we should do is to remove C9 once we're done tuning the resonances. Better yet, just tune the resonance with the Jenne laser and not this current-injection path.
  • Horizontal dashed lines indicate shot noise for the indicated DC photocurrent levels. It is unlikely we will have even 1 mW of light on a single quadrant at the AS port, so the AS port WFS will not be shot noise limited. But I think that's okay for initial trials.
  • The noise level of ~20 pA/rtHz input referred is in agreement what I would expect using Eq 3 of the LMH6624 datasheet. The preamp has a gain of 10, so the source impedance seen by it is ~100 ohms (since the overall gain is 1kohm). The corresponding noise level per Eq 3 is ~2 nV/rtHz, or 20 pA/rtHz current noise referred to the photocurrent 👍 . 
  • The LMH6624 datasheet claims that the OpAmp is stable for CLG >= 10. For reasons that aren't obvious to me, Koji states here that the CLG needs to be even higher, 15-20 for stability. Do the aLIGO WFS see some instability? Should I raise R14 to 900 ohms?

Any other red flags anyone sees before I finish stuffing the board?
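The input-referred noise bookkeeping in the bullets above can be re-derived in a few lines; the responsivity is an assumed value for illustration, not a measured one:

```python
import numpy as np

e_n = 2e-9            # preamp input voltage noise [V/rtHz], per LMH6624 Eq 3
Z_src = 100.0         # source impedance seen by preamp (1 kohm overall / gain of 10)
i_amp = e_n / Z_src   # amplifier noise referred to photocurrent
print(f"amp noise : {i_amp*1e12:.0f} pA/rtHz")

q = 1.602e-19         # electron charge [C]
R = 0.7               # assumed InGaAs responsivity at 1064 nm [A/W] -- placeholder
I_dc = R * 1e-3       # photocurrent for 1 mW on a quadrant
i_shot = np.sqrt(2 * q * I_dc)
print(f"shot noise: {i_shot*1e12:.1f} pA/rtHz")  # below the amp noise, consistent
                                                 # with not being shot noise limited
```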

Quote:

WFS head and housing. Need to finalize the RF transimpedance gain (i.e. the LC resonant part), and also decide which notches we want to stuff.

Attachment 1: aLIGO_wfs_v5_40m.pdf
Attachment 2: TFs.pdf
Attachment 3: noise.pdf
  15684   Mon Nov 23 12:25:14 2020   gautam   Update   BHD   BHD MMT Optics delivered

Optics --> Cabinet at south end (Attachment #1)

Scanned datasheets--> wiki. It would be good if someone can check the specs against what was ordered.

Attachment 1: IMG_8965.HEIC
  15686   Mon Nov 23 16:33:10 2020   gautam   Update   VAC   More vacuum deliveries

Five Agilent pressure gauges were delivered to the 40m. They are stored with the controller and cables in the office area. This completes the inventory for the gauge replacement - we have all the ordered parts in hand (though not necessarily all the adaptor flanges etc.). I'll see if I can find some cabinet space in the VEA to store these, the clutter is getting out of hand again...
 

In addition, the spare gate valve from LHO was also delivered today to the 40m. It is stored at EX with the other spare valves.

Quote:

It is stored along with the cables that arrived a few weeks ago, awaiting the gauges which are now expected next week sometime.

  15688   Tue Nov 24 16:51:29 2020   gautam   Update   PonderSqueeze   Ponderomotive squeezing in aLIGO

Summary:

On the call last week, I claimed that there isn't much hope of directly measuring Ponderomotive Squeezing in aLIGO without some significant configurational changes. Here, I attempt to quantify this statement a bit, and explicitly state what I mean by "significant configurational changes".

Optomechanical coupling:

The I/O relations will generally look something like:

\begin{bmatrix} b_1\\ b_2 \end{bmatrix} = \begin{bmatrix} C_{11} & C_{12}\\ C_{21} & C_{22} \end{bmatrix} \begin{bmatrix} a_{1}\\ a_2 \end{bmatrix} + \begin{bmatrix} D_1\\ D_2 \end{bmatrix} \frac{h}{h_{\mathrm{SQL}}}.

The magnitudes of the matrix elements C_{12} and C_{21} (i.e. phase-to-amplitude and amplitude-to-phase coupling coefficients) will encode the strength of the Ponderomotive squeezing.

Readout:

For the initial study, let's assume DC readout (since there isn't a homodyne readout yet even in Advanced LIGO). This amounts to setting \zeta = \phi in the I/O relations, where the former angle is the "homodyne phase" and the latter is the "SRC detuning". For DC readout, the LO quadrature is fixed relative to the signal - for example, in the usual RSE operation, \zeta = \phi = \frac{\pi}{2}. So the quadrature we will read out will be purely b_1 (or nearly so, for small detunings around RSE operation). The displacement noises will couple in via the D_1 matrix element. Attachment #1 and Attachment #2 show the off-diagonal elements of the "C" matrix for detunings of the SRC near RSE and SR operation respectively. You can see that the optomechanical coupling decays pretty rapidly above ~40 Hz.
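To make the readout step concrete, here is a minimal sketch of how the C matrix maps unit vacuum in (a_1, a_2) into the noise of the measured quadrature; the quadrature convention and the lossless free-mass form of C (with a made-up Kimble factor K) are my assumptions, not the configuration modelled here:

```python
import numpy as np

def quad_noise_power(C, zeta):
    """Vacuum noise power in b_zeta = b1*cos(zeta) + b2*sin(zeta),
    assuming uncorrelated unit vacuum entering a1 and a2 (convention assumed)."""
    v = np.array([np.cos(zeta), np.sin(zeta)])
    coeffs = v @ C                      # coefficients multiplying (a1, a2)
    return float(np.sum(np.abs(coeffs) ** 2))

# Lossless free-mass example: only the amplitude->phase element is nonzero
K = 2.0                                 # made-up Kimble factor at some frequency
C = np.array([[1.0, 0.0], [-K, 1.0]])
print(quad_noise_power(C, np.pi / 2))   # phase quadrature: ~ 1 + K**2
```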

SRC detuning:

In this particular case, there is no benefit to detuning the SRC, because we are assuming the homodyne angle is fixed, which is not an unreasonable assumption as the quadrature of the LO light is fixed relative to the signal in DC readout (not sure what the residual fluctuation in this quantity is). But presumably it is at the mrad level, so the pollution due to the orthogonal anti-squeezed quadrature can be ignored for a first pass I think. I also assume ~10 degrees of detuning is possible with the Finesse ~15 SRC, as the linewidth is ~12 degrees.

Noise budget:

To see how this would look in an actual measurement, I took the data from Lee's ponderomotive squeezing paper, as an estimate for the classical noises, and plotted the quantum noise models for a few representative SRC detunings near RSE operation - see Attachment #3. The curves labelled for various phis are the quantum noise models for those SRC detunings, assuming DC readout. I fudged the power into the IFO to make my modelled quantum noise curve at RSE line up with the high frequency part of the "Measured DARM" curve. To measure Ponderomotive Squeezing unambiguously, we need the quantum noise curve to "dip" as is seen around 40 Hz for an SRC tuning of 80 degrees, and that to be the dominant noise source. Evidently, this is not the case.

The case for balanced homodyne readout:

I haven't analyzed it in detail yet - but it may be possible that if we can access other quadratures, we might benefit from rotating away from the DARM quadrature - the strength of the optomechanical coupling would decrease, as demonstrated in Attachments #1 and #2, but the coupling of classical noise would be reduced as well, so we may be able to win overall. I'll briefly investigate whether a robust measurement can be made at the site once the BHD is implemented.

Attachment 1: QN_heatmap_RSE.pdf
Attachment 2: QN_heatmap_SR.pdf
Attachment 3: noiseBudget.pdf
  15689   Wed Nov 25 18:18:41 2020   gautam   Update   ASC   Planned mods for WFS head

I am confused by the discussion during the call today. I revisited Hartmut's paper - the circuit in Fig 6 is essentially what I am calling "only 2f_2 notch stuffed" in my previous elog. Qualitatively, the plot I presented in Attachment #2 of the preceding elog in this thread shows the expected behavior as in Fig 8 of the paper - the impedance seen by the photodiode is indeed lower. In Attachment #1, I show the comparison - the "V(anode)/I(I1)" curve is analogous to the "PD anode" curve in Hartmut's paper, and the "V(vout)/I(I1)" curve is analogous to the "1f-out" curve. I also plot the sensitivity analysis (Attachment #2), by varying the photodiode junction capacitance between 100 pF and 200 pF (both values inclusive) in 20 pF steps. There is some variation at 55 MHz, but it is unlikely that the capacitance will change so much during normal operation?

I understand the motivation behind stuffing the other notches, to reduce intermodulation effects. But the impression I got from the call was that somehow, the model I presented was wrong. Can someone help me identify the mistake?

I didn't bother to export the LTspice data and make a matplotlib plot for this quick analysis, so pardon the poor presentation. The colors run from green=100pF to grey=200pF.

Attachment 1: anodeVsOutput.png
Attachment 2: sensitivity.png
  15690   Wed Nov 25 18:30:23 2020   gautam   Update   ASC   Some thoughts about AS WFS electronics

An 8 channel whitening chassis was prepared and tested. I measured:

  1. TF from input to output - there are 7 switchable stages (3 dB, 6 dB, 12 dB and 24 dB flat whitening gain, and 3 stages of 15:150 Hz z:p whitening). I enabled one at a time and measured the TF. 
  2. Noise with input terminated.

In summary,

  1. All the TFs look good (I will post the plots later), except that the 3rd stage of whitening on both boards doesn't show the expected transfer function. The fact that it's there on both boards makes me suspect that the switching isn't happening correctly (I'm using a little breakout board). I'm inclined to not debug this because it's unlikely we will ever use 3 stages of 15:150 whitening for the AS WFS.
  2. The noise measurement displayed huge (x1000 above the surrounding broadband noise floor) 60 Hz harmonics out to several kHz. My hypothesis is that this has to do with some bad grounding. I found that the circuit ground is shorted to the chassis via the shell of the 9pin and 15pin Dsub connectors (but the two D37 connector shields are isolated). This seems very weird, idk what to make of this. Is this expected? Looking at the schematic, it would appear that the shields of the connectors are shorted to ground, which seems like a bad idea. afaik, we are using the same connectors as on the chassis at the sites - is this a problem there too? Any thoughts?
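For reference when checking the measured TFs, the magnitude of a single 15:150 Hz z:p whitening stage (unity DC gain assumed) is:

```python
import numpy as np

def whitening_mag(f, f_zero=15.0, f_pole=150.0):
    """|H(f)| of one z:p = 15:150 Hz whitening stage, unity gain at DC."""
    s = 2j * np.pi * f
    H = (1 + s / (2 * np.pi * f_zero)) / (1 + s / (2 * np.pi * f_pole))
    return abs(H)

print(whitening_mag(1.0))    # ~1 well below the zero
print(whitening_mag(1e4))    # ~10 well above the pole (one decade of gain)
```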
Quote:

Whitening chassis. Waiting for front panels to arrive, PCBs and interface board are in hand, stuffed and ready to go. A question here is how we want to control the whitening - it's going to be rather difficult to have fast switchable whitening. I think we can just fix the whitening state. Another option would be to control the whitening using Acromag BIO channels.

  15694   Wed Dec 2 15:27:06 2020   gautam   Summary   Computer Scripts / Programs   TC200 python driver

FYI, there is this. Seems pretty well maintained, and so might be more useful in the long run. The available catalog of instruments is quite impressive - TC200 temp controller and SRS345 func gen are included and are things we use in the lab. maybe you can make a pull request to add MDT694B (there is some nice API already built I think). We should also put our netgpibdata stuff and the vacuum gauge control (basically everything that isn't rtcds) on there (unless there is some intellectual property rights issues that the Caltech lawyers have to sort out).

Quote:

Given the similarities between the MDT694B (single channel piezo controller) and TC200 (temperature controller) serial interfaces, I added the pyserial driver here

*Warning* this first version of the driver remains untested

  15695   Wed Dec 2 17:54:03 2020   gautam   Update   CDS   FE reboot

As discussed at the meeting, I commenced the recovery of the CDS status at 1750 local time.

  • Started by attempting to just soft-restart the c1rfm model and see if that fixes the issue. It didn't and what's more, took down the c1sus machine.
  • So hard reboots of the vertex machines was required. c1iscey also crashed. I was able to keep the EX machine up, but I soft-stopped all the RT models on it.
  • All systems were recovered by 1815. For anyone checking, the DC light on the c1oaf model is red - this is a "known" issue and requires a model restart, but I don't want to get into that now and it doesn't disrupt normal operation.

Single arm POX/POY locking was checked, but not much more. Our IMC WFS are still out of service so I hand aligned the IMC a bit, IMC REFL DC went from ~0.3 to ~0.12, which is the usual nominal level.

  15696   Wed Dec 2 18:35:31 2020   gautam   Update   DetChar   Summary page revival

The summary pages were in a sad state of disrepair - the daily jobs haven't been running for > 1 month. I only noticed today because Jordan wanted to look at some vacuum trends and I thought summary pages is nice for long term lookback. I rebooted it just now, seems to be running. @Tega, maybe you want to set up some kind of scripted health check that also sends an alert.

  15697   Wed Dec 2 23:07:19 2020   gautam   Update   ASC   Electrical LO signal for AS WFS

I'm thinking of making some modifications to the RF distribution box in 1X2, so as to have an extra 55 MHz pickoff. Koji already proposed some improvements to the layout in 2015. I've marked up his "Possible Improvement" page of the document in Attachment #1, with my proposed modifications. I believe it will be possible to get 15-16 dBm of signal into a 4 way RF splitter in the quad demod chassis. With the insertion loss of the splitter, we can have 9-10 dBm of LO reaching each demod board, which will then be boosted to +20 dBm by the Teledyne on board. The PE4140 mixer claims to require only -7 dBm of LO signal. So we have quite a bit of headroom here - as long as we limit the RF signal to 0 dBm (=0.5 Vpp from the LMH6431 opamp at 55 MHz; we shouldn't be having a much larger signal anyways), we should be just fine with 15 dBm of LO power (which is what we will have after the division into the I and Q paths, and nominal insertion losses in the transmission path). These numbers may be slight overestimates given the possible degradation of the RF amps over the last 10 years, but shouldn't be a show-stopper.

Do the RF electronics experts agree with my assessment? If so, I will start working on these mods tomorrow. Technically, the splitter can be added outside the box, but it may be neater if we package it inside the box. 

Attachment 1: RF_Frequency_Source.pdf
  15698   Thu Dec 3 10:33:00 2020   gautam   Update   VAC   TrippLite UPS delivered

The latest greatest UPS has been delivered. I will move it to near the vacuum rack in its packaging for storage. It weighs >100lbs so care will have to be taken when installing - can the rack even support this?

Attachment 1: DFDD4F39-3F8A-439D-888D-7C0CE2E030CF.jpeg
DFDD4F39-3F8A-439D-888D-7C0CE2E030CF.jpeg
  15699   Thu Dec 3 10:46:39 2020 gautamUpdateElectronicsDC power strip requirements

Since we will have several new 1U / 2U aLIGO style electronics chassis installed in the racks, it is desirable to have a more compact power distribution solution than the fusible terminal blocks we use currently.

  • The power strips come in 2 varieties, 18 V and 24 V. The difference is in the Dsub connector that is used - the 18 V variant has 3 pins / 3 sockets, while the 24V version uses a hybrid of 2 pins / 1 socket (and the mirror on the mating connector).
  • Each strip can accommodate 24 individual chassis. It is unlikely that we will have >24 chassis in any collection of racks, so per area (e.g. EX/EY/IOO/SUS), one each of the 18V and 24V strips should be sufficient. We can even migrate our Acromag chassis to be powered via these strips.
  • Details about the power strip may be found here.

I did a quick walkaround of the lab and the electronics rack today. I estimate that we will need 5 units of the 24 V and 5 units of the 18 V power strips. Each end will need 1 each of 18 V and 24 V strips. The 1Y1/1Y2/1Y3 (LSC/OMC/BHD sus) area will be served by 1 each 18 V and 24 V. The 1X1/1X2 (IOO) area will be served by 1 each 18 V and 24 V. The 1X5/1X6 (SUS Shadow sensor / Coil driver) area will be served by 1 each of 18 V and 24 V.  So I think we should get 7 pcs of each to have 2 spares.

Most of the chassis which will be installed in large numbers (AA, AI, whitening) support 24V DC input. A few units, like the WFS interface head, OMC driver, OMC QPD interface, require 18V. It is not so clear what the input voltage for the Satellite box and Coil Drivers should be. For the former, an unregulated tap-off of the supply voltage is used to power the LT1021 reference and a transistor that is used to generate the LED drive current for the OSEMs. For the latter, the OPA544 high current opamp used to drive the coil current has its supply rails powered by, again, an unregulated tap-off of the supply voltage. It doesn't seem like a great idea to drive any ICs with the unregulated switching supply voltage from a noise point of view, particularly given the recent experience with the HV coil driver testing and the PSRR, but I think it's a bit late in the game to do anything about this. The datasheet specs ~50 dB of PSRR on the negative rail, but we have a couple of decoupling caps close to the IC, and this IC is itself in a feedback loop with the low noise AD8671 IC, so maybe this won't be much of an issue.

For the purposes of this discussion, I think both Satellite Amp and Coil Driver chassis can be driven with +/- 24 V DC.


On a side note - after the upgrade will the "Satellite Amplifiers" be in the racks, and not close to the flange as they currently are? Or are we gonna have some mini racks next to the chambers? Not sure what the config is at the sites, and if the circuits are designed to drive long cables.

  15704   Thu Dec 3 20:38:46 2020 gautamUpdateASCElectrical LO signal for AS WFS

I removed the Frequency Generation box from the 1X2 rack. For the time being, the PSL shutter is closed, since none of the cavities can be locked without the RF modulation source anyways.

Prior to removal, I did the following:

  1. Measured powers at each port on the front panel 
    • Gigatronix power meter was used, which has a maximum power rating of 20dBm, so for the EOM drive outputs which we operate closer to 25-27 dBm, I used a 20 dBm coupler to make the measurement.
    • Attachment #1 summarizes my findings - there doesn't seem to be anything majorly wrong, except that for the 11 MHz EOM drive channel, the "7" setting on the variable attenuator doesn't seem to work. 
    • We can probably get a replacement from MiniCircuits, but since we operate at 0dBm variable attenuation nominally, maybe we don't need to futz around with this.
  2. Measured the relative phasing between the 11 MHz and 55 MHz signals using an oscilloscope.
    • I measured the relative phase for the EOM drive channels, and also the demod channels.
    • The scope can accept a maximum of 5V RMS signal with 50ohm input impedance. So once again, I couldn't make a direct measurement at the nominal setting for the EOM drive channel. Instead, I used the variable attenuator to set the signal amplitude to ~2V RMS. 
    • I will upload the time-domain plots later. But we now have a record of the relative phasing that we can try and reproduce after making modifications. FWIW, my measured phase difference of 139 degrees is reasonably consistent with Koji's inferred from the modulation spectrum.

One thing I noticed was that we're using very stiff coax cabling (RG405) inside this box. Do we need to stick with this option? Or can we use the more flexible RG316? I guess RG405 is lower loss, so it's better. I can't actually find any measurement of the shielding performance in my quick google searching, but I think the claim on the call yesterday was that RG405, with its solder-soaked braid, offers superior shielding.

Quote:

Before doing any modification you should check how much the distributed powers are at the ports.
Also your modification will change the relative phase between 11MHz and 55MHz.
Can you characterize how much phase difference you have between them, maybe using the modulation of the main marconi? And you might want to adjust it to keep the previous value (or any new value) after the modification by adding a cable inside?

Attachment 1: RF_Frequency_Source.pdf
RF_Frequency_Source.pdf
Attachment 2: demodPath.pdf
demodPath.pdf
Attachment 3: EOMpath.pdf
EOMpath.pdf
  15706   Thu Dec 3 21:44:49 2020 gautamUpdateASCElectrical LO signal for AS WFS

I'm open to either approach. If the full replacement requires a lot of machining, maybe I will stick to just the 55 MHz line. But if only a couple of new holes are required, it might be advantageous to do the revamp while we have the box out? What do you think?

BTW, now that I look more closely at the RF chain, I have several questions:

  1. The 1 dB compression power of the ZHL-2 amplifiers is ~29 dBm, and we are driving it at that level. Is this okay? I thought we always want to be several dB away from the 1 dB compression point?
  2. Why do we have an attenuator between the Marconi input and the first ZHL-2 amplifier? Can't we just set the Marconi to output 8 or 9 dBm?
  3. The Wenzel frequency multiplier is rated to have 13 dBm input and 20 dBm output. We operate it with 12 dBm input and 19 dBm output. Why throw away 1 dB?

I guess it is feasible to have +17 dBm of 55 MHz signal to plug into the Quad Demod chassis - e.g. drive the 55 MHz input with 20 dBm, pick off 3 dB to the front panel for ASC. Then we can even have several "spare" 55 MHz outputs and still satisfy the 9 dBm input that the ZHL-2 in the 55 MHz chain wants (though again, isn't this dangerously close to the 1 dB compression point?). The design doc claims to have done some Optickle modeling, so I guess there isn't really any issue?
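For reference when weighing these compression-point questions, dBm levels convert to 50 ohm voltages as below (a generic conversion, not a measurement of this chain):

```python
import math

def dbm_to_vpp(p_dbm, r_ohm=50.0):
    """Peak-to-peak voltage of a sine at p_dbm dissipated in r_ohm (sketch)."""
    p_w = 10 ** (p_dbm / 10) / 1000        # dBm -> watts
    v_rms = math.sqrt(p_w * r_ohm)         # RMS voltage across the load
    return 2 * math.sqrt(2) * v_rms        # sine Vpp = 2*sqrt(2)*Vrms

# A few levels from this entry: the ZHL-2 1 dB compression point, the
# proposed 20 dBm drive, the 9 dBm amplifier input, and 0 dBm.
for level in (29.0, 20.0, 9.0, 0.0):
    print(f"{level:5.1f} dBm -> {dbm_to_vpp(level):6.2f} Vpp into 50 ohm")
```

Handy for spotting when a signal is approaching an amplifier's output swing or a scope input's 5 V RMS limit.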

Quote:

Are you going to full replacement of the 55MHz system? Or just remove the 7dBm and then implement the proposed modification for the 55MHz line?

  15710   Fri Dec 4 22:41:56 2020 gautamUpdateASCFreq Gen Box revamp

This turned out to be a much more involved project than I expected. The layout is complete now, but I found several potentially damaged sections of cabling (the stiff cables don't have proper strain relief near the connectors). I will make fresh cables tomorrow before re-installing the unit in the rack. Several changes have been made to the layout so I will post more complete details after characterization and testing.

I was poring over minicircuits datasheets today, and I learned that the minicircuits bandpass filters (SBP10.7 and SBP60) are not bi-directional! The datasheet clearly indicates that the Male SMA connector is the input and the Female SMA connector is the output. Almost all the filters were installed the other way around 😱 . I'll install them the right way around now.

  15711   Sat Dec 5 20:44:35 2020 gautamUpdateASCFreq Gen Box re-installed

This work is now complete. The box was characterized and re-installed in 1X2. I am able to (briefly) lock the IMC and see PDH fringes in POX and POY so the lowest order checks pass.

Even though I did not deliberately change anything in the 29.5 MHz path, and I confirmed that the level at the output is the expected 13 dBm, I had to lower the IN1 gain of the IMC servo by 2 dB to have a stable lock - should confirm if this is indeed due to higher optical gain at the IMC error point, or some electrical funkiness. I'm not delving into a detailed loop characterization today - but since my work involved all elements in the RF modulation chain, some detailed characterization of all the locking loops should be done - I will do this in the coming week.

After tweaking the servo gains for the POX/POY loops, I am able to realize the single arm locks as well (though I haven't done the characterization of the loops yet).

I'm leaving the PSL shutter open, and allowing the IMC autolocker to run. The WFS loops remain disabled for now until I have a chance to check the RF path as well.


Unrelated to this work: Koji's swapping back of the backplane cards seems to have fixed the WFS2 issue - I now see the expected DC readbacks. I didn't check the RF readbacks tonight.

Update 7 Dec 2020 1 pm: A ZHL-2 with heat sink attached and a 11.06 MHz Wenzel source were removed from the box as part of this work (the former was no longer required and the latter wasn't being used at all). They have been stored in the RF electronics cabinet along the east arm.

Attachment 1: IFOverview.png
IFOverview.png
Attachment 2: IMG_0004.jpg
IMG_0004.jpg
Attachment 3: IMG_9007.jpg
IMG_9007.jpg
Attachment 4: IMG_0003.jpg
IMG_0003.jpg
Attachment 5: schematicLayout.pdf
schematicLayout.pdf
Attachment 6: EOMpath_postMod.pdf
EOMpath_postMod.pdf
  15712   Mon Dec 7 11:25:31 2020 gautamUpdateSUSMC1 suspension glitchy again

The MC1 suspension has begun to show evidence of glitches again, from Friday/Saturday. You can look at the suspension Vmon tab a few days ago and see that the excess fuzz in the Vmon was not there before. The extra motion is also clearly evident on the MCREFL spot. I noticed this on Saturday evening as I was trying to recover the IMC locking, but I thought it might be Millikan so I didn't look into it further. Usually this is symptomatic of some Satellite box issues. I am not going to attempt to debug this anymore.

  15713   Mon Dec 7 12:38:51 2020 gautamUpdateIOOIMC loop char

Summary:

There seems to be significant phase loss in the TTFSS path, which is limiting the IMC OLTF to <100 kHz. 

Details:

See Attachment #1 and #2. The former shows the phase loss, while the latter is just to confirm that the optical gain of the error point is roughly the same, since I noticed this after working on and replacing the RF frequency distribution unit. Unfortunately there have been many other changes also (e.g. the work that Rana and Koji did at the IMC rack, swapping of backplane controls etc etc - maybe they have an OLTF measurement from the time they were working?) so I don't know which is to blame. Off the top of my head, I don't see how the RF source can change the phase lag of the IMC servo at 100 kHz. The only part of the IMC RF chain that I touched was the short cable inside the unit that routes the output of the Wenzel source to the front panel SMA feedthrough. I confirmed with a power meter that the power level of the 29.5 MHz signal at that point is the same before and after my work.

The time domain demod monitor point signals appear somewhat noisier in today's measurement compared to some old data I had from 2018, but I think this isn't significant. Once the SR785 becomes available, I will measure the error point spectrum as well to confirm. One thing I noticed was that like many of our 1U/2U chassis units, the feedthrough returns are shorted to the chassis on the RF source box (and hence presumably also to the rack). The design doc for this box makes many statements about the precautions taken to avoid this, but stops short of saying if the desired behavior was realized, and I can't find anything about it in the elog. Can someone confirm that the shields of all the connectors on the box were ever properly isolated? My suspicion is that the shorting is happening where the all-metal N-feedthroughs touch the drilled surfaces on the front panel - while the front and back surfaces of the panel are insulating, the machined surfaces are not.

This is an unacceptable state but no clear ideas of how to troubleshoot quickly (without going piece by piece into the IMC servo chain) occur to me. I still don't understand how the freq source work could have resulted in this problem but I'm probably overlooking something basic. I'm also wondering why the differential receiving at the TTFSS error point did not require a gain adjustment of the IMC servo? Shouldn't the differential-receiving-single-ended-sending have resulted in an overall x0.5 gain?


Update 8 Dec 1200: To test the hypothesis, I bypassed the SR560 based differential receiving and restored the original config. I am then able to run with the original gain settings, and you see in Attachment #4 that the IMC OLTF UGF is back above 100 kHz. It is still a little lower than it was in June 2019, not sure why. There must be some saturation issues somewhere in the signal chain because I cannot preserve the differential receiving and retain 100 kHz UGF, either by raising the "VCO gain" on the MC servo board, setting the SR560 to G=2, or raising the "Common Gain Adjust" on the FSS box by 6 dB. I don't have a good explanation for why this worked for some weeks and failed now - maybe some issue with the SR560? We don't have many working units so I didn't try switching it.

So either there is a whole mess of lines or the frequency noise suppression is limited. Sigh.

Attachment 1: OLTFcomparison.pdf
OLTFcomparison.pdf
Attachment 2: demodMons.pdf
demodMons.pdf
Attachment 3: OLTFcomparison.pdf
OLTFcomparison.pdf
  15714   Mon Dec 7 14:32:02 2020 gautamUpdateLSCNew demod phases for POX/POY locking

In favor of keeping the same servo gains, I tuned the digital demod phases for the POX and POY photodiode signals to put as much of the PDH error signal in the _I quadrature as possible. The changes are summarized below:

POX / POY demod phases
PD Old demod phase [deg] New demod phase [deg]
POX11 79.5 -75.5
POY11 -106.0 116.0

The old locking settings seem to work fine again. This setting isn't set by the ifoconfigure scripts when they do the burt restore - do we want it to be?

Attachments #1 and #2 show some spectra and TFs for the POX/POY loops. In Attachment #2, the reference traces are from the past, while the live traces are from today. In fact, to have the same UGF as the reference traces (from ~1 year ago), I had to also raise the digital servo loop gain by ~20%. Not sure if this can be put down to a lower modulation depth - at least, at the output on the freq ref box, I measured the same output power (at the 0 dB variable attenuator gain setting we nominally run in) before and after the changes. But I haven't done an optical measurement of the modulation depth yet. There is also a hint of less phase available at ~100 Hz now compared to a year ago.
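For the record, adjusting the digital demod phase is just a 2x2 rotation of the measured (I, Q) pair; a minimal sketch of the operation (the helper below is mine, not the actual front-end code, and sign conventions may differ between models):

```python
import math

def rotate_iq(i_sig, q_sig, phase_deg):
    """Rotate a demodulated (I, Q) pair by phase_deg (sketch)."""
    phi = math.radians(phase_deg)
    i_rot = i_sig * math.cos(phi) + q_sig * math.sin(phi)
    q_rot = -i_sig * math.sin(phi) + q_sig * math.cos(phi)
    return i_rot, q_rot

# Example: a PDH signal sitting entirely in Q is moved into I by a
# 90 degree demod phase (up to sign conventions).
i_new, q_new = rotate_iq(0.0, 1.0, 90.0)
print(round(i_new, 6), round(q_new, 6))  # -> 1.0 0.0
```

In practice one dithers the cavity length and tunes phase_deg until the response in the Q channel is nulled.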

Attachment 1: POX_POY_OLTF.pdf
POX_POY_OLTF.pdf
Attachment 2: POX_POY_spectra.pdf
POX_POY_spectra.pdf
  15715   Mon Dec 7 22:54:30 2020 gautamUpdateLSCModulation depth measurement

Summary:

I measured the modulation depth at 11 MHz and 55 MHz using an optical beat + PLL setup. Both numbers are ~0.2 rad, which is consistent with previous numbers. More careful analysis forthcoming, but I think this supports my claim that the optical gain for the PDH locking loops should not have decreased.

Details:

  • For this measurement, I closed the PSL shutter between ~4pm and ~9pm local time. 
  • The photodiode used was the NF1611, which I assumed has a flat response in the 1-200 MHz band, and so did not apply any correction/calibration.
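In the small-modulation-depth limit, beta follows directly from the sideband-to-carrier power ratio of the beat spectrum; a sketch of that estimate (the -20 dBc value below is purely illustrative - chosen because it reproduces beta ~ 0.2 rad - and is not the measured number):

```python
import math

def mod_depth_from_dbc(sideband_dbc):
    """Small-beta estimate: P_sb/P_c ~ (beta/2)^2  =>  beta ~ 2*sqrt(ratio)."""
    ratio = 10 ** (sideband_dbc / 10)   # dBc -> linear power ratio
    return 2 * math.sqrt(ratio)

# Illustrative only: a first-order sideband 20 dB below the carrier
print(f"beta ~ {mod_depth_from_dbc(-20.0):.2f} rad")  # -> beta ~ 0.20 rad
```

For beta ~ 0.2 the small-angle approximation (J1/J0 ~ beta/2) is good to well under a percent, so the Bessel-function correction can be neglected at this level.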
Attachment 1: modDepth.pdf
modDepth.pdf
  15716   Tue Dec 8 15:07:13 2020 gautamUpdateComputer Scripts / Programsndscope updated

I updated the ndscope on rossa to a bleeding edge version (0.7.9+dev0) which has many of the fixes I've requested in recent times (e.g. direct PDF export, see Attachment #1). As usual, if you find an issue, report it on the issue tracker. The basic functionality for looking at signals seems to be okay so this shouldn't adversely impact locking efforts.


In hindsight - I decided to roll back to 0.7.9, and have the bleeding edge as a separate binary. So if you call ndscope from the command line, you should still get 0.7.9 and not the bleeding edge.

Attachment 1: test.pdf
test.pdf
  15717   Wed Dec 9 11:54:11 2020 gautamUpdateOptical LeversITMX HeNe replaced

The ITMX Oplev (installed in March 2019) was near end of life judging by the SUM channel (see Attachment #1). I replaced it yesterday evening with a new HeNe head. Output power was ~3.25 mW. The head was labelled appropriately and the Oplev spot was recentered on its QPD. The lifetime of ~20 months is short but recall that this HeNe had already been employed as a fiber illuminator at EX and so maybe this is okay.

Loop UGFs and stability margins seem acceptable to me, see Attachment #2-#3.

Attachment 1: OLtrend_old_ndscope.png
OLtrend_old_ndscope.png
Attachment 2: ITMX_OL_P.pdf
ITMX_OL_P.pdf
Attachment 3: ITMX_OL_Y.pdf
ITMX_OL_Y.pdf
  15718   Wed Dec 9 12:02:04 2020 gautamUpdateLSCPOX locking still unsatisfactory

Continuing the IFO recovery - I am unable to recover similar levels of TRX RIN as I had before. Attachment #1 shows that the TRX RIN is ~4x higher in RMS than TRY RIN (the latter is commensurate with what we had previously). The excess is dominated by some low frequency (~1 Hz) fluctuations. The coherence structure is confusing - why is TRY RIN coherent with IMC transmission at ~2 Hz but not TRX? But anyway, it doesn't look like it's intensity fluctuations on the incident light (unsurprisingly, since the TRY RIN was okay). I thought it may be because of insufficient low-frequency loop gain - but the loop shape is the same for TRX and TRY. I confirmed that the loop UGF is similar now (red trace in Attachment #2) as it was ~1 month ago (black trace in Attachment #2). Seismometers don't suggest excess motion at 1 Hz. I don't think the modulation depth at 11 MHz is to blame either. As I showed earlier, the spectrum of the error point is comparable now to what it was previously.

What am I missing?
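For completeness, the RIN traces compared above are just the transmission time series normalized by its mean; a self-contained sketch with synthetic data (all numbers below are made up for illustration, not taken from TRX):

```python
import math
import random
import statistics

random.seed(0)
fs = 256.0                  # sample rate, Hz (illustrative)
n = int(64 * fs)            # 64 s of data

# Synthetic "transmission": DC level + a 1 Hz wobble + white noise
trx = [1.0 + 0.05 * math.sin(2 * math.pi * 1.0 * (k / fs))
       + 0.002 * random.gauss(0.0, 1.0) for k in range(n)]

mean = statistics.fmean(trx)
rin = [x / mean - 1.0 for x in trx]   # relative intensity noise series
print(f"RMS RIN: {statistics.pstdev(rin):.3f}")   # dominated by the 1 Hz line
```

Taking the spectrum of `rin` (e.g. via Welch averaging) would then show the excess concentrated at 1 Hz, analogous to the structure seen in Attachment #1.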

Attachment 1: armRIN.pdf
armRIN.pdf
Attachment 2: POX_OLTF.pdf
POX_OLTF.pdf
  15719   Wed Dec 9 15:37:48 2020 gautamUpdateCDSRFM switch IP addr reset

I suspect what happened here is that the IP didn't get updated when we went from the 131.215.113.xxx system to 192.168.113.xxx system. I fixed it now and can access the web interface. This system is now ready for remote debugging (from inside the martian network obviously). The IP is 192.168.113.90.

Managed to pull this operation off without crashing the RFM network, phew.

BTW, a windows laptop that used to be in the VEA (I last remember it being on the table near MC2 which was cleared sometime to hold the spare suspensions) is missing. Anyone know where this is ?

Attachment 1: Screenshot_2020-12-09_15-39-20.png
Screenshot_2020-12-09_15-39-20.png
Attachment 2: Screenshot_2020-12-09_15-46-46.png
Screenshot_2020-12-09_15-46-46.png
  15720   Wed Dec 9 16:22:57 2020 gautamUpdateSUSYet another round of Sat. Box. switcharoo

As discussed at the meeting, I decided to effect a satellite box swap for the misbehaving MC1 unit. I looked back at the summary pages Vmon for the SRM channels, and found that in the last month or so, there wasn't any significant evidence of glitchiness. So I decided to effect that swap at ~4pm today. The sequence of steps was:

  • SRM and MC1 watchdogs were disabled.
  • Unplugged the two satellite boxes from the vacuum flanges.
  • For the record: S/N 102 was installed at MC1, and S/N 104 was installed at SRM. Both were de-lidded, supposedly to mitigate the horrible thermal environment a bit. S/N 104 was the one Koji repaired in Aug 2019 (the serial number isn't visible or noted there, but only one box has jumper wires and Koji's photos show the same jumper wires). In June 2020, I found that the repaired box was glitching again, which is when I swapped it for S/N 102.
  • After swapping the two units, I re-enabled the local damping on both optics, and was able to re-lock the IMC no issues.

One thing I was reminded of is that the motion of the MC1 optic by controlling the bias sliders is highly cross-coupled in pitch and yaw - it is far from diagonal. If this is true for the fast actuation path too, that's not great. I didn't check it just now.

While I was working on this, I took the opportunity to also check the functionality of the RF path of the IMC WFS. Both WFS heads seem to now respond to angular motion of the IMC mirror - I once again dithered MC2 and looked at the demodulated signals, and see variation at the dither frequency, see Attachment #1. However, the signals seem highly polluted with strong 60 Hz and harmonics, see the zoomed-in time domain trace in Attachment #2. This should be fixed. Also, the WFS loop needs some re-tuning. In the current config, it actually makes the MC2T RIN worse, see Attachment #3 (reference traces are with WFS loop enabled, live traces are with the loop disabled - sorry for the confusing notation, I overwrote the patched version of DTT that I got from Erik that allows the user legend feature, working on getting that back).

Quote:

The MC1 suspension has begun to show evidence of glitches again, from Friday/Saturday. You can look at the suspension Vmon tab a few days ago and see that the excess fuzz in the Vmon was not there before. The extra motion is also clearly evident on the MCREFL spot. I noticed this on Saturday evening as I was trying to recover the IMC locking, but I thought it might be Millikan so I didn't look into it further. Usually this is symptomatic of some Satellite box issues. I am not going to attempt to debug this anymore.

Attachment 1: WFS2.png
WFS2.png
Attachment 2: WFS_lineNoise.png
WFS_lineNoise.png
Attachment 3: WFSchar.pdf
WFSchar.pdf
  15721   Wed Dec 9 20:14:49 2020 gautamUpdateVACUPS failure

Summary:

  1. The (120V) UPS at the vacuum rack is faulty.
  2. The drypump backing TP2 is faulty.
  3. Current status of vacuum system: 
    • The old UPS is now powering the rack again. Some time ago, I noticed the "replace battery" indicator light on this unit was on. But it is no longer on. So I judged this is the best course of action. At least this UPS hasn't randomly failed before...
    • main vol is being pumped by TP1, backed by TP3.
    • TP2 remains off.
    • The annular volumes are isolated for now while we figure out what's up with TP2.
    • The pressure went up to ~1 mtorr (c.f. ~600utorr that is the nominal value with the stuck RV2) during the whole episode but is coming back down now.
  4. Steve seems to have taken the reliability of the vacuum system with him.

Details:

Around 7pm, the UPS at the vacuum rack seems to have failed. Don't ask me why I decided to check the vacuum screen 10 mins after the failure happened, but the point is, this was a silent failure so the protocols need to be looked into.

Going to the rack, I saw (unsurprisingly) that the 120V UPS was off. 

  • Pushed the power on button - the LCD screen would briefly light up, say the line voltage was 120 V, and then turned itself off. Not great.
  • I traced the power connection to the UPS itself to a power strip under the rack - then I moved the plug from one port to another. Now the UPS stays on. okay...
  • but after ~3 mins while I'm hunting for a VGA cable, I hear an incessant beeping. The UPS display has the "Fault" indicator lit up. 
  • I decided to shift everything back to the old UPS. After the change was made, I was able to boot up the c1vac machine again, and began the recovery process.
  • When I tried to start TP2, the drypump was unusually noisy, and I noticed PTP2 bottomed out at ~500 torr (yes torr). So clearly something is not right here. This pump supposedly had its tip-seal replaced by Jordan just 3 months ago. This is not a normal lifetime for the tip seal - we need to investigate more in detail what's going on here...
  • Decided that an acceptable config is to pump the main volume (so that we can continue working on other parts of the IFO). The annuli are all <10mtorr and holding, so that's just fine I think.

Questions:

  1. Are the failures of TP2 drypump and UPS related? Or coincidence? Who is the chicken and who is the egg?
  2. What's up with the short tip seal lifetime?
  3. Why did all of this happen without any of our systems catching it and sending an alert??? I have left the UPS connected to the USB/ethernet interface in case anyone wants to remotely debug this.

For now, I think this is a safe state to leave the system in. Unless I hear otherwise, I will leave it so - I will be in the lab another hour tonight (~10pm).

Some photos and a screen-cap of the Vac medm screen attached.

Attachment 1: rackBeforenAfter.pdf
rackBeforenAfter.pdf
Attachment 2: IMG_0008.jpg
IMG_0008.jpg
Attachment 3: IMG_0009.jpg
IMG_0009.jpg
Attachment 4: vacStatus.png
vacStatus.png
  15725   Thu Dec 10 14:29:26 2020 gautamUpdateVACUPS failure

I don't buy this story - P2 only briefly burped around GPStime 1291608000, which is around 8pm local time - i.e. when I was recovering the system.

Today, Jordan talked to Jon Feicht - apparently there is some kind of valve in the TP2 forepump, which only opens ~15-20 seconds after turning the pump on. So the loud sound I was hearing yesterday was just some transient phenomenon. So today morning at ~9am, we turned on TP2. Once again, PTP2 pressure hovered around 500 torr for about 15-20 seconds. Then it started to drop, although both Jordan and I felt that the time it took for the pressure to drop in the range 5 mtorr - 1 mtorr was unusually long. Jordan suspects some "soft-start" feature of the Turbo Pumps, which maybe spins up the pump in a more controlled way than usual after an event like a power failure. Maybe that explains why the pressure dropped so slowly? One thing is for sure - the TP2 controller displayed "TOO HIGH LOAD" yesterday when I tried the first restart (before migrating everything to the older UPS unit). This is what led me to interpret the loud sound on startup of TP2 to indicate some issue with the forepump - as it turns out, this is just the internal valve not being opened.

Anyway, we left TP2 on for a few hours, pumping only on the little volume between it and V4, and PTP2 remained stable at 20 mtorr. So we judged it's okay to open V4. For today, we will leave the system with both TP2 and TP3 backing TP1. Given the lack of any real evidence of a failure from TP2, I have no reason to believe there is elevated risk.

As for prioritising UPS swap - my opinion is that it's better to just replace the batteries in the UPS that has worked for years. We can run a parallel reliability test of the new UPS and once it has demonstrated stability for some reasonable time (>4 months), we can do the swap.


I was able to clear the FAULT indicator on the new UPS by running a "self-test". Pressing and holding the "mute" button on the front panel initiates this test according to the manual, and if all is well, it will clear the FAULT indicator, which it did. I'm still not trusting this unit and have left all units powered by the old UPS.


Update 1100 Dec 11: The config remained stable overnight so today I reverted to the nominal config of TP3 pumping the annuli and TP2 backing TP1 which pumps the main volume (through the partially open RV2).

Quote:
 

According to the Tripp Lite manual, the FAULT icon indicates "the battery-supported outlets are overloaded." The failure of the TP2 dry pump appears to have caused this. After the dry pump failure, the rising pressure in the TP2 foreline caused TP2's current draw to increase way above its normal operating range. Attachment 1 shows anomalously high TP2 current and foreline pressure in the minutes just before the failure. The critical system-wide failure is that this overloaded the UPS before overloading TP2's internal protection circuitry, which would have shut down the pump, triggering interlocks and auto-notifications.

Attachment 1: vacDiag1.png
vacDiag1.png
  15728   Thu Dec 10 16:24:13 2020 gautamUpdateEquipment loanNoliac PZT --> Paco

I gave one Noliac PZT from the two spare in the metal PMC kit to Paco. There is one spare left in the kit.

  15730   Thu Dec 10 22:45:42 2020 gautamUpdateSUSMore spare OSEMs

I acquired several spare OSEMs (in unknown condition) from Paco. They are stored alongside the shipment from UF.

  15731   Thu Dec 10 22:46:57 2020 gautamUpdateASCWFS head assembled

The assembly of the head is nearly complete - I thought I'd do some characterization before packaging everything up too nicely. I noticed that the tapped holes in the base are odd-sized. According to the official aLIGO drawing, these are supposed to be 4-40 tapped, but I find that something in between 2-56 and 4-40 is required - so it's a metric hole? Maybe we used some other DCC document to manufacture these parts - does anyone know the exact drawings used? In the meantime, the circuit is placed inside the enclosure with the back panel left open to allow some tuning of the trim caps. The front panel piece for mounting the SMA feedthroughs hasn't been delivered yet so hardware-wise, that's the last missing piece (apart from the aforementioned screws).

Attachment #1 - the circuit as stuffed for the RF frequencies of relevance to the 40m.

Attachment #2 - measured TF from the "Test Input" to Quadrant #1 "RF Hi" output.

  • There is reasonable agreement, but not sure what to make of the gain mismatch at most frequencies.
  • The photodiode itself hasn't been installed yet, so there will be some additional tuning required to account for the interaction with the photodiode's junction capacitance.
  • I noticed that the Qs of the resonances in between the notches are pretty high in this config, but the SPICE model also predicts this, so I'm hopeful that they will be tamed once the photodiode is installed.
  • One thing that is worrying is the feature at ~170 MHz. Could be some oscillation of the LM opamp. All the aLIGO WFS test procedure documentation shows measurements only out to 100 MHz. Should we consider increasing the gain of the preamp from x10 to x20 by swapping the feedback resistor from 453 ohms to 1 kohm? Is this a known issue at the sites?
  • Any other comments?

Update 11 Dec: For whatever reason, whoever made this box decided to tap 4-40 holes from the bottom (i.e. on the side of the base plate), and didn't thread the holes all the way through, which is why I was unable to get a 4-40 screw in there. To be fair the drawing doesn't specify the depth of the 4-40 holes to be tapped. All the taps we have in the lab have a maximum thread length of 9/16" whereas we need something with at least 0.8" thread length. I'll ask Joe Benson at the physics workshop if he has something I can use, and if not, I'll just drill a counterbore on the bottom side and use the taps we have to go all the way through and hopefully that does the job.

The front panel I designed for the SMA feedthroughs arrived today. Unfortunately, it is impossible for the D-sub shaped holes in this box to accommodate 8 insulated SMA feedthroughs (2 per quadrant for RF low and RF high) - while the actual SMA connector doesn't occupy so much space, the plastic mold around the connector and the nut to hold it are much too bulky. For the AS WFS application, we will only need 4 so that will work, but if someone wants all 8 outputs (plus an optional 9th for the "Test Input"), a custom molded feedthrough will have to be designed. 

As for the 170 MHz feature - my open loop modeling in Spice doesn't suggest a lack of phase margin at that frequency so I'm not sure what the cause is there. If this is true, just increasing the gain won't solve the issue (since there is no instability at least by the phase margin metric). Could be a problem with the "Test Input" path I guess. I confirmed it is present in all 4 quadrants.

Attachment 1: aLIGO_wfs_v5_40m.pdf
aLIGO_wfs_v5_40m.pdf
Attachment 2: TF_meas.pdf
TF_meas.pdf
  15735   Tue Dec 15 12:38:41 2020 gautamUpdateElectronicsDC power strip

I installed a DC power strip (24 V variant, 12 outlets available) on the NW upright of the 1X1 rack. This is for the AS WFS. Seems to work, all outlets get +/- 24 V DC.

The FSS_RMTEMP channel is very noisy after this work. I'll look into it, but probably some Acromag grounding issue.

In the afternoon, Jordan and I also laid out 4x SMA LMR240 cables and 1x DB15 M/F cable from 1X2 to the NE corner of the AP table via the overhead cable trays.

  15736   Thu Dec 17 15:23:56 2020 gautamUpdateASCWFS head characterization

Summary:

I think the WFS head performs satisfactorily.

  • The (input-referred) dark noise level at the operating frequency of 55 MHz is ~40pA/rtHz (modelled) and ~60 pA/rtHz (measured, converted to input-referred). See Attachment #1. Attachment #5 has the input referred current noise spectral densities, and a few representative shot noise levels.
  • The RF transimpedance gain at the operating frequency is ~500 ohms when driving a 50 ohm load (in good agreement with LTspice model). See Attachment #2 and Attachment #3.
  • The resonant gain to notch ratios are all > 30 dB, which is in line with numbers I can find for the WFS installed at the sites (and in good agreement with the LTspice model as well).
  • There are a few lines visible in the noise measurement. But these are small enough not to be a show-stopper I think.
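
The input-referred conversion quoted above is just the measured output noise divided by the transimpedance at the operating frequency. As a sanity check (the 500 ohm figure is from the measurement above; the output noise value here is a hypothetical round number chosen for illustration):

```python
# Convert a measured output voltage noise ASD to input-referred current noise
# using the RF transimpedance at 55 MHz (~500 ohm driving a 50 ohm load).
Z_rf = 500.0        # transimpedance [V/A], from the measurement above
v_out_asd = 30e-9   # hypothetical output voltage noise [V/rtHz]
i_in_asd = v_out_asd / Z_rf
print(f"input-referred current noise: {i_in_asd*1e12:.0f} pA/rtHz")  # -> 60 pA/rtHz
```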

Details and remarks:

  1. Attachment #4 shows a photo of the setup. 
    • The QPD used was S/N #84.
    • The heat sinks have a bunch of washers because the screw holes were not tapped at the time of manufacture.
    • There isn't space to have 8 SMA feedthroughs in the D-shaped cutouts, so we can only have the 4 "RF HI" outputs without some major metalwork.
    • C9 has been removed in all channels (to isolate the "TEST INPUT").
  2. I found that some quadrants displayed a ~35 MHz sine wave of a few mV pk-pk when I had the back of the enclosure off (for tuning the notches). The hypothesis is that this was due to some kind of stray-capacitance effect. Anyway, once I closed everything up for the noise measurement, this peak was no longer visible. With an HP8447A preamp, I measured a voltage of ~2 mV rms on an oscilloscope. After undoing the 20 dB gain of the amplifier, each quadrant has an output voltage noise of ~200 uVrms (as returned by the "measure" utility on the scope; I don't know the specifics of how it computes this). The point is, there weren't any clear sine-wave oscillations like I saw on two channels when the lid was off. 
  3. Some of the lines are present during some measurement times but not others (e.g. Q4 blue vs red curve in Attachment #1). I was doing this work in the elec-bench area of the lab, right next to the network switches etc so not exactly the quietest environment. But anyway, I don't see anything in these measurements that suggest something is seriously wrong.
  4. In the transfer function measurements, above 150 MHz, there are all sorts of features. But I think this is a measurement artefact (stray cable capacitance etc) and not anything real in the RF signal path. Koji saw similar effects I believe, and I didn't delve further into it.
  5. The dark noise of the circuit is such that to be shot noise limited, each quadrant needs 10 mA of DC photocurrent. The light bulb we have has a max current rating of 0.25A, with which I could only get 3 mA DC per quadrant. So the 55 MHz sideband power needed to be shot noise limited is ~50 mW - we will never have such high power. But I think to have better performance will need a major re-work of the circuit design (finite Qs of inductors, capacitors etc).
  6. Regarding the transimpedance gains - in my earlier plots, I omitted the 50ohm input impedance of the AG4395A network analyzer. The numbers I report here are ~half of those earlier in this thread for this reason. In any case, I think this number is what is important, since the ADT-1-1 on the demod board RF input has an input impedance of 50ohm. 
  7. Regarding grounding - the RF ground on the head is actually isolated from the case pretty well. Two locations of concern are (i) the heat sinks for the voltage regulator ICs and (ii) the DB15 connector shield. I've placed electrically insulating (but thermally conducting) pads from TO220 mounting kits between both sets of objects and the case. However, for the D-sub connector, the shape of the pad doesn't quite fit all the way around the connector. So if I over-tighten the 4-40 mounting bolts, at some point the case gets shorted to the RF ground, presumably because the connector deforms slightly and touches the case in a spot where I don't have the isolating pad installed. I think I've found a tightness that is mechanically secure but still electrically isolating.
  8. I will do the fitting at my leisure but the eye-fit is already suggesting that things are okay I think.
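
The shot-noise requirement in item 5 follows from a one-line estimate: the shot noise ASD sqrt(2*e*I_dc) equals the dark noise when I_dc reaches the value below (a sketch using the ~60 pA/rtHz input-referred dark noise measured above):

```python
# DC photocurrent at which shot noise equals the measured dark noise
e = 1.602e-19    # electron charge [C]
i_dark = 60e-12  # input-referred dark noise [A/rtHz], measured above
# set sqrt(2*e*I_dc) = i_dark and solve for I_dc
I_dc = i_dark**2 / (2 * e)
print(f"DC photocurrent for shot-noise limit: {I_dc*1e3:.0f} mA")  # -> 11 mA
```

This is consistent with the ~10 mA per quadrant quoted in item 5.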

If the RF experts see some red flags / think there are more tests that need to be performed, please let me know. Big thanks to Chub for patiently supporting this build effort, I'm pleasantly surprised it worked.

Attachment 1: oNoise.pdf
oNoise.pdf
Attachment 2: Z_Hi.pdf
Z_Hi.pdf
Attachment 3: Z_Low.pdf
Z_Low.pdf
Attachment 4: IMG_9030.jpg
IMG_9030.jpg
Attachment 5: iNoise.pdf
iNoise.pdf
  15737   Fri Dec 18 10:52:17 2020 gautamUpdateCDSRFM errors

As I was working on the IFO re-alignment just now, the RFM errors popped up again. I don't see any useful diagnostics on the web interface.

Do we want to take this opportunity to configure jumpers and set up the rogue master as Rolf suggested? Of course there's no guarantee that will fix anything, and may possibly make it impossible to recover the current state...

Attachment 1: RFMdiag.png
RFMdiag.png
  15741   Sat Dec 19 20:24:25 2020 gautamUpdateElectronicsWFS hardware install

I installed 4 chassis in the rack 1X2 (characterization on the E-bench was deemed satisfactory, I will upload the analysis later). I ran out of hardware to make power cables so only 2 of them are powered right now (1 32ch AA chassis and 1 WFS head interface). The current limit on the +24V Sorensens was raised to allow for similar margin to the limit with the increased current draw.

Remaining work:

  1. Make 2 more power cables for ISC whitening chassis and quad demod chassis.
  2. Make a 2x 4pin LEMO-->DB9 cable to digitize the FSS and PMC diagnostic channels with the new AA chassis. If RnD cables has a very short turnaround time, might be worth it to give this to them as well.
  3. Connect ADC1 on c1ioo machine to new AA chassis (transfer SCSI cable from existing AA unit to the new one). This will necessarily involve some model changes as well.
  4. Make a short cable to connect 55 MHz output from RFsource box to the LO input on the quad demod chassis.
  5. Install the WFS head on the AS table at a suitable location. Probably will need a focusing lens as well. 
  6. Connect WFS head to the signal processing electronics (the cables were already laid out by Jordan and I).
  7. Make the necessary CDS model changes (WFS filters, matrices, servos etc). I personally don't see the need for a new model but if anyone feels strongly about separating the IMC WFS and AS WFS we can set up another model.
  8. Commission the system.

While I definitely bumped various cables, I don't seem to have done any lasting damage to the CDS system (the RFM errors remain of course).

  15743   Mon Dec 21 18:18:03 2020 gautamUpdateCDSMany model changes

The CDS model changes required to get the AS WFS signals into the RTCDS system are rather invasive.

  • We use VCS for these models. Linus Torvalds may question my taste, but I also made local backups of the models, just in case...
  • Particularly, the ADC1 card on c1ioo is completely reconfigured.
  • I also think it's more natural to do all the ASC computations on this machine rather than the c1lsc machine (it is currently done in the c1ass model). So there are many IPC changes as well.
  • I have documented everything carefully, and the compile/install went smoothly.
  • Taking down all the FE servers at 1830 local time
    1. To propagate the model changes
    2. To make a hardware change in the c1rfm card in the c1ioo machine to configure it as "ROGUE MASTER 0"
    3. To clear the RFM errors we are currently suffering from will require a model reboot anyways.
  • Recovery was completed by 1930 - the RFM errors are also cleared, and now we have a "ROGUE MASTER 👾" on the network. Pretty smooth, no major issues with the CDS part of the procedure to report.
  • The main issue is that in the AA chassis I built, Ch14 (with the first channel as Ch1) has the output saturated to 28V (differential). I'm not sure what kind of overvoltage protection the ADC has - we frequently have the inputs exceed the spec'd +/-20 V (e.g. when the whitening filters are engaged and the cavity is fringing), but pending further investigation, I am removing the SCSI connection from the rear of the AA chassis.

In terms of computational load, the c1ioo model seems to handle the extra load with no issues - ~35 us out of the 60 us cycle. The RFM model shows no extra computational time.

After this work, the IMC locking and POX/POY locking, and dither alignment servos are working okay. So I have some confidence that my invasive work hasn't completely destroyed everything. There is some hardware around the rear of 1X2 that I will clear tomorrow.

Attachment 1: CDSoverview.png
CDSoverview.png
  15744   Tue Dec 22 22:11:37 2020 gautamUpdateCDSAA filt repaired and reinstalled

Koji fixed the problematic channel - the issue was a bad solder joint on the input resistors to the THS4131. The board was re-installed. I also made a custom 2x4-pin LEMO-->DB9 cable, so we are now recording the PMC and FSS ERR/CTRL channel diagnostics again (spectra tomorrow). Note that Ch32 is recording some sort of DuoTone signal and so is not usable. This is due to a misconfiguration - ADC0 CH31 is the one which is supposed to be reserved for this timing signal, and not ADC1 as we currently have. When we swap the c1ioo hosts, we should fix this issue.

I also did most of the work to make the MEDM screens for the revised ASC topology, trying to mirror the site screens where possible. The overview screen remains to be done. I also loaded the anti-whitening filters (z:p 150:15) at the demodulated WFS input signal entry points. We don't have remote whitening switching capability at this time, so I'll test the switching manually at some point.
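
The anti-whitening shape loaded here (zero at 150 Hz, pole at 15 Hz) can be sketched in the s-domain; this is an illustrative check of the magnitude response, not the actual digital filter coefficients:

```python
import numpy as np
from scipy import signal

# anti-whitening: zero at 150 Hz, pole at 15 Hz, normalized to unity gain at DC
z = [-2 * np.pi * 150.0]
p = [-2 * np.pi * 15.0]
k = 15.0 / 150.0  # sets |H(0)| = 1
b, a = signal.zpk2tf(z, p, k)
w = 2 * np.pi * np.logspace(0, 4, 500)  # 1 Hz to 10 kHz
w, h = signal.freqs(b, a, worN=w)
# the high-frequency gain rolls off by 20 dB, undoing the analog
# whitening stage's x10 boost above its pole
print(f"gain at 10 kHz: {20*np.log10(abs(h[-1])):.1f} dB")  # ~ -20 dB
```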

Quote:

The main issue is that in the AA chassis I built, Ch14 (with the first channel as Ch1) has the output saturated to 28V (differential). I'm not sure what kind of overvoltage protection the ADC has - we frequently have the inputs exceed the spec'd +/-20 V (e.g. when the whitening filters are engaged and the cavity is fringing), but pending further investigation, I am removing the SCSI connection from the rear of the AA chassis.

  15745   Wed Dec 23 10:13:08 2020 gautamUpdateCDSNear term upgrades

Summary:

  1. There appears to be an insufficient number of PCIe slots in the new Supermicro servers that were bought for the BHD upgrade.
  2. Modulo a "riser card", we have all the parts in hand to put one of the end machines on the Dolphin network. If the Rogue Master doesn't improve the situation, we should consider installing a Dolphin card in the c1iscex server and connecting it to the Dolphin network at the next opportunity. 

Details:

Last night, I briefly spoke with Koji about the CDS upgrade plan. This is a follow up.

Each server needs a minimum of two peripheral devices added to the PCIe bus:

  • A PCIe interface card that connects the server to the Expansion Chassis (copper or optical fiber depending on distance between the two).
  • A Dolphin or RFM card that makes the IPC interconnects. 
  • I'm pretty certain the expansion chassis cannot support the Dolphin / RFM card (it's only meant to be for ADCs/DACs/BIO cards). At least, all the existing servers in the 40m have at least 2 PCIe cards installed, and I think we have enough to worry about without trying to engineer a hack.
  • I attach some photos of new and old Supermicro servers to illustrate my point, see Attachment #1

As for the second issue, the main question is if we had an open PCIe slot on the c1iscex machine to install a Dolphin card. Looks like the 2 standard slots are taken (see Attachment #1), but a "low profile" slot is available. I can't find what the exact models of the Supermicro servers installed back in 2010 are, but maybe it's this one? It's a good match visually anyways. The manual says a "riser card" is required. I don't know if such a riser is already installed. 

Questions I have, Rolf is probably the best person to answer:

  1. Can we even use the specified host adaptor, HIB25-X4, which is "PCIe Gen2", with the "PCIe Gen3" slots on the new Supermicro servers we bought? Anyway, the fact that the new servers have only 1 PCIe slot probably makes this a moot point.
  2. Can we overcome this slot limitation by installing the Dolphin / RFM card in the expansion chassis?
  3. In the short run (i.e. if it is much faster than the full CDS shipment we are going to receive), can we get (from the CDS test stand in Downs or the site) 
    • A riser card so that we may install the Gen 1 Dolphin card (which we have in hand) in the c1iscex server?
    • A compatible (not sure what PCIe generation we need) PCIe host to ECA kit so we can test out the replacement for the Sun Microsystems c1ioo server?
    • A spare RFM card (VMIC 5565, also for the above purpose). 
  4. What sort of a test should we run to test the new Dolphin connection? Make a "null channel" differencing the same signal (e.g. TRX) sent via RFM and Dolphin? Or is there some better checksum diagnostic?
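
The null-channel idea in item 4 amounts to differencing two copies of the same signal received over the two transports; a trivial sketch of the check itself (the data arrays here are fabricated — how the two channels are actually acquired is left open):

```python
import numpy as np

def null_check(x_rfm, x_dolphin, tol=0.0):
    """Return True if the two transport paths delivered identical data.

    x_rfm, x_dolphin: equal-length time series of the same signal (e.g. TRX)
    received via the RFM and Dolphin networks. Any nonzero residual indicates
    dropped/corrupted samples or a timing skew between the two paths.
    """
    resid = np.asarray(x_rfm, dtype=float) - np.asarray(x_dolphin, dtype=float)
    return float(np.max(np.abs(resid))) <= tol

# fabricated example: identical copies pass, a perturbed copy fails
x = np.sin(np.linspace(0, 10, 1024))
assert null_check(x, x.copy())
assert not null_check(x, x + 1e-6)
```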
Attachment 1: IMG_0020.pdf
IMG_0020.pdf
  15746   Wed Dec 23 23:06:45 2020 gautamConfigurationCDSUpdated CDS upgrade plan
  1. The diagram should clearly show the host machines and the expansion chassis and the interconnects between them.
  2. We no longer have any Gentoo bootserver or diskless FEs.
  3. The "c1lsc" host is in 1X4 not 1Y3.
  4. The connection between c1lsc and Dolphin switch is copper not fiber. I don't know how many Gbps it is. But if the switch is 10 Gbps, are they really selling interface cables that have lower speed? The datasheet says 10 Gbps.
  5. The control room workstations - Debian 10 (rossa) is the way forward, I believe. It is true pianosa remains SL7 (and we should continue to keep it so until all other machines have been upgraded and tested on Debian 10).
  6. There is no "IOO/OAF". The host is called "c1ioo".
  7. The interconnect between Dolphin switch and c1ioo host is via fiber not copper.
  8. It'd be good to have an accurate diagram of the current situation as well (with the RFM network).
  9. I'm not sure if the 1Y1 rack can accommodate 2 FEs and 2 expansion chassis. Maybe if we clear everything else there out...
  10. There are 2 "2GB/s" Copper traces. I think the legend should make clear what's going on - i.e. which cables are ethernet (Cat 6? Cat 5? What's the speed limitation? The cable? Or the switch?), which are PCIe cables etc etc. 

I don't have omnigraffle - what about uploading the source doc in a format that the excellent (and free) draw.io can handle? I think we can do a much better job of making this diagram reflect reality. There should also be a corresponding diagram for the Acromag system (but that doesn't have to be tied to this task). Megatron (scripts machine) and nodus should be added to that diagram as well.

Please send me any omissions or corrections to the layout.

  15748   Wed Jan 6 15:28:04 2021 gautamUpdateVACVac rack UPS batteries replaced

[chub, gautam]

the replacement was done this afternoon. The red "Replace Battery" indicator is no longer on.

  15749   Wed Jan 6 16:18:38 2021 gautamUpdateOptical LeversBS Oplev glitchy

As part of the hunt for why the X arm IR transmission RIN is anomalously high, I noticed that the BS Oplev Servo periodically kicks the optic around - the summary pages are the best illustration of this happening. Looking back in time, these kicks seem to have started ~Nov 23 2020. The HeNe power output has been degrading, see Attachment #1, but it is not yet at the point where the head usually needs replacing. The RIN spectrum doesn't look anomalous to me, see Attachment #2 (the whitening situation for the quadrants is different for the BS and the TMs, which explains the HF noise). I also measured the loop UGFs (using swept sine) - seems funky: I can't get the same coherence now (live traces) between 10-30 Hz that I could before (reference traces) with the same drive amplitude, and the TF that I do measure has a weird flattening out at higher frequencies that I can't explain, see Attachment #3.

The excess RIN is almost exactly in the band that we expect our Oplevs to stabilize the angular motion of the optics in, so maybe needs more investigation - I will upload the loop suppression of the error point later. So far, I don't see any clean evidence of the BS Oplev HeNe being the culprit, so I'm a bit hesitant to just swap out the head...

Attachment 1: missingData.png
missingData.png
Attachment 2: OLRIN.pdf
OLRIN.pdf
Attachment 3: BS_OL_P.pdf
BS_OL_P.pdf
Attachment 4: BS_OL_suppression.pdf
BS_OL_suppression.pdf
ELOG V3.1.3-