ID   Date   Author   Type   Category   Subject
  15683   Sun Nov 22 21:09:37 2020   gautam   Update   ASC   Planned mods for WFS head

Attachment #1 - Proposed mods for 40m RF freqs. 

  • I followed Rich's suggestion of choosing an inductor that has Z~100 ohms at the frequency of interest.
  • The capacitor is then chosen to set the correct resonant frequency (a quick numerical sketch of this selection follows this list).
  • Voltronics trim caps are used for fine-tuning the resonances. Two variants are used: one with a range of 4-20 pF and a Q of 500 per spec, the other with a range of 8-40 pF and a Q of 200 per spec.
  • In the table, the first capacitance is the fixed one, and the second is the variable one. We're not close to the rail for the variable caps.
  • For the first trials, I think we can try not populating all of the notches - just the 2f notch. We can then add notches if deemed necessary. These notches are probably more important for a REFL/POP port WFS.
  • One thing I noticed is that the aLIGO WFS use ceramic capacitors for the LC reactances. I haven't checked if there is any penalty we are paying in terms of the Q of the capacitor. Anyway, I'm not going to redesign the PCB, and maybe ceramic is the only option in the 0805 package size?
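To make the first two bullets concrete, here is a minimal sketch of the L/C selection arithmetic (my own back-of-envelope illustration, not the actual design calculation; the ~100 ohm target is from the text, and the frequency list is just an example):

```python
import numpy as np

def lc_for_frequency(f_Hz, target_Z=100.0):
    """Pick L so that |Z_L| ~ target_Z at f_Hz, then C so that the LC resonates at f_Hz."""
    w = 2 * np.pi * f_Hz
    L = target_Z / w           # inductive reactance ~ target_Z at the resonance/notch
    C = 1.0 / (w**2 * L)       # resonance condition: w^2 = 1/(LC)
    return L, C

# Illustrative frequencies of interest at the 40m
for f in [11e6, 22e6, 44e6, 55e6, 66e6, 110e6]:
    L, C = lc_for_frequency(f)
    print(f"{f/1e6:6.1f} MHz : L ~ {L*1e9:6.1f} nH, C ~ {C*1e12:6.1f} pF")
```

The trim-cap ranges quoted above then only need to span the residual between the nearest standard fixed capacitor value and the exact value from this kind of calculation.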

Attachment #2 - Modelled TFs for the case where all the notches are stuffed, and where only the 2f notch is stuffed.

  • The model uses realistic composite models for the inductors from Coilcraft, but the capacitors are idealized parts.
  • I also found the library part for the LMH6624, so this should be a bit closer to the actual circuit than Rich's models, which substituted the MAX4107 in its place.
  • The dashed vertical lines indicate some frequencies of interest.
  • Approx 1 kohm transimpedance is realized at 55 MHz. I don't have the W/rad number for the sensitivity at the AS port, but my guess is this will be just fine.
  • If the 44 MHz and 66 MHz notches are stuffed, then there is some interaction with the 55 MHz notch, which lowers the transimpedance gain somewhat. So if we decide to stuff those notches, we should do a more careful investigation into whether this is problematic.

Attachment #3 - Modelled noise for the case where all the notches are stuffed, and where only the 2f notch is stuffed.

  • Initially, I found the (modelled) noise level to be rather higher than expected. It persisted despite making the resistors in the model noiseless. Turns out there is some leakage from the "Test Input" path. Some documents in the DCC suggest that there should be an "RF Relay" that allows one to isolate this path, but afaik, the aLIGO WFS does not have this feature. So maybe what we should do is to remove C9 once we're done tuning the resonances. Better yet, just tune the resonance with the Jenne laser and not this current-injection path.
  • Horizontal dashed lines indicate shot noise for the indicated DC photocurrent levels. It is unlikely we will have even 1 mW of light on a single quadrant at the AS port, so the AS port WFS will not be shot noise limited. But I think that's okay for initial trials.
  • The noise level of ~20 pA/rtHz input-referred is in agreement with what I would expect using Eq 3 of the LMH6624 datasheet. The preamp has a gain of 10, so the source impedance seen by it is ~100 ohms (since the overall gain is 1 kohm). The corresponding noise level per Eq 3 is ~2 nV/rtHz, or 20 pA/rtHz current noise referred to the photocurrent 👍 (a rough cross-check of this arithmetic follows this list).
  • The LMH6624 datasheet claims that the OpAmp is stable for CLG >= 10. For reasons that aren't obvious to me, Koji states here that the CLG needs to be even higher, 15-20 for stability. Do the aLIGO WFS see some instability? Should I raise R14 to 900 ohms?
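To back up the noise arithmetic above, here is a rough cross-check (my own estimate; the LMH6624 en/in values are typical datasheet numbers and are assumptions here, and the feedback-network terms in Eq 3 are neglected):

```python
import numpy as np

k_B, T = 1.38e-23, 298.0
R_src  = 100.0       # ohm: source impedance seen by the preamp (1 kohm overall / gain of 10)
e_n    = 0.92e-9     # V/rtHz, LMH6624 input voltage noise (typical value, assumed)
i_n    = 2.3e-12     # A/rtHz, LMH6624 input current noise (typical value, assumed)

v_johnson = np.sqrt(4 * k_B * T * R_src)
v_total   = np.sqrt(e_n**2 + (i_n * R_src)**2 + v_johnson**2)

# Lands in the same ballpark as the ~2 nV/rtHz and ~20 pA/rtHz quoted above;
# Eq 3 of the datasheet additionally folds in the feedback-network resistors.
print(f"input voltage noise      ~ {v_total*1e9:.1f} nV/rtHz")
print(f"referred to photocurrent ~ {v_total/R_src*1e12:.0f} pA/rtHz")
```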

Any other red flags anyone sees before I finish stuffing the board?

Quote:

WFS head and housing. Need to finalize the RF transimpedance gain (i.e. the LC resonant part), and also decide which notches we want to stuff.

Attachment 1: aLIGO_wfs_v5_40m.pdf
Attachment 2: TFs.pdf
Attachment 3: noise.pdf
  15684   Mon Nov 23 12:25:14 2020   gautam   Update   BHD   BHD MMT Optics delivered

Optics --> Cabinet at south end (Attachment #1)

Scanned datasheets --> wiki. It would be good if someone can check the specs against what was ordered.

Attachment 1: IMG_8965.HEIC
  15686   Mon Nov 23 16:33:10 2020   gautam   Update   VAC   More vacuum deliveries

Five Agilent pressure gauges were delivered to the 40m. They are stored with the controller and cables in the office area. This completes the inventory for the gauge replacement - we have all the ordered parts in hand (though not necessarily all the adaptor flanges etc.). I'll see if I can find some cabinet space in the VEA to store these, the clutter is getting out of hand again...
 

In addition, the spare gate valve from LHO was also delivered today to the 40m. It is stored at EX with the other spare valves.

Quote:

It is stored along with the cables that arrived a few weeks ago, awaiting the gauges which are now expected next week sometime.

  15688   Tue Nov 24 16:51:29 2020   gautam   Update   PonderSqueeze   Ponderomotive squeezing in aLIGO

Summary:

On the call last week, I claimed that there isn't much hope of directly measuring Ponderomotive Squeezing in aLIGO without some significant configurational changes. Here, I attempt to quantify this statement a bit, and explicitly state what I mean by "significant configurational changes".

Optomechanical coupling:

The I/O relations will generally look something like:

\begin{bmatrix} b_1\\ b_2 \end{bmatrix} = \begin{bmatrix} C_{11} & C_{12}\\ C_{21} & C_{22} \end{bmatrix} \begin{bmatrix} a_{1}\\ a_2 \end{bmatrix} + \begin{bmatrix} D_1\\ D_2 \end{bmatrix} \frac{h}{h_{\mathrm{SQL}}}.

The magnitudes of the matrix elements C_12 and C_21 (i.e. the phase-to-amplitude and amplitude-to-phase coupling coefficients) will encode the strength of the Ponderomotive squeezing.

Readout:

For the initial study, let's assume DC readout (since there isn't a homodyne readout yet even in Advanced LIGO). This amounts to setting \zeta = \phi in the I/O relations, where the former angle is the "homodyne phase" and the latter is the "SRC detuning". For DC readout, the LO quadrature is fixed relative to the signal - for example, in the usual RSE operation, \zeta = \phi = \frac{\pi}{2}. So the quadrature we will read out will be purely b_1 (or nearly so, for small detunings around RSE operation). The displacement noises will couple in via the D_1 matrix element. Attachment #1 and Attachment #2 show the off-diagonal elements of the "C" matrix for detunings of the SRC near RSE and SR operation respectively. You can see that the optomechanical coupling decays pretty rapidly above ~40 Hz.
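For reference (standard two-photon-formalism notation, up to sign conventions - my shorthand, not taken from a specific document), the quadrature read out at homodyne angle \zeta is

b_{\zeta} = b_1 \sin\zeta + b_2 \cos\zeta,

so for \zeta = \frac{\pi}{2} only b_1 survives, which is the DC-readout case described above.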

SRC detuning:

In this particular case, there is no benefit to detuning the SRC, because we are assuming the homodyne angle is fixed. This is not an unreasonable assumption, as the quadrature of the LO light is fixed relative to the signal in DC readout (I'm not sure what the residual fluctuation in this quantity is, but presumably it is at the mrad level, so the pollution due to the orthogonal anti-squeezed quadrature can be ignored for a first pass I think). I also assume ~10 degrees of detuning is possible with the finesse ~15 SRC, as the linewidth is ~12 degrees.

Noise budget:

To see how this would look in an actual measurement, I took the data from Lee's ponderomotive squeezing paper, as an estimate for the classical noises, and plotted the quantum noise models for a few representative SRC detunings near RSE operation - see Attachment #3. The curves labelled for various phis are the quantum noise models for those SRC detunings, assuming DC readout. I fudged the power into the IFO to make my modelled quantum noise curve at RSE line up with the high frequency part of the "Measured DARM" curve. To measure Ponderomotive Squeezing unambiguously, we need the quantum noise curve to "dip" as is seen around 40 Hz for an SRC tuning of 80 degrees, and that to be the dominant noise source. Evidently, this is not the case.

The case for balanced homodyne readout:

I haven't analyzed it in detail yet - but it may be possible that if we can access other quadratures, we might benefit from rotating away from the DARM quadrature - the strength of the optomechanical coupling would decrease, as demonstrated in Attachments #1 and #2, but the coupling of classical noise would be reduced as well, so we may be able to win overall. I'll briefly investigate whether a robust measurement can be made at the site once the BHD is implemented.

Attachment 1: QN_heatmap_RSE.pdf
Attachment 2: QN_heatmap_SR.pdf
Attachment 3: noiseBudget.pdf
  15689   Wed Nov 25 18:18:41 2020   gautam   Update   ASC   Planned mods for WFS head

I am confused by the discussion during the call today. I revisited Hartmut's paper - the circuit in Fig 6 is essentially what I am calling "only 2f_2 notch stuffed" in my previous elog. Qualitatively, the plot I presented in Attachment #2 of the preceding elog in this thread shows the expected behavior as in Fig 8 of the paper - the impedance seen by the photodiode is indeed lower. In Attachment #1, I show the comparison - the "V(anode)/I(I1)" curve is analogous to the "PD anode" curve in Hartmut's paper, and the "V(vout)/I(I1)" curve is analogous to the "1f-out" curve. I also plot the sensitivity analysis (Attachment #2), by varying the photodiode junction capacitance between 100 pF and 200 pF (both values inclusive) in 20 pF steps. There is some variation at 55 MHz, but it is unlikely that the capacitance will change so much during normal operation?

I understand the motivation behind stuffing the other notches, to reduce intermodulation effects. But the impression I got from the call was that somehow, the model I presented was wrong. Can someone help me identify the mistake?

I didn't bother to export the LTspice data and make a matplotlib plot for this quick analysis, so pardon the poor presentation. The colors run from green=100pF to grey=200pF.

Attachment 1: anodeVsOutput.png
Attachment 2: sensitivity.png
  15690   Wed Nov 25 18:30:23 2020   gautam   Update   ASC   Some thoughts about AS WFS electronics

An 8 channel whitening chassis was prepared and tested. I measured:

  1. TF from input to output - there are 7 switchable stages (3 dB, 6 dB, 12 dB and 24 dB flat whitening gain, and 3 stages of 15:150 Hz z:p whitening). I enabled one at a time and measured the TF. 
  2. Noise with input terminated.

In summary,

  1. All the TFs look good (I will post the plots later), except that the 3rd stage of whitening on both boards doesn't show the expected transfer function. The fact that it's there on both boards makes me suspect that the switching isn't happening correctly (I'm using a little breakout board). I'm inclined not to debug this because it's unlikely we will ever use 3 stages of 15:150 whitening for the AS WFS.
  2. The noise measurement displayed huge (x1000 above the surrounding broadband noise floor) 60 Hz harmonics out to several kHz. My hypothesis is that this has to do with some bad grounding. I found that the circuit ground is shorted to the chassis via the shell of the 9-pin and 15-pin D-sub connectors (but the two D37 connector shields are isolated). This seems very weird, and I don't know what to make of it. Is this expected? Looking at the schematic, it would appear that the shields of the connectors are shorted to ground, which seems like a bad idea. Afaik, we are using the same connectors as on the chassis at the sites - is this a problem there too? Any thoughts?
Quote:

Whitening chassis. Waiting for front panels to arrive, PCBs and interface board are in hand, stuffed and ready to go. A question here is how we want to control the whitening - it's going to be rather difficult to have fast switchable whitening. I think we can just fix the whitening state. Another option would be to control the whitening using Acromag BIO channels.

  15694   Wed Dec 2 15:27:06 2020   gautam   Summary   Computer Scripts / Programs   TC200 python driver

FYI, there is this. It seems pretty well maintained, and so might be more useful in the long run. The available catalog of instruments is quite impressive - the TC200 temp controller and SRS345 func gen are included and are things we use in the lab. Maybe you can make a pull request to add the MDT694B (there is some nice API already built I think). We should also put our netgpibdata stuff and the vacuum gauge control (basically everything that isn't rtcds) on there (unless there are some intellectual property issues that the Caltech lawyers have to sort out).

Quote:

Given the similarities between the MDT694B (single channel piezo controller) and TC200 (temperature controller) serial interfaces, I added the pyserial driver here

*Warning* this first version of the driver remains untested

  15695   Wed Dec 2 17:54:03 2020   gautam   Update   CDS   FE reboot

As discussed at the meeting, I commenced the recovery of the CDS status at 1750 local time.

  • Started by attempting to just soft-restart the c1rfm model and see if that fixes the issue. It didn't, and what's more, it took down the c1sus machine.
  • So hard reboots of the vertex machines were required. c1iscey also crashed. I was able to keep the EX machine up, but I soft-stopped all the RT models on it.
  • All systems were recovered by 1815. For anyone checking, the DC light on the c1oaf model is red - this is a "known" issue and requires a model restart, but I don't want to get into that now and it doesn't disrupt normal operation.

Single arm POX/POY locking was checked, but not much more. Our IMC WFS are still out of service so I hand aligned the IMC a bit, IMC REFL DC went from ~0.3 to ~0.12, which is the usual nominal level.

  15696   Wed Dec 2 18:35:31 2020   gautam   Update   DetChar   Summary page revival

The summary pages were in a sad state of disrepair - the daily jobs haven't been running for > 1 month. I only noticed today because Jordan wanted to look at some vacuum trends and I thought summary pages is nice for long term lookback. I rebooted it just now, seems to be running. @Tega, maybe you want to set up some kind of scripted health check that also sends an alert.

  15697   Wed Dec 2 23:07:19 2020   gautam   Update   ASC   Electrical LO signal for AS WFS

I'm thinking of making some modifications to the RF distribution box in 1X2, so as to have an extra 55 MHz pickoff. Koji already proposed some improvements to the layout in 2015. I've marked up his "Possible Improvement" page of the document in Attachment #1, with my proposed modifications. I believe it will be possible to get 15-16 dBm of signal into a 4 way RF splitter in the quad demod chassis. With the insertion loss of the splitter, we can have 9-10 dBm of LO reaching each demod board, which will then be boosted to +20 dBm by the Teledyne on board. The PE4140 mixer claims to require only -7 dBm of LO signal. So we have quite a bit of headroom here - as long as we limit the RF signal to 0dBm (=0.5 Vpp from the LMH6431 opamp at 55 MHz, we shouldn't be having a much larger signal anyways), we should be just fine with 15 dBm of LO power (which is what we will have after the division into the I and Q paths, and nominal insertion losses in the transmission path). These numbers may be slight overestimates given the possible degradation of the RF amps over the last 10 years, but shouldn't be a show-stopper.
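For bookkeeping, a minimal sketch of the dBm arithmetic described above (the splitter loss and on-board amplifier gain below are my placeholder assumptions, not measured values):

```python
# Rough 55 MHz LO power budget for the quad demod chassis (illustrative numbers only).
lo_into_splitter = 15.5   # dBm: 15-16 dBm into the 4-way splitter (from the text)
splitter_loss    = 6.5    # dB: 10*log10(4) plus ~0.5 dB excess insertion loss (assumed)
onboard_gain     = 11.0   # dB: whatever the on-board Teledyne amp provides (assumed)

per_board = lo_into_splitter - splitter_loss
at_mixer  = per_board + onboard_gain
print(f"LO per demod board: {per_board:.1f} dBm (text quotes 9-10 dBm)")
print(f"LO at mixer input:  {at_mixer:.1f} dBm (PE4140 spec'd to need only -7 dBm)")
```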

Do the RF electronics experts agree with my assessment? If so, I will start working on these mods tomorrow. Technically, the splitter can be added outside the box, but it may be neater if we package it inside the box. 

Attachment 1: RF_Frequency_Source.pdf
  15698   Thu Dec 3 10:33:00 2020   gautam   Update   VAC   TrippLite UPS delivered

The latest greatest UPS has been delivered. I will move it to near the vacuum rack in its packaging for storage. It weighs >100lbs so care will have to be taken when installing - can the rack even support this?

Attachment 1: DFDD4F39-3F8A-439D-888D-7C0CE2E030CF.jpeg
  15699   Thu Dec 3 10:46:39 2020   gautam   Update   Electronics   DC power strip requirements

Since we will have several new 1U / 2U aLIGO style electronics chassis installed in the racks, it is desirable to have a more compact power distribution solution than the fusible terminal blocks we use currently.

  • The power strips come in 2 varieties, 18 V and 24 V. The difference is in the Dsub connector that is used - the 18 V variant has 3 pins / 3 sockets, while the 24V version uses a hybrid of 2 pins / 1 socket (and the mirror on the mating connector).
  • Each strip can accommodate 24 individual chassis. It is unlikely that we will have >24 chassis in any collection of racks, so per area (e.g. EX/EY/IOO/SUS), one each of the 18V and 24V strips should be sufficient. We can even migrate our Acromag chassis to be powered via these strips.
  • Details about the power strip may be found here.

I did a quick walkaround of the lab and the electronics rack today. I estimate that we will need 5 units of the 24 V and 5 units of the 18 V power strips. Each end will need 1 each of 18 V and 24 V strips. The 1Y1/1Y2/1Y3 (LSC/OMC/BHD sus) area will be served by 1 each 18 V and 24 V. The 1X1/1X2 (IOO) area will be served by 1 each 18 V and 24 V. The 1X5/1X6 (SUS Shadow sensor / Coil driver) area will be served by 1 each of 18 V and 24 V.  So I think we should get 7 pcs of each to have 2 spares.

Most of the chassis which will be installed in large numbers (AA, AI, whitening) support 24 V DC input. A few units, like the WFS interface head, OMC driver, and OMC QPD interface, require 18 V. It is not so clear what the input voltage for the Satellite box and Coil Drivers should be. For the former, an unregulated tap-off of the supply voltage is used to power the LT1021 reference and a transistor that is used to generate the LED drive current for the OSEMs. For the latter, the OPA544 high-current opamp used to drive the coil current has its supply rails powered by, again, an unregulated tap-off of the supply voltage. It doesn't seem like a great idea to drive any ICs with the unregulated switching supply voltage from a noise point of view, particularly given the recent experience with the HV coil driver testing and the PSRR, but I think it's a bit late in the game to do anything about this. The datasheet specs ~50 dB of PSRR on the negative rail, but we have a couple of decoupling caps close to the IC and this IC is itself in a feedback loop with the low noise AD8671 IC, so maybe this won't be much of an issue.

For the purposes of this discussion, I think both Satellite Amp and Coil Driver chassis can be driven with +/- 24 V DC.


On a side note - after the upgrade will the "Satellite Amplifiers" be in the racks, and not close to the flange as they currently are? Or are we gonna have some mini racks next to the chambers? Not sure what the config is at the sites, and if the circuits are designed to drive long cables.

  15704   Thu Dec 3 20:38:46 2020   gautam   Update   ASC   Electrical LO signal for AS WFS

I removed the Frequency Generation box from the 1X2 rack. For the time being, the PSL shutter is closed, since none of the cavities can be locked without the RF modulation source anyways.

Prior to removal, I did the following:

  1. Measured powers at each port on the front panel 
    • The Gigatronix power meter was used, which has a maximum power rating of 20 dBm, so for the EOM drive outputs, which we operate closer to 25-27 dBm, I used a 20 dB coupler to make the measurement.
    • Attachment #1 summarizes my findings - there doesn't seem to be anything majorly wrong, except that for the 11 MHz EOM drive channel, the "7" setting on the variable attenuator doesn't seem to work. 
    • We can probably get a replacement from MiniCircuits, but since we operate at the 0 dB variable attenuation setting nominally, maybe we don't need to futz around with this.
  2. Measured the relative phasing between the 11 MHz and 55 MHz signals using an oscilloscope.
    • I measured the relative phase for the EOM drive channels, and also the demod channels.
    • The scope can accept a maximum of 5V RMS signal with 50ohm input impedance. So once again, I couldn't make a direct measurement at the nominal setting for the EOM drive channel. Instead, I used the variable attenuator to set the signal amplitude to ~2V RMS. 
    • I will upload the time-domain plots later. But we now have a record of the relative phasing that we can try and reproduce after making modifications. FWIW, my measured phase difference of 139 degrees is reasonably consistent with Koji's inferred from the modulation spectrum.

One thing I noticed was that we're using very stiff coax cabling (RG405) inside this box. Do we need to stick with this option? Or can we use the more flexible RG316? I guess RG405 is lower loss, so it's better. I can't actually find any measurement of the shielding performance in my quick google searching, but I think the claim on the call yesterday was that RG405 with its solder-soaked braids offers superior shielding.

Quote:

Before doing any modification you should check how much the distributed powers are at the ports.
Also your modification will change the relative phase between 11MHz and 55MHz.
Can you characterize how much phase difference you have between them, maybe using the modulation of the main marconi? And you might want to adjust it to keep the previous value (or any new value) after the modification by adding a cable inside?

Attachment 1: RF_Frequency_Source.pdf
Attachment 2: demodPath.pdf
Attachment 3: EOMpath.pdf
  15706   Thu Dec 3 21:44:49 2020   gautam   Update   ASC   Electrical LO signal for AS WFS

I'm open to either approach. If the full replacement requires a lot of machining, maybe I will stick to just the 55 MHz line. But if only a couple of new holes are required, it might be advantageous to do the revamp while we have the box out? What do you think?

BTW, now that I look more closely at the RF chain, I have several questions:

  1. The 1 dB compression power of the ZHL-2 amplifiers is ~29 dBm, and we are driving it at that level. Is this okay? I thought we always want to be several dB away from the 1 dB compression point?
  2. Why do we have an attenuator between the Marconi input and the first ZHL-2 amplifier? Can't we just set the Marconi to output 8 or 9 dBm?
  3. The Wenzel frequency multiplier is rated to have 13 dBm input and 20 dBm output. We operate it with 12 dBm input and 19 dBm output. Why throw away 1 dB?

I guess it is feasible to have +17 dBm of 55 MHz signal to plug into the Quad Demod chassis - e.g. drive the 55 MHz input with 20 dBm and pick off 3 dB to the front panel for ASC. Then we can even have several "spare" 55 MHz outputs and still satisfy the 9 dBm input that the ZHL-2 in the 55 MHz chain wants (though again, isn't this dangerously close to the 1 dB compression point?). The design doc claims to have done some Optickle modeling, so I guess there isn't really any issue?

Quote:

Are you going to full replacement of the 55MHz system? Or just remove the 7dBm and then implement the proposed modification for the 55MHz line?

  15710   Fri Dec 4 22:41:56 2020   gautam   Update   ASC   Freq Gen Box revamp

This turned out to be a much more involved project than I expected. The layout is complete now, but I found several potentially damaged sections of cabling (the stiff cables don't have proper strain relief near the connectors). I will make fresh cables tomorrow before re-installing the unit in the rack. Several changes have been made to the layout so I will post more complete details after characterization and testing.

I was poring over MiniCircuits datasheets today, and I learned that the MiniCircuits bandpass filters (SBP10.7 and SBP60) are not bi-directional! The datasheet clearly indicates that the male SMA connector is the input and the female SMA connector is the output. Almost all the filters were installed the other way around 😱 . I'll install them the right way around now.

  15711   Sat Dec 5 20:44:35 2020   gautam   Update   ASC   Freq Gen Box re-installed

This work is now complete. The box was characterized and re-installed in 1X2. I am able to (briefly) lock the IMC and see PDH fringes in POX and POY so the lowest order checks pass.

Even though I did not deliberately change anything in the 29.5 MHz path, and I confirmed that the level at the output is the expected 13 dBm, I had to lower the IN1 gain of the IMC servo by 2 dB to have a stable lock - I should confirm whether this is indeed due to higher optical gain at the IMC error point, or some electrical funkiness. I'm not delving into a detailed loop characterization today - but since my work involved all elements in the RF modulation chain, some detailed characterization of all the locking loops should be done - I will do this in the coming week.

After tweaking the servo gains for the POX/POY loops, I am able to realize the single arm locks as well (though I haven't done the characterization of the loops yet).

I'm leaving the PSL shutter open, and allowing the IMC autolocker to run. The WFS loops remain disabled for now until I have a chance to check the RF path as well.


Unrelated to this work: Koji's swapping back of the backplane cards seems to have fixed the WFS2 issue - I now see the expected DC readbacks. I didn't check the RF readbacks tonight.

Update 7 Dec 2020 1 pm: A ZHL-2 with heat sink attached and a 11.06 MHz Wenzel source were removed from the box as part of this work (the former was no longer required and the latter wasn't being used at all). They have been stored in the RF electronics cabinet along the east arm.

Attachment 1: IFOverview.png
Attachment 2: IMG_0004.jpg
Attachment 3: IMG_9007.jpg
Attachment 4: IMG_0003.jpg
Attachment 5: schematicLayout.pdf
Attachment 6: EOMpath_postMod.pdf
  15712   Mon Dec 7 11:25:31 2020   gautam   Update   SUS   MC1 suspension glitchy again

The MC1 suspension has begun to show evidence of glitches again, from Friday/Saturday. You can look at the suspension Vmon tab a few days ago and see that the excess fuzz in the Vmon was not there before. The extra motion is also clearly evident on the MCREFL spot. I noticed this on Saturday evening as I was trying to recover the IMC locking, but I thought it might be Millikan so I didn't look into it further. Usually this is symptomatic of some Satellite box issues. I am not going to attempt to debug this anymore.

  15713   Mon Dec 7 12:38:51 2020   gautam   Update   IOO   IMC loop char

Summary:

There seems to be significant phase loss in the TTFSS path, which is limiting the IMC OLTF to <100 kHz. 

Details:

See Attachment #1 and #2. The former shows the phase loss, while the latter is just to confirm that the optical gain of the error point is roughly the same, since I noticed this after working on and replacing the RF frequency distribution unit. Unfortunately there have been many other changes also (e.g. the work that Rana and Koji did at the IMC rack, swapping of backplane controls etc etc - maybe they have an OLTF measurement from the time they were working?) so I don't know which is to blame. Off the top of my head, I don't see how the RF source can change the phase lag of the IMC servo at 100 kHz. The only part of the IMC RF chain that I touched was the short cable inside the unit that routes the output of the Wenzel source to the front panel SMA feedthrough. I confirmed with a power meter that the power level of the 29.5 MHz signal at that point is the same before and after my work.

The time domain demod monitor point signals appear somewhat noisier in todays measurement compared to some old data I had from 2018, but I think this isn't significant. Once the SR785 becomes available, I will measure the error point spectrum as well to confirm. One thing I noticed was that like many of our 1U/2U chassis units, the feedthrough returns are shorted to the chassis on the RF source box (and hence presumably also to the rack). The design doc for this box makes many statements about the precautions taken to avoid this, but stops short of saying if the desired behavior was realized, and I can't find anything about it in the elog. Can someone confirm that the shields of all the connectors on the box were ever properly isolated? My suspicion is that the shorting is happening where the all-metal N-feedthroughs touch the drilled surfaces on the front panel - while the front and back surfaces of the panel are insulating, the machined surfaces are not.

This is an unacceptable state but no clear ideas of how to troubleshoot quickly (without going piece by piece into the IMC servo chain) occur to me. I still don't understand how the freq source work could have resulted in this problem but I'm probably overlooking something basic. I'm also wondering why the differential receiving at the TTFSS error point did not require a gain adjustment of the IMC servo? Shouldn't the differential-receiving-single-ended-sending have resulted in an overall x0.5 gain?


Update 8 Dec 1200: To test the hypothesis, I bypassed the SR560 based differential receiving and restored the original config. I am then able to run with the original gain settings, and you see in Attachment #4 that the IMC OLTF UGF is back above 100 kHz. It is still a little lower than it was in June 2019, not sure why. There must be some saturation issues somewhere in the signal chain because I cannot preserve the differential receiving and retain 100 kHz UGF, either by raising the "VCO gain" on the MC servo board, setting the SR560 to G=2, or raising the "Common Gain Adjust" on the FSS box by 6 dB. I don't have a good explanation for why this worked for some weeks and failed now - maybe some issue with the SR560? We don't have many working units so I didn't try switching it.

So either there is a whole mess of lines or the frequency noise suppression is limited. Sigh.

Attachment 1: OLTFcomparison.pdf
Attachment 2: demodMons.pdf
Attachment 3: OLTFcomparison.pdf
  15714   Mon Dec 7 14:32:02 2020   gautam   Update   LSC   New demod phases for POX/POY locking

In the interest of keeping the same servo gains, I tuned the digital demod phases for the POX and POY photodiode signals to put as much of the PDH error signal in the I quadrature as possible. The changes are summarized below (a sketch of the underlying I/Q rotation follows the table):

POX / POY demod phases:

  PD       Old demod phase [deg]   New demod phase [deg]
  POX11             79.5                  -75.5
  POY11           -106.0                  116.0
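For reference, a minimal sketch of what the digital demod phase does (my own illustration, not the tuning script that was actually used): the demodulated I/Q pair is simply rotated, and one picks the angle that puts the PDH signal into I.

```python
import numpy as np

def rotate_iq(I, Q, phase_deg):
    """Return the demodulated (I, Q) pair rotated by the digital demod phase."""
    z = (I + 1j * Q) * np.exp(1j * np.deg2rad(phase_deg))
    return z.real, z.imag

# Toy example: if a sweep shows the PDH signal mostly in Q, pick the phase
# that maximizes the I-quadrature amplitude (brute force over 1 degree steps).
I0, Q0 = 0.2, 1.0
best = max(range(-180, 180), key=lambda p: abs(rotate_iq(I0, Q0, p)[0]))
print("phase that maximizes |I|:", best, "deg")
```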

The old locking settings seem to work fine again. This setting isn't set by the ifoconfigure scripts when they do the burt restore - do we want it to be?

Attachments #1 and #2 show some spectra and TFs for the POX/POY loops. In Attachment #2, the reference traces are from the past, while the live traces are from today. In fact, to have the same UGF as the reference traces (from ~1 year ago), I had to also raise the digital servo loop gain by ~20%. Not sure if this can be put down to a lower modulation depth - at least, at the output of the freq ref box, I measured the same output power (at the 0 dB variable attenuator setting we nominally run at) before and after the changes. But I haven't done an optical measurement of the modulation depth yet. There is also a hint of less phase available at ~100 Hz now compared to a year ago.

Attachment 1: POX_POY_OLTF.pdf
Attachment 2: POX_POY_spectra.pdf
  15715   Mon Dec 7 22:54:30 2020   gautam   Update   LSC   Modulation depth measurement

Summary:

I measured the modulation depth at 11 MHz and 55 MHz using an optical beat + PLL setup. Both numbers are ~0.2 rad, which is consistent with previous numbers. More careful analysis is forthcoming, but I think this supports my claim that the optical gain for the PDH locking loops should not have decreased. (The relation used to convert the measured sideband-to-carrier ratio into a modulation depth is sketched below the list of details.)

Details:

  • For this measurement, I closed the PSL shutter between ~4pm and ~9pm local time. 
  • The photodiode used was the NF1611, which I assumed has a flat response in the 1-200 MHz band, and so did not apply any correction/calibration.
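A hedged sketch of how the modulation depth comes out of the beat measurement (the standard small-modulation-index relation; this is not the actual analysis script):

```python
import numpy as np

def mod_depth_from_dB(sideband_minus_carrier_dB):
    """For depth m << 1, P_sideband / P_carrier ~ (m/2)^2, so m ~ 2*sqrt(ratio)."""
    ratio = 10 ** (sideband_minus_carrier_dB / 10.0)   # power ratio
    return 2.0 * np.sqrt(ratio)

# Example: a sideband peak 20 dB below the carrier corresponds to m ~ 0.2 rad.
print(mod_depth_from_dB(-20.0))
```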
Attachment 1: modDepth.pdf
  15716   Tue Dec 8 15:07:13 2020   gautam   Update   Computer Scripts / Programs   ndscope updated

I updated the ndscope on rossa to a bleeding edge version (0.7.9+dev0) which has many of the fixes I've requested in recent times (e.g. direct PDF export, see Attachment #1). As usual, if you find issues, report them on the issue tracker. The basic functionality for looking at signals seems to be okay, so this shouldn't adversely impact locking efforts.


In hindsight - I decided to roll-back to 0.7.9, and have the bleeding edge as a separate binary. So if you call ndscope from the command line, you should still get 0.7.9 and not the bleeding edge.

Attachment 1: test.pdf
  15717   Wed Dec 9 11:54:11 2020   gautam   Update   Optical Levers   ITMX HeNe replaced

The ITMX Oplev (installed in March 2019) was near end of life judging by the SUM channel (see Attachment #1). I replaced it yesterday evening with a new HeNe head. Output power was ~3.25 mW. The head was labelled appropriately and the Oplev spot was recentered on its QPD. The lifetime of ~20 months is short but recall that this HeNe had already been employed as a fiber illuminator at EX and so maybe this is okay.

Loop UGFs and stability margins seem acceptable to me, see Attachment #2-#3.

Attachment 1: OLtrend_old_ndscope.png
Attachment 2: ITMX_OL_P.pdf
Attachment 3: ITMX_OL_Y.pdf
  15718   Wed Dec 9 12:02:04 2020   gautam   Update   LSC   POX locking still unsatisfactory

Continuing the IFO recovery - I am unable to recover similar levels of TRX RIN as I had before. Attachment #1 shows that the TRX RIN is ~4x higher in RMS than TRY RIN (the latter is commensurate with what we had previously). The excess is dominated by some low frequency (~1 Hz) fluctuations. The coherence structure is confusing - why is TRY RIN coherent with IMC transmission at ~2 Hz but not TRX? But anyway, it doesn't look like intensity fluctuations on the incident light are to blame (unsurprisingly, since the TRY RIN was okay). I thought it may be because of insufficient low-frequency loop gain - but the loop shape is the same for TRX and TRY. I confirmed that the loop UGF is similar now (red trace in Attachment #2) as it was ~1 month ago (black trace in Attachment #2). Seismometers don't suggest excess motion at 1 Hz. I don't think the modulation depth at 11 MHz is to blame either. As I showed earlier, the spectrum of the error point is comparable now to what it was previously.

What am I missing?

Attachment 1: armRIN.pdf
Attachment 2: POX_OLTF.pdf
  15719   Wed Dec 9 15:37:48 2020   gautam   Update   CDS   RFM switch IP addr reset

I suspect what happened here is that the IP didn't get updated when we went from the 131.215.113.xxx system to 192.168.113.xxx system. I fixed it now and can access the web interface. This system is now ready for remote debugging (from inside the martian network obviously). The IP is 192.168.113.90.

Managed to pull this operation off without crashing the RFM network, phew.

BTW, a windows laptop that used to be in the VEA (I last remember it being on the table near MC2, which was cleared at some point to hold the spare suspensions) is missing. Does anyone know where this is?

Attachment 1: Screenshot_2020-12-09_15-39-20.png
Attachment 2: Screenshot_2020-12-09_15-46-46.png
  15720   Wed Dec 9 16:22:57 2020   gautam   Update   SUS   Yet another round of Sat. Box. switcharoo

As discussed at the meeting, I decided to effect a satellite box swap for the misbehaving MC1 unit. I looked back at the summary pages Vmon for the SRM channels, and found that in the last month or so, there wasn't any significant evidence of glitchiness. So I decided to effect that swap at ~4pm today. The sequence of steps was:

  • SRM and MC1 watchdogs were disabled.
  • Unplugged the two satellite boxes from the vacuum flanges.
  • For the record: S/N 102 was installed at MC1, and S/N 104 was installed at SRM. Both were de-lidded, supposedly to mitigate the horrible thermal environment a bit. S/N 104 was the one Koji repaired in Aug 2019 (the serial number isn't visible or noted there, but only one box has jumper wires and Koji's photos show the same jumper wires). In June 2020, I found that the repaired box was glitching again, which is when I swapped it for S/N 102.
  • After swapping the two units, I re-enabled the local damping on both optics, and was able to re-lock the IMC no issues.

One thing I was reminded of is that the motion of the MC1 optic when controlling the bias sliders is highly cross-coupled in pitch and yaw - the motion is almost diagonal. If this is true for the fast actuation path too, that's not great. I didn't check it just now.

While I was working on this, I took the opportunity to also check the functionality of the RF path of the IMC WFS. Both WFS heads seem to now respond to angular motion of the IMC mirror - I once again dithered MC2 and looked at the demodulated signals, and saw variation at the dither frequency, see Attachment #1. However, the signals seem highly polluted with strong 60 Hz and its harmonics, see the zoomed-in time domain trace in Attachment #2. This should be fixed. Also, the WFS loop needs some re-tuning. In the current config, it actually makes the MC2T RIN worse, see Attachment #3 (reference traces are with the WFS loop enabled, live traces are with the loop disabled - sorry for the confusing notation, I overwrote the patched version of DTT that I got from Erik that allows the user legend feature, working on getting that back).

Quote:

The MC1 suspension has begun to show evidence of glitches again, from Friday/Saturday. You can look at the suspension Vmon tab a few days ago and see that the excess fuzz in the Vmon was not there before. The extra motion is also clearly evident on the MCREFL spot. I noticed this on Saturday evening as I was trying to recover the IMC locking, but I thought it might be Millikan so I didn't look into it further. Usually this is symptomatic of some Satellite box issues. I am not going to attempt to debug this anymore.

Attachment 1: WFS2.png
Attachment 2: WFS_lineNoise.png
Attachment 3: WFSchar.pdf
  15721   Wed Dec 9 20:14:49 2020   gautam   Update   VAC   UPS failure

Summary:

  1. The (120V) UPS at the vacuum rack is faulty.
  2. The drypump backing TP2 is faulty.
  3. Current status of vacuum system: 
    • The old UPS is now powering the rack again. Sometime ago, I noticed the "replace battery" indicator light on this unit was on. But it is no longer on. So I judged this is the best course of action. At least this UPS hasn't randomly failed before...
    • main vol is being pumped by TP1, backed by TP3.
    • TP2 remains off.
    • The annular volumes are isolated for now while we figure out what's up with TP2.
    • The pressure went up to ~1 mtorr (c.f. ~600utorr that is the nominal value with the stuck RV2) during the whole episode but is coming back down now.
  4. Steve seems to have taken the reliability of the vacuum system with him.

Details:

Around 7pm, the UPS at the vacuum rack seems to have failed. Don't ask me why I decided to check the vacuum screen 10 mins after the failure happened, but the point is, this was a silent failure so the protocols need to be looked into.

Going to the rack, I saw (unsurprisingly) that the 120V UPS was off. 

  • Pushed the power on button - the LCD screen would briefly light up, say the line voltage was 120 V, and then turned itself off. Not great.
  • I traced the power connection to the UPS itself to a power strip under the rack - then I moved the plug from one port to another. Now the UPS stays on. okay...
  • but after ~3 mins while I'm hunting for a VGA cable, I hear an incessant beeping. The UPS display has the "Fault" indicator lit up. 
  • I decided to shift everything back to the old UPS. After the change was made, I was able to boot up the c1vac machine again, and began the recovery process.
  • When I tried to start TP2, the drypump was unusually noisy, and I noticed PTP2 bottomed out at ~500 torr (yes torr). So clearly something is not right here. This pump supposedly had its tip-seal replaced by Jordan just 3 months ago. This is not a normal lifetime for the tip seal - we need to investigate more in detail what's going on here...
  • Decided that an acceptable config is to pump the main volume (so that we can continue working on other parts of the IFO). The annuli are all <10mtorr and holding, so that's just fine I think.

Questions:

  1. Are the failures of TP2 drypump and UPS related? Or coincidence? Who is the chicken and who is the egg?
  2. What's up with the short tip seal lifetime?
  3. Why did all of this happen without any of our systems catching it and sending an alert??? I have left the UPS connected to the USB/ethernet interface in case anyone wants to remotely debug this.

For now, I think this is a safe state to leave the system in. Unless I hear otherwise, I will leave it so - I will be in the lab another hour tonight (~10pm).

Some photos and a screen-cap of the Vac medm screen attached.

Attachment 1: rackBeforenAfter.pdf
Attachment 2: IMG_0008.jpg
Attachment 3: IMG_0009.jpg
Attachment 4: vacStatus.png
  15725   Thu Dec 10 14:29:26 2020   gautam   Update   VAC   UPS failure

I don't buy this story - P2 only briefly burped around GPStime 1291608000 which is around 8pm local time, which is when I was recovering the system.

Today, Jordan talked to Jon Feicht - apparently there is some kind of valve in the TP2 forepump, which only opens ~15-20 seconds after turning the pump on. So the loud sound I was hearing yesterday was just some transient phenomenon. So this morning at ~9am, we turned on TP2. Once again, the PTP2 pressure hovered around 500 torr for about 15-20 seconds. Then it started to drop, although both Jordan and I felt that the time it took for the pressure to drop in the range 5 mtorr - 1 mtorr was unusually long. Jordan suspects some "soft-start" feature of the turbo pumps, which maybe spins up the pump in a more controlled way than usual after an event like a power failure. Maybe that explains why the pressure dropped so slowly? One thing is for sure - the TP2 controller displayed "TOO HIGH LOAD" yesterday when I tried the first restart (before migrating everything to the older UPS unit). This is what led me to interpret the loud sound on startup of TP2 as indicating some issue with the forepump - as it turns out, this is just the internal valve not being opened.

Anyway, we left TP2 on for a few hours, pumping only on the little volume between it and V4, and PTP2 remained stable at 20 mtorr. So we judged it's okay to open V4. For today, we will leave the system with both TP2 and TP3 backing TP1. Given the lack of any real evidence of a failure from TP2, I have no reason to believe there is elevated risk.

As for prioritising UPS swap - my opinion is that it's better to just replace the batteries in the UPS that has worked for years. We can run a parallel reliability test of the new UPS and once it has demonstrated stability for some reasonable time (>4 months), we can do the swap.


I was able to clear the FAULT indicator on the new UPS by running a "self-test". pressing and holding the "mute" button on the front panel initiates this test according to the manual, and if all is well, it will clear the FAULT indicator, which it did. I'm still not trusting this unit and have left all units powered by the old UPS.


Update 1100 Dec 11: The config remained stable overnight so today I reverted to the nominal config of TP3 pumping the annuli and TP2 backing TP1 which pumps the main volume (through the partially open RV2).

Quote:
 

According to the Tripp Lite manual, the FAULT icon indicates "the battery-supported outlets are overloaded." The failure of the TP2 dry pump appears to have caused this. After the dry pump failure, the rising pressure in the TP2 foreline caused TP2's current draw to increase way above its normal operating range. Attachment 1 shows anomalously high TP2 current and foreline pressure in the minutes just before the failure. The critical system-wide failure is that this overloaded the UPS before overloading TP2's internal protection circuitry, which would have shut down the pump, triggering interlocks and auto-notifications.

Attachment 1: vacDiag1.png
  15728   Thu Dec 10 16:24:13 2020   gautam   Update   Equipment loan   Noliac PZT --> Paco

I gave one Noliac PZT from the two spare in the metal PMC kit to Paco. There is one spare left in the kit.

  15730   Thu Dec 10 22:45:42 2020   gautam   Update   SUS   More spare OSEMs

I acquired several spare OSEMs (in unknown condition) from Paco. They are stored alongside the shipment from UF.

  15731   Thu Dec 10 22:46:57 2020   gautam   Update   ASC   WFS head assembled

The assembly of the head is nearly complete, I thought I'd do some characterization before packaging everything up too nicely. I noticed that the tapped holes in the base are odd-sized. According to the official aLIGO drawing, these are supposed to be 4-40 tapped, but I find that something in between 2-56 and 4-40 is required - so it's a metric hole? Maybe we used some other DCC document to manufacture these parts - does anyone know the exact drawings used? In the meantime, the circuit is placed inside the enclosure with the back panel left open to allow some tuning of the trim caps. The front panel piece for mounting the SMA feedthroughs hasn't been delivered yet so hardware-wise, that's the last missing piece (apart from the aforementioned screws).

Attachment #1 - the circuit as stuffed for the RF frequencies of relevance to the 40m.

Attachment #2 - measured TF from the "Test Input" to Quadrant #1 "RF Hi" output.

  • There is reasonable agreement, but not sure what to make of the gain mismatch at most frequencies.
  • The photodiode itself hasn't been installed yet, so there will be some additional tuning required to account for the interaction with the photodiode's junction capacitance.
  • I noticed that the Qs of the resonances in between the notches are pretty high in this config, but the SPICE model also predicts this, so I'm hopeful that they will be tamed once the photodiode is installed.
  • One thing that is worrying is the feature at ~170 MHz. It could be some oscillation of the LM opamp. All the aLIGO WFS test procedure documentation shows measurements only out to 100 MHz. Should we consider increasing the gain of the preamp from x10 to x20 by swapping the feedback resistor from 453 ohms to 1 kohm (a quick gain calculation follows this list)? Is this a known issue at the sites?
  • Any other comments?
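A quick sanity check of the proposed gain change (this assumes the usual non-inverting stage, where the quoted x10 gain with Rf = 453 ohm implies a ~50 ohm gain-setting resistor - check the actual schematic before trusting this):

```python
R_g = 49.9  # ohm, inferred gain-setting resistor (assumption)
for R_f in (453.0, 1000.0):
    print(f"Rf = {R_f:6.1f} ohm -> non-inverting gain = {1 + R_f / R_g:4.1f}")
```

So the 453 ohm to 1 kohm swap takes the closed-loop gain from ~10 to ~21, which would also satisfy the CLG >= 15-20 stability guideline mentioned earlier in this thread.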

Update 11 Dec: For whatever reason, whoever made this box decided to tap 4-40 holes from the bottom (i.e. on the side of the base plate), and didn't thread the holes all the way through, which is why I was unable to get a 4-40 screw in there. To be fair the drawing doesn't specify the depth of the 4-40 holes to be tapped. All the taps we have in the lab have a maximum thread length of 9/16" whereas we need something with at least 0.8" thread length. I'll ask Joe Benson at the physics workshop if he has something I can use, and if not, I'll just drill a counterbore on the bottom side and use the taps we have to go all the way through and hopefully that does the job.

The front panel I designed for the SMA feedthroughs arrived today. Unfortunately, it is impossible for the D-sub shaped holes in this box to accommodate 8 insulated SMA feedthroughs (2 per quadrant for RF low and RF high) - while the actual SMA connector doesn't occupy so much space, the plastic mold around the connector and the nut to hold it are much too bulky. For the AS WFS application, we will only need 4 so that will work, but if someone wants all 8 outputs (plus an optional 9th for the "Test Input"), a custom molded feedthrough will have to be designed. 

As for the 170 MHz feature - my open loop modeling in Spice doesn't suggest a lack of phase margin at that frequency so I'm not sure what the cause is there. If this is true, just increasing the gain won't solve the issue (since there is no instability at least by the phase margin metric). Could be a problem with the "Test Input" path I guess. I confirmed it is present in all 4 quadrants.

Attachment 1: aLIGO_wfs_v5_40m.pdf
Attachment 2: TF_meas.pdf
  15735   Tue Dec 15 12:38:41 2020   gautam   Update   Electronics   DC power strip

I installed a DC power strip (24 V variant, 12 outlets available) on the NW upright of the 1X1 rack. This is for the AS WFS. Seems to work, all outlets get +/- 24 V DC.

The FSS_RMTEMP channel is very noisy after this work. I'll look into it, but probably some Acromag grounding issue.

In the afternoon, Jordan and I also laid out 4x SMA LMR240 cables and 1x DB15 M/F cable from 1X2 to the NE corner of the AP table via the overhead cable trays.

  15736   Thu Dec 17 15:23:56 2020   gautam   Update   ASC   WFS head characterization

Summary:

I think the WFS head performs satisfactorily.

  • The (input-referred) dark noise level at the operating frequency of 55 MHz is ~40 pA/rtHz (modelled) and ~60 pA/rtHz (measured, converted to input-referred). See Attachment #1. Attachment #5 has the input-referred current noise spectral densities, and a few representative shot noise levels. (A quick shot-noise cross-check is sketched after this list.)
  • The RF transimpedance gain at the operating frequency is ~500 ohms when driving a 50 ohm load (in good agreement with LTspice model). See Attachment #2 and Attachment #3.
  • The resonant gain to notch ratios are all > 30 dB, which is in line with numbers I can find for the WFS installed at the sites (and in good agreement with the LTspice model as well).
  • There are a few lines visible in the noise measurement. But these are small enough not to be a show-stopper I think.
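As a cross-check of the shot-noise statement (my own arithmetic): the DC photocurrent at which shot noise equals the measured input-referred dark noise is I_DC = i_n^2 / (2e), which lands near the ~10 mA quoted in the details below.

```python
import numpy as np

e_charge = 1.602e-19
i_dark   = 60e-12   # A/rtHz, measured input-referred dark noise
print(f"dark-noise-equivalent photocurrent ~ {i_dark**2 / (2 * e_charge) * 1e3:.0f} mA per quadrant")

# Shot noise for a few representative DC photocurrents
for I_dc in (0.1e-3, 1e-3, 10e-3):
    print(f"I_dc = {I_dc*1e3:5.1f} mA -> shot noise = {np.sqrt(2 * e_charge * I_dc) * 1e12:5.1f} pA/rtHz")
```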

Details and remarks:

  1. Attachment #4 shows a photo of the setup. 
    • The QPD used was S/N #84.
    • The heat sinks have a bunch of washers because the screw holes were not tapped at the time of manufacture.
    • There isn't space to have 8 SMA feedthroughs in the D-shaped cutouts, so we can only have the 4 "RF HI" outputs without some major metalwork.
    • C9 has been removed in all channels (to isolate the "TEST INPUT").
  2. I found that some quadrants displayed a ~35 MHz sine-wave of a few mV pk-pk when I had the back of the enclosure off (for tuning the notches). The hypothesis is that this was due to some kind of stray capacitance effect. Anyway, once I closed everything up for the noise measurement, this peak was no longer visible. With an HP8447A preamp, I measured an RMS voltage of ~2 mV rms on an oscilloscope. After undoing the 20 dB gain of the amplifier, each quadrant has an output voltage noise of ~200 uVrms (as returned by the "measure" utility on the scope, I don't know the specifics of how it computes this). Point is, there weren't any clear sine-wave oscillations like I saw on two channels when the lid was off.
  3. Some of the lines are present during some measurement times but not others (e.g. Q4 blue vs red curve in Attachment #1). I was doing this work in the elec-bench area of the lab, right next to the network switches etc so not exactly the quietest environment. But anyway, I don't see anything in these measurements that suggest something is seriously wrong.
  4. In the transfer function measurements, above 150 MHz, there are all sorts of features. But I think this is a measurement artefact (stray cable capacitance etc) and not anything real in the RF signal path. Koji saw similar effects I believe, and I didn't delve further into it.
  5. The dark noise of the circuit is such that to be shot noise limited, each quadrant needs 10 mA of DC photocurrent. The light bulb we have has a max current rating of 0.25A, with which I could only get 3 mA DC per quadrant. So the 55 MHz sideband power needed to be shot noise limited is ~50 mW - we will never have such high power. But I think to have better performance will need a major re-work of the circuit design (finite Qs of inductors, capacitors etc).
  6. Regarding the transimpedance gains - in my earlier plots, I omitted the 50ohm input impedance of the AG4395A network analyzer. The numbers I report here are ~half of those earlier in this thread for this reason. In any case, I think this number is what is important, since the ADT-1-1 on the demod board RF input has an input impedance of 50ohm. 
  7. Regarding grounding - the RF ground on the head is actually isolated from the case pretty well. Two locations of concern are (i) the heat sinks for the voltage regulator ICs and (ii) the DB15 connector shield. I've placed electrically insulating (but thermally conducting) pads from TO220 mounting kits between both sets of objects and the case. However, for the Dsub connector, the shape of the pad doesn't quite fit all the way round the connector. So if I over-tighten the 4-40 mounting bolts, at some point, the case gets shorted to the RF ground, presumably because the connector deforms slightly and touches the case in a spot where I don't have the isolating pad installed. I think I've realized a tightness that is mechanically satisfying but electrically isolating.
  8. I will do the fitting at my leisure but the eye-fit is already suggesting that things are okay I think.

If the RF experts see some red flags / think there are more tests that need to be performed, please let me know. Big thanks to Chub for patiently supporting this build effort, I'm pleasantly surprised it worked.

Attachment 1: oNoise.pdf
Attachment 2: Z_Hi.pdf
Attachment 3: Z_Low.pdf
Attachment 4: IMG_9030.jpg
Attachment 5: iNoise.pdf
  15737   Fri Dec 18 10:52:17 2020   gautam   Update   CDS   RFM errors

As I was working on the IFO re-alignment just now, the rfm errors popped up again. I don't see any useful diagnostics on the web interface.

Do we want to take this opportunity to configure jumpers and set up the rogue master as Rolf suggested? Of course there's no guarantee that will fix anything, and may possibly make it impossible to recover the current state...

Attachment 1: RFMdiag.png
  15741   Sat Dec 19 20:24:25 2020   gautam   Update   Electronics   WFS hardware install

I installed 4 chassis in the rack 1X2 (characterization on the E-bench was deemed satisfactory, I will upload the analysis later). I ran out of hardware to make power cables so only 2 of them are powered right now (1 32ch AA chassis and 1 WFS head interface). The current limit on the +24V Sorensens was raised to allow for similar margin to the limit with the increased current draw.

Remaining work:

  1. Make 2 more power cables for ISC whitening chassis and quad demod chassis.
  2. Make a 2x 4pin LEMO-->DB9 cable to digitize the FSS and PMC diagnostic channels with the new AA chassis. If RnD cables has a very short turnaround time, might be worth it to give this to them as well.
  3. Connect ADC1 on c1ioo machine to new AA chassis (transfer SCSI cable from existing AA unit to the new one). This will necessarily involve some model changes as well.
  4. Make a short cable to connect 55 MHz output from RFsource box to the LO input on the quad demod chassis.
  5. Install the WFS head on the AS table at a suitable location. Probably will need a focusing lens as well. 
  6. Connect WFS head to the signal processing electronics (the cables were already laid out by Jordan and I).
  7. Make the necessary CDS model changes (WFS filters, matrices, servos etc). I personally don't see the need for a new model but if anyone feels strongly about separating the IMC WFS and AS WFS we can set up another model.
  8. Commission the system.

While I definitely bumped various cables, I don't seem to have done any lasting damage to the CDS system (the RFM errors remain of course).

  15743   Mon Dec 21 18:18:03 2020   gautam   Update   CDS   Many model changes

The CDS model change required to get the AS WFS signals into the RTCDS system are rather invasive.

  • We use VCS for these models. Linus Torvalds may question my taste but I also made local backups of the models, just in case...
  • Particularly, the ADC1 card on c1ioo is completely reconfigured.
  • I also think it's more natural to do all the ASC computations on this machine rather than the c1lsc machine (it is currently done in the c1ass model). So there are many IPC changes as well.
  • I have documented everything carefully, and the compile/install went smoothly.
  • Taking down all the FE servers at 1830 local time
    1. To propagate the model changes
    2. To make a hardware change in the c1rfm card in the c1ioo machine to configure it as "ROGUE MASTER 0"
    3. To clear the RFM errors we are currently suffering from will require a model reboot anyways.
  • Recovery was completed by 1930 - the RFM errors are also cleared, and now we have a "ROGUE MASTER 👾" on the network. Pretty smooth, no major issues with the CDS part of the procedure to report.
  • The main issue is that in the AA chassis I built, Ch14 (with the first channel as Ch1) has the output saturated to 28V (differential). I'm not sure what kind of overvoltage protection the ADC has - we frequently have the inputs exceed the spec'd +/-20 V (e.g. when the whitening filters are engaged and the cavity is fringing), but pending further investigation, I am removing the SCSI connection from the rear of the AA chassis.

In terms of computational load, the c1ioo model seems to be able to handle the extra load with no issues - ~35 us out of the available 60 us per cycle. The RFM model shows no extra computational time.

After this work, the IMC locking and POX/POY locking, and dither alignment servos are working okay. So I have some confidence that my invasive work hasn't completely destroyed everything. There is some hardware around the rear of 1X2 that I will clear tomorrow.

Attachment 1: CDSoverview.png
  15744   Tue Dec 22 22:11:37 2020 gautamUpdateCDSAA filt repaired and reinstalled

Koji fixed the problematic channel - the issue was a bad solder joint on the input resistors to the THS4131. The board was re-installed. I also made a custom 2x4-pin LEMO-->DB9 cable, so we are now recording the PMC and FSS ERR/CTRL channel diagnostics again (spectra tomorrow). Note that Ch32 is recording some sort of DuoTone signal and so is not usable. This is due to a misconfiguration - ADC0 CH31 is the one which is supposed to be reserved for this timing signal, and not ADC1 as we currently have. When we swap the c1ioo hosts, we should fix this issue.

I also did most of the work to make the MEDM screens for the revised ASC topology, trying to mirror the site screens where possible. The overview screen remains to be done. I also loaded the anti-whitening filters (z:p 150:15) at the demodulated WFS input signal entry points. We don't have remote whitening switching capability at this time, so I'll test the switching manually at some point.
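For reference, a minimal sketch (Python/scipy, not the actual Foton filter file) of what such an anti-whitening stage looks like, assuming the analog whitening it is meant to undo is a single 15 Hz zero / 150 Hz pole stage:

  import numpy as np
  from scipy import signal

  # Anti-whitening: zero at 150 Hz, pole at 15 Hz, unity gain at DC
  antiwhite = signal.ZerosPolesGain([-2*np.pi*150], [-2*np.pi*15], 15.0/150.0)
  # Assumed analog whitening: zero at 15 Hz, pole at 150 Hz, unity gain at DC
  whiten    = signal.ZerosPolesGain([-2*np.pi*15],  [-2*np.pi*150], 150.0/15.0)

  f = np.logspace(0, 4, 500)                # 1 Hz - 10 kHz
  w = 2 * np.pi * f
  _, aw = signal.freqresp(antiwhite, w)
  _, wh = signal.freqresp(whiten, w)

  # If the digital filter undoes the analog stage, the product is flat (0 dB)
  flatness_dB = 20 * np.log10(np.abs(aw * wh))
  print("max deviation from flat: %.2e dB" % np.max(np.abs(flatness_dB)))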

Quote:

The main issue is that in the AA chassis I built, Ch14 (with the first channel as Ch1) has the output saturated to 28V (differential). I'm not sure what kind of overvoltage protection the ADC has - we frequently have the inputs exceed the spec'd +/-20 V (e.g. when the whitening filters are engaged and the cavity is fringing), but pending further investigation, I am removing the SCSI connection from the rear of the AA chassis.

  15745   Wed Dec 23 10:13:08 2020 gautamUpdateCDSNear term upgrades

Summary:

  1. There appear to be an insufficient number of PCIe slots in the new Supermicro servers that were bought for the BHD upgrade.
  2. Modulo a "riser card", we have all the parts in hand to put one of the end machines on the Dolphin network. If the Rogue Master doesn't improve the situation, we should consider installing a Dolphin card in the c1iscex server and connecting it to the Dolphin network at the next opportunity. 

Details:

Last night, I briefly spoke with Koji about the CDS upgrade plan. This is a follow up.

Each server needs a minimum of two peripheral devices added to the PCIe bus:

  • A PCIe interface card that connects the server to the Expansion Chassis (copper or optical fiber depending on distance between the two).
  • A Dolphin or RFM card that makes the IPC interconnects. 
  • I'm pretty certain the expansion chassis cannot support the Dolphin / RFM card (it's only meant to be for ADCs/DACs/BIO cards). At least, all the existing servers in the 40m have at least 2 PCIe cards installed, and I think we have enough to worry about without trying to engineer a hack.
  • I attach some photos of new and old Supermicro servers to illustrate my point, see Attachment #1

As for the second issue, the main question is whether there is an open PCIe slot on the c1iscex machine to install a Dolphin card. Looks like the 2 standard slots are taken (see Attachment #1), but a "low profile" slot is available. I can't find what the exact models of the Supermicro servers installed back in 2010 are, but maybe it's this one? It's a good match visually anyways. The manual says a "riser card" is required. I don't know if such a riser is already installed. 

Questions I have, Rolf is probably the best person to answer:

  1. Can we even use the specified host adaptor, HIB25-X4, which is "PCIe Gen2", with the "PCIe Gen3" slots on the new Supermicro servers we bought? Anyway, the fact that the new servers have only 1 PCIe slot probably makes this a moot point.
  2. Can we overcome this slot limitation by installing the Dolphin / RFM card in the expansion chassis?
  3. In the short run (i.e. if it is much faster than the full CDS shipment we are going to receive), can we get (from the CDS test stand in Downs or the site) 
    • A riser card so that we may install the Gen 1 Dolphin card (which we have in hand) in the c1iscex server?
    • A compatible (not sure what PCIe generation we need) PCIe host to ECA kit so we can test out the replacement for the Sun Microsystems c1ioo server?
    • A spare RFM card (VMIC 5565, also for the above purpose). 
  4. What sort of a test should we run to test the new Dolphin connection? Make a "null channel" differencing the same signal (e.g. TRX) sent via RFM and Dolphin? Or is there some better checksum diagnostic?
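To make question 4 concrete, here is one way the "null channel" test could be scripted, assuming both copies of the signal are recorded as DQ channels; the NDS server/port and channel names below are hypothetical placeholders, not real 40m channels:

  import numpy as np
  import nds2

  # Hypothetical NDS server and channel names - substitute the real ones
  conn = nds2.connection('fb', 8088)
  chans = ['C1:SCX-TRX_VIA_RFM_OUT_DQ', 'C1:SCX-TRX_VIA_DOLPHIN_OUT_DQ']
  gps_start, gps_stop = 1294700000, 1294700010

  bufs = conn.fetch(gps_start, gps_stop, chans)
  rfm, dolphin = bufs[0].data, bufs[1].data

  # The "null channel": identically zero (or a fixed one-cycle offset) if both
  # transports deliver the same data; anything else flags a transport problem.
  null = rfm - dolphin
  print("max |RFM - Dolphin| = %g counts" % np.max(np.abs(null)))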
Attachment 1: IMG_0020.pdf
  15746   Wed Dec 23 23:06:45 2020 gautamConfigurationCDSUpdated CDS upgrade plan
  1. The diagram should clearly show the host machines and the expansion chassis and the interconnects between them.
  2. We no longer have any Gentoo bootserver or diskless FEs.
  3. The "c1lsc" host is in 1X4 not 1Y3.
  4. The connection between c1lsc and Dolphin switch is copper not fiber. I don't know how many Gbps it is. But if the switch is 10 Gbps, are they really selling interface cables that have lower speed? The datasheet says 10 Gbps.
  5. The control room workstations - Debian 10 (rossa) is the way forward, I believe. It is true pianosa remains SL7 (and we should continue to keep it so until all other machines have been upgraded and tested on Debian 10).
  6. There is no "IOO/OAF". The host is called "c1ioo".
  7. The interconnect between Dolphin switch and c1ioo host is via fiber not copper.
  8. It'd be good to have an accurate diagram of the current situation as well (with the RFM network).
  9. I'm not sure if the 1Y1 rack can accommodate 2 FEs and 2 expansion chassis. Maybe if we clear everything else there out...
  10. There are 2 "2GB/s" Copper traces. I think the legend should make clear what's going on - i.e. which cables are ethernet (Cat 6? Cat 5? What's the speed limitation? The cable? Or the switch?), which are PCIe cables etc etc. 

I don't have omnigraffle - what about uploading the source doc in a format that the excellent (and free) draw.io can handle? I think we can do a much better job of making this diagram reflect reality. There should also be a corresponding diagram for the Acromag system (but that doesn't have to be tied to this task). Megatron (scripts machine) and nodus should be added to that diagram as well.

Please send me any omissions or corrections to the layout.

  15748   Wed Jan 6 15:28:04 2021 gautamUpdateVACVac rack UPS batteries replaced

[chub, gautam]

The replacement was done this afternoon. The red "Replace Battery" indicator is no longer on.

  15749   Wed Jan 6 16:18:38 2021 gautamUpdateOptical LeversBS Oplev glitchy

As part of the hunt for why the X arm IR transmission RIN is anomalously high, I noticed that the BS Oplev servo periodically kicks the optic around - the summary pages are the best illustration of this happening. Looking back in time, these glitches seem to have started ~Nov 23 2020. The HeNe power output has been degrading, see Attachment #1, but this is not yet at the point where the head usually needs replacing. The RIN spectrum doesn't look anomalous to me, see Attachment #2 (the whitening situation for the quadrants is different for the BS and the TMs, which explains the HF noise). I also measured the loop UGFs (using swept sine) - seems funky, I can't get the same coherence now (live traces) between 10-30 Hz that I could before (reference traces) with the same drive amplitude, and the TF that I do measure has a weird flattening out at higher frequencies that I can't explain, see Attachment #3.

The excess RIN is almost exactly in the band in which we expect our Oplevs to stabilize the angular motion of the optics, so this maybe needs more investigation - I will upload the loop suppression of the error point later. So far, I don't see any clean evidence of the BS Oplev HeNe being the culprit, so I'm a bit hesitant to just swap out the head...

Attachment 1: missingData.png
Attachment 2: OLRIN.pdf
Attachment 3: BS_OL_P.pdf
Attachment 4: BS_OL_suppression.pdf
  15750   Wed Jan 6 19:00:04 2021 gautamUpdateLSCPhase loss in POX/POY loops

I've noticed that there is some phase loss in the POX/POY locking loops - see Attachment #1; live traces are from a recent measurement while the references are from Nov 4 2018. It's hard to imagine a true delay being responsible for so much phase loss at 100 Hz. Attachment #2 shows my best-effort loop modeling; I think I've got all the pieces, but maybe I missed something (I assume the analog whitening / digital anti-whitening are perfectly balanced - anyway, this wasn't messed with anytime recently)? The fitter wants to add 560 us (!) of delay, which is almost 10 clock cycles on the RTS, and even so, the fit is poor (I constrain the fitter to a maximum of 600 us delay, so maybe this isn't the best diagnostic). Anyway, how can this change be explained? The recent works I can think of that could have affected the LSC sensing were (i) the RF source box re-working, and (ii) the vent. But I can't imagine how either of these would introduce phase loss in the LSC sensing. Note that the digital demod phase has been tuned to put all the PDH signal in the "I" quadrature, which is the condition in which the measurement was taken.
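For scale, a quick back-of-the-envelope sketch of how much phase a pure delay would cost at 100 Hz - comparing one 16384 Hz RTS clock cycle against the 560 us the fitter wants:

  # Phase lag of a pure time delay: phi = 360 * f * tau (degrees)
  f = 100.0                      # Hz
  for tau in (61e-6, 560e-6):    # one 16384 Hz clock cycle, and the fitted delay
      print("tau = %5.0f us -> %.1f deg of phase at %g Hz" % (tau*1e6, 360*f*tau, f))
  # one clock cycle costs only ~2 deg at 100 Hz; 560 us costs ~20 deg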

Probably this isn't gonna affect locking efforts (unless it's symptomatic of some other larger problem).

Attachment 1: POXloop.pdf
Attachment 2: loopFit.pdf
  15751   Wed Jan 6 22:47:41 2021 gautamUpdateALSNoisy ALS

Summary:

I want to get back to locking the interferometer so I can test out the newly installed AS WFS. However, the ALS noise is far too high - at least, the transition of arm length control from IR to ALS consistently fails with the same settings that worked reliably before. I worked on investigating it a bit today.

Timeline

In the latter half of last year, I was focused on the air-BHD setup, so I wasn't checking in on the ALS noise as regularly. 

  1. On Aug 17, the noise was fine.
  2. But on Oct 29, the noise is bad (and it continues to remain so, to the point where I cannot lock the interferometer). 
  3. Koji and Anchal confirmed nothing was touched while they were investigating the ALS system, also on Oct 29. The spectra attached in #15650 don't make any sense to me - the noise at 100 Hz cannot be < 100 mHz/rtHz. So, inconclusive.

Excess noise:

All tests are done with the arm cavity length locked to the PSL frequency using POX. Then, the EX laser is locked to the arm cavity length using the AUX PDH servo. The fluctuations in the beatnote between the two lasers are what is monitored as a diagnostic. See Attachment #1. The reference traces in the top pane are from a known good time. The large excess noise between ~80-200 Hz is what I'm concerned about.

A separate task that could also improve the noise is tracking down the noise in the 20-80 Hz band - probably some IMC frequency noise issue.

Noise budget:

See Attachment #2

  • I am pretty confident the electronics after the beat mouth are not to blame - I injected a 50 MHz signal from a Marconi and adjusted the signal amplitude to mimic what we get from the beat mouth. The trace labelled "DFD electronics noise" is the noise in this config.
  • The unsuppressed AUX frequency noise was measured with an SR785 (converted to frequency noise units knowing the PDH horn-to-horn voltage and the cavity linewidth; a sketch of the conversion follows this list). I didn't confirm the sensing noise level (dark noise of the AUX PDH loop), but I figure that at 100 Hz (voltage noise of ~100 uV/rtHz on the SR785), we are above the sensing noise level, and so are truly measuring the in-loop frequency noise of the stabilized AUX laser. I also confirmed that the loop UGF was ~10 kHz and the phase margin was ~60 degrees, which are nominal numbers.
  • The fact that the excess noise is only in the X arm channel means the PSL frequency is not to blame.
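As referenced above, a minimal sketch of the voltage-to-frequency-noise conversion used for the unsuppressed AUX noise trace; the discriminant numbers below are placeholders, not the measured values:

  # Convert a PDH error-point voltage ASD to frequency noise via the discriminant
  v_pp      = 1.0      # V, horn-to-horn PDH signal (placeholder)
  linewidth = 20e3     # Hz, cavity FWHM seen by the AUX laser (placeholder)
  slope     = v_pp / linewidth       # V/Hz, approximate PDH discriminant

  v_noise = 100e-6                   # V/rtHz, SR785 level at 100 Hz (from above)
  print("frequency noise ~ %.1f Hz/rtHz at 100 Hz" % (v_noise / slope))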

So what could it be? The only things I can think of are (i) the beat mouth photodiode (NF1611) or (ii) excess noise in the fiber carrying the light from EX to the PSL table (but only on this fiber, and not on the EY fiber). Both seem remote to me - I'll test the former by switching the EX and EY fiber inputs to the beat mouth, but apart from this, I'm out of ideas... 

To avoid this kind of issue, we should really have scripted locks of all the basic IFO configs and record the data to summary pages or something - maybe something to do once Guardian is installed; it'd be hard to do cleanly with shell scripts.

Attachment 1: ALSX_excess.png
Attachment 2: budget.pdf
  15752   Thu Jan 7 19:16:11 2021 gautamUpdateALSNoisy ALS

I'm also wondering why the error monitors for the X and Y loops report such wildly different spectra for the suppressed frequency noise of the AUX laser relative to the cavity length, see Attachment #1. The y-axis should be approximately Hz/rtHz. In both cases, the servo's error-point monitor is connected to the DAQ system via a G=10 SR560. With the SR785, I measure for EX a nice bucket-shaped spectrum, bottoming out at ~10 uV/rtHz around 40 Hz, see Attachment #2. The SR560 should have an input-referred noise much less than this at the G=10 setting. The ADC noise level is only ~5 uV/rtHz, and indeed, the EY spectrum shows the expected shape. So what's up with the EX error mon? I tried swapping out the SR560 for a different unit - no change. And both the SR560 noise and the ADC noise check out when everything is checked individually. So there is some kind of interaction once everything is connected together, but it's only present at EX...
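For reference, a rough estimate of the floor one would expect from this chain; the SR560 input noise value is a placeholder from the datasheet range, not a measurement:

  import numpy as np

  adc_noise   = 5e-6    # V/rtHz at the ADC input (from above)
  sr560_gain  = 10.0
  sr560_noise = 4e-9    # V/rtHz input-referred, placeholder from the datasheet

  floor = np.sqrt((adc_noise / sr560_gain)**2 + sr560_noise**2)
  print("expected input-referred floor ~ %.2g V/rtHz" % floor)
  # ~0.5 uV/rtHz - well below the ~10 uV/rtHz bucket measured with the SR785 at EX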

Today, I tried switching the EX and EY fibers going into the beat mouth, but I preserved the channel mapping after the beat mouth by switching the electrical outputs as well (the goal was to make sure that the beat photodiodes weren't the issue here; I think the electronics are already exonerated since driving the channel with a Marconi doesn't produce these noisy features). The EX spectrum remains noisy. I've switched everything back to the nominal configuration now to avoid further confusion. So it would appear that this is real frequency noise that gets added in the EX fiber somehow. What can I do to fix this? The source of coupling isn't at the PSL table, else the EY channel would also show similar features. Visually, nothing seems wrong to me at EX either. So the problem is somehow in the cable tray along which the 40m of fiber is routed? This is already inside some nice foam/tubing setup, what can be done to improve it? Still doesn't explain why it suddenly became noisy...

Attachment 1: ALS_ERR_MON.pdf
Attachment 2: AUXnoise.pdf
  15754   Thu Jan 7 21:16:22 2021 gautamUpdateALSNoisy ALS

I thought about it, but wouldn't that show up at the AUX PDH error point? Or is the loop gain so high there that we wouldn't see a small excess? I suppose there could be some clipping on the Faraday or something like that. But the GTRX level and the green REFL DC level when locked are nominal.

Quote:

How about resurrecting the PSL table green beat for the X arm to see if the non-fiber setup shows the same level of the freq noise (e.g. the PDH locking became super noisy due to misalignment etc).

  15756   Fri Jan 8 20:01:11 2021 gautamUpdateALSNoisy ALS

I did this test today. The excess noise around 100 Hz doesn't show up in the green beat.

See Attachment #1. The setup was as usual:

  • X-Arm cavity length stabilized to PSL frequency using the POX locking loop.
  • EX laser frequency locked to the X-Arm cavity length using the AUX PDH loop.
  • The "BEATX" channel records frequency fluctuations in the beat sensed on the IR beat photodiode, while the "BEATY" channel records frequency fluctuations in the beat sensed on the Green beat photodiode.
  • Since the green beat frequency fluctuations are twice those of the IR beat, I scaled the former ASD by a factor of 0.5 so as to compare apples to apples (see the sketch after this list).
  • At low frequencies, the green beat is noisier, but that channel doesn't show the excess noise at mid frequencies you see in the IR beat. So the AUX PDH sensing noise is not to blame I think.
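As mentioned in the list above, a minimal sketch of the comparison; the channel data, names and sample rate are placeholders rather than the actual 40m channels:

  import numpy as np
  from scipy.signal import welch

  fs = 16384.0                         # Hz, assumed sample rate
  # ir, grn = beat frequency time series (placeholders; fetch the real channels)
  ir  = np.random.randn(int(60 * fs))
  grn = np.random.randn(int(60 * fs))

  f, p_ir  = welch(ir,  fs=fs, nperseg=int(8 * fs))
  _, p_grn = welch(grn, fs=fs, nperseg=int(8 * fs))

  asd_ir  = np.sqrt(p_ir)
  asd_grn = 0.5 * np.sqrt(p_grn)       # green beat counts twice the IR frequency
  # plot asd_ir and asd_grn on the same loglog axes to compare apples to apples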

So, this excess appears to truly be excess phase noise on the fiber (though I have no idea what the actual mechanism could be, or what changed between Aug and Oct 2020 that could explain it - maybe the HEPA?).

For this work, I had to spend some time aligning the two green beams onto the beat photodiode. During this time, I shuttered the PSL, disabled feedback via the FSS servo, turned the HEPA high, and kept the EX green locked to the arm so as to have a somewhat stable beat signal I could maximize. Everything has been returned to nominal settings now (obviously, since I locked the arms to get the data).


You may ask why we care. In terms of RMS frequency noise, it would appear that this excess shouldn't matter. But in all my trials so far, I've been unable to transition control of the arm cavity lengths from POX/POY to ALS. I suppose we could try using the green beat, but that has excess low frequency noise (avoiding which was the whole point of the fiber-coupled setup). 
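For completeness, the kind of calculation behind the "shouldn't matter in RMS terms" statement - a minimal sketch that integrates an ASD from the top down, using toy numbers rather than the measured spectrum:

  import numpy as np

  def cumulative_rms(freq, asd):
      # RMS accumulated from the highest frequency down to each bin
      psd = asd**2
      contrib = 0.5 * (psd[1:] + psd[:-1]) * np.diff(freq)   # trapezoid segments
      return np.sqrt(np.concatenate(([0.0], np.cumsum(contrib[::-1])))[::-1])

  # Toy spectrum: 1 Hz/rtHz floor plus a bump around 100 Hz
  f = np.logspace(0, 3, 1000)
  asd = 1.0 + 50.0 * np.exp(-0.5 * ((f - 100.0) / 20.0)**2)
  print("total RMS = %.1f Hz" % cumulative_rms(f, asd)[0])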

Quote:

How about resurrecting the PSL table green beat for the X arm to see if the non-fiber setup shows the same level of the freq noise (e.g. the PDH locking became super noisy due to misalignment etc).

Attachment 1: ALSX_IR_green.pdf
  15761   Tue Jan 12 11:42:38 2021 gautamHowToCDSAcromag wiring investigation

Thanks for the systematic effort.

  1. Can you please post some time domain plots (ndscope preferably, or StripTool) to clearly show the different failure modes?
  2. The majority of our AI channels are "Referenced Single Ended Source" in your terminology. At least on the c1psl Acromag crate, there are no channels that are truly differential drive (case #3 in your terminology). I think the point is that we saw noisy inputs when the IN- wasn't connected to RTN. e.g. the thorlabs photodiode has a BNC output with the shield connected to GND and the central conductor carrying a signal, and that channel was noisy when the RTN was unconnected. Is that consistent with your findings?
  3. What is the prescription when we have multiple power supplies (mixture of Sorensens in multiple racks, Thorlabs photodiodes and other devices powered by an AC/DC converter) involved?
  4. I'm still not entirely convinced of what the solution is, or that this is the whole picture. On 8 Jan, I disconnected (and then re-connected) the FSS RMTEMP sensor from the Acromag box, to check if the sensor output was noisy or if it was the Acromag. The problem surfaced on Dec 15, when I installed some new electronics in the rack (though none of them were connected to the Acromag directly; the only common point was the Sorensen DCPS). And between 8 Jan and today, the noise RMS has decreased back to the nominal level, without me doing anything to the grounding. How to reconcile this?
  15763   Thu Jan 14 11:46:20 2021 gautamUpdateCDSExpansion chassis from LHO

I picked the boxes up this morning. The inventory per Fil's email looks accurate. Some comments:

  1. They shipped the chassis and mounting parts (we should still get rails to mount these on; they're pretty heavy to just be supported on 4 rack nuts on the front). idk if we still need the two empty chassis that were requested from Rich.
  2. Regarding the fibers - one of the fibers is pre-2012. These are known to fail (according to Rolf). One of the two that LHO shipped is from 2012 (judging by the S/N; I can't find an online lookup for the serial number), the other is from 2011. IIRC, Rolf offered us some fibers, so we may want to take him up on that. We may also be able to use copper cables if the distances b/w the server and expansion chassis are short.
  3. The units are fitted with a +24V DC input power connector and not the AC power supplies that we have on all the rest of the chassis. This is probably just going to be a matter of convenience, whether we want to stick to this scheme or revert to the AC input adaptor we have on all the other units. idk what the current draw will be from the Sorensen - I tested that the boards get power, and with no ADCs/DACs/BIOs, the chassis draws ~1A (read off from the DCPS display, not measured with a DMM). ~Half of this is for the cooling fans. It seems like KT @ LLO has offered to ship AC power supplies, so maybe we want to take them up on that offer.
  4. Without the host side OSSI PCIe card, timing interface board, and supermicro servers that actually have enough PCIe slots, we still can't actually run any meaningful test. I ran just a basic diagnostic that the chassis can be powered on, and the indicator LEDs and cooling fans run.
  5. Some photos of the contents are here. The units are stored along the east arm pending installation.

    >     Koji,
    >
    >     Barebones on this order.
    >
    >        1. Main PCIe board
    >        2. Backplane (Interface board)
    >        3. Power Board
    >        4. Fiber (One Stop) Interface Card, chassis side only
    >        5. Two One Stop Fibers
    >        6. No Timing Interface
    >        7. No Binary Cards.
    >        8. No ADC or DAC cards
    >
    >     Fil Clara
    >       
  15765   Thu Jan 14 12:32:28 2021 gautamUpdateCDSRogue master may be doing something good?

I think the "Rogue Master" setting on the RFM network may be doing some good. 5 mins ago, all the CDS indicators were green, but I noticed an amber light on the c1rfm screen just now (amber = warning). Seems like at GPS time 1294691182, there was some kind of error on the RFM network. But the network hasn't gone down. I can clear the amber flag by running the global diag reset. Nevertheless, the upgrade of all RT systems to Dolphin should not be de-prioritized, I think.

Attachment 1: Screenshot_2021-01-14_12-35-52.png
  15767   Fri Jan 15 16:54:57 2021 gautamUpdateCDSExpansion chassis from LHO

Can you please provide a link to this "list of boards"? The only document I can find is T1800302. In that, under "Basic Requirements" (before considering specific motherboards), it is specified that the processor should be clocked at >3 GHz. The 3 new Supermicros we have are clocked at 1.7 GHz. X10SRi-F boards are used according to that doc, but with the processor clocked at 3.6 or 3.2 GHz.

Please also confirm that there are no conflicts w.r.t. the generation of PCIe slots and the interfaces (Dolphin, OSSI) we are planning to use - the new machines we have are "PCIe 2.0" (though I have no idea if this is the same as Gen 2). 

Quote:

The motherboard actually has six PCIe slots and is on the CDS list of boards known to be compatible.

As for the CX4 cable - I still think it's good to have these on hand. Not good to be in a situation later where FE and expansion chassis have to be in different racks, and the copper cable can't be used.

Attachment 1: Screenshot_2021-01-15_17-00-06.png
  15768   Fri Jan 15 17:04:45 2021 gautamUpdateLSCMessed up LSC sensing

I want to lock the PRFPMI again (to commission the AS WFS). I have had some success - but in doing characterization, I find that the REFL port sensing is completely messed up compared to what I had before. Specifically, the MICH and PRCL DoFs have no separation in either the 1f or 3f photodiodes. 

  • A sensing line driven in PRCL doesn't show up in the AS55 photodiode signal - this is good and as expected.
  • For MICH - I set the MICH-->PRM actuation matrix element so as to minimize the height of the peak at the MICH drive frequency that shows up at the PRCL error point. My memory is that I used to be able to pretty much null this signal in the past, but I can't find a DTT spectrum in the elog as evidence. Anyway, the best-effort nulling I can achieve now still results in a large peak at the PRCL error point. Since the sensing matrix doesn't actually make any sense, idk if it is meaningful to even try to calibrate the above qualitative statement into quantitative numbers of cross-coupling in meters.
  • With the PRMI locked on 1f error signals (ETMs misaligned, PRCL sensed with REFL11_I, MICH sensed with AS55_Q), I tried tweaking the digital demod phase of the REFL33 and REFL165 signals. But I find that the MICH and PRCL peaks move in unison as I tweak the demod phase. This suggests to me that both signals are arriving optically in phase at the photodiode, which is weird (see the illustration after this list).
  • The phenomenon is seen also in the REFL11 signal.
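As referenced in the list above, a small illustration (toy numbers, not the measured sensing matrix) of why two signals that arrive optically in phase cannot be separated by any choice of digital demod phase:

  import numpy as np

  def rotate(I, Q, phi_deg):
      # apply a digital demod phase rotation to an (I, Q) pair
      phi = np.deg2rad(phi_deg)
      return I*np.cos(phi) + Q*np.sin(phi), -I*np.sin(phi) + Q*np.cos(phi)

  # Toy case: PRCL and MICH signals landing in the same optical quadrature
  prcl = (1.0, 0.0)    # (I, Q) amplitudes, arbitrary units
  mich = (0.3, 0.0)

  for phi in (0, 30, 60, 85):
      pI, _ = rotate(*prcl, phi)
      mI, _ = rotate(*mich, phi)
      print("phi = %2d deg: PRCL_I = %+.3f, MICH_I = %+.3f, ratio = %.2f"
            % (phi, pI, mI, mI / pI))
  # The ratio never changes - no demod phase separates the two DoFs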

I did make considerable changes to the RF source box, so the relative phase between the 11 MHz and 55 MHz signals is now different from what it was before. But do we really expect any effect even in the 1f signal? I am not able to reproduce this effect in simulation (Finesse), though I'm using a simplified model. I attach two sensing matrices to illustrate what I mean:

  1. Attachment #1 is in the PRFPMI state, with the IFO on RF control (CARM on REFL11, PRCL on REFL165_I, MICH on REFL165_Q, DARM on AS55_Q). 
  2. Attachment #2 is between the transition to RF control (CARM and DARM on ALS, PRCL on REFL165_I, MICH on REFL165_Q). The CARM offset is ~4nm (c.f. the linewidth of ~20pm), so the circulating power in the arm cavities is low.
Attachment 1: PRFPMI_Jan12sensMat.pdf
Attachment 2: PRMI3f_ALS_Jan11_largeOffsetsensMat.pdf