14686   Fri Jun 21 19:36:26 2019   Koji   Update   IOO   IMC diagnostics

The IMC REFL error signal was measured to compare it with the other spectra (if we have any).

The blue curve is the in-loop IMC error and the red is the dark noise, so they are not an apples-to-apples comparison. But the red noise would also be suppressed by the loop, and even so the red is below the blue. This means that the blue curve is measured noise rather than readout noise.

We suspect that the current issue is the PC drive saturation (as usual). Does this indicate that the laser frequency noise has actually increased?

----

Another suspect was degradation of the LO level. We used to have the issue of slowly dying ERA-5 amplifiers (ERA-5SM, to be precise). The RF levels on the demod board were measured using an active probe.

The LO input: 0 dBm; ERA-5 inputs: -2.7 dBm and -2.1 dBm for I and Q. I found that the outputs of the ERA-5SMs were +10.5 dBm and +10.6 dBm.
This led me to replace the chips, but the situation did not change. Then I realized that the LO levels should have been measured with the mixers replaced by a 50 Ohm load. Somehow these mixers lower the apparent LO levels, so I decided to call this OK.

Attachment 1: IMC_error.pdf
14687   Sun Jun 23 08:09:53 2019   gautam   Update   IOO   NPRO diagnostics

Summary:

Over the last few days, I've been doing some measurements complementary to what Aaron and Koji have been looking at. The motivation was to identify whether the problems we are seeing are optical (i.e. imprinted on the PSL light) or electronic. My findings:

  1. 60 Hz line noise in PMC REFL and PMC TRANS is heavily dependent on whether I connect cables between the measuring PDs and the Acromag ADC or not - but even with the Acromag cable disconnected, the 60 Hz RIN is HUGE - 10 mVpp out of 670 mV DC - and the lines are much dirtier if you have connections to the SLOW ADCs. The measurement was made by looking at the time-domain signals on a battery-powered Tektronix oscilloscope. See Attachment #1. I believe this line noise is higher than it was before. The cause is unknown to me at this point.
  2. The NPRO noise eater seems to function as advertised. The measured RIN with the noise eater enabled (our nominal operating condition) is in line with what the manual tells us it should be. See Attachment #2.
  3. There isn't strong evidence of excess frequency noise (measured with PLL) out to 100 kHz. I didn't measure the high-frequency part yet, but maybe I'm doing something wrong with the PLL setup which should be first corrected. See Attachments #3, #4.
  4. The beat note frequency between the free-running PSL and EX NPROs is definitely slewing more than the quadrature sum of the advertised 1 MHz/min slewing per the manual.

Evidence:

Attachment #1: Time domain look at PMC Refl and Trans signals under various operating conditions. During this work, I took the chance to remove ~4 BNC T connectors that were connected on the PMC TRANS photodiode (Thorlabs). Now, there is one cable going to the Acromag ADC, and one going to the Oscilloscope used to monitor these signals. Any further T-ing can be done at the oscilloscope.

Attachment #2: RIN measurement of the NPRO light. I opted to place a Thorlabs PDA55 in the IR ALS pickoff light path. This is before the light sees the PMC. A DC block was inserted between the PDA55 and the AG4395 used to make the measurement. DC level of the PD output was 3.1 V into high-Z and I used half this value to normalize the measurement made by the 50-ohm input AG4395 into RIN units. The measurement was made with the PZT and slow temperature controls to the NPRO connected/disconnected, but I saw no significant difference. 

Attachment #3: Frequency noise measurement via PLL. This shows the loop transfer function for the PLL. Some details of the setup:

  • The beat note for locking the PLL was made between the PSL NPRO and the EX NPRO (output of the IR ALS BeatMouth). ~4dBm beatnote.
  • Local oscillator was sourced by a Marconi, f_carrier=33 MHz, RF level = +10dBm.
  • Level 7 Mixer and LB1005 controller from the mode-spectroscopy PLL setup.
  • PLL control signal routed to EX NPRO PZT via Heliax cable running along south arm. 
  • Why EX and not the PSL or Marconi FM? The latter has limited range, ~1/10th of that offered by the NPRO PZT. The PSL PZT has a 2.9 Hz corner-frequency Pomona box. I could disconnect this for the purpose of PLL locking, but I thought it may be interesting to see if there are any hints of the problem being electrical by looking at PLL spectra with/without the Pomona box. The expected delay due to cabling is only 400 ns, so not really a limiting factor for the PLL bandwidth.
  • LB 1005 settings:
    • PI corner = 3 kHz.
    • G = 2.30 (I could not increase this further - with the PSL+Lightwave NPRO PLL we could achieve a UGF of ~60 kHz, but in this setup I can't do much better than ~7 kHz before the loop starts oscillating; I'm not sure whether the fact that the PZT actuation coefficient for the Innolight is ~5x lower than for the Lightwave is enough to explain this).
    • LFGL = 90 dB.
  • Mixer output had a maximum value of 800 mVpp => PLL discriminant is 400 mV/rad.
  • The "eye fit" is just the transfer function of two poles at DC (one for the frequency-to-phase conversion in the PLL and one for the LB1005 integrator) and a zero at 3 kHz (the PI corner). I scaled the gain until the "fit" and the measurement lined up, and then used this model to undo the loop suppression of the error signal to extract the frequency noise without being limited by the frequency vector of the measurement. A numerical sketch of this model is given after this list.
  • Once again, slow temperature control and PZT controls to the PSL NPRO were disconnected so this measurement was made with two free-running NPROs.
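
Since the "eye fit" model and the loop unsuppression step are easy to get wrong, here is a minimal numerical sketch of that procedure (assuming numpy; the error-signal ASD below is a placeholder, and the UGF, PI corner, and 400 mV/rad discriminant are just the numbers quoted above, not fitted values):

import numpy as np

f = np.logspace(1, 5, 1000)          # frequency vector [Hz]
f_z = 3e3                            # LB1005 PI corner [Hz]
f_ugf = 7e3                          # approximate unity gain frequency [Hz]

# "Eye fit" model: two poles at DC (PLL frequency-to-phase conversion + LB1005
# integrator) and a zero at the PI corner; gain scaled so |G| = 1 at f_ugf.
G = (1 + 1j * f / f_z) / (1j * f)**2
G /= np.abs(np.interp(f_ugf, f, np.abs(G)))

# Undo the loop suppression of the error signal:
# phi_free = (V_err / D_pll) * |1 + G|, and delta_nu(f) = f * phi_free(f).
D_pll = 0.4                          # PLL discriminant [V/rad]
V_err_asd = 1e-6 * np.ones_like(f)   # placeholder error-signal ASD [V/rtHz]
nu_asd = (V_err_asd / D_pll) * np.abs(1 + G) * f   # frequency noise [Hz/rtHz]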

Attachment #4: Frequency noise measurement via PLL. This shows the frequency noise. I've overlaid the expected frequency noise between 2 free-running NPROs, model used is in the text box in the plot. There isn't strong evidence of excess high frequency noise in this measurement. The fact that the "LB 1005 input terminated" trace is below all the others supports the hypothesis that I'm measuring real frequency noise. The bump around a few kHz could indicate some gain peaking?

However, I'm unable to find good agreement between the frequency noise measured using the error point and that measured using the control point. For the former, I used the PLL discriminant of 400 mV/rad mentioned above and undid the loop suppression; for the latter, I used a PZT discriminant of 1.7 MHz/V. There is still a constant scale difference between the two traces, so I must be doing something wrong somewhere.

Next steps:

  1. More interpretation of the PLL measurement results required.
  2. Measure the PLL error signal spectrum to higher frequencies using the AG4395. 
  3. ???

I've not disturbed the PLL setup in case anyone else wants to repeat these measurements, but I have restored the normal electrical connections to the PSL PZT and temperature control.

Some other activity:

  1. Alignment into the PMC was tweaked.
  2. NPRO laser pump current was increased from 1.9 A to 2.0 A.
  3. PMC servo gain was changed from +18 to +17 to prevent the servo from oscillating.
Attachment 1: consolidatedOscopeScreenCaps.pdf
Attachment 2: RINcomp.pdf
Attachment 3: PLL_OLTF.pdf
Attachment 4: PLLnoise.pdf
14688   Sun Jun 23 09:36:32 2019   gautam   Update   IOO   IMC is locking normally again

After typing up the elog, I decided to try locking the IMC again - now it locks again with the "OLD" gain settings. I tested it ~5 times, the autolocker brings the lock back and the PC drive levels are normal. IMC transmission and MC REFL DC light levels in lock are normal. The PC Drive RMS voltage is <1V. What's more, there is no longer any evidence of 60 Hz line harmonics any more in the PMC diagnostics channels. Compare attachment #1 to this elog.

WTF.

I undid the changes Koji made to the autolocker gains, and am trying the old settings again. Let's see how stable or otherwise the config is. I must've jiggled some poor cable connection back into a good spot while working on the PSL table?

Anyway, this helps Kruthi and Milind.

Attachment 1: PMCdiag.pdf
14689   Sun Jun 23 14:43:14 2019   Koji   Update   IOO   IMC is locking normally again

Note that I have removed an SR785, an oscilloscope, some SRS instruments from the PSL and PMC last night.

But they (and RF Network Analyzer) were not there when the problem started.

Should we record the IMC error (at the test point monitor) too? If the IMC locks on Monday, I'll do it.

14690   Mon Jun 24 08:12:10 2019   gautam   Update   IOO   IMC is locking normally again

Over the last 24 hours, the IMC autolocker was able to keep the MC locked ~60% of the time. This is not particularly good, but is an improvement on ~2 weeks ago when the IMC couldn't be locked.

There are two periods, which I've indicated by vertical cursors, between which the autolocker was doing something strange - usually this kind of trend is caused by one or more of the VME crates being unresponsive and the autolocker gets stuck, but I confirmed that both c1psl and c1iool0 are telnet-able. So I conclude that the stability and reliability of the IMC loop is still not as good as it used to be.

Note also that while the PC drive RMS level mostly hovers around 1 V, there are several excursions above that level. This in itself isn't a new phenomenon. I will do some more characterization by measuring the in-loop error signal spectrum and maybe the OLTF of the IMC locking loop.

Quote:
 

Let's see how stable or otherwise the config is. I must've jiggled some poor cable connection back into a good spot while working on the PSL table? Or the NPRO decided to be less noisy on Sunday.

Attachment 1: IMCdutycycle.png
14691   Mon Jun 24 11:48:35 2019   gautam   Update   IOO   IMC in-loop error spectra and OLTF

Attachment #1 - In-loop error spectra, measured in the same way as Koji posted at the end of last week.

  • Main difference is that the line noise seems much lower.
  • For the "dark" measurement, I set the IN1 gain of the servo board to the value of +4 dB, which is what it is in lock.
  • As Koji mentioned, this isn't an apples-to-apples comparison, as the IMC loop will squish the plotted orange trace.
  • Nevertheless, the fact that the blue trace is above orange everywhere gives confidence that we are in fact measuring frequency noise.
  • For the higher-frequency measurement, I used the AG4395 analyzer, which has a 50 ohm input impedance. So to get them to line up with the SR785 measurements, I multiplied the AG4395 traces by x2.
  • For the frequency-axis calibration, I used the value of 13 kHz/V for the PDH discriminant, which is what I measured it to be last year (but I didn't check again today). A short sketch of this calibration follows this list.
  • Note that the IMC locking loop OLTF has not been undone, so this isn't the actual laser frequency noise on the transmitted beam. In order to measure the latter, we'd have to use (for example) an arm cavity as an analyzer.
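
For bookkeeping, here is a minimal sketch of the calibration applied to these traces (assuming numpy; the function name is illustrative, and the input arrays are whatever ASDs come off the analyzers):

import numpy as np

def calibrate_imc_error(v_asd, pdh_discriminant_hz_per_v=13e3, fifty_ohm_input=False):
    """Convert an IMC error-signal ASD [V/rtHz] into frequency noise [Hz/rtHz]."""
    # the x2 undoes the 50 Ohm input loading of the AG4395 relative to the
    # high-impedance SR785, as described above
    scale = 2.0 if fifty_ohm_input else 1.0
    return scale * np.asarray(v_asd) * pdh_discriminant_hz_per_v

# e.g. freq_asd = calibrate_imc_error(v_asd_ag4395, fifty_ohm_input=True)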

Attachment #2 - OLTF of the IMC loop.

  • Measurement was made using the IN1/IN2 method, injection was done at the "A EXC" front panel BNC input.
  • For comparison, I've overlaid a measurement from the 2017 IMC loop investigations. Doesn't seem to be significantly different.
  • UGF and phase margin are in the ballpark of what they were reported to be in the past.

Attachment #3 - Photo courtesy Koji showing the bank of BNC connectors used for these measurements.

Clearly, these measurements were taken in a time when the IMC was "well behaved". How to characterize what's happening when this isn't the case?

Attachment 1: IMCfreqNoise.pdf
Attachment 2: IMC_OLTF.pdf
Attachment 3: IMC_CMboard.jpg
14693   Mon Jun 24 15:49:05 2019   aaron   Update   IOO   IMC diagnostics

I went to repeat these measurements using the mixer out channel from the servo box, and with a slower sweep for the PDH calibration.

I had trouble getting the PDH signal, here are some notes:

  • I added a 50 Ohm terminator to the BNC T on the mixer box. This had been terminated before I started, but I noticed no terminator today.
  • Noticed some distortion of my driving triangle wave if I measured it on channels 3 and 4 of the Tektronix scope, not present on channels 1/2.
  • Initially I wasn't finding a signal because I was opening the loop by turning off the Test 1 switch, but this meant the mixer mon on the servo box also did not receive the PDH signal. Instead, I cut the loop with the "BLANK" switch on the PMC screen, which blanks out the op amp between the mixer mon and the PMC drive conditioning (so the external drive still reaches the PZT).

Attachment 1 shows the configuration of the PMC screen when I was trying to get a PDH signal. I did move the DC output adjust to 0 V, but found that this led to the output being railed; this makes sense, since the op amp at U9 has a negative bias at GND.

Rana came by and gave me some tips.

  • I'd been using the wrong servo board diagram, it should be in D1400221
  • We removed the LP filter from the mixer output (before going to FP1TEST on the servo board), since the board itself already is filtering the IF.
  • We might have observed the thermal locking? See for yourself, the trans and refl signals while sweeping the PZT drive at 5 Hz and 30 Hz respectively are in attachments 2 and 3.
  • Rather than using an SR560, I should use an RF coupler between the mixer and FP1TEST to measure the error signal spectrum. I found a ZFDC-20-5-S+ (0.1-2000 MHz) and sent an SMA cable from the coupled port to channel R of the Agilent 4395.

We finally got the PDH signal again, and I recorded the PDH signal while driving with the following settings on the Siglent function generator.

  • 1.1 Hz triangle wave, 6Vpp, -7Vdc offset, high impedance mode

I tried getting a spectrum using the coupler, but the mixer mon sees a DC offset which causes the PZT to rail. I will try to understand why, but in the meantime, removing the coupler (still no LP filter) lets us lock the PMC again.

RXA: Kruthi thinks all of our subsequent IMC locking problems are Aaron's fault (she was quick to give him up as soon as the thumb screws were tightened...)

Attachment 1: sweep_config_updated.png
14696   Tue Jun 25 15:32:16 2019   gautam   Update   IOO   PMC and IMC locked again, some MEDM maintenance

Aaron complained to me earlier that the PMC could not be locked. Turned out to be a classic sticky slider problem, I keyed the c1psl VME crate, and did the usual burtrestore trick. After that, I could immediately lock the PMC and IMC with the nominal gain settings.

I also looked at the wiring at the rack. An SLP250 was installed at the mixer IF output, in parallel with a 50 ohm terminator to ground. I removed this, because as Aaron pointed out, the PMC servo card "FP1 TEST" input is already 50 ohm, and has two cascaded LC filter stages immediately after to filter out the 2f component, so the extra low-pass filtering is superfluous (in any case, 250 MHz is much too high a cutoff to be using for cutting out the 2f component which will be at ~70 MHz).

Finally, in the last ~2 weeks, we have been running with the PMC servo gain of +17 (as opposed to +18 from before). The old gain is too high, as noted by Milind. But the MEDM field for this gain goes RED unless the gain is +18. I adjusted the value of the  C1:PSL-STAT_PMC_NOM_GAIN channel to +17, so that this is no longer the case. I also edited the PMC MEDM screen to get rid of my comment that the "SLOW ADC IS DEAD" for the PMC TRANS field, since I have now hooked up the PMC trans photodiode to our temporary Acromag box.

Attachment 1: PMCctrl.png
14699   Wed Jun 26 10:55:13 2019   aaron   Update   IOO   PMC and IMC locked again, some MEDM maintenance

The PMC was locking again after Gautam's steps above. However, after I added the directional coupler between the mixer I and the servo card (coupled to the Agilent analyzer), the PMC was again not locking, except occasionally with gain of -10 dB.

I removed the coupler (so the mixer I goes directly to the PMC servo card, as Gautam had it), and the PMC was still not locking. While checking connections, I noticed that one of the SMA cables between the LO and the mixer was not even finger tight, so I tightened them to approximately the right torque with a non-torque wrench.

This did not lead to the PMC locking, so Milind helped me key the c1psl VME crate. I burt restored the latest snapshot. Now the PMC only locks up to a gain of -5 dB. I tried burt restoring the previous snapshot, which was from when the PMC was locking, and now it locks. Adding in the directional coupler again leads to the PMC not locking, though this time removing the coupler restores the normal behavior. I also tried using the coupler with the coupling port connected to a 50 Ohm terminator, and this configuration also did not lock.

I had been using a ZFDC-20-5-S+ (0.1-2000 MHz) with SMA ports and SMA-to-BNC on the input and output ports (since the mixer has BNC connectors). To reduce the number of potentially flaky connections, I am trying the ZFDC-20-4 (1-1000 MHz) that I found with BNC ports. The PMC still doesn't lock.

To get some spectra, I've connected the PMC servo card's 'mixer out' to the Agilent's A channel, and collected spectra from [10 Hz, 75 kHz], [75 kHz, 750 kHz], and [750 kHz, 2 MHz].


Wed Jun 26 15:23:37 2019

After the lab cleaning, I added a BNC T on the mixer I port, so now the configuration is:

Mixer I -> BNC T
  -> PMC servo card FP1TEST
  -> directional coupler -> coupled port to the spectrum analyzer, out port terminated with 50 Ohms.

I thought maybe the issue was that the TF from in->out on the directional coupler is not what I expect (and Gautam suggested the in-out port might block DC), but the PMC still does not lock in the above configuration, in which the coupler is not between the mixer and the servo board--so only reflections from the coupler should matter, I think.

However, even when I plug the mixer directly into the servo board, the PMC is not locking (again) with gain above -8 dB or so. I did a burt restore again, and this fixed the problem. I wasn't sure why this burt restore is working, because all I am changing is the DC output adjust voltage and the gain, and switching on/off FP1TEST. However, I observed that after running the PMC autolocking script, observing that the autolocker did not achieve lock as it swept through resonance, and cancelling the autolocker, the PMC again cannot be locked for high gains. When I let the autolocker complete, this doesn't happen, so probably I'm just not letting some channel return to its nominal value after being changed by the autolocker.

Now after another burt restore, I'm avoiding using the autolocker and am still having trouble locking with the BNC T + directional coupler configuration above. However, now I'm noticing that the PZT control mon is always railed, as long as FP1TEST is in the loop (and independent of the output adjust voltage). I try returning to the 'baseline' configuration (mixer -> PMC servo card directly), and the PMC locks but with only 0.68 V transmission (was >0.7 V before).


Per Gautam's earlier suggestion, I switched to using the Agilent 41800A probe instead of the directional coupler. I was able to lock the PMC with this probe on a BNC T coming out of the mixer (transmission is 0.71 V). I recorded the spectra of the PMC servo board's "Mixer Out" channel, and the mixer's I output as seen by the probe, from 10 Hz to 100 MHz. The soft-linked netgpibdata folder I had in my users directory is no longer soft-linked -- presumably intentional so I don't tamper with it?

I'm a bit skeptical that I've used the probe correctly, so I'm checking out the manual.

Indeed, I needed to pull back the sheath; I also noticed that the GPIB script I've been using doesn't save the data from both channels when I take a spectrum in dual mode, so I'm taking the spectra again one at a time (lights are on, IMC is locked).

14700   Wed Jun 26 11:11:40 2019   Milind   Update   IOO   PMC and IMC locked again, some MEDM maintenance

After helping Aaron key the crate and do a burt restore, I realized that it would probably be best to record the steps that Koji showed me for doing a burt restore, as a reference for anyone in the future.

Commands (in terminal):

  1. burttoday: changes to the directory with snapshots for the day (/opt/rtcds/caltech/c1/burt/autoburt/today)
  2. burtgooey: opens a new window with several buttons of which "Restore" needs to be selected. This opens up a second window as shown in Attachment #1. Click on Snapshot files and navigate to the snapshot you wish to restore (these are present at /opt/rtcds/caltech/c1/burt/autoburt/snapshots) and select that. A green "OK" button indicates if the Restore can be performed without a hitch. Hit "Restore" to perform the burtrestore.

 

Also, Gautam explained today that the sticky slider problem is a hardware issue: it basically means that the signal (the voltage output, for instance) that you request from the MEDM screen is not what the hardware delivers. Twice now, we have gotten around it with a burt restore. My understanding of a burt restore is that it restores values from a certain time to the EPICS channels. Therefore, I don't understand why a restoration at the software level should fix how the hardware responds. Why does this happen?

Attachment 1: burtgooey.pdf
14701   Wed Jun 26 18:28:24 2019   rana   Update   IOO   PMC and IMC locked again, some MEDM maintenance

a useful piece of code that we should ask one of this summer's SURFs to write:

  1. read in a BURT .req file which is usually used to make the autosnap / restore.
  2. change ALL of the values to some value (slightly different from its current value)
  3. restore it to its current value

this will solve the sticky slider problem and do it in a systematic way. We can run it from the command line: e.g. 'unsticky.py c1psl c1ioo c1lsc'

Quote:

Aaron complained to me earlier that the PMC could not be locked. Turned out to be a classic sticky slider problem,

14709   Sun Jun 30 19:47:09 2019   rana   Update   IOO   IMC WFS agenda

we are thinking of doing a sprucing up of the input mode cleaner WFS (sensors + electronics + feedback loops)

  1. WFS Heads
    1. it has been known since ~2002 that the RF circuits in the heads oscillate. 
    2. in the attached PDF you can see that 2 opamps (U3 & U4; MAX4106) are used to amplify the tank circuit made up of the photodiode capacitance and L6.
    3. due to poor PCB layout (the output of U4 runs close to the input of U3) the opamps oscillate if the Reed relay (RY2) is left open (not attenuating)
    4. we need to remove/disable the relay
    5. also remove U3 for each quadrant so that it has a fixed gain of (TBD) and a 50 Ohm output
    6. also check that all the resonances are tuned to 1f, 2f, & 3f respectively
  2. Demod boards
  3. DC quadrant readouts
  4. Whitening
  5. Noise budget of sensors, including electronics chain
  6. diagonalization of sensors / actuators
  7. Requirements -
  8. Optical Layout
  9. What does the future hold ?

  1. what is our preferred pin-for-pin replacement for the MAX4106/MAX4107? internet suggests AD9632. Anyone have any experience with it? The Rabbott uses LMH6642 in the aLIGO WFSs. It has a lower slew rate than 9632, but they both have the same distortion of ~ -60 dB for 29.5 MHz.
  2. the whole DC current readout is weird. It should have a load resistor and go into the + input of the opamp, so as to decouple it from the RF stuff. Also, why such a fast part? Should have used an OP27 equivalent or LT1124.
  3. LEMO connectors for RF are bad. Wonder if we could remove them and put SMA panel mount on there.
  4. as usual, makes me feel like replacing with better heads...and downstream electronics...
Attachment 1: WFS-Head.pdf
14711   Sun Jun 30 22:21:07 2019   Milind   Update   IOO   PMC and IMC locked again, some MEDM maintenance

Wrote the script. It currently lives at /users/milind/NonlinearControl/milind/unstick/unstick.py. Also pushed it to the repo here. It does the following:

  1. When run as python unstick.py c1aux (for instance) from the terminal, it parses the autoBurt.req file at /cvs/cds/caltech/target/c1aux/autoBurt.req and obtains the channels.
  2. Iterates through the channels and changes each to "some value"
    1. For channels corresponding to buttons on MEDM screen (Enable/Disable etc.) toggles between 0 and 1
    2. For channels corresponding to continuous values (such as say exposure time or the like) changes to abs(1+current_value)
  3. Resets to original value and then moves to the next channel

I used print statements instead of actually writing to the channels, since Gautam asked me to do that only with supervision. Is this good enough?
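
For reference, here is a minimal sketch of the approach described above (this is not the actual unstick.py; it assumes the pyepics module for channel access, handles only numeric channels, and the helper names are illustrative):

import time
import epics

def channels_from_req(req_file):
    """Parse an autoBurt.req file and return the list of channel names."""
    chans = []
    with open(req_file) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith('#'):
                chans.append(line.split()[0])
    return chans

def unstick(target='c1aux', sleep_time=1e-3):
    req_file = '/cvs/cds/caltech/target/{}/autoBurt.req'.format(target)
    for chan in channels_from_req(req_file):
        current = epics.caget(chan)
        if not isinstance(current, (int, float)):
            continue                      # skip unreachable or non-numeric channels
        # nudge the channel to a slightly different value, then restore it
        epics.caput(chan, current + 1)
        time.sleep(sleep_time)
        epics.caput(chan, current)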

Quote:

a useful piece of code that we should ask one of this summer's SURFs to write:

  1. read in a BURT .req file which is usually used to make the autosnap / restore.
  2. change ALL of the values to some value (slightly different from its current value)
  3. restore it to its current value

this will solve the sticky slider problem and do it in a systematic way. We can run it from the command line: e.g. 'unsticky.py c1psl c1ioo c1lsc'

Quote:

Aaron complained to me earlier that the PMC could not be locked. Turned out to be a classic sticky slider problem,

14712   Sun Jun 30 23:52:09 2019   Koji   Update   IOO   PMC and IMC locked again, some MEDM maintenance

> For channels corresponding to continuous values (such as say exposure time or the like) changes to abs(1+current_value)

Why abs? If the current_value is something like -5.4321 (for example, for an alignment slider), this returns +4.4321 and the suspension will suffer from a huge motion (well, it will be returned to the original value soon though).

14713   Mon Jul 1 11:02:05 2019   Milind   Update   IOO   PMC and IMC locked again, some MEDM maintenance

Made changes as discussed in this issue. Further, I need to add signal-handling capabilities to the code. I believe Jon has pointed me to some code; I will take a look at that ASAP.

Quote:

> For channels corresponding to continuous values (such as say exposure time or the like) changes to abs(1+current_value)

Why abs? If the current_value is something like -5.4321 (for example, for an alignment slider), this returns +4.4321 and the suspension will suffer from a huge motion (well, it will be returned to the original value soon though).

14721   Tue Jul 2 19:36:18 2019   aaron   Update   IOO   IMC diagnostics

The latest in my fling with the PMC. Though PMC trans is back to nominal levels (~0.713 V), we'd still like to understand the PMC noise.

Last time, I took some spectra with the RF probe (Agilent 41800A). I had already measured the PDH error signal by sweeping the PZT at ~1 Hz. The notebook I used for analysis has been updated in /users/aaron/analysis/PDH_calibrate.ipynb. The analysis was the following:

  • fit the PDH error signal, assuming a 35.5MHz modulation frequency. Here are the (approximate) fit parameters:
    • Mapping of PZT mon voltage to Hz: 5.92 Hz/V_{PZT_mon}
    • P_carrier*P_signal: 0.193 W^2
    • HV mon voltage on resonance: 0.910 V
    • Error signal far off resonance: 0.249 V
    • Transmission: 0.00238
      • ​yikes. The nominal transmission is T=0.003. I let this parameter be free as a check, and to avoid overconstraining the data; is this consistent with measurements of the PMC optics' transmission?
    • Length: 0.0210 m
      • This is consistent with the nominal PMC length
  • Using the fit of the full PDH error signal, I am able to plot error signal vs frequency, and fit the linear portion of the carrier PDH signal. The results of this fit are:
    • -9.75e-7 V_PDH per Hz
    • 0.105 V error signal at DC
  • I then divide the power spectra by the squared slope of the linear fit above (V_PDH^2/Hz^2) to convert them to frequency-noise units (a sketch of this conversion follows this list)
    • I've plotted both the spectrum I took directly at the mixer I using the Agilent probe, and the spectrum taken by sending the PMC servo card's mixer mon through an SR560 (G=2) to the spectrum analyzer
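
A minimal sketch of this last conversion step, assuming the spectra have been loaded as amplitude spectral densities in V/rtHz (the array values are placeholders):

import numpy as np

pdh_slope = 9.75e-7                  # |V_PDH per Hz| from the linear fit above
sr560_gain = 2.0                     # the mixer-mon trace went through an SR560 with G = 2

v_asd_mixer = 1e-6 * np.ones(801)    # placeholder: mixer I ASD via the probe [V/rtHz]
v_asd_servo = 1e-6 * np.ones(801)    # placeholder: servo "mixer mon" ASD via SR560 [V/rtHz]

freq_asd_mixer = v_asd_mixer / pdh_slope                  # [Hz/rtHz]
freq_asd_servo = v_asd_servo / (sr560_gain * pdh_slope)   # [Hz/rtHz]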

There are a few problems remaining:

  • There should be a gain of 100 between the mixer I and the servo board's mixer out. It's not clear to me that this is reflected in the spectra. Moreover, the header files on the spectra I grabbed from the Agilent say that the R (mixer I) channel has 20dB of input attenuation, which is also not reflected. If I have swapped the two spectra and not accounted for either the gain of the servo card or the attenuation of the spectrum analyzer, these two gains would cancel, but I'm not confident that's what's going on.
Attachment 1: PDH_error.pdf
Attachment 2: PMC_Error_Spectrum.pdf
14737   Tue Jul 9 10:37:42 2019   Milind   Update   IOO   keyed psl crate, unstick.py, pmc autolocker code - working

Today, Gautam keyed the C1PSL crate and we got to test my unstick.py code. It seems to be working fine. Remarks:

  1. Gautam moved the unstick.py code to /opt/rtcds/caltech/c1/scripts/cds. Therefore, the steps to run this code are now:
    1. cd /opt/rtcds/caltech/c1/scripts/cds
    2. python unstick.py c1psl (for the c1psl machine)
  2. There is now a sleepTime global variable in the code which defines the amount of delay between successive channel toggles. We set this to 1ms and it took the code around 3s to run.
  3. Gautam was curious to see if this would work even if we set the sleepTime parameter to 0 but decided that that could be tested the next time something was keyed.
  4. I still need to add the signal handling thing to this code.

Following this, we tested my PMC autolocker code. The code ran for about a minute before achieving lock. Remarks:

  1. Gautam moved my code (pmc_autolocker.py and autolocker_config.yaml) to /cvs/cds/rtcds/caltech/c1/scripts/PSL/PMC/ . Therefore, the steps to run this code are now:
    1. cd /cvs/cds/rtcds/caltech/c1/scripts/PSL/PMC/
    2. python pmc_autolocker.py (check the code or use --help to see what the command-line arguments do; these are only needed when you want to override the details in the .yaml file)
  2. Gautam suggested that I add some delay between successive steps of the DC output adjust so that it locks quickly. I'll do that ASAP. For now, it works.
14761   Mon Jul 15 14:53:40 2019   Milind   Update   IOO   keyed psl crate, unstick.py

The mode cleaner was not locked today. Koji came in and concluded that the PSL crate had died, so we keyed it. Then we ran my unstick.py code. The mode cleaner is locked now.

Quote:

Today, Gautam keyed the C1PSL crate and we got to test my unstick.py code. It seems to be working fine. Remarks:

  1. Gautam moved the unstick.py code to /opt/rtcds/caltech/c1/scripts/cds. Therefore, the steps to run this code are now:
    1. cd /opt/rtcds/caltech/c1/scripts/cds
    2. python unstick.py c1psl (for the c1psl machine)
  2. There is now a sleepTime global variable in the code which defines the amount of delay between successive channel toggles. We set this to 1ms and it took the code around 3s to run.
  3. Gautam was curious to see if this would work even if we set the sleepTime parameter to 0 but decided that that could be tested the next time something was keyed.
  4. I still need to add the signal handling thing to this code.

Following this, we tested my PMC autolocker code. The code ran for about a minute before achieving lock. Remarks:

  1. Gautam moved my code (pmc_autolocker.py and autolocker_config.yaml) to /cvs/cds/rtcds/caltech/c1/scripts/PSL/PMC/ . Therefore, the steps to run this code are now:
    1. cd /cvs/cds/rtcds/caltech/c1/scripts/PSL/PMC/
    2. python pmc_autolocker.py (check the code or use --help to see what the command-line arguments do; these are only needed when you want to override the details in the .yaml file)
  2. Gautam suggested that I add some delay between successive steps of the DC output adjust so that it locks quickly. I'll do that ASAP. For now, it works.
14762   Mon Jul 15 18:55:05 2019   gautam   Update   IOO   Megatron hard-rebooted

[koji, gautam]

In addition to c1psl needing a reboot, megatron was un-ssh-able (although it was responding to ping). Clue was that the NPRO PZT control voltage was drifting a lot on the StripTool trace. Koji hard-rebooted the machine. Now IMC is locked, and FSS slow servo is also running.

14793   Sun Jul 21 20:23:35 2019   rana   Update   IOO   MC locked

I found the MC2 face camera disabled(?) and the MC servo board input turned off. I turned the MC locking back on but I don't see any camera image yet.

14797   Mon Jul 22 13:26:41 2019   Kruthi   Update   IOO   MC locked

The MC2 has 2 GigE cameras right now. I'll put back the analog asap.

Quote:

I found the MC2 face camera disabled(?) and the MC servo board input turned off. I turned the MC locking back on but I don't see any camera image yet.

14805   Wed Jul 24 12:24:43 2019   Milind   Update   IOO   unstick.py and ifotest

Moved the unstick.py code to the ifotest repository here. It now handles signals like those generated by Ctrl-C and so forth. It can still be run as python unstick.py <machine1> <machine2> etc.

14836   Thu Aug 8 12:01:12 2019   gautam   Update   IOO   MC1 suspension oddness

At ~1am PDT today, all the MC1 shadow sensor readbacks (fast CDS channels and Slow Acromag channels, latter not shown here) went to negative values. Of course a negative value makes no sense. After ~3 hours, they came back to positive values again. But since then, the shadow sensor RMS noise has been significantly higher in the >20 Hz band, and there are frequent glitches which kick the suspension. The IMC has been having trouble staying locked. I claim that this has to do with the Satellite box.

No action being taken now while I work on the ALS. In the past the problem has fixed itself.

Attachment 1: MC1_suspension.png
Attachment 2: MC1_suspension.pdf
14842   Mon Aug 12 19:58:23 2019   gautam   Update   IOO   MC1 suspension oddness

Repair plan:

  1. Get "spare" satellite box working --- Chub
    • According to elog14441, this box has flaky connectors which probably need to be remade
  2. Re-make the 64-pin IDC crimped connection on the cable from the coil driver board to sat. box, at the Satellite box end --- Chub and gautam

Any other ideas? The problem persists and it's annoying that the IMC cannot be locked.

14868   Tue Sep 10 15:41:37 2019   aaron   Update   IOO   WFS measurements

[rika, aaron, rana]

We are getting the MC locked in anticipation of making some WFS transfer function measurements.

The PSL screen was all white boxes, so I keyed the PSL crate and burt restored the settings from 11:19am Sep 5 (somewhat earlier than we started rebooting computers). Following this, I ran Milind's unstick.py and then the PSL autolocker script; both worked on the first go, great work Milind!

The modecleaner autolocking script is having substantially more trouble. Rana found that pitch and yaw sliders for all MC optics have been swapped--we think it's because the camera at MC2 has been rotated. Note that for now, sliding pitch gives a change in yaw, and sliding yaw changes pitch.

Improving MC alignment

We noticed that with the WFS servo on, the modecleaner would be well aligned for a while (MC trans ~ 14000), only to lose lock after several minutes. We held the MC2_TRANS_PIT/YAW outputs at 0, so the MC2 QPD does not affect the WFS loop; the beam is well centered on WFS1/2, but not on the MC2 QPD, and with this signal out of the loop MC TRANS recovers to ~15000 counts (consistent with the quiet times over the last 90 days, see attachment 2). Attachment 1 shows the MC lock degrading, followed by some noise where we lost lock, and finally a visible increase in MC trans when we remove the MC2 QPD from the WFS loop.

Mode cleaner alignment settings (before):

MC1  Pitch 4.4762    Yaw 4.4669
MC2  Pitch 3.7652    Yaw -1.5482
MC3  Pitch -0.4159   Yaw 1.1477

After autolocking the MC, we stopped the autolocker and brought the alignment to the center of the QPD. Then we autolocked the MC again, and finally Rana moved to the best alignment.

Mode cleaner alignment settings (after):

MC1  Pitch 4.4942    Yaw 4.6956
MC2  Pitch 3.7652    Yaw -1.5600
MC3  Pitch -0.3789   Yaw 1.1477

 

Measured sine response

We used diaggui to measure the response of WFS1/WFS2/MC2 pitch (yaw) to excitations in MC1/MC2/MC3 pitch (yaw). Seeing fluctuations of amplitude ~1 on the MCX_PIT/YAW_OUT channels, we used an amplitude 0.01 excitation at 20 Hz. We will work on scripting some of this tomorrow.

 

 

Attachment 1: Screenshot_from_2019-09-10_18-51-28.png
Attachment 2: mctrend_190910.png
14871   Wed Sep 11 10:26:56 2019   aaron   Update   IOO   WFS measurements

Gameplan

We should also have a plan for the next couple weeks so we are organized; heavily adapted from. Here's what I'm thinking this morning:

  1. Construct the input/output matrix for the WFS. (basically, what we did yesterday)
    1. Measure a transfer function of MC[1, 2, 3]_[PIT, YAW] to [WFS1, WFS2, MC2_TRANS]_[PIT, YAW]. The transfer function above the loop bandwidth (few seconds BW, so we will excite >~ 10 Hz) characterizes the response of the sensor to the excitation.
    2. Invert the resulting 3x3 matrix and populate the inverted matrix at WFS_OUTMATRIX. This will map the WFS basis to the MC optics' pit/yaw basis.
    3. Script this process. If we make changes (for example, moving the telescoping lenses) to make this matrix more diagonal, we'll want to do these steps many times.
  2. Characterizing the loop
    1. Optimize the demodulation phase -- we want to minimize the signal in Q. This should also be automated. I found documentation in the white Wave Front Sensing binder
      1. Misalign a mirror in pitch or yaw, and rotate the phase to minimize the magnitude of Q (maximize I); this angle is 'R' on the WFSx_SETTINGS screen.
    2. We should measure a step response applied to each angular dof of the MC optics.
    3. Guoy Phase Calibration
  3. Characterizing / Calibrating the WFS heads
    1. The DCC has LIGO test procedures for their WFS RFPD, as does the white binder; the following checks are relevant for our WFS, and this is how I think we should carry them out (not identical to the procedure as written in the document). For many of these, we'll want to set up the JenneAM laser with a network analyzer for RF modulation.
      1. DC path transimpedance
        1. Measure the DC power of JenneAM with a power meter, and direct the beam to each of the QPD quadrants. Make sure the beam fits on a single quadrant.
        2. This will give us the product of the PD efficiency and DC transimpedance gain
        3. Last time this was measured (white WFS binder)
      2. notch tuning -- we are going to measure the TF, but I won't tune it without someone as ancient as the electronics
        1. Using the network analyzer, measure a transfer function from the laser AM to the QPD head's RF output
          1. Is there a pickoff available? The LIGO testing procedures recommend a FET probe
          2. We should do this while measuring the DC transimpedance for each quadrant
      3. notch rejection ratios
        1. While taking the RF transfer function, use the delta marker to record the difference between the notch and the RF operating frequency.
      4. RF transimpedance
        1. Illuminate the PD with white light from an incandescent bulb (a shot-noise limited source)
          1. 6-10 mA of photocurrent should be generated
        2. Use an RF spectrum analyzer and low noise RF pre-amplifier (gain ~20dB) to measure the shot noise limited spectrum
        3. A piece of scotch tape can be used to make the light uniformly illuminate the QPD
        4. Convert this RF PSD to an rms amplitude (voltage) spectral density, and also note the DC photocurrent. This can be used to calculate the RF transimpedance with (see the short worked example after this list):
          1. Z_\mathrm{RF} = \sqrt{\frac{V_\mathrm{rms}^2}{3.2\times 10^{-19}\,I_\mathrm{DC}}}
      5. Shot noise limited input sensitivity
        1. Measure the RF PSD with the beam blocked and light off; this is the dark photocurrent, and can be used to calculate the shot noise limited sensitivity.
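
Since it is easy to drop a factor here, a short worked example of the shot-noise transimpedance estimate (with made-up numbers, and dividing out the assumed pre-amplifier gain):

import numpy as np

q = 1.6e-19                # electron charge [C]
I_dc = 8e-3                # DC photocurrent [A] (illustrative, in the 6-10 mA range)
V_rms = 10e-9              # measured voltage ASD at the analyzer [V/rtHz] (illustrative)
preamp_gain = 10.0         # ~20 dB RF pre-amplifier gain, divided out below

V_pd = V_rms / preamp_gain                   # ASD referred to the PD output
Z_rf = np.sqrt(V_pd**2 / (2 * q * I_dc))     # shot-noise-limited transimpedance [Ohm]
print('Z_RF ~ {:.0f} Ohm'.format(Z_rf))      # ~20 Ohm for these made-up numbers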

References:

  • Binders of documents about the 40m WFS
  • LIGO ISC WFS RFPD test procedure (T1200347 is dual frequency, T1200380 is single frequency)
    • The associated datasheet template is in T1200381
  • Wavefront Sensor (T960111). This document even has a calibration protocol with forms to fill in during testing, so I've printed an extra copy of that appendix.

Automation

It would be good to script some of what we did yesterday. I'm checking out some scripts I'd used for cryo and armloss measurements to remember the best way to do this.

  • Existing WFS scripts (I didn't try these)
    • WFS_DC_offsets -- sets the WFS QPD dark offsets
      • block beam, then run script
    • MC2_TRANS_offsets -- sets the MC2 transmission offset (why isn't this in the same script as WFS_DC_offsets?)
      • MC should be aligned, beams centered on WFS, WFS servo off
    • mcWFSallowOn(Off) -- turns on (off) the ASC filter module outputs
    • mcwfshold -- turns off the input to WFS servos, but holds the current values of MC optic biases
    • mcwfsoff -- turns off the mc wfs loop
      • First, turns off the WFS outputs (eg WFS1_PIT OUTPUT)
      • Turns off the MC WFS input gains
      • Holds the WFS loop outputs

Miscellany

I noticed yesterday that the PSL_shutterqst box is white, and I've seen timeout requests when eg the reboot script tries to open/close the PSL shutter. It seems like a shutter that should open, so I should find the aux machine to restart it.

14872   Wed Sep 11 14:37:43 2019   aaron   Update   IOO   WFS measurements

[aaron, rika]

We identified the Jenne laser and found a long optical fiber that might be able to transport our beam to the AP table.

Now we're searching for documentation on using this laser. Kevin and John measured a TF last year. Koji advised that we needn't worry too much; the current limit is already set correctly and we need only power on the laser.

We moved the breadboard (including a couple PDs, collimating lenses, laser, steering mirrors, etc) over to the AP table, and set it on top of the panel next to the WFS. We mounted the laser on the AP table, and added one lens with f~68 mm after the laser to fit the beam on a single quadrant; the beam was about 1mm diameter (measured by eye) when it entered the QPD. We turned the laser driver on at ~19.4 mA, and directed it to WFS2 via the last two steering mirrors before WFS2.

We monitored the QPD segments' DC level with ndscope on a laptop, and were able to send the beam to each of the four quadrants in turn. We set up the Agilent network analyzer to drive the laser's amplitude modulation and sent the RF signal from the LEMO output on the QPD head directly to the network analyzer. We will take the measurements tomorrow morning.

Attachment 1: 20190911_WFS.jpg
Attachment 2: 20190911_WFS_2.jpg
14874   Thu Sep 12 12:42:31 2019   aaron   Update   IOO   WFS measurements

[rika, aaron]

At Seiji and Gautam's suggestion, we added an additional RF photodiode (NewFocus 1611) to the system so we can calibrate our transfer functions. The configuration is now laser -> BS --> lenses -> QPD and BS --> lenses -> RFPD. We added lenses to get the beams focused on the RFPD and QPD heads, and are again set up for TF measurement.

We took the following data. These parameters were consistent across all measurements:

  • 1kHz IF BW
  • log sweep with 801 points
  • 32 averages
  • auto attenuation
  • -10 dBm excitation amplitude
  • 19.2 mA DC current to the laser
  • The DC level of the reference PD is -, and with the beam blocked (dark current) it is
Measurement | File | Frequency range
WFS2_SEG1 / RFPD | TFAG4395A_12-09-2019_155901.txt | 100 MHz - 500 MHz
WFS2_SEG1 / RFPD | TFAG4395A_12-09-2019_160811.txt | 10 MHz - 100 MHz
WFS2_SEG1 / RFPD | TFAG4395A_12-09-2019_170234.txt | 100 kHz - 10 MHz
WFS2_SEG2 / RFPD | AG4395A_12-09-2019_183125.txt | 100 MHz - 500 MHz
WFS2_SEG2 / RFPD | TFAG4395A_12-09-2019_183614.txt | 10 MHz - 100 MHz
WFS2_SEG2 / RFPD | TFAG4395A_12-09-2019_183930.txt | 100 kHz - 10 MHz
WFS2_SEG3 / RFPD | TFAG4395A_12-09-2019_225243.txt | 100 MHz - 500 MHz
WFS2_SEG3 / RFPD | TFAG4395A_12-09-2019_225601.txt | 10 MHz - 100 MHz
WFS2_SEG3 / RFPD | TFAG4395A_12-09-2019_225922.txt | 100 kHz - 10 MHz
WFS2_SEG4 / RFPD | TFAG4395A_12-09-2019_230758.txt | 100 MHz - 500 MHz
WFS2_SEG4 / RFPD | TFAG4395A_12-09-2019_232058.txt | 10 MHz - 100 MHz
WFS2_SEG4 / RFPD | TFAG4395A_12-09-2019_234447.txt | 100 kHz - 10 MHz

After taking the data for segment 1, I moved the beam to segment 2. The beam didn't fit on segment 2 without partially illuminating segment 1 (tested by maximizing the signal on segment 2, then blocking the beam: if the beam is entirely on one segment, only that segment should be affected; in this case, we found that segment 1's DC signal also changed when the beam was blocked). We readjusted the telescoping lenses to get the beam a bit smaller, and now the beam fits on segment 2. We know it is entirely on segment 2 because small beam movements do not change the signal on segment 2.

We are trying to take the remaining data, but AGmeasure keeps hanging while sending the data (after taking the measurement, over 10 min). We tried restarting the network analyzer to no avail. I was able to grab the data by cancelling the measurement and running

AGmeasure --getdata -i vanna

I've uploaded the spectrum for segment 1 in the meantime. Zero model is on the way.

When I finished up the measurements on WFS2, I removed the cables from the AP table and closed the cover.

EDIT: I forgot to switch the LEMO connector to measure the other segments, so we measured the RF signal from segment 1 even when the beam was on segments 2-4. We'll have to try again tomorrow.

Attachment 1: WFS2_TFs.pdf
Attachment 2: D755499D-9FDF-4E2B-BFC1-016B459DD35D.jpeg
14875   Fri Sep 13 10:36:03 2019   aaron   Update   IOO   WFS measurements

[rika, aaron]

We are at it again. Rika is setting up the TF measurement, I'm looking into scripting the WFS sensing matrix measurement we made earlier in the week so we can return to it next week.

 

Measurement | File | Frequency range
WFS2_SEG1 / RFPD | - | 100 MHz - 500 MHz
WFS2_SEG1 / RFPD | - | 10 MHz - 100 MHz
WFS2_SEG1 / RFPD | - | 100 kHz - 10 MHz
WFS2_SEG2 / RFPD | TFAG4395A_13-09-2019_181415.txt | 100 MHz - 500 MHz
WFS2_SEG2 / RFPD | TFAG4395A_13-09-2019_180955.txt | 10 MHz - 100 MHz
WFS2_SEG2 / RFPD | TFAG4395A_13-09-2019_182918.txt | 100 kHz - 10 MHz
WFS2_SEG3 / RFPD | TFAG4395A_13-09-2019_121533.txt | 100 MHz - 500 MHz
WFS2_SEG3 / RFPD | TFAG4395A_13-09-2019_123820.txt | 10 MHz - 100 MHz
WFS2_SEG3 / RFPD | TFAG4395A_13-09-2019_123243.txt | 100 kHz - 10 MHz
WFS2_SEG4 / RFPD | TFAG4395A_13-09-2019_161834.txt | 100 MHz - 500 MHz
WFS2_SEG4 / RFPD | TFAG4395A_13-09-2019_170007.txt | 10 MHz - 100 MHz
WFS2_SEG4 / RFPD | TFAG4395A_13-09-2019_172001.txt | 100 kHz - 10 MHz

 

When we measured the TF of SEG4, the beam was leaking onto SEG1 at about the 1% level.

We finished the measurements of SEG2-4 and got the figures by running PDH_calibrate.ipynb.

edit: We observed during the segment 2 measurements that blocking the beam reduced the DC level of segment 1 by less than 1%, but the effect was still clearly observable. As you can see in the plots, something is suspicious about the normalization of these TFs. We took the segment 1 data a few days before the other segments, so perhaps we weren't getting the full beam on the reference PD during the later measurements? When I make this measurement for WFS1, I will try to fix some of these problems by choosing different telescoping optics, and I will consider whether removing the QPD heads from their table will improve the measurement.

Attachment 1: TF-.png
Attachment 2: WFS2_TFs.pdf
14876   Fri Sep 13 10:53:40 2019   aaron   Update   IOO   WFS loop measurements

I'm scripting the WFS sensing matrix measurements. I haven't really scripted DTT before, so I'm trying to find documentation or existing scripts. I came across this elog where Gautam measured a sensing matrix during DRMI lock, and he pointed me to some .xml files used for these measurements.

 

14878   Mon Sep 16 05:08:04 2019   rana   Update   IOO   WFS loop measurements

No need to use DTT. I'm attaching some half-finished notebooks that give the gist.

  1. Download the data with NDS2
  2. Downsample the data for ease of use.
  3. save the data as hdf5 for easy loading later.
  4. demodulate the data at the specified frequencies.

That's it! Now you have the complex, single frequency TFs. Next you invert the matrix.
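
As a concrete starting point, here is a minimal sketch of that recipe (download, downsample, demodulate) using the nds2-client Python bindings; the server address, channel names, GPS times, and excitation frequency below are placeholders, not values from the attached notebooks:

import nds2
import numpy as np
import scipy.signal as sig

conn = nds2.connection('192.168.113.201', 8088)    # placeholder NDS server/port
chans = ['C1:IOO-WFS1_PIT_IN1_DQ',
         'C1:SUS-MC2_ASCPIT_EXC']                  # placeholder sensor / drive channels
start, stop = 1252000000, 1252000064               # placeholder GPS times
f_exc = 20.0                                       # excitation frequency [Hz]
fs_new = 512.0                                     # downsampled rate [Hz]

tfs = []
for buf in conn.fetch(start, stop, chans):
    fs = len(buf.data) / float(stop - start)       # native sample rate
    q = max(int(fs // fs_new), 1)
    x = sig.decimate(buf.data, q, ftype='fir')     # downsample for ease of use
    t = np.arange(len(x)) / fs_new
    lo = np.exp(-2j * np.pi * f_exc * t)           # digital local oscillator
    tfs.append(np.mean(x * lo))                    # complex amplitude at f_exc

tf_sensor_over_drive = tfs[0] / tfs[1]             # single-frequency transfer function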

Attachment 1: LSCsensingMatrix.ipynb
Attachment 2: ASCsensingMatrix.ipynb
14880   Mon Sep 16 11:55:58 2019   rika   Update   IOO   WFS loop measurements

[rika, aaron]

We aligned the WFS optics as they were. Now the autolocker is working to lock the MC.

But it still doesn't lock. We notice that the c1lsc machine isn't working, so we run rebootC1LSC.sh.

 

Now we reset the hardware!

 

17:11

After the reset, autolocking didn't work well. Gautam and Aaron rebooted the slow c1ioo machine. Then it worked, and Gautam returned the MC to a good alignment.

We found the beam was not centered on the QPD, so we (turned off the MC autolocker and MC loop, then) realigned to bring the beam to the QPD center. Afterwards we started autolocking again.

With the WFS on, the maximum MC transmission we observe is 14,700 counts; after the transmission level stabilizes (MC_TRANS pit and yaw brought to 0), the MC transmission is only 14,200 counts. Perhaps the MC_TRANS QPD offsets need adjustment. We relieve the WFS servo of its DC offsets. This is the configuration we'll use for WFS loop measurements this week.

14882   Mon Sep 16 12:38:59 2019   aaron   Update   IOO   WFS measurements

I wanted to make a zero model of this circuit to get a handle on the results. I couldn't import zero on pianosa, and I tried pip installing zero, but was denied due to not finding version 3.0.3 of matplotlib. I finally got it to install using

pip3 install zero --user

 Oddly, even though I can now import zero when I open a python3 session from the command line, when I open a jupyter notebook and switch to a python3 kernel, the zero module is still unavailable. I think I recall that conda manages the jupyter environment -- is pip managing an entirely separate environment (annoying)?

edit: Yeah, it was something like that. I reminded myself how this works with this article.

14886   Tue Sep 17 09:41:48 2019   gautam   Update   IOO   WFS loop measurements

Let's not worry about C1LSC until the c1iscaux upgrade is done.

 

But it still doesn't lock. We notice that the c1lsc machine isn't working, so we run rebootC1LSC.sh.

14887   Tue Sep 17 10:34:48 2019   aaron   Update   IOO   WFS loop measurements

I'm using the notebooks from rana as a starting point, and making a script to measure and fill the WFS sensing matrix. It lives at /users/aaron/WFS/scripts/WFSsensingMatrix.ipynb for now. Here's what it does; what's been tested is in green, untested is goldenrod, uncoded is fire brick.

  1. Sets up an nds connection, listening to the WFS channels and the MC#_PIT/YAW IN1 channels.
  2. Loops over the excitation channels. For now, I'm assuming the user is injecting excitations one at a time in awggui; in principle, we could excite the various MC angular dof at several frequencies and take a single measurement, or use the natural frequencies of the suspensions.
    1. For each excitation, grab the data
    2. Filter the data. I'm using a 30 Hz to 40 Hz cheby filter
    3. Take an FFT, hold on to that for future reference
    4. Generate an LO at the excitation frequency, and demodulate the signals. Strong low pass.
    5. The single-frequency transfer function is now [WFS channel] / [excited MC channel]. Each iteration of this loop generates a column of the sensing matrix.
  3. Invert the sensing matrix
  4. Populate the appropriate channels of the WFS_OUTMATRIX (a rough sketch of steps 3-4 follows this list)
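
For steps 3 and 4, a rough sketch assuming numpy and pyepics; the sensing-matrix numbers are placeholders and the output-matrix channel-name template is hypothetical (the real record names should be read off the MEDM screen):

import numpy as np
import epics

# rows: sensors (WFS1, WFS2, MC2_TRANS); columns: excited optics (MC1, MC2, MC3)
sensing_pit = np.array([[1.0, 0.2, 0.1],
                        [0.3, 1.0, 0.2],
                        [0.1, 0.1, 1.0]])     # placeholder single-frequency TF values

out_matrix = np.linalg.inv(sensing_pit)       # maps the sensor basis to the optic basis

for i in range(3):       # optic index (MC1, MC2, MC3)
    for j in range(3):   # sensor index (WFS1, WFS2, MC2_TRANS)
        chan = 'C1:IOO-WFS_OUTMATRIX_P_{}_{}'.format(i + 1, j + 1)   # hypothetical name
        epics.caput(chan, out_matrix[i, j])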

Grabbing data with nds

To run these on pianosa, I ran (inside the jupyter notebook)

import sys
!{sys.executable} -m pip install astropy --user

I'm getting an error when starting the nds2 connection

conn = nds2.connection('192.168.113.201', 31200)
Failed to establish a connection[INFO: Request SASL authentication protocol]+

I didn't find anything on the elog about this error, but I'm looking at the NDS user manual. The problem was that I didn't have a valid Kerberos ticket; I opened one on pianosa with my albert.einstein credentials (note the all-caps LIGO.ORG).

kinit aaron.markowitz@LIGO.ORG

 I'm now able to run the scripts Rana mentions, but I haven't been able to grab the channels I want (eg C1:SUS-MC1_ASCPIT_IN1_OUT); it says the channel isn't found. When I check how many of the Caltech channels are available (conn.count_channels('C1*')), there are none. I was connecting to nds.ligo.caltech.edu, but this must be the wrong server (it has all the channels for the sites). fb and fb1 (and the IP they point to, 192.168.113.201) cannot be connected to, giving the error 'Error occurred trying to write to socket.'

I recall that in the cryo lab, we need to use port 8088 to get data from cymac1, and indeed substituting 31200 -> 8088 lets me access the C1 channels (I can count the channels), but no matter what time I request, nds tells me there is no data available (gap). Gautam came by and diagnosed that the gaps I'm seeing in the frames' data are real, fb is down (see elog).


WFS Sensing Matrix Script

Saving extra channels

Continuing, I'm going to modify the script to grab live data. I'm using the iterate and next methods. I noticed that the MC2_TRANS pit/yaw channels are not saved to frames, even though WFS1/2 pit/yaw are. Since I expect I'll want to look back at these channels, I followed the instructions for adding a DAQ channel, uncommenting the following lines in /opt/rtcds/caltech/c1/chans/daq/C1IOO.ini:

[C1:IOO-MC2_TRANS_PIT_OUT_DQ]
acquire=1
datarate=512
chnnum=10186
datatype=4
[C1:IOO-MC2_TRANS_YAW_OUT_DQ]
acquire=1
chnnum=10189
datarate=512
datatype=4

I made a backup of the old version of this .ini file, which can be found in /users/aaron/backups/190917_C1IOO.ini. I did not remake the model, as I couldn't find the c1ioo model in /opt/rtcds/caltech/c1/userapps/trunk or from the matlab command prompt. I restarted the fb via telnet, but didn't restart the model or check the svn (got an error?). The _DQ channels are now reachable on dataviewer, so things seem to be working.

awgpy

I also tried importing cdsutils, so I can control awg in the same script that we read out the sensing matrix, but I'm getting the python3 error when I import cdsutils:

No module name '__version'

I tried pip upgrading cdsutils, but it's already up-to-date. I get the above error even if I switch to a python 2 kernel; cdsutils is installed in the python2.7 directory, so I don't know why pip is finding it when I'm running a python 3 kernel. I can move on from this for now, but it would be useful to be able to script the excitation along with the measurement.


Changes to the user environment

jupyter on donatella

Tangentially related, Rika wanted to be running some jupyter notebooks while working on donatella. I ran, on donatella:

conda install jupyter

 hm, that didn't work. Also jupyter is installed when you install conda, so I'm not sure how there is a version of conda but not of jupyter. I also see that pip and pip3 are not recognized commands on donatella.

scipy on pianosa

I noticed that some of the functions in the scipy signal processing toolbox were out of date on pianosa. The cheby and welch filters now accept additional kwargs (for eg, before you needed to give IIR filter methods a cutoff frequency normalized to the Nyquist rate, but now you can give it the frequencies and sampling rate separately).

I want to update this package, but I hesitate to break everyone's existing scripts.

14888   Tue Sep 17 10:47:44 2019   rika   Update   IOO   WFS loop measurements

[aaron, rika]

We again stopped the autolocker and realigned to bring the beam back onto the center of the QPD.

After we locked the MC, we took TFs from the MC1/2/3 suspensions' PIT/YAW to WFS1/2 PIT/YAW.

----- 

Diagnotics test tools

range: 7 Hz to 50 Hz

 averages = 61

Column 0: WFS2_PIT   1: WFS2_YAW   2: WFS1_PIT   3: WFS1_YAW   4: TRANS_PIT   5: TRANS_YAW

-----

I'm wondering whether the MC1 data I saved is correct, because I found the channel had been changed when I exported the MC2 data. So I took the MC1 data again.

 

We have now taken all the data for the TFs. Each dataset is divided into real and imaginary parts, and we are arranging the data to obtain the TFs.

The TF of MC2 is Attachment 1. Tomorrow I will make the other TFs.

Quote:

[rika, aaron]

We realigned the WFS optics to how they were. Now the auto-locker is working to lock the MC.

But it still doesn't lock. We noticed that the c1lsc machine wasn't working, so we ran rebootC1LSC.sh.

 

Now we reset the hardware!

 

17:11

After the reset, auto-locking didn't work well. Gautam and Aaron rebooted the slow c1ioo machine. Then it worked, and Gautam returned the MC to a good alignment.

We found the beam was not centered on the QPD, so we (turned off the MC autolocker and MC loop, then) realigned to bring the beam to the QPD center. Afterwards we restarted auto-locking.

With the WFS on, the maximum MC transmission we observe is 14,700 counts; after the transmission level stabilizes (MC_TRANS pit and yaw brought to 0), the MC transmission is only 14,200 counts. Perhaps the MC_TRANS QPD offsets need adjustment. We relieve the WFS servo of its DC offsets. This is the configuration we'll use for WFS loop measurements this week.

 

Attachment 1: MC2.pdf
MC2.pdf
  14896   Wed Sep 18 14:45:52 2019   rika   Update   IOO   WFS loop measurements

[aaron, rika]

Getting TFs

In the data we took yesterday, we can see the effect of some of the filters.

But the coherence is not good above 10 Hz, so we measured again, and this time we saved the data as an xml file.

We also broadened the frequency range, to see the corner frequency of the suspension.

-----

 Diagnostic test tools

 range: 0.1 Hz to 100 Hz

 points: 120 

 Amplitude: 1000

----

But at low frequency the mode cleaner cavity lost lock, because the excitation shook it too much.

So we looked at single-frequency TFs and searched for a good excitation amplitude.

 

First, I tried to get the TF from 0.1 to 1 Hz.

-----

0.1 to 1 Hz

points: 61 (I think this is too many, because it takes about an hour)

amplitude: 5

-----

The TFs and coherence from MC1 PIT to each QPD are below. [upper window: coherence, lower: TF]

During the measurement something happened at 0.2-0.3 Hz, so I stopped it.

We found that the coherence to WFS1 PIT and WFS2 YAW is not good, but the others are good.

We guess this could come from the alignment, which made the change in Q too small.

 

Finally, I also got the .xml data for MC1 PIT from 1 Hz to 10 Hz. This time:

-----

1 to 10 Hz

points: 41 

amplitude: 90

-----

 

Making matrices

We have now taken the 6 single-frequency TFs (MC1/2/3 PIT/YAW) at 7 Hz (because this frequency has good coherence in all channels).

Aaron wrote a script that uses dtt to make the matrices; a rough sketch of how the matrix could be assembled is included below.
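For reference, a rough sketch (with placeholder numbers, not our measurements) of how the six single-frequency TFs could be turned into a sensing matrix and inverted to get an output matrix:

import numpy as np

# Complex single-frequency TFs measured at 7 Hz.
# Rows = sensors (WFS1_P, WFS1_Y, WFS2_P, WFS2_Y, MC2_TRANS_P, MC2_TRANS_Y),
# columns = drives (MC1_P, MC1_Y, MC2_P, MC2_Y, MC3_P, MC3_Y).
# The values here are placeholders.
tf = np.random.randn(6, 6) + 1j * np.random.randn(6, 6)

# If the excitation frequency is chosen where the phase is ~0 or ~180 deg,
# the real part carries the sign and magnitude of each element.
S = tf.real

# The output matrix maps sensor signals back onto "pure" mirror drives.
output_matrix = np.linalg.pinv(S)
print(np.round(S @ output_matrix, 3))       # should be close to the identity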

 

 

Quote:

[aaron, rika]

We stopped the auto-locker and realigned to get the beam onto the QPD again.

After locking the MC, we took TFs from the MC1/2/3 suspension PIT/YAW to WFS1/2 PIT/YAW.

----- 

Diagnostic test tools

range: 7 Hz to 50 Hz

 averages = 61

Column 0: WFS2_PIT   1: WFS2_YAW   2: WFS1_PIT   3: WFS1_YAW   4: TRANS_PIT   5: TRANS_YAW

-----

I'm wondering whether the MC1 data I saved is correct, because I found the channel had been changed when I exported the MC2 data. So I took the MC1 data again.

 

We have now taken all the data for the TFs. Each dataset is divided into real and imaginary parts, and we are arranging the data to obtain the TFs.

The TF of MC2 is Attachment 1. Tomorrow I will make the other TFs.

Quote:

[rika, aaron]

We realigned the WFS optics to how they were. Now the auto-locker is working to lock the MC.

But it still doesn't lock. We noticed that the c1lsc machine wasn't working, so we ran rebootC1LSC.sh.

 

Now we reset the hardware!

 

17:11

After the reset, auto-locking didn't work well. Gautam and Aaron rebooted the slow c1ioo machine. Then it worked, and Gautam returned the MC to a good alignment.

We found the beam was not centered on the QPD, so we (turned off the MC autolocker and MC loop, then) realigned to bring the beam to the QPD center. Afterwards we restarted auto-locking.

With the WFS on, the maximum MC transmission we observe is 14,700 counts; after the transmission level stabilizes (MC_TRANS pit and yaw brought to 0), the MC transmission is only 14,200 counts. Perhaps the MC_TRANS QPD offsets need adjustment. We relieve the WFS servo of its DC offsets. This is the configuration we'll use for WFS loop measurements this week.

 

 

Attachment 1: Screenshot_from_2019-09-18_18-15-34.png
Screenshot_from_2019-09-18_18-15-34.png
  14897   Wed Sep 18 15:27:45 2019   gautam   Update   IOO   TT cables need to be remade

Summary:

The custom ribbon cables piping the coil driver board outputs to the eLIGO (?) TTs (a.k.a. TT1 and TT2) are damaged. They need to be re-made. I can't find any pin-mapping for them.

Details:

While waiting for the LSC photodiode whitening switching cross-connect work to be done, I thought I'd re-align the IFO a bit. However, I was unable to find any beam making it to the REFL/AS ports despite some TT steering. I remembered that Chub had undone the TT connections at 1Y2 as well, and thought I'd check the cabling to make sure all was in order. On going to the rack, however, I found that these connections were damaged at the coil-driver end (see Attachment #1), presumably during the cable extraction. These need to be re-made...😔 

Attachment 1: IMG_7945.JPG
IMG_7945.JPG
  14898   Thu Sep 19 09:39:30 2019   gautam   Update   IOO   TT cables need to be remade

While debugging this problem, the c1lsc models crashed. I ran the reboot script this morning to bring the models back. There was a 0x4000 error on the DC indicators for the c1lsc models (an mx_stream error which couldn't be fixed by restarting the mx service) the first time I ran the script, so I ran it again; now the indicator lights are in their nominal state.

Attachment 1: CDSoverview.png
CDSoverview.png
  14899   Thu Sep 19 11:26:18 2019   gautam   Update   IOO   TT cables DON'T need to be remade

False alarm - the mistake was mine. Looking at the schematic diagram, the AI/Dewhite board, D000316, accepts the inputs from the DAC on the P2 connector. While restoring the connections at 1Y2, I had plugged the outputs of the DAC interface board into the P1 connectors of the AI boards. Having rectified this problem, I am now able to move the beam on the AS camera in both PIT and YAW using TT1 or TT2. So to zero-th order, this subsystem appears to work. A more in-depth analysis of the angular stability of the TTs can only be done once we re-align the arms and lock some cavities.

  14914   Mon Sep 30 13:20:55 2019   aaron   Update   IOO   shot noise measurement

I wanted to measure the RF transimpedance of the WFS heads, as outlined above.

Summary: Measurement is not done.

Details:

  • closed the PSL shutter
  • taped over the WFS 2 opening with frosted scotch tape
  • illuminated the QPD with an incandescent flashlight.
    • All of the D batteries were close to dead, so it seemed dimmer than usual
  • Observed the WFS2 segment 1 RF spectrum on the Agilent, but saw no difference between the spectrum with and without the flashlight. Must need a brighter light, and possibly also better alignment (for the shot-noise level we are after, see the note following this list).
  • Needed to skype someone and pass off the IFO to gautam, so I untaped the QPD, returned the appropriate LEMO connector, and opened the PSL shutter.
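For reference (my addition, not from the original entry): with DC photocurrent $I_{\rm DC}$ on a segment and RF transimpedance $Z$, the shot-noise-limited output voltage spectral density is

\[ \sqrt{S_V} = Z \sqrt{2\, e\, I_{\rm DC}} . \]

For example, $I_{\rm DC} = 1\,\mathrm{mA}$ gives $\sqrt{2 e I_{\rm DC}} \approx 18\,\mathrm{pA}/\sqrt{\mathrm{Hz}}$, so with a made-up $Z = 1\,\mathrm{k\Omega}$ the shot-noise floor would be $\approx 18\,\mathrm{nV}/\sqrt{\mathrm{Hz}}$; the flashlight has to produce enough photocurrent for this to clear the dark-noise/analyzer floor.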
  14929   Thu Oct 3 11:38:35 2019   aaron   Update   IOO   WFS measurements

I set up the spectrum analyzer to make the WFS head RF transfer function measurement (V/W) on WFS1. I placed the Jenne laser on the AP table, along with the reference PD power supply, laptop, and laser power supply. The Agilent output AM-modulates the laser; the reference PD is again the NewFocus 1611, with its AC output sent to the Agilent's R channel and its DC output sent to an oscilloscope.
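For reference, a sketch (my formulation, not from the entry) of how the WFS transimpedance would come out of this swept-sine measurement, assuming the 1611's transimpedance $Z_{\rm ref}$ is known and the AM signal on each PD scales with its DC photocurrent:

\[ Z_{\rm WFS}(f) \simeq \frac{V_{\rm WFS}(f)}{V_{\rm ref}(f)} \cdot \frac{I_{\rm ref,DC}}{I_{\rm WFS,DC}} \cdot Z_{\rm ref}(f), \]

where the DC photocurrents are read off the respective DC outputs (hence the oscilloscope on the 1611's DC port).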

At Koji's suggestion, I've started setting up a small breadboard to hold the fiber collimator, BS, and reference PD. I haven't really used fiber optics before, so I'd appreciate another set of eyes before I get too deep.
Gautam showed me the collimator and fiber BS.

I closed the PSL shutter while checking for a location to place the breadboard, and opened it while writing this. Headed back to Cryo to pick up the large incandescent bulb we'd borrowed over the summer.

  14946   Mon Oct 7 19:50:33 2019   gautam   Update   IOO   IMC locking not working after this work

See trend. This is NOT symptomatic of some frozen slow machine - if I disable the WFS servo inputs, the lock holds just fine.

Turns out that the beam was almost completely missing the WFS2 QPD. WTF 😤. I re-aligned the beam using the steering mirror immediately before the WFS2 QPD, and re-set the dark offsets for good measure. Now the IMC remains stably locked. 

Please - after you work on the interferometer, return it to the state it was in. Locking is hard enough without me having to hunt down randomly misaligned/blocked beams or unplugged cables.


I took this opportunity to do some WFS offset updates.

  • First I let the WFS servo settle to some operating point, and then offloaded the DC offsets to the IMC suspensions.
  • Then I disabled the WFS servo.
  • I hand-tweaked MC1 and MC3 PIT/YAW (while leaving MC2 untouched) to minimize IMC REFL (a more sensitive indicator of the optimal cavity alignment than the transmission).
  • Once I felt the IMC REFL was minimized (~1-2% improvement), I set the RF offsets for the WFS while the IMC remained locked. I chose this way of setting the RF offsets as opposed to unlocking the cavity and having the high-power TEM00 mode incident on the WFS QPDs.
  • Overnight, I'm going to run the MC2 spot position scanning code (in a tmux session on pianosa, started ~9:45 pm) to see if we can find a place where the transmission is higher; I'm looking at Kruthi's code now to see if it makes sense...
  • The convergence time of the MC2 spot position loop is pretty slow, so the scan is expected to take a while... Should be done by tomorrow morning though, and I expect no work with the IFO tonight.
  • Does this loop have to be so slow? Why can't the gain be higher?
Attachment 1: IMCflaky.png
IMCflaky.png
Attachment 2: IMG_8015.JPG
IMG_8015.JPG
  14950   Tue Oct 8 10:29:19 2019   gautam   Update   IOO   MC Transmission scan

Summary:

There is ~ 7% variation in the power seen by the MC2 trans QPD, depending on the WFS offsets applied to the MC2 PIT/YAW loops. Some more interpretation is required however, before attributing this to spot-position-dependent loss variation inside the IMC cavity.

Analysis:

Attachment #1: This shows a scatter plot of the MC2 transmission and IMC REFL average values after the WFS loops have converged to the set offset positions. The size of the points is proportional to the normalized variance of the quantity. The purpose of this plot is to show that there is significant variation of the transmission, much more than the variance of an individual datapoint during the course of the averaging (again, the size of the circles is only meant to be indicative; the actual variance in counts is much smaller and wouldn't be visible on this plot scale). For a critically coupled cavity, I would have expected TRANS and REFL to be perfectly anti-correlated, but in fact they are, if anything, correlated. So maybe the WFS loops aren't exactly converging to optimize the input pointing for a given offset?

Attachment #2: Maps of the transmission/reflection as a function of the (YAW, PIT) offset applied. The radial coordinate does not yet mean anything physical - I have to figure out the calibration from offset counts to spot position motion on the optic in mm, to get an idea of how much of the surface of the optic we scanned relative to the beam size. The gray circles indicate the datapoints, while the colormaps are scipy-based interpolations.

Attachment #3: After talking with Koji, I explicitly show the correlation structure between IMC REFL DCMON and MC2 TRANS. The shaded ellipses indicate the 1, 2 and 3-sigma bounds for the 2D dataset, going radially outwards. The correlation coefficient for this dataset is 0.46, which implies moderate positive correlation. 🤔

Scan algorithm:

The following was implemented in a python script (a rough sketch of the logic is included after the list):

  1. Choose 2 independent random numbers from the uniform distribution in the interval [-0.5, 0.5] (in uncalibrated counts).
  2. One of these numbers is set as the error point offset for the QPD spot-centering PITCH WFS loop, while the other is the YAW offset.
  3. Wait for 600 seconds - this long wait is required because the step-response time for these loops is long. 
  4. If there is an MC unlock event - wait till the MC relocks, and then another 600 seconds, to give the WFS loops sufficient time to converge.
  5. Once the WFS loops have converged, average a few data channels (MC TRANS, REFL, WFS loop error points etc) for 10 seconds, and write these to a file.
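A minimal sketch of that logic (not the actual script; the channel names and the use of pyepics are assumptions, and the lock-loss handling of step 4 is omitted):

import random
import time

from epics import caget, caput   # pyepics; the real script may use cdsutils/ezca instead

# All channel names below are hypothetical placeholders.
PIT_OFFSET = 'C1:IOO-MC2_TRANS_PIT_OFFSET'
YAW_OFFSET = 'C1:IOO-MC2_TRANS_YAW_OFFSET'
READBACKS  = ['C1:IOO-MC_TRANS_SUM', 'C1:IOO-MC_RFPD_DCMON']

def average(chan, n=10, dt=1.0):
    # crude n-sample average of a slow channel over ~n*dt seconds
    vals = []
    for _ in range(n):
        vals.append(caget(chan))
        time.sleep(dt)
    return sum(vals) / len(vals)

with open('mc2_spot_scan.txt', 'a') as logfile:
    for _ in range(50):                       # number of random offsets to try
        pit = random.uniform(-0.5, 0.5)
        yaw = random.uniform(-0.5, 0.5)
        caput(PIT_OFFSET, pit)
        caput(YAW_OFFSET, yaw)
        time.sleep(600)                       # let the WFS loops converge (step 3)
        row = [pit, yaw] + [average(c) for c in READBACKS]
        logfile.write(' '.join('%.4f' % v for v in row) + '\n')
        logfile.flush()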

I am now setting the offsets of the WFS QPD loop to the place where there was maximum transmission, to see if this is repeatable. In fact it was. Looking at the QPD segment outputs, I noticed that the MC2 transmission spot was rather off-center on the photodiode. So I went to the MC2 in-air optical table and centered the beam until the outputs of the 4 segments were more balanced, see Attachment #4. Then I re-set the MC2 QPD offsets and re-enabled the WFS servos. The transmission is now a little lower at ~14,500 counts (but still higher than the ~14,200 counts we had before), presumably because more of the brightest part of the beam falls on the gap between quadrants. For a more reliable measurement, we should use a single-element photodiode for the MC2 transmission.

Quote:
  • Overnight, I'm going to run the MC2 spot position scanning code (in a tmux session on pianosa, started ~945pm) to see if we can find a place where the transmission is higher,
Attachment 1: MC2_transmission_scatter.pdf
MC2_transmission_scatter.pdf
Attachment 2: transmissionMaps.pdf
transmissionMaps.pdf
Attachment 3: correlStructure.pdf
correlStructure.pdf
  14952   Tue Oct 8 16:54:56 2019   rana   Update   IOO   IMC locking not working after this work

I think this offset-setting thing is not so good. People do this every few years, but putting offsets in servos means that you cannot maintain a stable alignment when there are changes in the laser power, PMC trans, etc. The better thing is to do the centering of the WFS spots with the unlocked beam, after the control offsets have been offloaded to the suspensions.

  14957   Tue Oct 8 20:39:42 2019   aaron   Update   IOO   WFS loop measurements

I installed nds2 on donatella with yum, but still can't import nds2.

  14958   Wed Oct 9 09:37:28 2019   aaron   Update   IOO   WFS loop measurements

I installed nds2 again, this time successfully with

conda install -c conda-forge python-nds2-client

 

  Draft   Wed Nov 6 20:34:08 2019   Koji   Update   IOO   EOM resonant box installed

 

Quote:

[Mirko / Kiwamu]

 The resonant box has been installed together with a 3 dB attenuator.

The demodulation phase of the MC lock was readjusted and the MC is now happily locked.

 

(Background)

We needed more modulation depth at each modulation frequency, so we installed the resonant box to amplify the signal levels.

Since the resonant box isn't well impedance-matched, it creates some amount of RF reflection (#5339).

In order to reduce the RF reflection somewhat, we decided to put a 3 dB attenuator between the generation box and the resonant box.

 

(what we did)

 + attached the resonant box directly to the EOM input with a short SMA connector.

 + put stacked black plates underneath the resonant box to support the weight of the box and to relieve the strain on the cable between the EOM and the box.

 + put a 3 dB attenuator just after the RF power combiner to reduce RF reflections.

 + readjusted the demodulation phase of the MC lock.

 

(Adjustment of MC demodulation phase)

 The demodulation phase was readjusted by adding more cable length in the local oscillator line.

After some iterations an additional cable length of about 30 cm was inserted to maximize the Q-phase signal.

So for the MC lock we are using the Q signal, which is the same as it had been before.

 

 Before the installation of the resonant box, the amplitude of the MC PDH signal was measured at the demodulation board's monitor pins.

The amplitude was about 500 mV peak-to-peak (see the attached pictures of the I-Q projection on an oscilloscope). After the installation, the amplitude decreased to 400 mV peak-to-peak.

Therefore the amplitude of the PDH signal decreased by 20%, which is not as bad as I expected, since the previous measurement indicated a 40% reduction (#2586).

 

 

  15019   Wed Nov 6 20:34:28 2019   Koji   Update   IOO   Power combiner loss (EOM resonant box installed)

Gautam and I were talking about some modulation and demodulation topics, and wondered what the power-combining situation is for the triple-resonant EOM installed 8 years ago. We noticed that the current setup has an additional ~5 dB loss associated with the 3-to-1 power combiner (Figure a).

N-to-1 broadband power combiners have an intrinsic loss of 10 log10(N) dB. You can think of the reciprocal process, power splitting (Figure b): a 2 W input to a 2-port power splitter gives us two 1 W outputs. The opposite process is power combining, as shown in Figure c. In this case, the two identical signals are constructively added in the combiner, but the output is not 20 Vpk but 14 Vpk. Considering the linearity, when one of the ports is terminated, the output is going to be half of that. So we expect a 27 dBm output for a 30 dBm input (Figure d). This fact is frequently overlooked, particularly when one combines signals at multiple frequencies (Figure e). We can avoid this kind of loss by using a frequency-dependent power combiner like a diplexer or a triplexer.
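To put numbers on it (a worked example consistent with the argument above):

\[ L = 10 \log_{10}(N)\ \mathrm{dB} \;\Rightarrow\; L_{2\to 1} = 3.0\ \mathrm{dB}, \qquad L_{3\to 1} = 10\log_{10}(3) \approx 4.8\ \mathrm{dB}, \]

which is the "additional ~5 dB" each of the three modulation signals loses in the broadband 3-to-1 combiner, and which a frequency-selective triplexer would avoid.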

Attachment 1: power_combiner.pdf
power_combiner.pdf
  15165   Tue Jan 28 16:01:17 2020   gautam   Update   IOO   IMC WFS servos stable again

With all of the shaking (man-made and divine), it was hard to debug this problem. Summary of fixes:

  1. The beam was misaligned on the WFS 1 and 2 heads, as well as the MC2 trans QPD. I re-aligned the former with the IMC unlocked, the latter (see Attachment) with the IMC locked (but the MC2 spot centering loops disabled).
  2. I reset the WFS DC and RF offsets, as well as the QPD offsets (once I had hand-aligned the IMC mirrors to obtain good transmission).

At least the DC indicators are telling me that the IMC locking is back to a somewhat stable state. I have not yet checked the frequency noise / RIN.

Attachment 1: QPD_recenter.png
QPD_recenter.png