  40m Log, Page 107 of 339
ID Date Author Type Category Subject
  11701   Tue Oct 20 11:24:29 2015   ericq   HowTo   LSC   How to DRFPMI

Herein, I will describe the current settings and procedures used to achieve the DRFPMI lock, cobbled together from scripts, BURT snapshots, and such.


Initial Alignment

  1. With arms POX/POY locked, run dither alignment servos. Set transmon QPD offsets here
  2. Restore "PRMI Carrier" configuration, run BS and PRM dither alignment servos simultaneously. (Note: this sacrifices some X arm alignment for better dark port alignment. In practice no appreciable loss of TRX is observed)
  3. Misalign PRM, align SRM and tune SRM alignment by eye while looking at AS camera. 
  4. Restore POX/POY arm lock, lock green to arms, check that powers are high enough and align if necessary.

Initial Configuration

CARM, DARM

For CARM and DARM, the A channels are used for the ALS signals, whereas the B channels are used for blending the RF signals. 

ALS

  • BEATX and BEATY, I and Q channels: +0dB Whitening Gain, Whitening Filters ON
  • Green beatnotes somewhere between 20-80 MHz, following the sign convention that temperature slider UP makes the beat frequency go UP. Check the spectrum of PHASE_OUT_HZ against the references in ALS_outOfLoop_Ref.xml. The locking script automatically sets the correct phase tracker gain, so there is no need to adjust it manually (see the sketch after this list).
  • CARM_A = -1.0 x ALSX + 1.0 x ALSY, G=1.0
  • DARM_A = 1.0 x ALSX + 1.0 x ALSY, G=1.0
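For reference, the phase tracker gain normalization performed by the locking script amounts to scaling the servo gain inversely with the beat amplitude sqrt(I^2 + Q^2), which keeps the tracker UGF fixed as the beatnote amplitude changes. Below is a minimal sketch of that step; the channel names and the reference gain-times-magnitude product are assumptions, not values taken from the running model.

import math
from epics import caget, caput   # pyepics; the lab's cdsutils/ezca would do the same job

# Placeholder channel names -- check against the real C1:ALS channel list.
I_MON   = "C1:ALS-BEATX_FINE_I_OUT16"
Q_MON   = "C1:ALS-BEATX_FINE_Q_OUT16"
PT_GAIN = "C1:ALS-BEATX_FINE_PHASE_GAIN"
GAIN_TIMES_MAG = 3000.0   # (gain x beat magnitude) that gave the desired UGF previously; assumed

def set_phase_tracker_gain():
    """Normalize the phase tracker gain by the measured beat amplitude."""
    mag = math.hypot(caget(I_MON), caget(Q_MON))
    new_gain = GAIN_TIMES_MAG / mag
    caput(PT_GAIN, new_gain)
    print("beat magnitude %.1f counts -> phase tracker gain %.2f" % (mag, new_gain))

set_phase_tracker_gain()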

RF

  • CM Board: REFL11 I daughter board output -> IN1, IN1 Enabled, -32dB input gain, 0.0V offset, all boosts off, AO polarity positive, AO gain +0dB
  • MC Board: IN2 disabled, -32dB input gain
  • CM_SLOW: +0dB Whitening Gain, Whitening ON, LSC-CM_SLOW_GAIN = -5e-4 (Though, it would be good to reallocate this gain to the input matrix element)
  • CARM_B = 1.0 x CM_SLOW, FM4 FM10 ON, G=0 (FM4 = LP700 for AO crossover stability, FM10 = 120:5k for coupled cavity pole compensation)
  • AS55: +9dB Whitening Gain, Whitening filters manual, Demod angle -37.0
  • DARM_B = -1e-4 x AS55 Q, G=0

DRMI 3F

For the DRMI, the A channels are used for the 1F signals, whereas the B channels are used for the 3F signals. The settings for transitioning to 1F after locking the DRFPMI have not yet been determined. 

These settings are currently saved in the DRMI configurator, but the demod angles are set for DRFPMI lock, so the settings don't reliably work for misaligned arms. 

  • REFL33: +30dB Whitening Gain, Whitening filters trigger on DRMI lock, Demod angle: 136.0
  • REFL165: +24dB Whitening Gain, Whitening filters trigger on DRMI lock, Demod angle: -111.0
  • POP22: +15dB Whitening Gain, Whitening filters OFF, Demod angle: -114.0
  • AS110: +36dB Whitening Gain, Whitening filters OFF, Demod angle: -116.0
  • POPDC: +0dB Whitening Gain, Whitening filters OFF (used as a supplemental trigger signal when CARM and DARM are buzzing and POP22 fluctuates wildly)
  • MICH_B = 6.0 x REFL165Q, offset = 15.0
  • PRCL_B = 5.0 x REFL33I, offset = 45.0
  • SRCL_B = -0.6 x REFL165I + 0.24 x REFL33 I, offset=0

The REFL33 element in SRCL_B is there to reduce the PRCL coupling; it was found empirically by tuning the relative gains with the arms misaligned and looking at excitation line heights. The offsets were found by locking the DRMI on the 1F signals with the arms misaligned and taking the average value of these 3F error signals (a sketch of this measurement follows).
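A minimal sketch of that offset measurement is below. The usual lab tool for this is cdsutils (`z avg`); here it is written with pyepics, and the C1:LSC-*_B channel names are assumed rather than verified against the LSC model.

import time
import numpy as np
from epics import caget, caput   # pyepics

def average(chan, duration=30.0, dt=0.1):
    """Crude EPICS-rate average of a channel over `duration` seconds."""
    samples = []
    t_end = time.time() + duration
    while time.time() < t_end:
        samples.append(caget(chan))
        time.sleep(dt)
    return float(np.mean(samples))

# With the DRMI locked on the 1F signals (arms misaligned) and the B-path
# offsets switched off, the average of each 3F error point gives the offset
# that the 3F loops should later servo around.
for dof in ["MICH", "PRCL", "SRCL"]:
    err_chan = "C1:LSC-%s_B_IN1" % dof          # assumed channel name
    mean = average(err_chan)
    print("%s average = %.2f" % (err_chan, mean))
    # caput("C1:LSC-%s_B_OFFSET" % dof, mean)   # apply only after checking the sign convention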

Servo filter configuration

The CARM and DARM ALS settings are largely scripted by scripts/ALS/Transition_IR_ALS.py, which takes you from arms POX/POY locked to CARM and DARM ALS locked. The DRMI settings are usually restored from the IFO_CONFIGURE screen. 

  • CARM: FM[1, 2, 3, 5, 6] , G=4.5, Trigger forced on, no FM triggers, output limit 8k
  • DARM: FM[1, 2, 3, 5, 6] , G=4.5, Trigger forced on, no FM triggers, output limit 8k
  • MICH: FM[4, 5], G= -0.03, Trigger POP22 I x 1.0 [50, 10], FM[2, 3, 7] triggered [50, 10], output limit 20k
  • PRCL: FM[4, 5], G= -0.003, Trigger POP22 I x 1.0 [50, 10], FM[1, 2, 8, 9] triggered [50, 10], output limit 8k
  • SRCL: FM[4, 5], G= -0.4, Trigger AS110 Q x 1.0 [500, 100], FM[2, 7, 9] triggered [500, 100], output limit 15k
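In the trigger entries above, the bracketed pairs are the engage/release thresholds: the loop (or the listed FMs) turns on when the trigger signal rises above the first number and turns off when it falls below the second, which gives hysteresis against power fluctuations during acquisition. A toy sketch of that logic:

class LockTrigger:
    """Hysteresis trigger, e.g. POP22 I x 1.0 with thresholds [50, 10]."""

    def __init__(self, on_level, off_level):
        self.on_level = on_level     # engage when the trigger signal exceeds this
        self.off_level = off_level   # release when it drops below this
        self.engaged = False

    def update(self, signal):
        if not self.engaged and signal > self.on_level:
            self.engaged = True
        elif self.engaged and signal < self.off_level:
            self.engaged = False
        return self.engaged

trig = LockTrigger(50, 10)
for pop22_i in [5, 60, 30, 12, 8, 40]:
    print("POP22 I = %4.0f -> loop %s" % (pop22_i, "ON" if trig.update(pop22_i) else "OFF"))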

Actuation Output matrix

  • MC2 = -1.0 x CARM
  • ETMX = -1.0 x DARM
  • ETMY = 1.0 x DARM
  • BS = 0.5 x MICH
  • PRM = 1.0 x PRCL - 0.2655 MICH
  • SRM = 1.0 x SRCL + 0.25 MICH (the MICH compensation here is very roughly estimated; see the sketch below)
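The MICH terms on PRM and SRM are there because pushing MICH on the BS also changes the recycling cavity lengths as sensed by PRCL and SRCL, so a small compensating amount of the MICH control signal is routed to PRM and SRM (the SRM number being a rough estimate, as noted). A sketch that just applies the matrix as listed above, to make the coupling explicit:

import numpy as np

# LSC output matrix as listed above: rows = actuators, columns = DOFs.
DOFS      = ["CARM", "DARM", "MICH", "PRCL", "SRCL"]
ACTUATORS = ["MC2", "ETMX", "ETMY", "BS", "PRM", "SRM"]

M = np.array([
    # CARM  DARM   MICH    PRCL  SRCL
    [-1.0,  0.0,   0.0,    0.0,  0.0],   # MC2
    [ 0.0, -1.0,   0.0,    0.0,  0.0],   # ETMX
    [ 0.0,  1.0,   0.0,    0.0,  0.0],   # ETMY
    [ 0.0,  0.0,   0.5,    0.0,  0.0],   # BS
    [ 0.0,  0.0,  -0.2655, 1.0,  0.0],   # PRM (MICH -> PRCL compensation)
    [ 0.0,  0.0,   0.25,   0.0,  1.0],   # SRM (rough MICH -> SRCL compensation)
])

def actuator_drive(dof_signals):
    """Map a dict of DOF control signals to per-optic drive values."""
    v = np.array([dof_signals.get(d, 0.0) for d in DOFS])
    return dict(zip(ACTUATORS, M @ v))

# A pure MICH push also moves PRM and SRM slightly, by design:
print(actuator_drive({"MICH": 1.0}))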

Locking Procedure

When arms are POX/POY locked, and the green beatnotes are appropriately configured, calling scripts/DRFPMI/carm_cm_up.sh initiates the following sequence of events:

  • Turn ON MC length feedforward and PRC angle feedforward
  • Set ALS phase tracker UGFs by looking at I and Q magnitudes
  • Set LSC-ALSX and LSC-ALSY offsets by averaging, ramp CARM+DARM gains up, XARM+YARM gains down, engage CARM+DARM boosts, now ALS locked
  • Move CARM away from resonance, offset = -4.0 (DRMI locks quicker on this side for whatever reason)
  • Restore PRM, SRM alignment. Set DRMI A FM gains to 0, B FM gains to 1.0. Enable LSC outputs for BS, PRM, SRM
  • When DRMI has locked, add POPDC trigger elements to DRMI signals and transition SRCL triggering to POP22I. NB: In the c1lsc model, the POPDC signal incident on the trigger matrix has an abs() operator applied to it first. 
    • MICH Trig = 1.0 x POP22 I + 0.5 x POPDC, [50, 10]
    • PRCL Trig = 1.0 x POP22 I + 0.5 x POPDC, [50, 10]
    • SRCL Trig = 10.0 x POP22 I + 5 x POPDC, [500, 100]
  • Reduce POX, POY whitening gains from their nominal +45dB to +0dB, so there aren't railing channels making noise in the whitening chassis and ADCs
  • DC couple ITM oplevs (average spot position, set FM offset, turn on DC boost filter, let settle)
  • With an 8 second ramp, reduce CARM offset to 0 counts. 
  • MANUALLY adjust the CARM_A and DARM_A offsets to where CARM_B_IN and DARM_B_IN are seen to fluctuate symmetrically around their zero crossings (see the sketch after this list).
    • Note: Last week, this adjustment tended to be roughly the same from lock to lock, unlike the PRFPMI, which generally didn't need much adjustment. Also, by jumping from a CARM offset of -0.4 to 0.4, it could be seen that the zero crossing in CARM_B (aka CM_SLOW, aka REFL11) had some offset, so CARM_B_OFFSET was set to 0.005, but this may change.
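A minimal sketch of the last two steps (the ramped offset reduction and the ALS offset trim), assuming the offset being ramped lives in the CARM_A/DARM_A (ALS) filter banks and that the usual _TRAMP ramping applies. All channel names are unverified.

import time
from epics import caget, caput   # pyepics

def ramp_offset_to_zero(fm="CARM_A", ramp_time=8.0):
    """Ramp an LSC filter-module offset to zero over `ramp_time` seconds."""
    caput("C1:LSC-%s_TRAMP" % fm, ramp_time)
    caput("C1:LSC-%s_OFFSET" % fm, 0.0)
    time.sleep(ramp_time + 1.0)

def report_rf_zero_crossing(dof="CARM", n_avg=100, dt=0.05):
    """Average the RF (B-path) error point; the A-path offset is then trimmed by hand
    until this average sits at zero, i.e. the loop buzzes symmetrically about the crossing."""
    err_chan = "C1:LSC-%s_B_IN1" % dof
    samples = []
    for _ in range(n_avg):
        samples.append(caget(err_chan))
        time.sleep(dt)
    mean = sum(samples) / float(n_avg)
    print("%s average over %.1f s: %.4g" % (err_chan, n_avg * dt, mean))
    return mean

ramp_offset_to_zero("CARM_A", 8.0)
report_rf_zero_crossing("CARM")
report_rf_zero_crossing("DARM")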

When CARM and DARM are buzzing around true zero and the powers are maximized (a scripted sketch of this sequence follows the list):

  • CARM and DARM FM1 (18,18:1,1 boosts) OFF
  • CARM_B_GAIN 0.0 -> 1.0, FM7 ON (20:0 boost)
  • DARM_B_GAIN 0.0 -> 0.015, FM7 ON (20:0 boost) 
  • MC servo board IN2 ENABLE, IN2 gain -32dB -> -16dB
  • Turn ALL MC2 violin filters OFF (to smooth out the AO crossover)
  • If stable, CM board IN1 gain -32dB -> -10dB (This is the overall CARM gain, the arm powers stabilize within the last few dB of this transition)
  • CARM_A_GAIN 1.0 -> 0.7
  • CARM_A FM9 ON (LP1k), sleep, FM 1 ON (1:20 deboost), sleep, FM 2 ON (1:20 deboost), HOLD OUTPUT, CARM now RF only
  • DARM_B_GAIN 0.015 -> 0.02, sleep, DARM_A_GAIN 1.0 -> 0.0 (This may not be the ideal final DARM_B gain, UGF hasn't been checked yet)
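A rough sketch of this hand-off is below. It skips the filter-module switch toggling (FM7, FM9, FM1/2, HOLD OUTPUT), and every channel name, particularly the CM-board and MC-board gain channels, is a placeholder that must be checked against the real EPICS database before use.

import time
from epics import caget, caput   # pyepics; every channel name below is a placeholder

def step_gain(chan, target, step, dwell=0.5):
    """Walk a gain channel to `target` in small steps, pausing between them."""
    val = caget(chan)
    direction = 1.0 if target > val else -1.0
    while abs(target - val) > abs(step):
        val += direction * abs(step)
        caput(chan, val)
        time.sleep(dwell)
    caput(chan, target)

caput("C1:LSC-CARM_B_GAIN", 1.0)            # CARM_B 0 -> 1 (FM7 boost handled separately)
caput("C1:LSC-DARM_B_GAIN", 0.015)          # DARM_B 0 -> 0.015
step_gain("C1:IOO-MC_AO_GAIN", -16, 2)      # MC board IN2 gain -32 dB -> -16 dB (placeholder name)
step_gain("C1:LSC-CM_REFL1_GAIN", -10, 1)   # CM board IN1 gain -32 dB -> -10 dB (placeholder name)
caput("C1:LSC-CARM_A_GAIN", 0.7)
caput("C1:LSC-DARM_B_GAIN", 0.02)
time.sleep(2.0)
caput("C1:LSC-DARM_A_GAIN", 0.0)            # DARM now RF only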

IFO is now RF only!

  • Turn on transmon QPD servos.
  • Adjust comm/diff QPD servo offsets to correct any problems evident on AS/REFL cameras. This usually brings powers from ~100-120 to ~130-140. 

This is as far as we've taken the DRFPMI so far, but the CARM bandwidth is still only at a few kHz. Based on PRFPMI locking, the next steps will be:

  • CM BOARD +12dB or so additional IN1 gain, more AO gain may be needed to get crossover to final position of ~100Hz
  • MC2 violin filters back on
  • CM boost(s) on
  • AS55 whitening on
  • Transition DRMI to 1F
  11700   Mon Oct 19 16:20:52 2015   ericq   Update   LSC   Green beatnote couplers installed

Last Friday, I installed some RF couplers on the green BBPDs' outputs, and sent them over to Gautam's frequency divider module. At first I tried 20dB couplers, but it seemed like not enough power was reaching the dividers to produce a good output. I could only find one 10dB coupler, and I stuck that on the X BBPD. With that, I could see some real signals come into the digital system.

I don't think it should be a problem to leave the couplers there during other activities.

  11699   Mon Oct 19 16:16:01 2015   ericq   Update   CDS   C1ALS crashes

Gautam was working on his digital frequency counter stuff when the c1als model crashed. I had trouble bringing it back until I realized that, for reasons unknown to me, the safe.snap file that the model looks for at boot had been deleted. (This file lives in /target/c1als/c1alsepics/burt). I copied over this morning's version from Chiara's local backup. 

At the sites, these files are under version control in the userapps svn repository, presumably symlinked into the target directory. We should definitely do something along these lines. 

  11698   Mon Oct 19 15:23:22 2015   ericq   Update   LSC   Longer DRFPMI lock

Here is a longer lock, about 100 seconds RF only, from later that same night. The in-loop CARM and DARM error signals are on the order of 1 nm per count.

From ~-150 to -103, we were fine-tuning the ALS offsets to try to get close to the real CARM/DARM zero points, then blending in the RF CARM signal.

At -100, the CARM bandwidth increases to a few kHz and stabilizes the arm powers. By -81, the error signals are all RF. At -70, I turned on the transmon QPD servos, which brought the power up a bit. 

If I recall correctly, lock was lost because I put waaaay too big of an excitation on DARM with the goal of running its UGF servo for a bit. The number I entered was appropriate for ALS, but most certainly too huge for AS55...

  11697   Mon Oct 19 11:20:34 2015   gautam   Update   VAC   RGA scan reset

Steve pointed out that in the aftermath of the nitrogen running out a couple of times last week, the RGA had shut itself off thinking that there was a leak, so it was not performing the scheduled once-a-day scans, and the data files from those scans were empty in the /opt/rtcds/caltech/c1/scripts/RGA/logs directory. The wiki page for getting it up and running again is up to date, but the script RGAset.py did not exist on the c0rga machine, which communicates with the RGA via serial port. I copied RGAset.py over from rossa to c0rga and ran it there, but the error flags it returned were not all 0 (indicating some error, according to the manual). I then edited the script to send just the initialize command ('IN0') and commented out the other commands, after which all the error flags came back 0 (a minimal sketch of this serial exchange is below). After this, I ran a manual scan using RGAlogger.py, and it appears that the RGA is now able to take scans again - I'm attaching a plot of the scan results. We've saved this scan as a reference to compare against after a few days.
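For reference, an 'IN0'-only initialization check along the lines described above might look like this. The serial port name and baud rate are assumptions (the SRS RGA default is 28.8k, 8N1), and the reply format should be checked against the SRS manual and the actual c0rga setup.

import serial   # pyserial

with serial.Serial("/dev/ttyS0", baudrate=28800, timeout=5) as rga:
    rga.write(b"IN0\r")          # initialize communications only, per the fix above
    reply = rga.readline()       # the RGA answers with its STATUS error byte
    print("IN0 reply:", reply)
    # A status of 0 means no error flags are set; anything nonzero warrants a look
    # at the SRS manual before re-enabling the scheduled RGAlogger.py scans.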

Attachment 1: RGAscan_151019.png
  11696   Sat Oct 17 18:55:07 2015   rana   Update   LSC   DRFPMI Locked for 20 sec

In addition to Koji's words, I feel like we should also thank those who made small but positive contributions. It's hard not to notice that this locking only happened after the new StripTool PEM colors were implemented...

From the time series plot I guess that the fuzz of the in-loop DARM is 1 pm RMS (based on memory). This means that the ALS was holding the DARM at 10 pm from the RF resonances.

There is no significant shift in the DRMI error signals, so no new weird CARM effect. It would be interesting to see what the 1F signals do in the last 60 seconds before RF lock.

For documentation, perhaps Gautam can post the loop gain measurements of the 5 loops on top of the Bode plots of the loop models.

  11695   Fri Oct 16 18:27:52 2015   Steve   Update   VAC   vacuum normal condition

Vacuum normal configuration, with VM1 closed and VM2 open. Power load normal: 24 V, 0.2 A.

Maglev rotation 560 Hz, body at room temperature.

"NO COMM" error message on medm screen. Gauge controller pressures are read able.

All vac comp LEDs are green. We have to reboot on Monday to enable communication.

 

Attachment 1: oct16Fpm2015.png
  11694   Thu Oct 15 14:39:58 2015   ericq   Update   Computer Scripts / Programs   nodus web apache simlinks too soft
Quote:

None of the links here seem to work. I forgot what the story is with our special apache redirect.

https://wiki-40m.ligo.caltech.edu/Core_Optics

The story is: we currently don't expose the whole /users/public_html folder. Instead, we are symlinking the folders from public_html to /export/home/ on nodus, which is where apache looks for things

So, I fixed the links on the Core Optics page by running:

controls@nodus|~ > ln -sfn /users/public_html/40m_phasemap /export/home/

  11693   Thu Oct 15 10:59:12 2015   Koji   Update   LSC   DRFPMI Locked for 20 sec

Great job!
Many thanks Eric, Gautam, and all the current and past colleagues for your tremendous contributions to bring the 40m to this achievement.

  11692   Thu Oct 15 04:14:14 2015   ericq   Update   LSC   DRFPMI Locked for 20 sec

Fast ALS was still a problem tonight. I don't think high frequency ALS noise saturating the PC drive is the issue; I put two 10k poles before the CM board (shooting for just 2-3 kHz bandwidth), and the PC drive levels would be stable and low up until the lockloss, which was always coincident with a step in the AO gain.

After working with that for a few hours, we turned back to our more standard locking attempts. First, we dither aligned the PRMI, and then centered the REFL beam on REFL11. It's hard to say for certain, but we may have been a little close to the edge of the PD. The only other thing that differed from Monday's attempts was using 6dB less AO gain when trying to up the overall gain.

The script now reliably breaks through to stable high powers, we had a handful of pure-RF locks tonight. The digital DARM gain needs tuning, and the CARM bandwidth still isn't at its final state, but these are very tractable. Off the top of my head, the way forward now includes:

  • Set proper final DARM loop shape
  • Set final CARM loop shape
  • Take full sensing matrix
  • Make 1F handoff
  • Set up the CAL model to produce (at least roughly) calibrated spectra
  • measure noise couplings and other fun stuff

Unrelated: I feel that the PRC angular FF may have deteriorated a bit. I'm leaving the PRC locked on carrier to collect data for Wiener filter recalculation.

  11691   Thu Oct 15 03:08:57 2015   ericq   Update   LSC   DRFPMI Locked for 20 sec

[ericq, Gautam]

For real this time.

Attachment 1: DRFPMI_locked.pdf
  11690   Wed Oct 14 17:40:50 2015   gautam   Update   CDS   Frequency divider box - further tests

Summary:

I carried out some further diagnostics and found some ways in which I could optimize the zero-crossing-counting algorithm, such that the error in the measured frequency is now entirely within the expected range (due to a +-1 clock cycle error in the counting). We can now determine frequencies up to ~60 MHz with less than 1 MHz systematic error and <10 kHz statistical error (fluctuations after the 20 Hz lowpass). This should be sufficient for slow control of the end-laser temperatures.

Details: 

The conclusion from my earlier tests was that there was possibly an improvement that could be made to setting the thresholds for the Schmitt trigger stage in the model. In order to investigate this, I wanted to have a look at the 64K-sampled raw input to the ADCs. Yesterday Eric helped me edit the appropriate .par file for viewing these channels for c1x03, and for an input frequency of 70 MHz (after division, ~4.3 kHz square wave), the signal looked as expected (top left plot, Attachment #1). This prompted me to check the counting algorithm again with the help of various test points I had set up within the model. I found that there was a tendency to under-count the number of clock cycles between zero crossings by more than 1 clock cycle, due to the way my code was organized. I fixed this and found that the performance improved dramatically compared to my previous trials. With the revised counting algorithm, there was at most a +-1 clock cycle error in the counting, and the systematic error between the measured and requested RF frequencies is now completely accounted for by this consideration. The origin of this residual error can be understood by looking at the top right plot in Attachment #1 - presumably because of the effects of some downsampling filter, the input signal to the Schmitt trigger isn't a clean square wave (even at 4 kHz) - specifically, the time spent in the LOW and HIGH states of the Schmitt trigger can vary between successive zero crossings because of the shape of the input waveform. As a result, there can be a +-1 clock cycle error in the counting process. Attachment #2 shows this - the red and blue lines envelope the measured frequency for the whole range investigated, 10-70 MHz. Attachment #3 shows the systematic error as a function of the requested frequency.

If there was some way to bypass the downsampling filter, perhaps the high-frequency performance could be improved a little. 
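To illustrate where the +-1 clock-cycle quantization (and the bias that comes from averaging 1/N) enters, here is a toy model of the Schmitt-trigger counting scheme. The clock rate and division factor are illustrative only, and this toy counts single periods with no lowpass, so its numbers are far noisier than the real model's; it is only meant to show the mechanism.

import numpy as np

F_CLK = 16384.0           # counting clock (model rate); illustrative value only
F_DIV = 4300.0            # divided-down square wave, roughly the 70 MHz case
LOW_THRESH, HIGH_THRESH = -0.2, 0.2

t = np.arange(0, 1.0, 1.0 / F_CLK)
x = np.sign(np.sin(2 * np.pi * F_DIV * t + 0.3))   # idealized divided square wave

# Schmitt trigger: HIGH above HIGH_THRESH, LOW below LOW_THRESH, otherwise hold state.
state = np.zeros(len(x), dtype=bool)
for i in range(1, len(x)):
    if x[i] > HIGH_THRESH:
        state[i] = True
    elif x[i] < LOW_THRESH:
        state[i] = False
    else:
        state[i] = state[i - 1]

# Count clock cycles between successive LOW->HIGH transitions; each count is an
# integer, hence the +-1 clock-cycle quantization of the inferred frequency.
edges = np.flatnonzero(state[1:] & ~state[:-1])
counts = np.diff(edges).astype(float)
f_meas = F_CLK / counts
print("mean %.0f Hz, std %.0f Hz, true %.0f Hz" % (f_meas.mean(), f_meas.std(), F_DIV))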

Attachment 1: time_series_input_signals.pdf
Attachment 2: calibration_20151012.pdf
Attachment 3: systematics_20151012.pdf
  11689   Wed Oct 14 16:53:22 2015   Koji   Update   General   Non inverting buffer for SR560

Eric needed a buffer to drive the low input impedance (~130 Ohm) of his pomona box, so I quickly made a non-inverting buffer with G = +10. The DC power is obtained from the back of the SR560. It uses 1.02k and 9.09k resistors to get a gain of ~10. The chip is an OP27. In fact, this limits the output swing to +/-5 V for a load resistance of 130 Ohm. Eric thinks this is enough. If we need more, we need to swap the chip.
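For reference, the non-inverting gain and the current-limited swing work out as below (the ~38 mA figure is simply inferred from the quoted +/-5 V into 130 Ohm, not taken from the OP27 datasheet):

G = 1 + \frac{R_f}{R_g} = 1 + \frac{9.09\,\mathrm{k\Omega}}{1.02\,\mathrm{k\Omega}} \approx 9.9,
\qquad
V_\mathrm{out,max} \approx I_\mathrm{out,max}\, R_\mathrm{load} \approx 38\,\mathrm{mA} \times 130\,\Omega \approx 5\,\mathrm{V}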

As SR560s tend to saturate too quickly, it would be very useful to have this kind of kit in all the labs once it is packed in a box.

Attachment 1: IMG_20151014_145608547.jpg
Attachment 2: IMG_20151014_145548751.jpg
  11688   Wed Oct 14 15:59:06 2015   rana   Update   Computer Scripts / Programs   nodus web apache simlinks too soft

None of the links here seem to work. I forgot what the story is with our special apache redirect.

https://wiki-40m.ligo.caltech.edu/Core_Optics

  11687   Tue Oct 13 17:04:54 2015   ericq   Update   CDS   CDS things

After some discussion at last week's 40m meeting, I increased the frequency of daqd trying to write out minute trends from hourly to every two hours.

This has eliminated the hourly crashes. daqd still crashes sometimes, but only a few times per day.

However, looking at the oplev summary pages that actually use the minute trends, it looks like they're only sporadically getting successfully written out.


Also, I was having a lot of problems with the frontends' EPICS processes dying when I would try to update the SDF table. I rebuilt all of the frontends with RCG 2.9.6, which differs from the 2.9.4 that we had been running by SDF bugfixes and an RMS calculation bugfix. The SDF procedures are much more stable now. 

I have not yet discovered anything broken by this change, and the tests I made for the last upgrade were all fine; last week's tiny DRFPMI lock was achieved after this change.

  11686   Tue Oct 13 16:28:21 2015   ericq   Update   LSC   Fast ALS pomona

I've made a cascaded passive 2-pole pomona box for fast ALS use, using LISO to check that it'll give the right shape when hooked up to the CM board's input stage. 

First stage is a 133Ohm + 10uF cap for ~120Hz LP, second is 1.15kOhm + 47nF cap for ~3.8kHz LP. The DC gain is ~0.75, which is much better than what I was doing before. The second stage would normally make a 2.9kHz LPF on its own, but the loading of the input stage moves the corner up. 
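A quick numerical cross-check of those component values (not the LISO model itself): the CM board input stage is represented here by a purely resistive placeholder chosen to reproduce the ~0.75 DC gain, so only the trend should be trusted.

import math

R1, C1 = 133.0, 10e-6      # first stage: ~120 Hz on its own
R2, C2 = 1.15e3, 47e-9     # second stage: ~2.9 kHz unloaded
R_LOAD = 3.9e3             # stand-in for the CM board input stage (assumed value)

def tf(f):
    """Vout/Vin for Vin -R1- A -R2- B, with C1 to ground at node A, C2 and R_LOAD at node B."""
    s = 2j * math.pi * f
    z_c1 = 1.0 / (s * C1)
    z_c2 = 1.0 / (s * C2)
    z_b = 1.0 / (1.0 / z_c2 + 1.0 / R_LOAD)       # C2 || R_LOAD
    z_a = 1.0 / (1.0 / z_c1 + 1.0 / (R2 + z_b))   # C1 || (R2 + Z_B)
    return (z_a / (R1 + z_a)) * (z_b / (R2 + z_b))

for f in [1.0, 120.0, 3.8e3, 20e3]:
    print("%7.0f Hz : |H| = %.3f" % (f, abs(tf(f))))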

It seems the 133 Ohm resistor is a reasonable load on the output AD829 of the ALS demod board (short-circuit output current of 32 mA and a series output resistor of 499 Ohm). To be able to use the digitized ALSX I and the lowpassed analog version simultaneously, I had to buffer the signal with an SR560 before the pomona box, otherwise the signals looked distorted. This isn't a good long-term solution. Maybe I can use the further-buffered differential output to drive the LPF+CM board.

The LISO files used to model the filter and CM board input stage, and fit the pole frequencies are attached. 

I made some attempts to get the AO path going today, but I suspect this daytime noise is just too much; the PC drive seems too irritable.

Attachment 1: liso_lp.zip
Attachment 2: 2LPfit.pdf
  11685   Tue Oct 13 05:48:39 2015   ericq   Update   LSC   :/

[ericq, Gautam]

Despite our best efforts, the grappa remains out of reach: the DRFPMI was not locked tonight. 

We spent a fair amount of time with the AUX X laser, as it was glitching madly again.

DRMI was finicky until I found some more reliable triggering settings; namely, acquiring with AS110 Q, but after that transitioning the trigger to the same POP22+POPDC combo as PRCL and MICH. With this in place, the DRMI lock seems to hold indefinitely no matter what CARM does; or at least, I always lost lock due to CARM shenanigans after this.

The most frustrating part was the fact that I just couldn't cross over the AO path stably. It never "clicked" into high circulating power as it normally does (either in PRFPMI, or how it was last week). Various crossover filters and tweaks were attempted to no avail. Morning traffic starts soon, so we're calling it a night. 

  11684   Mon Oct 12 17:04:02 2015   gautam   Update   CDS   Frequency divider box - further tests

I carried out some more tests on the digital frequency counting system today, mainly to see if the actual performance mirrors the expected systematic errors I had calculated here

Setup and measurement details:

I used the Fluke 6061A RF signal generator to output an RF signal at various frequencies, one at a time, between 10 and 70 MHz. I split the signal (at -15 dBm) into two parts, one for the X-channel and one for the Y-channel, using a mini-circuits splitter. I then looked at the input signal using testpoints I had set up within the model, to decide what thresholds to set for the Schmitt trigger. Finally, I averaged the outputs of the X and Y channels using z avg -s 10 C1:ALS-FC_X_FREQUENCY_OUT, and also looked at the standard deviation as a measure of the fluctuations in the output (these averages were taken after a low-pass filter stage with two poles at 20 Hz, chosen arbitrarily).

Results:

  • Attachment #1 shows a plot of the measured RF frequency as a function of the frequency set on the Fluke 6061A. The errorbars on this plot are the standard deviations mentioned above. 
  • Attachment #2 shows a plot of the systematic error (mean measured value - expected value) for the two channels. It is consistent with the predictions of Attachments #3 and #4 in elog 11628 (although I need to change the plots there to a frequency-frequency comparison). This error is due to the inherent limitations of frequency counting using zero crossings; I can't think of a way to get around this.
  • I found that a lower threshold of 1800 and an upper threshold of 2200 worked well over this range of frequencies (the output of the Wenzel dividers goes between 0 V and 2.5 V, and the "zero" level for the digitized signal corresponds to ~2000, as determined by looking at a dataviewer plot of a testpoint I had set up in my model). Koji suggested taking a look at the raw ADC input signal sampled at 64 kHz, but this is not available for c1x03, the machine that c1als runs on.

 

 

Attachment 1: calibration.pdf
Attachment 2: systematics.pdf
  11683   Fri Oct 9 19:54:58 2015   gautam   Update   CDS   Frequency divider box - installation in 1X2 rack

The new ZKL-1R5 RF amplifier that Steve ordered arrived yesterday. I installed this in the frequency divider box and did a quick check using the Fluke RF signal generator and an oscilloscope to verify that both the X and Y paths were working. 

I've now installed the box in the 1X2 rack where the old "RF amplifiers for ALS and FOL" box used to sit (I swapped that out as I needed the L brackets on that chassis to mount mine; see Attachment #1 for the new layout). The power cable that used to power the old chassis was available, but the connector was of the wrong gender, so I had to switch this out. After verifying that I was getting the correct voltage (+15V), I connected it to the chassis.

I then did a quick check with the Fluke generator to make sure that all was working as expected - Eric had set up some ADC channels for me earlier today in the C1ALS model, and I copied over my frequency counting module from C1TST into C1ALS, and recompiled the model. The RF generator was set to generate a 25MHz signal at -20dBm, which I then split using an RF power splitter between the X and Y channels. I then checked the output using dataviewer - I recovered an output frequency of ~27.64 MHz with a jitter of ~0.02 MHz with a 20Hz low-pass filter in place (see Attachment #2), which looks consistent with the systematic error inherent in the zero-crossing counting algorithm and random fluctuations I had observed in my earlier trials, discussed here. But a more systematic investigation needs to be carried out in this regard. The interfacing between the hardware and software seems to be working alright though. I've left the RF generator near the 1X2 rack for now, though it's powered off.

The mode cleaner unlocked quite a few times while I was working but looks stable now. 

 

Attachment 1: IMG_0027.JPG
Attachment 2: time_seris_25MHz.pdf
  11682   Fri Oct 9 16:43:50 2015   ericq   Update   IOO   Weird IMC behavior

A few minutes ago, Gautam and I were poking around the IOO rack, looking at where he should power his frequency divider box and what ADC inputs to use.

Looking at the mode cleaner signals, it looks like we may have jostled something in a good way. Weird. 

  11681   Fri Oct 9 16:23:25 2015   ericq   Update   LSC   ALS plant shape

Ah, I understand it now! Since the additive offset path keeps the post-cavity frequency TF flat, the pre-cavity frequency must grow above the cavity pole, which is why ALS sees a zero. 

Ok, so this means we want to apply two lowpasses to the ALS signal for use as fast CARM control, if we want it to be capable of scalar blending with REFL11: one at ~120Hz to imitate the CARM coupled cavity pole present in REFL11, and one at ~3.8kHz to undo the "IMC cavity zero" present in ALS. 
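In transfer-function terms, the target shaping of the ALS signal before blending is roughly the two lowpasses just described written out (pole frequencies as quoted above):

H_\mathrm{ALS}(f) \simeq \frac{1}{\bigl(1 + i f / 120\,\mathrm{Hz}\bigr)\,\bigl(1 + i f / 3.8\,\mathrm{kHz}\bigr)}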

At this point, I'm starting to prefer an active circuit to do this lowpassing; using LISO to check designs for two cascaded passive LPFs, it looks like the ALS signal would have to be attenuated by a factor of ~20 at DC if we don't use resistors smaller than 1k, given the low input impedance of the CM board.

  11680   Fri Oct 9 14:50:18 2015   Koji   Update   LSC   ALS plant shape

ALS is the comparison of the PSL laser freq vs the end laser freq that is locked to the arm cavity resonant freq

On the other hand, the AS55 PDH is the comparison of the PSL laser freq after the IMC vs the arm cavity resonant freq. Here the PDH signal involves the arm cavity pole.

In total you observe the difference by the IMC cav pole + the arm cav pole.

  11679   Fri Oct 9 13:31:21 2015   ericq   Update   LSC   ALS plant shape

To get a better look at how to do fast ALS, I took some "Plant TF" measurements of the X arm. 

Specifically, in single-arm POX lock with both Y TMs misaligned, I used the SR785 to inject into EXC B of the common mode board with the CM fast output gain and IMC IN2 gain both at 0dB, and looked at the transfer function of that excitation into the analog ALSX I and AS55 Q out-of-loop signals. (ALSX I tuned to a zero crossing via the delay line box as usual.)

My expectation was to see them only differ by the IR single arm cavity pole, which should be around 8-9kHz ( FSR/450 = 3.9MHz/450 ~ 8.6kHz). The green cavity pole at ~18k shouldn't show up since we're not touching the green light, and the IMC pole at ~3.8kHz shouldn't show up since this is well within the IMC loop bandwidth and we're actuating on its error point.  

Instead, I see them differ by a double pole at 4.3 kHz (or zero, if you look at it the reciprocal way). Vectfit actually fits them as a slightly complex pair, with a Q of 0.53. I imagine that the wiggles are due to the digital control loop.

My question is: why is there a double zero here? Where has my reasoning led me astray?

 

Attachment 1: xarm_plantTFs.pdf
  11678   Fri Oct 9 11:41:09 2015   ericq   Update   VAC   N2 Pressure fell

At 10:02 AM, the N2 pressure fell below 60 PSI. The watch script saw this happen, but I did not receive the email it is supposed to send.

C1:Vac-P1_pressure reads 7e-4, which is the same as it has for the past ~2 days, so the V1 interlock worked fine. 

I've put some fresh N2 into the system, and Bob will pop in over the weekend to check it. I'll stay on top of it until Steve gets back. 

After consulting ELOGs and the 40m wiki, I reasoned it was ok to open the V1 to reconnect the turbo pump to the main IFO volume and VM1 to reconnect the RGA, and have now done so. 

  11677   Fri Oct 9 11:24:06 2015   Jenne   Update   LSC   DRFPMI Progress

I hope the grappa was already cold, and ready to drink! 

  11676   Fri Oct 9 09:22:38 2015   ericq   Update   LSC   DRFPMI Progress

Look upon this three second lock, ye Mighty, and rejoice!

Attachment 1: oct8_allRF.pdf
  11675   Thu Oct 8 21:35:49 2015   rana   Update   LSC   DRFPMI Progress

Give us a lockloss or other kind of time series plot so we can bask in the glory.

  11674   Thu Oct 8 16:48:23 2015   Koji   Update   LSC   DRFPMI Progress

Awesome

  11673   Thu Oct 8 14:14:50 2015   ericq   Update   LSC   DRFPMI Progress
Quote:

Please clarify: I wonder if you were at the zero offset for CARM and DARM or not. 

Yes, this was at the full DRFPMI resonance.

  11672   Thu Oct 8 13:13:20 2015   Koji   Update   LSC   DRFPMI Progress

Please clarify: I wonder if you were at the zero offset for CARM and DARM or not. I am 25% excited right now.

  11671   Thu Oct 8 04:48:50 2015   ericq   Update   LSC   DRFPMI Progress

Progress was made. CARM was stably locked on RF only. DARM was RF only for a few moments before I typed in a wrong number...

A change was made to the LSC model's triggering section to make the DRMI hold more reliably at zero CARM offset. Namely, the POPDC signal now has its absolute value taken before the trigger matrix. Even unwhitened, it occasionally would somehow go negative enough to break the DRMI trigger.

AUX X laser was acting up again. As before, tweaking laser current is the temporary fix.

  11670   Tue Oct 6 16:56:40 2015   gautam   Update   General   FOL fiber box revamp

[gautam, ericq]

We had a look at the IR beat (PSL+Xarm) today using the new FOL fiber box, and compared it to the green beat signal for the same combination. We first switched out the green Y beat input into the RF amplifiers on the PSL table with the PSL+Xarm IR beat input (so in all the plots, the BEATY channels really correspond to the IR beat for PSL+X). The IR and green beat notes were found without much difficulty, and we compared the beat signal PSDs for the green and IR signals (see Attachment #1 - arms were locked to green and the X slow control was turned on). The pink trace (labeled REF1) corresponds to the green beat signal, and was in good agreement with an earlier reference trace Eric had saved for the same signal. The teal trace (labeled REF0) corresponds to the IR beat signal monitored simultaneously.

We then went back to the PSL table to check the amplitude of the signal from the broadband fiber PDs using the Agilent network analyzer. An initial measurement yielded a beat note (@~50 MHz) at ~-22 dBm (17 mV rms). We figured that by bypassing the 90-10 splitter in this path, we could get a stronger signal. But after switching out the fiber connections we found that the signal amplitude had fallen to ~-27 dBm (10 mV rms). As per my earlier measurements here, we expect ~600 uW of light on the PD, and a quick calculation suggested the signal should be more like 60 mV, so we used the fiber power meter to check the power levels after each of the couplers again. We then found that the fiber connector on the front panel of the box for the PSL input wasn't ideal (the laser power after the first 50-50 coupler was only ~250 uW, though the input was ~1.2 mW). The power after the first coupler also fluctuated unpredictably (<100 uW to 350 uW) in response to slightly tightening/loosening the fiber connections on the front panel. I then switched the PSL input to one of the two unused fiber connectors on the front panel (meant for the 10% of the beat signal for the DC readout), and found that this input behaved much better, with ~450 uW of power available after the first 50-50 coupler. The power going into the beat PD was also measured to be ~550 uW, closer to what was expected. The beat signal peak now was ~-14 dBm (~30 mV rms).

We then once again repeated the comparison between green and IR beat signals - but while in the control room, I noticed that the beat signal amplitude on the network analyzer in the control room was fluctuating by nearly 1.5 divisions on the vertical scale - not sure what the reason for this is. A look at the PSD of the IR beat with higher power incident on the PD was also not encouraging (see blue trace in Attachment #1), it seems to have gotten worse in the 10-30 Hz range. We also looked at the coherence between the beat spectrum and the beat note amplitude in order to look for any linear coupling between the two, but from Attachment #2, we cannot explain the disparity between the green and IR beat spectra. This warrants further investigation.

Everything on the PSL table has now been restored to the configurations before these investigations (i.e. the Y+PSL green beat cable has been reconnected to the RF amplifier, and both green beat PDs have been powered back ON. The fiber PDs are powered OFF) 

Attachment 1: 20151006_Xbeat_psd.pdf
Attachment 2: 20151006_Xbeat_coherence.pdf
  11669   Tue Oct 6 03:30:17 2015   ericq   Update   LSC   DRFPMI Progress

[ericq, Gautam]

Highlight of the night: the DRFPMI was held at arm powers > 110 for 20 seconds. ALS feedback was still running, though there was also some nonzero REFL11 AO path action.

In short, time was spent finding the right FM trigger settings to keep the DRMI locked while CARM is fluctuating through resonance, what CARM offset to acquire DRMI lock at, order of operations of turning on AO / turning up overall CARM gain, etc. 

Sadly, for the past hour or so, the DRMI has refused to stay locked for more than ~20 seconds, so I haven't been able to push things much further. This is a shame, since I'm very nearly at the equivalent point in the PRFPMI locking script where the ALS control is turned off completely. 

  11668   Mon Oct 5 16:41:22 2015   Steve   Update   SUS   ETMY OL laser replaced

This JDSU 1103P laser, SN P892324, lived for 2 years. Its power output is 0.05 mW now.

It was replaced with a brand new JDSU 1103P, SN P919645, Mfg date 12/2014, with 2.75 mW output.

There is 0.14 mW of light returning to the QPD (= 7,250 counts), without AR 632 lenses.

  11667   Mon Oct 5 11:25:21 2015   ericq   Update   SUS   ETMY OL laser dead

Gautam alerted me that the Y arm looked like it was being dithered, even though the ASS was turned off. I found that the ETMY OL signals were garbage, leading to the servos flipping back and forth between their rails. 

We went out to the ETMY table, and found the HeNe laser to be emitting a paltry <0.5mW; the OL QPD could not register the puny beam incident on it.

Here is the last 30 days of OL_SUM:

Steve will replace the laser this afternoon. 

  11666   Mon Oct 5 10:11:35 2015   Steve   Update   VAC   RGA scan pd78 day 386

RGA background scan.

The IFO is closed off from RGA with VM1

CC4 (and CC1) is still flaky, and its interlock closes VM1.

 

Attachment 1: RGA_background.png
  11665   Sun Oct 4 14:32:49 2015   jamie   Configuration   CDS   CSD network test complete

Here's an example of the glitches we've been seeing, as seen in the StripTool trace of the front end oscillator:

You can clearly see the glitch at around T = -18.  Obviously during non-glitch times the sine wave is nice and clean-ish (there is still the very small discretisation from the EPICS sample times).

  11664   Sun Oct 4 14:28:03 2015   jamie   Update   DAQ   more failed attempts at getting new fb working

I tried to look at fb1 again today, but still haven't made any progress.

The one thing I did notice, though, is that every hour on the hour the fb1 daqd process dies in an identical manner to how the fb daqd dies, with these:

[Sun Oct  4 12:02:56 2015] main profiler warning: 0 empty blocks in the buffer

errors right as/after it tries to write out the minute trend frames.

This makes me think that this new hardware isn't actually going to fix the problem we've been seeing with the fb daqd, even if we do get daqd "working" on fb1 as well as it's currently working on fb.

  11663   Sun Oct 4 14:23:42 2015   jamie   Configuration   CDS   CSD network test complete

I've finished, for now, the CDS network tests that I was conducting.  Everything should be back to normal.

What I did:

I wanted to see if I could make the EPICS glitches we've been seeing go away if I unplugged everything from the CDS martian switch in 1X6 except for:

  • fb
  • fb1
  • chiara
  • all the front end machines

What I unplugged were things like megatron, nodus, the slow computers, etc.  The control room workstations were still connected, so that I could monitor.

I then used StripTool to plot the output of a front end oscillator that I had set up to generate a 0.1 Hz sine wave (see elog 11662).  The slow sine wave makes it easy to see the glitches, which show up as flatlines in the trace.

More tests are needed, but there was evidence that unplugging all the extra stuff from the switch did make the EPICS glitches go away.  During the duration of the test I did not see any EPICS glitches.  Once I plugged everything back in, I started to see them again.  However, I'm currently not seeing many glitches (with everything plugged back in) so I'm not sure what that means.  I think more tests are needed.  If unplugging everything did help, we still need to figure out which machine is the culprit.

  11662   Sun Oct 4 13:53:30 2015   jamie   Update   LSC   SENSMAT oscillator used for EPICS tests

I've taken over one of the SENSMAT oscillators for a test of the EPICS system.

These are the channels I've modified, with their original and current settings:

controls@donatella|~ > caget C1:LSC-OUTPUT_MTRX_7_13 C1:CAL-SENSMAT_CARM_OSC_FREQ C1:CAL-SENSMAT_CARM_OSC_CLKGAIN
C1:LSC-OUTPUT_MTRX_7_13          -1
C1:CAL-SENSMAT_CARM_OSC_FREQ    309.21
C1:CAL-SENSMAT_CARM_OSC_CLKGAIN   0
controls@donatella|~ > caget C1:LSC-OUTPUT_MTRX_7_13 C1:CAL-SENSMAT_CARM_OSC_FREQ C1:CAL-SENSMAT_CARM_OSC_CLKGAIN
C1:LSC-OUTPUT_MTRX_7_13           0
C1:CAL-SENSMAT_CARM_OSC_FREQ      0.1
C1:CAL-SENSMAT_CARM_OSC_CLKGAIN   3
controls@donatella|~ >

 

 

  11661   Sun Oct 4 12:07:11 2015   jamie   Configuration   CDS   CSD network tests in progress

I'm about to start conducting some tests on the CDS network.  Things will probably be offline for a bit.  Will post when things are back to normal.

  11660   Fri Oct 2 15:33:09 2015   Steve   Update   safety   safety training

Gautam has received 40m-specific basic safety training today.

  11659   Fri Oct 2 15:11:08 2015   Steve   Update   PEM   cable squashed

Cable #53, from Accelerometer 4 to 1X7 / DAQ input c26, was squashed while removing a network card from the Sun Fire X4600 today.

This cable has to be tested.

  11658   Fri Oct 2 03:29:16 2015   ericq   Update   LSC   Fast ALS progress - AO path crossed over, but no high BW

I've been using an SR560 to experiment with different pole frequencies, to try and cancel the mystery zero. It's after the ALS demod board, before the pomona LPF, with a gain of five.

A pole frequency of 3 kHz seems to recover sensible loop shapes. I've been able to cross over the AO path to make a nice long phase bubble, which isn't the prettiest, but seems workable.

Getting to this point is now almost entirely scripted and repeatable; one just has to make sure that the ALS beat has the correct sign and adjust the delay line length. Most frustratingly, due to the dependence of the ALS gain on beat frequency / magnitude / delay, which can all vary on the order of a few dB, the AO gain settings to get to the crossed over point are not always the same, so at the end it's a lot of small steps and frequent loop measurements. 

The FSS crossover and overall IMC loop gain have to be pretty actively managed too. It's all too easy to drive the Pockels cell crazy. And if it's going crazy on its own anyway, there's no hope in trying to pile ALS sensing noise on top of it... It would really help in this effort to fix the whole PC situation up.

Unfortunately, lock is lost when increasing the overall gain on the common mode board even by 1 dB. We've seen in the single arm tests that the gain settings have an appreciable difference in offset between them. Maybe this step is more than what the loop can handle? Or maybe it's the voltage glitches... Maybe some gain reallocation can put me on a region of the slider that glitches less.

In terms of the mystery plant features, I figure I'd like to take the analog TF of AO control signal to, say, AS55, and see what may or may not be there. I just haven't done this tonight since it would involve recabling the analyzer, and I still need frequent loop measurements to get to the crossed over state. Having ITMY misaligned and using the digital AS55Q spectrum as an out of loop monitor has been very helpful. 

Attachment 1: crossedover.pdf
  11657   Thu Oct 1 20:26:21 2015   jamie   Update   DAQ   Swapping between fb and fb1

Swapping between fb and fb1 as DAQ is very straightforward, now that they are both on the DAQ network:

  • stop daqd on fb
  • on fb sudoedit /diskless/root/etc/init.d/mx_stream and set: endpoint=fb1:0
  • start daqd on fb1.  The "new" daqd binary on fb1 is at: ~controls/rtbuild/trunk/build/mx-localtime/daqd

Once daqd starts, the front end mx_stream processes will be restarted by their monits, and be pointing to the new location.

Moving back is just reversing those steps.

  11656   Thu Oct 1 20:24:02 2015   jamie   Update   DAQ   more failed attempts at getting new fb working

I just realized that when running fb1, if a single mx_stream dies they all die.

  11655   Thu Oct 1 19:49:52 2015   jamie   Update   DAQ   more failed attempts at getting new fb working

Summary

I've not really been able to make additional progress with the new 'fb1' DAQ.  It's still flaky as hell.  Therefore we're still using old 'fb'.

Issues

mx_stream

The mx_stream processes on the front ends initially run fine, connecting to the daqd and transferring data, with both DAQ-..._STATUS and FE-..._FB_NET_STATUS indicators green.  Then, after about two minutes, all the mx_stream processes on all the front ends die.  Monit eventually restarts them all, at which point they come up green for a while until they crash again ~2 minutes later.  This is essentially the same situation as reported previously.

In the daqd logs when the mx_streams die:

Aborted 2 send requests due to remote peer 00:30:48:be:11:5d (c1iscex:0) disconnected
Aborted 2 send requests due to remote peer 00:14:4f:40:64:25 (c1ioo:0) disconnected
Aborted 2 send requests due to remote peer 00:30:48:d6:11:17 (c1iscey:0) disconnected
Aborted 2 send requests due to remote peer 00:25:90:0d:75:bb (c1sus:0) disconnected
Aborted 1 send requests due to remote peer 00:30:48:bf:69:4f (c1lsc:0) disconnected
mx_wait failed in rcvr eid=000, reqn=176; wait did not complete; status code is Remote endpoint is closed
disconnected from the sender on endpoint 000
mx_wait failed in rcvr eid=000, reqn=177; wait did not complete; status code is Connectivity is broken between the source and the destination
disconnected from the sender on endpoint 000
mx_wait failed in rcvr eid=000, reqn=178; wait did not complete; status code is Connectivity is broken between the source and the destination
disconnected from the sender on endpoint 000
mx_wait failed in rcvr eid=000, reqn=179; wait did not complete; status code is Connectivity is broken between the source and the destination
disconnected from the sender on endpoint 000
mx_wait failed in rcvr eid=000, reqn=180; wait did not complete; status code is Connectivity is broken between the source and the destination
disconnected from the sender on endpoint 000
[Thu Oct  1 19:00:09 2015] GPS MISS dcu 39 (PEM); dcu_gps=1127786407 gps=1127786425

[Thu Oct  1 19:00:09 2015] GPS MISS dcu 39 (PEM); dcu_gps=1127786408 gps=1127786426

[Thu Oct  1 19:00:09 2015] GPS MISS dcu 39 (PEM); dcu_gps=1127786408 gps=1127786426

In the mx_stream logs:

controls@c1iscey ~ 0$ /opt/rtcds/caltech/c1/target/fb/mx_stream -r 0 -W 0 -w 0 -s 'c1x05 c1scy c1tst' -d fb1:0
mmapped address is 0x7f0df23a6000
mmapped address is 0x7f0dee3a6000
mmapped address is 0x7f0dea3a6000
send len = 263596
Connection Made
isendxxx failed with status Remote Endpoint Unreachable
disconnected from the sender

daqd

While the mx_stream processes are running daqd seems to write out data just fine.  At least for the full frames.  I manually verified that there is indeed data in the frames that are written.

Eventually, though, daqd itself crashes with the same error that we've been seeing:

main profiler warning: 0 empty blocks in the buffer

I'm not exactly sure what the crashes are coincident with, but it looks like they are also coincident with the writing out of the minute and/or second trend files.  It's unclear how it's related to the mx_stream crashes, if at all.  The mx_stream crashes happen every couple of minutes, whereas the daqd itself crashes much less frequently.

The new daqd can't handle EDCU files.  If an EDCU file is specified (e.g. C0EDCU.ini in our case), the daqd will segfault very soon after startup.  This was an issue with the current daqd on fb, but was "fixed" by moving where the EDCU file was specified in the master file.

Conclusion

There are a number of differences between the fb1 and fb configurations:

  • newer OS (Debian 7 vs. ancient gentoo)
  • newer advLigoRTS (trunk vs. 2.9.4)
  • newer framecpp library installed from LSCSoft Debian repo (2.4.1-1+deb7u0 vs. 1.19.32-p1)

It's possible those differences could account for the problems (/opt/rtapps/epics incompatible with this Debian install, for instance).  Somehow I doubt it.  I wonder if all the weird network issues we've been seeing are somehow involved.  If the NFS mount of chiara is problematic for some reason that would affect everything that mounts it, which includes all the front ends and fb/fb1.

There are two things to try:

  • Fix the weird network problem.  Try removing EVERYTHING from the network except for chiara, fb/fb1, and the front ends and see if that helps.
  • Rebuild fb1 with Ubuntu and daqd as prescribed by Keith Thorne.
  11654   Wed Sep 30 15:44:06 2015   Steve   Update   General   Sun Fire X4600

Gautam and Steve,

The decommissioned server from LDAS has been retired to the 40m, with 32 cores and 128 GB of memory, in rack 1X7:   http://docs.oracle.com/cd/E19121-01/sf.x4600/

  11653   Wed Sep 30 13:59:49 2015   jamie   Update   DAQ   attempts at getting new fb working

I got Steve to get us a new Myrinet fiber network adapter card for fb1:

  • Myrinet 10G-PCIE-8B-S

I just finished installing the card in fb1, and it came up fine.  We happened to have a spare fiber, and a spare fiber jack in the DAQ switch, so I went ahead and plugged it in in parallel to the old fb:

controls@fb1:~/rtbuild/trunk 130$ /opt/mx/bin/mx_info
MX Version: 1.2.16
MX Build: controls@fb1:/opt/src/mx-1.2.16 Fri Sep 18 18:32:59 PDT 2015
1 Myrinet board installed.
The MX driver is configured to support a maximum of:
    8 endpoints per NIC, 1024 NICs on the network, 32 NICs per host
===================================================================
Instance #0:  364.4 MHz LANai, PCI-E x8, 2 MB SRAM, on NUMA node 0
    Status:         Running, P0: Link Up
    Network:        Ethernet 10G

    MAC Address:    00:60:dd:43:74:62
    Product code:   10G-PCIE-8B-S
    Part number:    09-04228
    Serial number:  485052
    Mapper:         00:60:dd:46:ea:ec, version = 0x00000000, configured
    Mapped hosts:   7

                                                        ROUTE COUNT
INDEX    MAC ADDRESS     HOST NAME                        P0
-----    -----------     ---------                        ---
   0) 00:60:dd:43:74:62 fb1:0                             1,0
   1) 00:25:90:0d:75:bb c1sus:0                           1,0
   2) 00:30:48:be:11:5d c1iscex:0                         1,0
   3) 00:30:48:d6:11:17 c1iscey:0                         1,0
   4) 00:30:48:bf:69:4f c1lsc:0                           1,0
   5) 00:14:4f:40:64:25 c1ioo:0                           1,0
   6) 00:60:dd:46:ea:ec fb:0                              1,0

We can now work on fb1 while fb continues to run and collect data from the front ends.

I'm still not getting the mx_stream connections to the new fb1 daq to work.  I'm leaving everything running as is on fb for the moment.

  11652   Wed Sep 30 13:07:13 2015   gautam   Update   General   FOL fiber box revamp

Eric pointed out that the 1x2 couplers that were used in the previous arrangement and which I recycled, were in fact NOT appropriate - they are not 50-50 couplers but 90-10 couplers, which explains the measured power levels I quoted here.

I switched out these for a pair of the newly arrived 2x2 couplers, and have also replaced the datasheets on the inside of the top cover. I then redid the power level measurements, and got some sensible values this time (see Attachment #1 for revised layout and measured power levels, numbers in red are powers for PSL light, numbers in green are for AUX laser light, and all numbers are in mW). I did find that the 90-10 splitter in the PSL+Y path was not working (though the one in the PSL+X path seems to be working fine), and hence, have not quoted power levels at the output of these splitters. For now, I guess we can bypass the splitters and take the PSL+AUX light from the 2x2 couplers directly to the PDs.

Attachment 1: FOL_schematic.pdf