ID | Date | Author | Type | Category | Subject
  13266 | Tue Aug 29 02:08:39 2017 | gautam | Update | ALS | Fiber ALS noise measurement

I was having a chat with EricQ about this today, just noting some points from our discussion down here so that I remember to look into this tomorrow.

  • I believe that currently, the channel C1:ALS-BEATX_FINE_PHASE_OUT_HZ_DQ and its Y-arm analog read out the frequency of the green beat, in Hz.
  • In the comparison I plotted, I WRONGLY divided the spectrum of the IR beat by 2, instead of multiplying it by 2, which is what should actually be done for an apples-to-apples comparison.
  • The deeper question is, what should this channel actually readout?
  • Looking at my code from past arm scans etc., I see that I am dividing the downloaded data by 2 in order to convert the X-axis of these scans to "IR Hz" - and this IR-referred quantity should really be all we care about.
  • So I think I will have to re-do the cts-to-Hz calibration in the ALS models. It should be possible to do ~10 FSR scans with the IR beat, and then we can use the sideband resonances (presumably the sideband frequencies are known with better precision than the arm length, and hence the FSR) to calibrate the phase tracker - a sketch of this is given after this list.
  • I don't think this changes the fact that the Fiber ALS situation has been improved - but I will have to repeat the measurement to be sure. The improvement may not be as stellar as I tried to sell in my previous elog.
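As a concrete illustration of the proposed calibration (all numbers below are placeholders, not measurements): once two features in a scan have been identified whose frequency separation is known from the RF modulation frequency rather than from the arm length/FSR, the phase-tracker counts-to-Hz factor is just the ratio of the count separation to that known frequency separation.

```python
# Hedged sketch of the phase-tracker calibration described above; the numbers
# below are placeholders, not measurements.

f_known_sep = 2 * 11.066e6   # [Hz] e.g. separation of the upper/lower 11 MHz
                             # sideband resonances of the same carrier order (placeholder)

# Phase-tracker outputs (counts) at which the two identified features appeared
cts_feature_1 = 1.10e5       # placeholder
cts_feature_2 = 1.65e5       # placeholder

Hz_per_count = f_known_sep / (cts_feature_2 - cts_feature_1)
print("Phase tracker calibration: %.3e Hz/count" % Hz_per_count)
```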

    Other thoughts: 

  • Can we make use of the Jetstor raid array for some kind of consolidated 40m CDS backup system? Once we've gotten everything of interest out of it...

  13267 | Tue Aug 29 15:04:59 2017 | gautam | Update | SUS | ETMY Oplev PIT loop gain changed

Last night, while we were working on the ALS, I noticed the GTRY spot moving around (in PITCH) on the CCD monitor in the control room at ~1-2 Hz. The operating condition was that the arm was locked to the IR, and the PSL green shutter was closed, so that only the arm transmissions were visible on the CCD screens. There was no such noticeable movement of the GTRX spot. When looking at the out-of-loop ALS noise in this configuration (but now with the PSL green shutter open, of course), the Y arm ALS noise at low frequencies was much worse than the X arm.

Today, I looked into this a little more. I first checked that the Y-endtable enclosure was closed off as usual (as I had done some tweaking to the green input pointing some days ago). There are various green ghost beams on the Y-endtable. When time permits, we should make an effort to cleanly dump these. But the enclosure was closed as usual.

Then I looked at the in-loop Oplev error signal spectra for the ITMY and ETMY Oplev loops. There was high coherence between ETMYP Oplev error signal and GTRY. So I took a loop transfer function measurement - the upper UGF was around 3.5Hz. I increased the loop gain such that the upper UGF was around 4.5Hz, with phase margin ~30degrees. Doing so visibly reduced the angular movement of the GTRY spot on the CCD. Attachment #1 shows the Oplev loop TF after the gain increase, while Attachment #2 compares the GTRX and GTRY spectra (DC value is approximately the same for both, around 0.4). GTRY still seems a bit noisier at low frequencies, but the out-of-loop ALS noise for the Y arm now lines up much more closely with its reference trace from a known good time. 

Quote:
 

Y-arm ALS wasn't so stellar tonight, especially at low frequencies. I can see the GTRY spot moving on the CCD monitor, so something is wonky. To be investigated.

 

Attachment 1: ETMY_OLPIT.pdf
Attachment 2: GTR_comparison.pdf
  13273 | Wed Aug 30 10:54:26 2017 | gautam | Update | CDS | slow machine bootfest

MC autolocker and FSS loops were stuck because c1psl was unresponsive. I rebooted it and did a burtrestore to enable PSL locking. Then the IMC locked fine.

c1susaux and c1iscaux were also unresponsive so I keyed those crates as well, after taking the usual steps to avoid ITMX getting stuck - but it still got stuck when the Sat. Box. connectors were reconnected after the reboot, so I had to shake it loose with bias slider jiggling. This is annoying and also not very robust. I am afraid we are going to knock the ITMX magnets off at some point. Is this problem indicative of the fact that the ITMX magnets were somehow glued on in a skewed way? Or can we make the situation better by just tweaking the OSEM-holding fixtures on the cage?

In any case, I've started listing stuff down here for things we may want to do when we vent next.

 

  13275 | Wed Aug 30 15:00:06 2017 | gautam | Update | General | Edgeswitch fiber swap

A couple of minutes ago, Larry W swapped the fibers to our 40m Edgeswitch (BROCADE FWS 648G) to a faster connection. This is the switch to which our gateway machine, NODUS, is connected. The actual swap itself happened at the core router in Bridge, and took only a few seconds. After the switch, I double checked that I was able to ssh into nodus from my laptop, and Larry informed me that everything is working as expected on his end.

Larry also tells us that the other edgeswitch at the 40m (Foundry Networks), to which most of our GC network machines are connected, is a 100 Mbps switch, so we should re-route the connections from this switch to the BROCADE switch at our convenience to take advantage of the faster connection.

  13276 | Wed Aug 30 19:49:33 2017 | gautam | Update | LSC | REFL55 demod board debugging

Summary:

Today I tried debugging the mysterious increase in REFL55 signal levels in the DRMI configuration. I focused on the demod board, because last week, I had tried routing these signals through different channels on the whitening board, and saw the same effect. 

Based on my tests, everything on the Demod board seems to work as expected. I need to think more about what else could be happening here - specifically do a more direct test on the whitening board.

Details:

  • The demod board is a modified D990511 (marked up schematic + high-res photo to follow).
  • Initially, I tried probing the LO signal levels at various points with the board in the eurocrate itself, with the help of an extender card.
  • But this wasn't very convenient, so I pulled the board out to the office area for more testing.
  • The 55MHz LO signal going into the board is ~0dBm (measured with Agilent network analyzer)
  • I used the active probe to check the LO levels at various points along the signal chain, which mostly consists of attenuators, ERA-5SM amplifiers, and some splitters/phase rotators.
  • Everything seemed consistent with the expected levels based on "typical" numbers for gains and insertion losses cited in the datasheets for these devices.
  • I couldn't directly measure the level at the LO input to the mixer, but I did measure the input to the ERA-5SM immediately before the mixer; barring problems with this amplifier, the LO input of the mixer is being driven at >17dBm, which is what it wants.
  • Next, I decided to check the gain, gain imbalance and orthogonality of the demodulation.
  • For this purpose, I restored the board to the Eurocrate, reconnected the LO input to the board, and used a second Marconi at a slightly offset frequency to drive the PD input at ~0dBm.
  • Attachment #1 - The measured outputs look pretty balanced and orthogonal. The gain is consistent with an earlier measurement I made some months ago, when things were "normal". More bullets added after Rana's questions:
    • 300 MHz bandwidth oscilloscope used to acquire the data
    • I and Q outputs were from the daughter board
    • Data was acquired via ethernet data download utility
    • 20 MHz low-pass filter turned on on the Oscilloscope while downloading the data
Quote:

I did a quick check by switching the output of the REFL55 demod board to the inputs normally used by AS55 signals on the whitening board. Setting the whitening gain to +18dB for these channels had the same effect - ADC overflow galore. So looks like the whitening board isn't to blame. I will have to check the demod board out.

 


All connections have been restored until further debugging later in the evening.

Attachment 1: REFL55_demod_check.pdf
  13280 | Thu Aug 31 00:52:52 2017 | gautam | Update | LSC | REFL55 whitening board debugging

[rana,gautam]

We did an ingenious checkup of the whitening board tonight.

  • The board is D990694
  • We made use of a tip-tilt DAC channel for this test (specifically TT1 UL, which is channel 1 on the AI board). We disconnected the cable going from the AI board to the TT coil driver board.
    • as opposed to using a function generator to drive the whitening filter, this approach allows us to not have to worry about the changing offsets as we switch the whitening gain.
    • By using the CDS system to generate the signal and also demodulate it, we also don't have to worry about the drive and demod frequencies falling out of sync with each other.
  • The test was done by injecting a low frequency (75.13 Hz, amplitude=0.1) excitation to this DAC channel, and using the LSC sensing matrix infrastructure to demodulate REFL55 I and Q at this frequency. Demod phases in these servos were adjusted such that the Q phase demodulated signal was minimized.
  • An excitation was injected using awggui into TT1 UL exc channel.
  • We then stepped the whitening gains for REFL55_I and REFL55_Q in 3dB steps, waiting 5 seconds for each step. Syntax is z step -s 5 C1:LSC-REFL55_I_WhiteGain +1.0,15 C1:LSC-REFL55_Q_WhiteGain +1.0,15 (a scripted equivalent is sketched after this list).
  • Attachment #1 suggests that the whitening filter board is working as expected (each step is indeed 3dB and all steps are equal to the eye).
  • Data + script used to generate this plot is in Attachment #2.
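For reference, a scripted equivalent of the stepping, using pyepics rather than the z step command-line tool actually used (channel names from above; step size and dwell time as in the measurement):

```python
# Sketch only - mirrors the z step command quoted above using pyepics.
# Each +1.0 step of the slider corresponded to a 3 dB whitening gain step
# in this measurement.
import time
from epics import caget, caput

channels = ['C1:LSC-REFL55_I_WhiteGain', 'C1:LSC-REFL55_Q_WhiteGain']

for step in range(15):                 # 15 steps, as in the measurement
    for ch in channels:
        caput(ch, caget(ch) + 1.0)     # increment both gains together
    time.sleep(5)                      # 5 second dwell per step
```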

I've restored all connections that we messed with at the LSC rack to their original positions.

The TT alignment seems to be drifting around more than usual after we disconnected one of the channels - when I came in today afternoon, the spot on the AS camera had drifted by ~1 spot diameter so I had to manually re-align TT1. 

Quote:
 

Based on my tests, everything on the Demod board seems to work as expected. I need to think more about what else could be happening here - specifically do a more direct test on the whitening board.

Attachment 1: REFL55_whtCheck.pdf
Attachment 2: REFL55_whtChk.tar.gz
  13281 | Thu Aug 31 03:31:15 2017 | gautam | Update | LSC | DRMI re-locked!

After our Demod/Whitening electronics investigations suggested nothing obviously wrong, I decided to give DRMI locking another go tonight.

Surprisingly, there was no evidence of REFL55 behaving weirdly tonight, and I was able to easily lock the DRMI on 1f error signals using the recipe I've been using in the last few months.

Not sure what to make of all this.

I got in a ~15 minute lock, but I wasn't prepared to do any sort of characterization/ sensing / attempt to turn on coil-dewhitening, and I'm too tired to try again tonight. I was however able to whiten the error signals, as I have been able to do in the past. There is a ~45Hz bump in MICH that I haven't seen in the past.

I'll try and do some characterization tomorrow eve, but it's encouraging to at least get back to the pre-FB-failure state of locking.

Attachment 1: DRMI_1f.png
Attachment 2: DRMI_relocked.pdf
  13282 | Thu Aug 31 18:36:23 2017 | gautam | Update | CDS | revisiting Acromag

Current status:

  • There is a single Acromag ADC unit installed in 1X4
  • It is presently hooked up to the PSL NPRO diagnostic connector channels
  • I had (re)-started the acquisition of these channels on August 16 - but for reasons unknown, the tmux session that was supposed to be running the EPICS server on megatron seems to have died on August 22 (judging by the trend plot of these channels, see Attachment #1)
  • I had not set up an upstart job that restarts the server automatically in such an event. I manually restarted it for now, following the same procedure as linked in my previous elog.
  • While I was at it, I also took the opportunity to edit the Acromag channel names to something more appropriate - all channels previously prefixed with C1:ACRO- have now been prefixed with C1:PSL-

Plan of action:

  1. Hardware - we have, in the lab, in addition to the installed ADC unit
    • 3x 8 channel differential input ADC units
    • 2x 8 channel differential output DAC units
    • 1x 16 channel BIO unit
    • 2U chassis + connectors + breakout boards + other misc hardware that I think Johannes and Lydia procured with the original plan to replace the EX slow controls.
    • Some relevant elogs: Panel designs, breakout design, sketch for proposed layout, preliminary channel list.
      So on the hardware side, it would seem that we have everything we need to go ahead with replacing the EX slow controls with an Acromag system, although Johannes probably knows more about our state of readiness from a hardware PoV.
  2. Software
    • We probably want to get a dedicated machine that will handle the EPICS channel serving for the Acromag system
    • Have to figure out the networking arrangement for such a machine
    • Have to figure out how to set up the EPICS server protocol in such a way that if it drops for whatever reason, it is automatically restarted

 

Attachment 1: Acromag_EPICS.png
  13283 | Thu Aug 31 21:40:24 2017 | gautam | Update | General | MC1 kicked again

There was a pretty large glitch in MC1 about an hour ago. The misalignment was so large that the autolocker wasn't able to lock the IMC. I manually re-aligned MC1 using the bias sliders, and now IMC locks fine. Attached is a 90 second plot of 2K data from the OSEMs showing the glitch. Judging from the wall StripTool, the IMC was well behaved for ~4 hours before this glitch - there is no evidence of any sort of misalignment building up, judging from the WFS control signals.

Attachment 1: MC1_glitch.png
  13286 | Fri Sep 1 16:27:39 2017 | gautam | Update | SUS | MC1 glitching

I re-enabled the MC SUS damping and IMC locking for some IFO work just now.

Quote:

MC1, MC2 and MC3 damping turned off to see glitching action at 9:57am

 

  13287 | Fri Sep 1 16:55:27 2017 | gautam | Update | Computers | Testpoints now accessible again

Thanks to Jonathan Hanks, it appears we can now access test-points again using dataviewer.

I haven't done an exhaustive check just yet, but I have loaded a few testpoints in dataviewer, and ran a script that uses testpoint channels (specifically the ALS phase tracker UGF setting script); all seems good.

So if I remember correctly, the major CDS fix now required is to solve the model unloading issue.

Thanks to Jamie/Jonathan Hanks/KT for getting us back to this point! Here are the details:

After reading logs and code, it was a simple daqdrc config change.

The daqdrc should read something like this:

...
set master_config=".../master";
configure channels begin end;
tpconfig ".../testpoint.par";
...


What had happened was tpconfig was put before the configure channels
begin end.  So when daqd_rcv went to configure its test points it did
not have the channel list configured and could not match test points to
the right model & machine.  Dave and I suspect that this is so that it
can do an request directly to the correct front end instead of a general
broadcast to all awgtpman instances.

Simply reordering the config fixes it.

I tested by opening a test point in dataviewer and verifiying that
testpoints had opened/closed by using diag -l.  Xmgr/grace didn't seem
to be able to keep up with the test point data over a remote connection.

You can find this in the logs by looking for entries like the following
while the daqd is starting up.  When we looked we saw that there was an
entry for every model.

Unable to find GDS node 35 system c1daf in INI fiels
  13288 | Fri Sep 1 19:15:40 2017 | gautam | Update | ALS | Fiber ALS noise measurement

Summary:

I did some work today to see if I could use the IR beat for ALS control. Initial tests were encouraging.

I will now embark on the noise budgeting.

Details:

  • For this test, I used the X arm
  • I hooked up the X-arm + PSL IR beat to the X-arm DFD channel, and used the Y-arm DFD channels to simultaneously monitor the X-arm green beat.
  • I then transitioned to ALS control and used POX as an out-of-loop sensor for the ALS noise.
  • Attachment #1 shows a comparison of the measurements. In red is the IR beat, while the green traces are from the test EricQ and I did a couple of nights ago using the green beat.
  • I also wanted to do some arm cavity scans with the arm under ALS control with the IR beat - but was unsuccessful. The motivation was to fix the ALS model counts->Hz calibration factors.
  • I did however manage to do a 10 FSR scan using the green beatnote - however, towards the end of this scan, the green beat frequency (read off the control room analyzer) was ~140MHz, which I believe is outside (or at least on the edge) of the bandwidth of the Green BBPDs. The fiber coupled IR beat photodiodes have a much larger (1GHz) spec'd bandwidth.

I am leaving the green beat electronics on the PSL table in the switched state for further testing...

 

Attachment 1: IR_ALS_noise.pdf
  13289 | Mon Sep 4 16:30:06 2017 | gautam | Update | LSC | Oplev loop tweaking

Now that the DRMI locking seems to be repeatable again, I want to see if I can improve the measured MICH noise. Recall that the two dominant sources of noise were

  1. BS Oplev loop A2L - this was the main noise between 30-60Hz.
  2. DAC noise - this dominated between ~60-300Hz, since we were operating with the de-whitening filters off.

In preparation for some locking attempts today evening, I did the following:

  1. Added steeper elliptic roll-off filters for the ITMX and ITMY Oplevs. This is necessary to allow the de-whitening filters to be turned on without railing the DAC.
  2. Modified the BS Oplev loop to also have steeper high-frequency (>30Hz) roll off. The roll-off between 15-30Hz is slightly less steep as a result of this change.
  3. Measured all Oplev loop TFs - UGFs are between 4 Hz and 5 Hz, phase margin is ~30degrees. I did not do any systematic optimization of this for today.
  4. Went into the Foton filter banks for all the coil output filters, and modified the "Output" settings to be on "Input crossing", with a "Tolerance" of 10 and a "Timeout" of 3 seconds. These settings are to facilitate smooth transition between the two signal paths (without and with coil-dewhitening). The parameters chosen were arbitrary and not optimized in any systematic manner.
  5. After making the above changes, I tried engaging the de-whitening filters on ITMX, ITMY and BS with the arms locked. In the past, I was unable to do this because of a number of issues - Oplev loop shapes and Foton settings among them. But today, the switching was smooth, the single arm locks weren't disturbed when I engaged the coil de-whitening.

Hopefully, I can successfully engage a similar transition tonight with the DRMI locked. The main difference compared to this daytime test is going to be that the MICH control signal is also going to be routed to the BS.

Tasks for tonight, if all goes well:

  1. Lock DRMI.
  2. Use UGF servos to set the overall loop gains for DRMI DoFs.
  3. Reduce PRCL->MICH and SRCL->MICH coupling.
  4. Measure loop shapes of all DRMI DoFs.
  5. Make sensing matrix measurement.
  6. Engage coil-dewhitening, download data, make NB.

Unrelated to this work: the PMC was locked near the upper rail of the PZT, so I re-locked it closer to the middle of the range.

Quote:

Surprisingly, there was no evidence of REFL55 behaving weirdly tonight, and I was able to easily lock the DRMI on 1f error signals using the recipe I've been using in the last few months.

  13291 | Tue Sep 5 02:07:49 2017 | gautam | Update | LSC | Low Noise DRMI attempt

Summary:

Tonight, I was able to lock the DRMI, turn on the whitening filters for the sensing PDs, and also turn on the coil de-whitening filters for ITMX, ITMY and BS. However, I didn't see the expected improvement in the MICH spectrum between ~50-300 Hz. Sad.

Details:

I basically went through the list of tasks I made in the previous elog. Some notes:

  • The UGF servos suggested that I had to lower the SRCL gain. I lowered it from -0.055 to -0.025. OLTF measurement using In1/In2 method suggested UGF ~120Hz. I don't know why this should be. Plot to be uploaded later.
  • Since we aren't actuating on the ITMs, I was able to leave their coils de-whitened all the time.
  • For the BS, it was trickier - I had to play around a little with the "Tolerance" setting in Foton while looking at transients (using DTT, not a scope for now) while switching the filters.
  • This transition isn't so robust yet - but eventually I found a setting that worked, and I was able to successfully turn on the de-whitening thrice tonight (but also failed about the same number of times). [GV Oct 6 2017: Remember that the PD whitening has to be turned on for this transition to be successful - otherwise the RMS from the high frequencies saturate the DAC.]
  • The locks were pretty stable. One was ~10mins, one was ~15mins, and I broke both deliberately because I was out of ideas as to why the part of the MICH error signal spectrum that I thought was due to DAC noise didn't improve.
  • I've made a bunch of shell scripts to help with the various switchings - but now that I think of it, I should make these python scripts.

Attachment #1: Comparison of MICH_ERR with and without the BS de-whitening. Note that the two ITMs have their coils de-whitened in both sets of traces.

Attachment #2: Spectra of MICH output and one of the BS coil outputs in both states. The DAC RMS increases by ~30x when the de-whitening is engaged, but is still well within limits.

So it looks like the switching of paths is happening correctly. The "CDS BIO STATUS" MEDM screen also shows the appropriate bits toggling when I turn the de-whitening on/off. There is no broadband coherence with MCF between 50-300 Hz so it seems unlikely that this could be frequency noise.

Clearly I am missing something. But anyways I have a good amount of data, may be useful to put together the post CDS/electronics modification DRMI noise budget. More analysis to follow.

 

Attachment 1: MICH_err_comp.pdf
Attachment 2: deWhitenedCoil.pdf
  13293 | Tue Sep 5 14:41:58 2017 | gautam | Update | CDS | NDS2 server restarted on megatron

I was unable to download data using nds2. Gabriele had reported similar problems a week ago but I hadn't followed up on this.

I repeated steps 5-7 from elog 13161, and now it seems that I can get data from the nds2 servers again. Unclear why the nds2 server had to be restarted. I wonder if this is somehow related to the mysterious acromag EPICS server tmux session dropout.

  13297 | Tue Sep 5 23:02:37 2017 | gautam | Update | CDS | slow machine bootfest

MC autolocker was not working - PCdrive was railed at its upper rail for ~2 hours judging by the wall StripTool trace. I tried restarting the init processes on megatron, but that didn't fix the problem. The reason seems to have been related to c1iool0 failing - after keying the crate, autolocker came back fine and MC caught lock almost immediately.

Additionally, c1susaux, c1auxex, c1auxey and c1iscaux are also down. I'm not planning on using the IFO tonight, so I am not going to reboot these now.

 

  13300 | Wed Sep 6 23:06:30 2017 | gautam | Update | LSC | Coil de-whitening switching investigation

Summary:

Rana suggested checking if the coil de-whitening switching is actually happening in the analog path. I repeated the test detailed here. Attachments #1 and #2 suggest that all the coils for the BS and ITMs are indeed switching.

Details:

  • The motivation behind this test was the following - the analog path switching is done by applying some logic voltage to a switch, but if this voltage is common among many switches, the hypothesis was that perhaps individual switches were not getting the required voltage to engage the switching.
  • This time, FM9 (simulated de-whitening) and FM10 (inverse de-whitening) in the coil output filter modules were turned off, so as to maintain a flat TF in the digital domain, but engage the de-whitened analog path (turning off FM9 is supposed to do this).
  • There is poor coherence in the measurement above 40Hz so the data there should be neglected. It is hard to get a good measurement at higher frequencies because of the pendulum TF + heavy low pass filtering from the analog de-whitening path.
  • But between 10-40Hz, we already see the analog de-whitening TF in the measurement.
  • For comparison, I have plotted the measured pendulum TFs for one of the coils from an earlier test (all the coils were roughly at the same level).

So it would seem that there is some other noise which has a 1/f^2 shape and is at the same level we expected the DAC noise to be at. Rana suggested checking coherence with MC transmission to see if this could be laser intensity noise.

I also want to re-do the actuator calibrations for the vertex optics again before re-posting the revised noise budget.

Attachment 1: BScoils.pdf
Attachment 2: ITMcoils.pdf
  13312 | Fri Sep 15 15:54:28 2017 | gautam | Update | CDS | FB wiper script

A wiper script is not yet set up for our new Frame-Builder. The disk usage is ~80% now, so I think we should start running a wiper script that manages overall disk usage and deletes old frame files to this end.

From what I could find on the elog, the way this was done was by running a cron job on FB. There is a perl script, /opt/rtcds/caltech/c1/target/fb/wiper.pl, which, from what I could understand, runs a bunch of du commands on different directories to determine if there is a need to delete any files.

I copied this script over to /opt/rtcds/caltech/c1/target/daqd/wiper.pl. This is the directory in which all the new FB stuff resides. Conveniently, the script has a "dry-run" option, which I tried running on FB1. However, I get the following error message:

Fri Sep 15 15:44:45 PDT 2017
Dry run, will not remove any files!!!
You need to rerun this with --delete argument to really delete frame files
Directory disk usage:
 /frames/trend/minute_rawk
Combined 0k or 0m or 0Gb
Illegal division by zero at ./wiper.pl line 98.


So it would seem that for some reason, the du commands aren't working. From what I could tell, there aren't any directory paths specific to the old FB machine that need to be changed. I believe the script was working prior to the FB disk crash - unfortunately it doesn't look like this script was under version control, but I don't think any changes have been made to it.

Before I go down a Perl rabbit hole, has anyone seen such an error or is aware of some reason why this might not work on the new FB? Am I even using the correct scripts?

  13313 | Fri Sep 15 16:00:33 2017 | gautam | Update | LSC | Sensing measurement

I've been working on analyzing the data from the DRMI locks last week.

Here are the results of the sensing measurement.

Details:

  1. The sensing measurement is done by using the existing sensing matrix infrastructure to drive the actuators for the various DoFs at specific frequencies (notches at these frequencies are turned on in the control loops during the measurement).
  2. All the analysis is done offline - I just note down the times at which the sensing lines are turned on and then download the data later. The amplitudes of the oscillators are chosen by looking at the LSC PD error signal spectra "live" in DTT, and by increasing the amplitude until the peak height is ~10x above the nominal level around that frequency. This analysis was done on ~600 seconds of data.
  3. The actual sensing elements in the various PDs are calculated as follows (a minimal numerical sketch is given after this list):
    • Calculate the Fourier coefficients at the excitation frequency using the definition of the complex DFT in both the LSC PD signal and the actuator signal (both are in counts). Windowing is "Tukey", and FFT length used is 1 second.
    • Take their ratio
    • Convert to suitable units (in this case V/m) knowing (i) The actuator discriminant in cts/m and (ii) the cts/V ADC calibration factor. Any whitening gain on the PD is taken into account as well.
    • If required, we can convert this to W/m as well, knowing (i) the PD responsivity and (ii) the demodulation chain gain.
    • Most of this stuff has been scripted by EricQ and is maintained in the pynoisesub git repo.
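As an illustration of the demodulation step, here is a simplified sketch with placeholder values - not the pynoisesub implementation, whose windowing, uncertainty handling and calibration constants are more careful:

```python
# Simplified sketch of extracting one sensing element (placeholder values;
# see the pynoisesub repo for the real implementation).
import numpy as np

def line_coeffs(x, f_line, fs):
    """Complex DFT coefficients of x at f_line, one per 1-second segment."""
    n = int(fs)                                  # 1 s FFT length, as above
    t = np.arange(n) / fs
    lo = np.exp(-2j * np.pi * f_line * t)        # digital local oscillator
    w = np.hanning(n)                            # stand-in for the Tukey window
    nseg = len(x) // n
    return np.array([np.sum(w * x[k*n:(k+1)*n] * lo) for k in range(nseg)])

fs, f_line = 16384.0, 311.1                      # [Hz] model rate, line frequency (placeholder)
pd_cts  = np.random.randn(int(600 * fs))         # placeholder for the LSC PD signal [cts]
act_cts = np.random.randn(int(600 * fs))         # placeholder for the actuator signal [cts]

# cts/cts transfer coefficient at the line frequency, one estimate per segment
H_cts = line_coeffs(pd_cts, f_line, fs) / line_coeffs(act_cts, f_line, fs)

# Convert cts/cts -> V/m with placeholder calibration constants
act_m_per_ct  = 5e-9 / f_line**2                 # actuator gain at f_line (1/f^2 pendulum approx.)
adc_cts_per_V = 3276.8                           # 16-bit ADC over +/-10 V
wht_gain      = 10**(18 / 20.0)                  # e.g. +18 dB whitening gain on the PD channel

sens_V_per_m = np.abs(H_cts) / (adc_cts_per_V * wht_gain * act_m_per_ct)
print("Sensing element: %.3e +/- %.1e V/m" % (sens_V_per_m.mean(), sens_V_per_m.std()))
```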

The plotting utility is a work in progress - I've basically adapted EricQs scripts and added a few features like plotting the uncertainties in magnitude and phase of the calculated sensing elements. Possible further stuff to implement:

  • Only plot those elements which have good coherence in the measurement data. At present, the scripts check the coherence and prompt the user if there is poor coherence in a particular channel, but no vetos are done.
  • The uncertainty calculation is done rather naively now - it is just the standard deviation in the fourier coefficient determined from various bins. I am told that Bendat and Piersol has the required math. It would be good to also incorporate the uncertainties in the actuator calibration. These are calculated using the python uncertainties package for now.
  • Print a summary of the parameters used in the calculation, as well as sensing elements + uncertainty in cts/m, V/m and W/m, on a separate page.
  • Some aesthetics can be improved - I've had some trouble getting the tick intervals to cooperate so I left it as is for the moment.

Also, the value I've used for the BS actuator calibration is not a measured one - rather, I estimated what it will be by scaling the old value by the same ratio which the ITMs have changed by post de-whitening board mods. The ITM actuator coefficients were recently  measured here. I will re-do the BS calibrations over the weekend.

Noise budgeting to follow - it looks like I didn't set the AS55 demod phase to the previously determined optimal value of -82 degrees; I had left it at -42 degrees. To be fixed for the next round of locking.

Attachment 1: DRMI1f_Sep5.pdf
  13314 | Fri Sep 15 17:08:58 2017 | gautam | Update | LSC | Coil de-whitening switching investigation

I downloaded a segment of data from the time when the DRMI was locked with the BS and ITM coil driver de-whitening switched on, and looked at coherence between MC transmission and the MICH error signal. Attachment #1 doesn't show any broadband high coherence between 60-300Hz, so it cannot explain the noise in the full range between 60-300Hz. 

The DQ channel for the MC transmission is recorded at 1024 Hz, so to calculate the coherence, I had to decimate the 16K MICH data.
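For reference, a sketch of this coherence calculation (placeholder arrays; channel rates as stated above, with scipy's FIR decimator standing in for whatever resampling was actually used):

```python
# Sketch only: decimate the 16k MICH error signal to the MC transmission rate,
# then compute the magnitude-squared coherence between the two.
import numpy as np
from scipy import signal

fs_mich, fs_trans = 16384.0, 1024.0                 # [Hz] DQ rates quoted above

mich  = np.random.randn(int(300 * fs_mich))         # placeholder time series
trans = np.random.randn(int(300 * fs_trans))        # placeholder time series

q = int(fs_mich // fs_trans)                        # decimation factor (16)
mich_ds = signal.decimate(mich, q, ftype='fir', zero_phase=True)

f, coh = signal.coherence(trans, mich_ds, fs=fs_trans, nperseg=int(8 * fs_trans))
```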

Since we have the AOM installed, I suppose we can actually measure the intensity noise coupling to MICH by driving a line in the AOM. 

I also checked for coherence in the 60-300Hz band between MICH/PRCL and MICH/SRCL, and didn't see any appreciable coherence. Need to think about this more.

Quote:

 Rana suggested checking coherence with MC transmission to see if this could be laser intensity noise.

Attachment 1: DRMI_IntensityNoise.pdf
  13317 | Mon Sep 18 17:17:49 2017 | gautam | Update | CDS | FB wiper script

After trying to debug this issue using the Perl debugger, I concluded that the problem is in the part of the code that splits the output of the "du" command into directory and disk usage. For whatever reason, this isn't working. The version of perl running on the new FB1 machine is 5.20.2, whereas I suspect the version running on the old FB machine was 5.14.2 (which is the version on all the Ubuntu 12 workstations and megatron). It is unclear whether downgrading the Perl version is the right way to go.

The FB1 disk is now getting close to full, the usage is up to 85% today.

Quote:

Before I go down a Perl rabbit hole, has anyone seen such an error or is aware of some reason why this might not work on the new FB? Am I even using the correct scripts?

 

  13319 | Mon Sep 18 17:51:26 2017 | gautam | Update | CDS | FB wiper script

It is a little different - specifically, the way the splitting of the output of the "du" command into disk usage and directory is different (see Attachment #1). Apart from this, some of the parameters (e.g. what percentage to keep free) are different.

I changed the percentages to match what we had here, and edited a couple of other lines to print out the files that will be deleted. The dry run seemed to work okay, it produced the output below. Not sure why "df -h" reports a different use percentage though...

Since the script seems to be working now, I am going to set it up on FB1's crontab. Thanks Chris!

controls@fb1:/opt/rtcds/caltech/c1/target/daqd 0$ ./wiper.pl
Mon Sep 18 17:47:06 PDT 2017
Dry run, will not remove any files!!!
You need to rerun this with --delete argument to really delete frame files
Directory disk usage:
/frames/trend/minute_raw 47126124k
/frames/trend/minute 22900668k
/frames/trend/second 760359168k
/frames/full 19337278516k
Combined 20167664476k or 19694984m or 19233Gb
/frames size 25097525144k at 80.36%
/frames is below keep value of 85.00%
Will not delete any files
df reported usage 80.36%
controls@fb1:/opt/rtcds/caltech/c1/target/daqd 0$ df -h
Filesystem                        Size  Used Avail Use% Mounted on
/dev/sda4                         2.0T  1.7T  152G  92% /
udev                               10M     0   10M   0% /dev
tmpfs                              13G  177M   13G   2% /run
tmpfs                              32G     0   32G   0% /dev/shm
tmpfs                             5.0M     0  5.0M   0% /run/lock
tmpfs                              32G     0   32G   0% /sys/fs/cgroup
/dev/sda2                          19G  3.7G   14G  21% /var
/dev/sda1                         461M   65M  373M  15% /boot
/dev/sdb1                          24T   19T  3.5T  85% /frames
192.168.113.104:/home/cds/rtcds   2.0T  1.6T  291G  85% /opt/rtcds
192.168.113.104:/home/cds/rtapps  2.0T  1.6T  291G  85% /opt/rtapps
tmpfs                             6.3G     0  6.3G   0% /run/user/1001
Quote:

Attached is the version of the wiper script we use on the CryoLab cymac. It works with perl v5.20.2. Is this different from what you have?

 

Attachment 1: perlDiff.png
  13320 | Mon Sep 18 18:40:34 2017 | gautam | Update | CDS | FB wiper script

I did a further check on the wiper script by changing the "percent_keep" from 85.0 to 75.0, and running the script in "dry_run" mode again. The script then output to console the names of all the files it would delete in order to free up the required amount of space (but didn't actually delete any files as it was a dry run). Seemed to be sensible.

To set up the cron job, I did the following on FB1:

  • crontab -e opened up the crontab
  • Copied over a script called "wiper.cron" from /opt/rtcds/caltech/c1/target/fb to /opt/rtcds/caltech/c1/target/daqd. This essentially contains a bunch of instructions to run the wiper script with the --delete flag, and write the console output to a log file.
  • Added the following line: 33 3 * * * /opt/rtcds/caltech/c1/target/daqd/wiper.cron. So the cron job should be executed at 3:33AM everyday.
  • The cron daemon seems to be running - sudo systemctl status cron.service yields the following output:
    controls@fb1:~ 0$ sudo systemctl status cron.service
    ● cron.service - Regular background program processing daemon
       Loaded: loaded (/lib/systemd/system/cron.service; enabled)
       Active: active (running) since Mon 2017-09-18 18:16:58 PDT; 27min ago
         Docs: man:cron(8)
     Main PID: 30183 (cron)
       CGroup: /system.slice/cron.service
               └─30183 /usr/sbin/cron -f
    Sep 18 18:16:58 fb1 cron[30183]: (CRON) INFO (Skipping @reboot jobs -- not system startup)
    Sep 18 18:17:01 fb1 CRON[30205]: pam_unix(cron:session): session opened for user root by (uid=0)
    Sep 18 18:17:01 fb1 CRON[30206]: (root) CMD (   cd / && run-parts --report /etc/cron.hourly)
    Sep 18 18:17:01 fb1 CRON[30205]: pam_unix(cron:session): session closed for user root
    Sep 18 18:25:01 fb1 CRON[30820]: pam_unix(cron:session): session opened for user root by (uid=0)
    Sep 18 18:25:01 fb1 CRON[30821]: (root) CMD (command -v debian-sa1 > /dev/null && debian-sa1 1 1)
    Sep 18 18:25:01 fb1 CRON[30820]: pam_unix(cron:session): session closed for user root
    Sep 18 18:35:01 fb1 CRON[31515]: pam_unix(cron:session): session opened for user root by (uid=0)
    Sep 18 18:35:01 fb1 CRON[31516]: (root) CMD (command -v debian-sa1 > /dev/null && debian-sa1 1 1)
    Sep 18 18:35:01 fb1 CRON[31515]: pam_unix(cron:session): session closed for user root

     

  • crontab -l on FB1 now shows the following:
    controls@fb1:~ 0$ crontab -l
    # Edit this file to introduce tasks to be run by cron.
    #
    # Each task to run has to be defined through a single line
    # indicating with different fields when the task will be run
    # and what command to run for the task
    #
    # To define the time you can provide concrete values for
    # minute (m), hour (h), day of month (dom), month (mon),
    # and day of week (dow) or use '*' in these fields (for 'any').#
    # Notice that tasks will be started based on the cron's system
    # daemon's notion of time and timezones.
    #
    # Output of the crontab jobs (including errors) is sent through
    # email to the user the crontab file belongs to (unless redirected).
    #
    # For example, you can run a backup of all your user accounts
    # at 5 a.m every week with:
    # 0 5 * * 1 tar -zcf /var/backups/home.tgz /home/
    #
    # For more information see the manual pages of crontab(5) and cron(8)
    #
    # m h  dom mon dow   command
    33 3 * * * /opt/rtcds/caltech/c1/target/daqd/wiper.cron

Let's see if this works.

Quote:

Since the script seems to be working now, I am going to set it up on FB1's crontab. Thanks Chris!.

 

  13324 | Wed Sep 20 16:14:17 2017 | gautam | Update | Equipment loan | Impedance test kit borrowed from Downs

I borrowed the HP impedance test kit from Rich Abbott today. The purpose is to profile the impedance of the NPRO PZTs, as part of the AUX PDH servo investigations. It is presently at the X-end. I will do the test in the coming days.
 

  13325 | Thu Sep 21 01:32:00 2017 | gautam | Update | ALS | AUX X Innolight AM measurement running

[rana,gautam]

We set up a measurement of the AUX X laser AM today. Some notes:

  • PDA 55 that was installed as a power monitor for the AUX X laser has been moved into the main green beam path - it is just upstream of the green shutter for this measurement.
  • AUX X laser power into the doubling crystal was adjusted by rotating HWP upstream of IR Faraday (original angle was 100, now it is 120), until the DC level of the PDA 55 output was ~2.5V on a scope (high impedance).
  • BNC-T was installed at the PZT input of the Innolight - one arm of the T is terminated to ground via 50 ohms. The purpose of this is to always have the output of the power splitter from the network analyzer RF source drive a 50 ohm load.
  • The output of the Green PDH servo to the Innolight PZT was disconnected downstream of the summing Pomona box - it is now connected to one output of a power splitter (borrowed from SR function generator used to drive the PZT) connected to the RF source output of the AG4395.
  • Other output of power splitter connected to input R of AG4395.
  • PDA55 output has been disconnected from CH5 of the AA board. It is connected to input A of the AG4395 via DC block.

Attachment #1 shows a preliminary scan from tonight - we looked at the region 10kHz-10MHz, with an IF bandwidth of 100Hz, 16 averages, and 801 log-spaced frequencies. The idea was to get an idea of where some promising notches in the AM lie, and do more fine-bandwidth scans around those points. Data + code used to generate this plot in Attachment #2.

Rana points out that some of the AM could also be coming from beam jitter - so to put this hypothesis to the test, we will put a lens in to focus the spot more tightly onto the PD, repeat the measurement, and see if we get different results.

There were a whole bunch of little illegal things Rana spotted on the EX table which he will make a separate post about.

I am running 40 more scans with the same params for some statistics - should be done by the morning.

Quote:

I borrowed the HP impedance test kit from Rich Abbott today. The purpose is to profile the impedance of the NPRO PZTs, as part of the AUX PDH servo investigations. It is presently at the X-end. I will do the test in the coming days.
 


Update 12:00 21 Sep: Attachment #3 shows schematically the arrangement we use for the AM measurement. A similar sketch for the proposed PM measurement strategy to follow. After lunch, Steve and I will lay out a longish BNC cable from the LSC rack to the IOO rack, from where there is already a long cable running to the X end. This is to facilitate the PM measurement.

Update 18:30 21 Sep: Attachment #4 was generated using Craig's nice plotting utility. The TF magnitude plot was converted to RIN/V by dividing by the DC voltage of the PDA 55, ~2.3V (the assumption is that there isn't a significant difference between the DC gain and RF transimpedance gain of the PDA 55 in the measurement band). The right-hand columns are generated by calculating the deviation of individual measurements from the mean value. We're working on improving this utility and its aesthetics - specifically, using these statistics to compute coherence; this is a work in progress. Git repo details to follow.
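Written out, the conversion applied to each measured transfer function is simply (with V_{\mathrm{DC}} the PDA 55 DC level quoted above, and H_{A/R}(f) the measured A/R transfer function in V/V):

\mathrm{RIN/V}(f) = \frac{|H_{A/R}(f)|}{V_{\mathrm{DC}}}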

There are only 23 measurements (I was aiming for 40) because of some network connectivity issue due to which the script stalled - this is also something to look into. But this sample already suggests that these measurement parameters give consistent results on repeated measurements above 100kHz.

TO CHECK: PDA 55 is in 0dB gain setting, at which it has a BW of 10MHz (claimed in datasheet).


Some math about the relation between the coherence \gamma_{xy}(f) and the standard deviation of repeated transfer function measurements:

\mathrm{SNR}(f) = \sqrt{\frac{\gamma_{xy}^{2}(f)}{1-\gamma_{xy}^{2}(f)}}

\sigma_{|H|}^{2}(f) = \frac{1-\gamma_{xy}^{2}(f)}{2N\gamma_{xy}^{2}(f)}|H(f)|^2  --- relation to the variance in the TF magnitude. We estimate the variance using the usual variance estimator, and can then back out the coherence using this relation.

\sigma_{\theta_{xy}}(f) = \mathrm{tan}^{-1}\left [ \sqrt{\frac{1-\gamma_{xy}^{2}(f)}{2N\gamma_{xy}^{2}(f)}} \right ] --- relation to the variance in the TF phase. This should give a coherence profile consistent with that obtained using the preceding equation.

It remains to code all of this up into Craig's plotting utility.
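A minimal sketch of that inversion for the magnitude relation (placeholder data, and assuming the N repeated sweeps are independent):

```python
# Hedged sketch of backing out the coherence from the spread of repeated TF
# measurements, by inverting the magnitude-variance relation quoted above:
#   sigma_|H|^2 = (1 - gamma^2) / (2 N gamma^2) * |H|^2
#   =>  gamma^2 = 1 / (1 + 2 N sigma_|H|^2 / |H|^2)
import numpy as np

def coherence_from_spread(tf_mags):
    """tf_mags: shape (N_measurements, N_freq) array of |H(f)| from repeated sweeps."""
    N = tf_mags.shape[0]
    H_mean = np.mean(tf_mags, axis=0)
    var_H  = np.var(tf_mags, axis=0, ddof=1)      # usual (unbiased) variance estimator
    return 1.0 / (1.0 + 2.0 * N * var_H / H_mean**2)

# e.g. 23 repeated AG4395 sweeps, 801 frequency points each (placeholder data)
gamma_sq = coherence_from_spread(np.abs(np.random.randn(23, 801)) + 1.0)
```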

Attachment 1: Innolight_AM.pdf
Attachment 2: Innolight_AM.tar.gz
Attachment 3: IMG_7599.JPG
Attachment 4: 20170921_203741_TFAG4395A_21-09-2017_115547_FourSquare.pdf
  13327 | Thu Sep 21 15:23:04 2017 | gautam | Omnistructure | ALS | Long cable from LSC->IOO

[steve,gautam]

We laid out a 45m long BNC cable from the LSC rack to the IOO rack via overhead cable trays. There is ~5m excess length on either side, which have been coiled up and cable-tied for now. The ends are labelled "TO LSC RACK" and "TO IOO RACK" on the appropriate ends. This is to facilitate hooking up the output of the DFD for making a PM measurement of the AUX X laser. There is already a long cable that runs from the IOO rack to the X end.

  13328 | Fri Sep 22 18:12:27 2017 | gautam | Update | LSC | DAC noise measurement (again)

I've been working on setting up some scripts for measuring the DAC noise.

In all the DRMI noise budgets I've posted, the coil-driver noise contribution has been based on this measurement, which could be improved in a couple of ways:

  • The measurement was made at the output of the AI board - we can make the measurement at the output of the coil driver board, which will be a closer reflection of the actual current noise at the OSEM coils.
  • The measurement was made by driving the DAC with shaped random noise - but we can record the signal to the coils during a lock and make the noise measurement by using awg to drive the coil with this signal, with elliptic bandstops at appropriate frequencies to reveal the electronics noise.
  1. The IN1 signals to the coils aren't DQ-ed, but ideally this is the signal we want to inject into the coil_EXC channel for this measurement - so I re-locked the DRMI a couple of nights ago and downloaded the coil IN1 channel data for ~5mins for the ITMs and BS.
  2. AWGGUI supposedly has a feature that allows you to drive an EXC channel with an arbitrary signal - but I couldn't figure out how to get this working. I did find some examples of this kind of application using the Python awg packages, so I cobbled together some scripts that allows me to drive some channels and place elliptic bandstop filters as I required. 
  3. I wasted quite a bit of time trying to implement these signals in Python using available scipy functions, on account of me being a DSP n00b. When trying to design discrete-time filters, numerical precision errors of course become important. Initially I was trying to do everything in the "Transfer function (numerator-denominator)" basis, but as Rana pointed out, the way to go is using SOSs. Fortunately, this is a simple additional argument to the relevant python functions, after which elliptic bandstop filter design was trivial (a sketch is given after this list).
  4. The actual test was done as follows:
    • Save EPICS PIT/YAW offsets, set them to 0, disable Oplev servos, and then shut down optic watchdog once the optic is somewhat damped. This is to avoid the optics getting a large kick when disconnecting the DB15 connector from the coil driver board output.
    • Disconnect above-mentioned DB15 connector from the appropriate coil driver board output.
    • Turn off inputs to coils in filter module EPICs screens. Since the full signal (local damping servo output + Oplev servo output + LSC servo output) to the coil during a DRMI lock will be injected as an excitation, we don't need any other input. 
    • Use scripts (which I will upload to a git repo soon) to set up the appropriate excitation.
    • To measure the spectrum, I used a DB15 breakout board with test-points soldered on and some mini-grabber-to-BNC adaptors, in order to interface the SR785 to the coil driver output. We can use the two input channels of the SR785 to simultaneously measure two coil driver board output channels to save some time.
    • Take a measurement of the SR785 noise (at the appropriate "Input Range" setting) with inputs terminated to estimate the analyzer noise floor.
    • Just for kicks, I made the measurement with the de-whitening both OFF/ON.
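As a concrete example of the SOS-based approach in item 3, here is a sketch of designing one elliptic bandstop and applying it to the recorded coil signal before injection (the band edges, ripple and attenuation values here are illustrative, not the exact ones used):

```python
# Sketch only: design a 6th-order elliptic bandstop directly in second-order
# sections (avoiding the numerically fragile (b, a) form), then filter the
# recorded coil IN1 signal to produce the excitation to inject with awg.
import numpy as np
from scipy import signal

fs = 16384.0                                    # [Hz] coil channel rate

# ellip() order N=3 -> a 6th-order bandstop; 1 dB passband ripple and 60 dB
# stopband attenuation here are illustrative
nyq = fs / 2.0
sos = signal.ellip(3, 1, 60, [60.0 / nyq, 90.0 / nyq], btype='bandstop', output='sos')

coil_in1   = np.random.randn(int(300 * fs))     # placeholder for the downloaded IN1 data
excitation = signal.sosfilt(sos, coil_in1)      # time series to hand to the excitation channel
```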

I only managed to get in measurements for the BS and ITMX today. ITMY to be measured later, and data/analysis to follow.

The ITMX and BS alignments have been restored after this work in case anyone else wants to work with the IFO.


Some slow machine reboots were required today - c1susaux was down, and later, the MC autolocker got stuck because of c1iool0 being unresponsive. I thought we had removed all dependency of the autolocker on c1iool0 when we moved the "IFO-STATE" EPICS variable to the c1ioo model, but clearly there is still some dependency. To be investigated.

  13331 | Tue Sep 26 13:40:45 2017 | gautam | Update | CDS | NDS2 server restarted on megatron

Gabriele reported problems with the nds2 server again. I restarted it again.

update: had to do it again at 1730 today - unclear why nds2 is so flaky. Log files don't suggest anything obvious to me...

Quote:

I was unable to download data using nds2. Gabriele had reported similar problems a week ago but I hadn't followed up on this.

I repeated steps 5-7 from elog 13161, and now it seems that I can get data from the nds2 servers again. Unclear why the nds2 server had to be restarted. I wonder if this is somehow related to the mysterious acromag EPICS server tmux session dropout.

 

  13332 | Tue Sep 26 15:55:20 2017 | gautam | Update | CDS | 40m files backup situation

Backups of the root filesystems of chiara and nodus are underway right now. I am backing them up to the 1 TB LaCie external hard drives we recently acquired.

I first initialized the drives by hooking them up to my computer and running the setup.app file. After this, plugging the drive into the respective machine and running lsblk, I was able to see the mount point of the external drive. To actually initialize the backup, I ran the following command from a tmux session called ddBackupLaCie:

sudo dd if=/dev/sda of=/dev/sdb bs=64K conv=noerror,sync

Here, /dev/sda is the disk with the root filesystem, and /dev/sdb is the external hard-drive. The installed version of dd is 8.13, and from version 8.21 onwards, there is a progress flag available, but I didn't want to go through the exercise of upgrading coreutils on multiple machines, so we just have to wait till the backup finishes.

We also wanted to do a backup of the root of FB1 - but I'm not sure if dd will work with the external hard drive, because I think it requires the backup disk size (for us, 1TB) to be >= the origin disk size (which on FB1, according to df -h, is 2TB). It is unclear why the root filesystem of FB1 is so big; I'm checking with Jamie what we expect it to be. Anyway, we have also acquired 2TB HGST SATA drives, which I will use if the LaCie disks aren't an option.

 

  13333 | Tue Sep 26 19:10:13 2017 | gautam | Update | ALS | Fiber ALS setup neatened

[steve, gautam]

The Fiber ALS box has been installed on the existing shelf on the PSL table. We had to re-arrange some existing cabling to make this possible, but the end result seems okay (to me). The box lid was also re-installed.

Some stuff that still needs to be fixed:

  1. Power supply to ZHL amplifiers - it is coming from a table-top DC supply currently; we should hook these up to the Sorensens.
  2. We should probably extend the corrugated fiber protection tubing for the three fibers all the way up to the shelf. 

Beat spectrum post changes to follow.

Quote:

Is it better to mount the box in the PSL under the existing shelf, or in a nearby PSL rack?

Quote:

 

Further characterization needs to be done, but the results of this test are encouraging. If we are able to get this kind of out of loop ALS noise with the IR beat, perhaps we can avoid having to frequently fine-tune the green beat alignment on the PSL table. It would also be ideal to mount this whole 1U setup in an electronics rack instead of leaving it on the PSL table

 

 

Attachment 1: IMG_7605.JPG
  13335 | Wed Sep 27 00:20:19 2017 | gautam | Update | ALS | More AM sweeps

Attachment #1: Result of AM sweeps with EX laser crystal at nominal operating temperature ~ 31.75 C.

Attachment #2: Tarball of data for Attachment #1.

Attachment #3: Result of AM sweeps with EX laser crystal at higher operating temperature ~ 40.95 C.

Attachment #4: Tarball of data for Attachment #2.


Remarks:

  • Confirmed that PDA 55 is in the "0dB" setting - the actual dial is unmarked, and has 5 states. I guessed that the left-most one is 0dB, and checked that if I twiddled the dial by one state to the right, the DC level on the scope increased by 10dB as advertised. Didn't check all the states.
  • DC level is ~2.3V on a high-impedance scope. So it will be ~1.15V to a 50ohm load, which is what the DC block is. The inverse of this value is used to calibrate the vertical axis of the TF measurement to RIN/V.
  • Input R (split RF source signal) attenuation: 20dB. Input A (PDA55 output) attenuation: 0dB.
  • Main problem is still network hangups when trying to do many sweeps.
  • Seems to persist even when I connect the GPIB box to one of the network switches - so don't think we can blame the WiFi.
  • Need to explore possibility of speedup - takes >2hours to run ~50scans!

To-do:

  • Overlay median and uncertainty plots for the two temperature settings. There is a visible difference in both the locations and depths/heights of various notches/peaks in the AM profile.
  • Repeat test with a fast focusing lens to focus the beam more tightly on the PD active area to confirm that the measured AM is indeed due to the PZT drive and not from beam-jitter (presently, spot diameter is ~0.5x active area diameter, to eye).
  • Get the PM data.
  • Depending on what the PM data looks like, do a more fine-grained scan around some promising AM notches / PM peaks.
Attachment 1: TFAG4395A_26-09-2017_202344_FourSquare.pdf
Attachment 2: lowTemp.tgz
Attachment 3: TFAG4395A_26-09-2017_231630_FourSquare.pdf
Attachment 4: highTemp.tgz
  13336 | Wed Sep 27 22:25:21 2017 | gautam | Update | LSC | DAC noise measurement (again)

Attachment #1: Summary of results of measurements made on Friday. There is a lot in this plot, here is a breakdown:

  • I drove the excitation points of the coil output filter banks with raw time-series data, downloaded during a DRMI lock, using pyawg. Today during the meeting, Rana pointed out that we could instead acquire median (as opposed to mean, since the former is more immune to glitches during the averaging process) spectra during a lock, and then do the ifft in python to generate time series data for pyawg (a sketch of this approach is given after this list). Another advantage of doing it this way is that we don't need to store a large (~200MB in my case) file of 16k data for numerous channels. But since I already had this file, I decided against changing the methodology for this round of tests. Time series plots of the signals do not show any large glitches.
  • The SR785 was used in dual channel mode to acquire spectra from 2 coil driver outputs simultaneously, in the interest of saving time. The input range was set to -32 dBVpk, AC coupled, which was the smallest value that worked for the given signal profile. Spectra were taken from DC-200Hz, with 801 points and 25 averages. The DB15 output of the coil driver board was connected to a DB15 breakout board, and then a BNC->Pomona mini-grabber adapter was used to connect to the SR785 input. The newly acquired linear power supplies for the GPIB box mean that spurious 60Hz harmonics were not present.
  • Initially, I had planned to enable various bandstops from 20Hz-200Hz, to get a more complete profile of the noise. But in the end, I only used two elliptic bandstops (6th order, 60dB stopband attenuation): 60-90Hz, for which data is plotted in red and 90-200Hz, for which data is plotted in green
  • I've used the same noise model as I used here, plotted in dashed grey (summed with SR785 noise at the above input range, with input terminated via 50ohm terminator) - but had to tweak the parameters to get the curve to line up with the measurement. It looks like there is considerable variation between DAC channels, and certainly between the ITMX channels and the BS channels as groups.
  • I took the measurement in two conditions - with the coil de-whitening off (left column) and coil de-whitening on (right column). Note that the input to the excitation was acquired at the IN1 of the relevant filter bank, and since the de-whitening happens downstream of this, we don't have to do anything special.
  • In the right column, I have also plotted the LISO modelled noise, which was shown to match well with the measured curve, admittedly only for one channel (for the coil driver alone, so I am not taking into account the noise of the de-whitening board - I will fix this once I dig up that data).
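For completeness, here is a sketch of the median-spectrum approach Rana suggested (not what was done for the measurements above): synthesize a time series for pyawg from a one-sided ASD by assigning random phases and inverse-FFTing. The ASD below is a placeholder, and the amplitude scaling is only approximate (it ignores windowing details).

```python
# Sketch only: one-sided ASD -> random-phase time series for injection with pyawg.
import numpy as np

fs, T = 16384.0, 60.0                           # [Hz] channel rate, [s] excitation length
nfft  = int(fs * T)

f   = np.fft.rfftfreq(nfft, d=1.0 / fs)
asd = 1e-2 * (1.0 + 40.0 / np.maximum(f, 1.0))  # placeholder ASD, cts/rtHz

amp   = asd * np.sqrt(fs * nfft / 2.0)          # approximate rfft amplitude for this ASD
phase = np.exp(2j * np.pi * np.random.rand(len(f)))
spec  = amp * phase
spec[0] = 0.0                                   # no DC offset in the excitation

excitation = np.fft.irfft(spec, n=nfft)         # time series (cts) to hand to pyawg
```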

Some remarks:

  1. According to this measurement, the de-whitening filters are the same on the ITMX channels and BS channels. So I don't understand the difference in the right column for BS and ITMX channels.
  2. While there is considerable variation between channels and also between ITMX and BS, there is certainly >6dB of reduction in the DAC noise when the de-whitening is engaged. However, no improvement was seen in the MICH error signal spectrum between 60-300Hz. So we have to continue to investigate other noises that can explain the noise in that band.
  3. Also, the realized improvement in DAC noise by turning on the coil de-whitening seems marginal - the low pass has gain of ~-80dB at 100Hz, but we seem to be hitting some sort of electronics noise in all channels at the level of ~100nV/rtHz (assuming the actual DAC noise doesn't degrade significantly when the digital simulated de-whitening filter is engaged).
  4. It remains to do the test for the ITMY channels.
  5. It would be useful to visualize the incoherent sum of all these channels - this is what should go into the MICH displacement NB. To be added.
  6. I'm currently loading pyawg from my user directory. Need to figure out a proper place to put it and add that to the PYTHONPATH.

Data + code for this plot will be attached later.

Attachment 1: coilNoises.pdf
coilNoises.pdf
  13337   Wed Sep 27 23:44:45 2017 gautamUpdateALSProposed PM measurement setup

Attachment #1 is a sketch of the proposed setup to measure the PM response of the EX NPRO. Previously, this measurement was done via PLL. In this approach, we will need to calibrate the DFD output into units of phase, in order to calibrate the transfer function measurement into rad/V. The idea is to repeat the same measurement technique used for the AM - take ~50 single-average measurements with the AG4395, and look at the statistics. 

Some more notes:

  • Delay line box is passive, just contains a length of cable.
  • IQ Demodulation is done using an aLIGO 1U chassis unit, with the actual demod board electronics being D0902745
  • The RF beatnote amplitude out of the IR beat PD is ~ -8dBm.
  • The ZHL-3A amplifiers have gain of 24dB, so the amplified beat should be ~16dBm
  • At the LSC rack, the amplified beat is split into two - one path goes to the LO input of D0902745 (so at most 13dBm), the other goes through the delay line.
  • On the demod board, the LO signal is amplified with an AP1053, rated at 10dB gain, max output of 26dBm, so the signal levels should be fine for us, even though the schematic says the nominal LO level is 10dBm - moreover, I've ignored cable losses, insertion losses etc so we should be well within spec (a quick level-budget check is sketched after this list).
  • The mixer is PE4140. The datasheet quotes LO levels of 17dBm for all the "nominal" tests, we should be within a couple of dBm of this number.
  • There is no maximum value specified for the RF input signal level to the mixer on the datasheet, but I expect it to be <10dBm.
  • We should park the beatnote around 30MHz as this should be well within the operational ranges for the various components in the signal chain.
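As a quick arithmetic check of the level budget above (ideal gains and an ideal 2-way split, ignoring cable and insertion losses, so these numbers are upper bounds):

# RF level budget for the proposed PM measurement, all values in dBm / dB
beat_pd_out = -8                     # RF beat out of the IR beat PD
after_zhl3a = beat_pd_out + 24       # ZHL-3A gain ~24dB  -> ~16dBm
per_leg     = after_zhl3a - 3        # 2-way split        -> ~13dBm to the LO input / delay line
lo_at_mixer = per_leg + 10           # AP1053 gain ~10dB  -> ~23dBm, below its 26dBm max output
print(after_zhl3a, per_leg, lo_at_mixer)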
Attachment 1: IMG_7609.JPG
IMG_7609.JPG
  13339   Thu Sep 28 10:33:46 2017 gautamUpdateCDS40m files backup situation

After consulting with Jamie, we reached the conclusion that the reason the root of FB1 is so huge is the way the RAID for /frames is set up. Based on my googling, I couldn't find a way to exclude the nfs stuff while doing a backup using dd, which isn't all that surprising because dd is supposed to make an exact replica of the disk being cloned, including any empty space. So we don't have that flexibility with dd. The advantage of using dd is that if it works, we have a plug-and-play clone of the boot disk and root filesystem which we can use in the event of a hard-disk failure.

  1. One option would be to stop all the daqd processes, unmount /frames, and then do a dd backup of the true boot disk and root filesystem.
  2. Another option would be to use rsync to do the backup - this way we can selectively copy the files we want and ignore the nfs stuff. I suspect this is what we will have to do for the second layer of backup we have planned, which will be run as a daily cron job. But I don't think this approach will give us a plug-and-play replacement disk in the event of a disk failure.
  3. Third option is to use one of the 2TB HGST drives, and just do a dd backup - some of this will be /frames, but that's okay I guess.

I am trying option 3 now. dd does, however, require that the destination drive size be >= the source drive size - I'm not sure if this is true for the HGST drives. lsblk suggests that the drive size is 1.8TB, while the boot disk, /dev/sda, is 2TB. Let's see if it works.

Backup of chiara is done. I checked that I could mount the external drive at /mnt and access the files. We should still do a check of trying to boot from the LaCie backup disk, need another computer for that.

nodus backup is still not complete according to the console - there is no progress indicator so we just have to wait I guess.

Quote:

Backups of the root filesystems of chiara and nodus are underway right now. I am backing them up to the 1 TB LaCie external hard drives we recently acquired.

We also wanted to do a backup of the root of FB1 - but I'm not sure if dd will work with the external hard drive, because I think it requires the backup disk size (for us, 1TB) to be >= origin disk size (which on FB1, according to df -h, is 2TB). Unsure why the root filesystem of FB is so big, I'm checking with Jamie what we expect it to be. Anyways we have also acquired 2TB HGST SATA drives, which I will use if the LaCie disks aren't an option.

 

 

  13341   Thu Sep 28 23:32:38 2017 gautamHowToCDSpyawg

I've modified the __init__.py file located at /ligo/apps/linux-x86_64/cdsutils-480/lib/python2.7/site-packages/cdsutils/__init__.py so that you can now simply import pyawg from cdsutils. On the control room workstations, iPython is set up such that cdsutils is automatically imported as "cds". Now this import also includes the pyawg stuff. So to use some pyawg function, you would just do (for example):

exc=cds.awg.ArbitraryLoop(excChan,excit,rate=fs)

One could also explicitly do the import if cdsutils isn't automatically imported:

from cdsutils import awg
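Putting it together, a minimal sketch of a complete excitation might look like this (the channel name is just an example, and the start()/stop() calls are my assumption about the awg interface - check against the cdsutils source):

import time
import numpy as np
from cdsutils import awg

fs = 16384
excChan = "C1:SUS-BS_LSC_EXC"              # example channel only
t = np.arange(0, 1, 1.0/fs)
excit = 100 * np.sin(2 * np.pi * 50 * t)   # one second of a 50Hz, 100-count sine

exc = awg.ArbitraryLoop(excChan, excit, rate=fs)
exc.start()                                # assumed method name
time.sleep(30)                             # let the waveform loop for 30s
exc.stop()                                 # assumed method name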

pyawg-away!


Linking this useful instructional elog from Chris here: https://nodus.ligo.caltech.edu:8081/Cryo_Lab/1748

  13342   Thu Sep 28 23:47:38 2017 gautamUpdateCDS40m files backup situation

The nodus backup too is now complete - however, I am unable to mount the backup disk anywhere. I tried on a couple of different machines (optimus, chiara and pianosa), but always get the same error:

mount: unknown filesystem type 'LVM2_member'

The disk itself is being recognized, and I can see the partitions when I run lsblk, but I can't get the disk to actually mount.

Doing a web-search, I came across a few blog posts suggesting that the problem can be resolved with the vgchange utility (presumably because 'LVM2_member' means the partition is an LVM physical volume, whose volume group has to be activated before the logical volumes inside it can be mounted), but I am not sure exactly what this does to the disk, so I am holding off on trying.

To clarify, I performed the cloning by running

sudo dd if=/dev/sda of=/dev/sdb bs=64K conv=noerror,sync

in a tmux session on nodus (as I did for chiara and FB1, latter backup is still running). 

  13343   Thu Sep 28 23:50:04 2017 gautamUpdateLSCDAC noise measurement (again)

I am running some more measurements of the DAC noise, for which I've shut down the BS watchdog. Some of the cables on the coil driver side have been disconnected.

I will restore these tomorrow.


As Rana pointed out to me, one important fact to keep in mind w.r.t. DAC noise is that it can be non-linear. So the RMS of the DAC noise in a higher frequency band (say 60-100Hz) can be affected by the RMS of the requested DAC signal in some lower frequency band (say 10-20Hz). One test to see if this hypothesis can explain the difference @100Hz between the ITMX channels and BS channels I observed a couple of days ago is to see if the noise around 100Hz becomes lower when I enable a 20-40Hz bandstop in the digital signal chain.
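One way to make that test quantitative is to compare the band-limited RMS in the 60-100Hz band with and without the bandstop engaged - a minimal sketch, assuming the coil driver output time series are available as numpy arrays:

import numpy as np
from scipy import signal

def band_rms(x, fs, f_lo, f_hi):
    # band-limited RMS computed from the one-sided Welch PSD
    f, pxx = signal.welch(x, fs=fs, nperseg=int(fs))
    band = (f >= f_lo) & (f <= f_hi)
    return np.sqrt(np.trapz(pxx[band], f[band]))

fs = 16384
# x_nominal, x_bandstop: coil driver output with and without the 20-40Hz bandstop engaged
# rms_nominal  = band_rms(x_nominal,  fs, 60, 100)
# rms_bandstop = band_rms(x_bandstop, fs, 60, 100)
# a reduction in rms_bandstop would support the non-linearity / intermodulation hypothesis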

  13345   Fri Sep 29 11:07:16 2017 gautamUpdateCDS40m files backup situation

The FB1 dd backup process seems to have finished too - but I got the following message:

dd: error writing ‘/dev/sdc’: No space left on device
30523666+0 records in
30523665+0 records out
2000398934016 bytes (2.0 TB) copied, 50865.1 s, 39.3 MB/s

Running lsblk shows the following:

controls@fb1:~ 32$ lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sdb      8:16   0 23.5T  0 disk
└─sdb1   8:17   0 23.5T  0 part /frames
sda      8:0    0    2T  0 disk
├─sda1   8:1    0  476M  0 part /boot
├─sda2   8:2    0 18.6G  0 part /var
├─sda3   8:3    0  8.4G  0 part [SWAP]
└─sda4   8:4    0    2T  0 part /
sdc      8:32   0  1.8T  0 disk
├─sdc1   8:33   0  476M  0 part
├─sdc2   8:34   0 18.6G  0 part
├─sdc3   8:35   0  8.4G  0 part
└─sdc4   8:36   0  1.8T  0 part

While I am able to mount /dev/sdc1, I can't mount /dev/sdc4, for which I get the error message

controls@fb1:~ 0$ sudo mount /dev/sdc4 /mnt/HGSTbackup/
mount: wrong fs type, bad option, bad superblock on /dev/sdc4,
       missing codepage or helper program, or other error
       In some cases useful info is found in syslog - try
       dmesg | tail or so.

Looking at dmesg, it looks like this error is related to the fact that we are trying to clone a 2TB disk onto a 1.8TB disk - it complains about block size exceeding device size.

So we either have to get a larger disk (4TB?) to do the dd backup, or do the backup some other way (e.g. unmount the /frames RAID, delete everything in /frames, and then do dd, as Jamie suggested). If I understand correctly, unmounting the /frames RAID will require that we stop all the daqd processes for the duration of the dd backup.

Quote:
 
sudo dd if=/dev/sda of=/dev/sdb bs=64K conv=noerror,sync

in a tmux session on nodus (as I did for chiara and FB1, latter backup is still running). 


Edit: unmounting /frames won't help, since dd makes a bit for bit copy of the drive being cloned. So we need a drive with size that is >= that of the drive we are trying to clone. On FB1, this is /dev/sda, which has a size of 2TB. The HGST drive we got has an advertised size of 2TB, but looks like actually only 1.8TB is available. So I think we need to order a 4TB drive.

  13347   Fri Sep 29 18:36:25 2017 gautamUpdateLSCDAC noise measurement (again)

BS connections and damping restored.

Quote:

I am running some more measurements of the DAC noise, for which I've shut down the BS watchdog. Some of the cables on the coil driver side have been disconnected.

I will restore these tomorrow.


 

  13349   Mon Oct 2 18:08:10 2017 gautamUpdateCDSc1ioo DC errors

I was trying to set up a DAC channel to interface with the AOM driver on the PSL table.

  • It would have been most convenient to use channels from c1ioo given proximity to the PSL table.
  • Looking at the 1X2 rack, it looked like there were indeed some spare DAC channels available.
  • So I thought I'd run a test by adding some TPs to the c1als model (because it seems to have the most head room in terms of CPU time used).
  • I added the DAC_0 block from CDS_PARTS library to c1als model (after confirming that the same part existed in the IOP model, c1x03).
  • Model recompiled fine (I ran rtcds make c1als and rtcds install c1als on c1ioo).
  • However, I got a bunch of errors when I tried to restart the model with rtcds restart c1als. The model itself never came up.
  • Looking at dmesg, I saw stuff like
    [4072817.132040] c1als: Failed to allocate DAC channel.
    [4072817.132040] c1als: DAC local 0 global 16 channel 4 is already allocated.
    [4072817.132040] c1als: Failed to allocate DAC channel.
    [4072817.132040] c1als: DAC local 0 global 16 channel 5 is already allocated.
    [4072817.132040] c1als: Failed to allocate DAC channel.
    [4072817.132040] c1als: DAC local 0 global 16 channel 6 is already allocated.
    [4072817.132040] c1als: Failed to allocate DAC channel.
    [4072817.132040] c1als: DAC local 0 global 16 channel 7 is already allocated.
    [4073325.317369] c1als: Setting stop_working_threads to 1
  • Looking more closely at the log messages, it seemed like rtcds could not find any DAC cards on c1ioo.
  • I went back to 1X2 and looked inside the expansion chassis. I could only find two ADC cards and 1 BIO card installed. The SCSI cable labelled ("DAC 0") running from the rear of the expansion chassis to the 1U SCSI->40pin IDE breakout chassis wasn't actually connected to anything inside the expansion chassis.
  • I then undid my changes (i.e. deleted all parts I added in the simulink diagram), and recompiled c1als.
  • This time the model came back up but I saw a "0x2000" error in the GDS overview MEDM screen.
  • Since there are no DACs installed in the c1ioo expansion chassis, I thought perhaps the problem had to do with the fact that there was a "DAC_0" block in the c1x03 simulink diagram - so I deleted this block, recompiled c1x03, and for good measure, restarted all (three) models on c1ioo.
  • Now, however, I get the same 0x2000 error on both the c1x03 and c1als GDS overview MEDM screens (see Attachment #1).
  • An elog search revealed that perhaps this error is related to DAQ channels being specified without recording rates (e.g. 16384, 2048 etc). There were a few DAQ channels inside c1als which didn't have recording rates specified, so I added the rates, and restarted the models, but the errors persist.
  • According to the RCG runtime diagnostics document, T1100625 (which admittedly is for RCG v 2.7 while we are running v3.4), this error has to do with a mismatch between the DAQ config files read by the RTS and the DAQD system, but I'm not sure how to debug this further.
  • I also suspect there is something wrong with the mx processes:
    controls@c1ioo:~ 130$ sudo systemctl status mx
    ● open-mx.service - LSB: starts Open-MX driver
       Loaded: loaded (/etc/init.d/open-mx)
       Active: failed (Result: exit-code) since Tue 2017-10-03 00:27:32 UTC; 34min ago
      Process: 29572 ExecStop=/etc/init.d/open-mx stop (code=exited, status=1/FAILURE)
      Process: 32507 ExecStart=/etc/init.d/open-mx start (code=exited, status=1/FAILURE)
    Oct 03 00:27:32 c1ioo systemd[1]: Starting LSB: starts Open-MX driver...
    Oct 03 00:27:32 c1ioo open-mx[32507]: Loading Open-MX driver (with  ifnames=eth1 )
    Oct 03 00:27:32 c1ioo open-mx[32507]: insmod: ERROR: could not insert module /opt/3.2.88-csp/open-mx-1.5.4/modules/3.2.88-csp/open-mx.ko: File exists
    Oct 03 00:27:32 c1ioo systemd[1]: open-mx.service: control process exited, code=exited status=1
    Oct 03 00:27:32 c1ioo systemd[1]: Failed to start LSB: starts Open-MX driver.
    Oct 03 00:27:32 c1ioo systemd[1]: Unit open-mx.service entered failed state.
  • Not sure if this is related to the DC error though.
Attachment 1: c1ioo_CDS_errors.png
c1ioo_CDS_errors.png
  13351   Mon Oct 2 19:03:49 2017 gautamUpdateCDS[Solved] c1ioo DC errors

This did the trick - I simply ran

sudo systemctl restart daqd_*

on FB1, and now all the CDS overview lights are green again.

I thought I had done this already, but I realize that I was supposed to restart the daqd processes on FB1 (which is where they are running) and not on c1ioo.

Thanks Jamie for the speedy resolution!

Quote:
Quote:

 

  • This time the model came back up but I saw a "0x2000" error in the GDS overview MEDM screen.
  • Since there are no DACs installed in the c1ioo expansion chassis, I thought perhaps the problem had to do with the fact that there was a "DAC_0" block in the c1x03 simulink diagram - so I deleted this block, recompiled c1x03, and for good measure, restarted all (three) models on c1ioo.
  • Now, however, I get the same 0x2000 error on both the c1x03 and c1als GDS overview MEDM screens (see Attachment #1).

From page 21 of T1100625, DAQ status "0x2000" means that the channel list is out of sync between the front end and the daqd.  This usually happens when you add channels to the model and don't restart the daqd processes, which sounds like it might be applicable here.

It looks like open-mx is loaded fine (via "rtcds lsmod"), even though the systemd unit is complaining.  I think this is because the open-mx service is old style and is not intended for module loading/unloading with the new style systemd stuff.

 

Attachment 1: CDSoverview.png
CDSoverview.png
  13352   Mon Oct 2 23:16:05 2017 gautamHowToCamerasCCD calibration

Going through some astronomy CCD calibration resources ([1]-[3]), I gather that there are in general 3 distinct types of correction that are applied:

  1. Dark frames --- this would be what we get with a "zero duration" capture, some documents further subdivide this into various categories like thermal noise in the CCD / readout electronics, poissonian offsets on individual pixels etc.
  2. Bias frames --- this effect is attributed to the charge applied to the CCD array prior to the readout.
  3. Flat-field calibration --- this effect accounts for the non-uniform responsivity of individual pixels on the CCDs. 

The flat-field calibration seems to be the most complicated - the idea is to use a source of known radiance, and capture an image of this known radiance with the CCD. Then, assuming we know the source radiance well enough, we can use some math to back out what the actual response function of each individual pixel is. Then, for an actual image, we would divide by this response map to recover the true image. There are a number of assumptions that go into this, such as: 

  • We know the source radiance perfectly (I guess we are assuming that the white paper is a Lambertian scatterer so we know its BRDF, and hence the radiance, perfectly, although the work that Jigyasa and Amani did this summer suggests that white paper isn't really a Lambertian scatterer). 
  • There is only one wavelength incident on the CCD.
  • We can neglect the effects of dust on the telescope/CCD array itself, which would obviously modify the responsivity of the CCD, and is presumably not stationary. Best we can do is try and keep the setup as clean as possible during installation.

I am not sure what error is incurred by ignoring 2 and 3 in the list at the beginning of this elog; perhaps this won't affect our ability to estimate the scattered power from the test-masses to within a factor of 2. But it may be worth it to do these additional calibration steps. 

I also wonder what the uncertainty in the 1.5 V/uW calibration number for the photodiode is (i.e. how much do we trust the Ophir power meter at low power levels?). The datasheet for the PDA100A says the transimpedance gain at 60dB gain is 1.5 MV/A (into high impedance load), and the Si responsivity at 1064nm is ~0.25A/W, so naively I would expect 0.375 V/uW, which is ~ a factor of 4 lower. Is there a reason to trust one method over the other?  

Also, are the calibration factor units correct? Jigyasa reported something like 0.5nW s / ct in her report.

Camera IP Calibration Factor CF
192.168.113.152 8.58 W*s
192.168.113.153 7.83 W*s

The incident power can be calculated as Pin = CF * Total(Counts - DarkCounts) / ExposureTime.
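As a sketch of how the corrections above and this counts-to-power conversion could be chained together (frame file names, the CF value and the exposure time are placeholders, and the flat-field normalization follows the standard astronomy recipe rather than anything implemented here):

import numpy as np

CF = 8.58          # [W*s] calibration factor (placeholder: value quoted for 192.168.113.152)
t_exp = 0.5        # [s] exposure time of the science frame (placeholder)

raw  = np.load("image.npy").astype(float)   # science frame
dark = np.load("dark.npy").astype(float)    # dark frame at the same exposure time
bias = np.load("bias.npy").astype(float)    # bias frame
flat = np.load("flat.npy").astype(float)    # image of the (assumed uniform) calibration source

# normalize the flat so that dividing by it preserves the mean count level
flat_corr = flat - bias
flat_norm = flat_corr / flat_corr.mean()

# dark-subtract and divide out the pixel-to-pixel response
calibrated = (raw - dark) / flat_norm

# incident power, following Pin = CF * Total(Counts - DarkCounts) / ExposureTime
P_in = CF * calibrated.sum() / t_exp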

References:

[1] http://www.astrophoto.net/calibration.php

[2] https://www.eso.org/~ohainaut/ccd/

[3] http://www.astro.ufl.edu/~lee/ast325/handouts/ccd.pdf

  13353   Tue Oct 3 01:32:39 2017 gautamUpdateLSCLaser intensity noise coupling to MICH (simulated)

GV Oct 6: This coupling is probably not correct - Finesse outputs TF magnitude in units of W/W, and not W/RIN

Since I was foiled (by lack of DAC) in my attempt to measure the coupling of laser intensity noise to MICH in the DRMI (no arms) configuration, I decided to try understanding the effect with a simulation.

For this purpose, I used my DRMI Finesse model - this had mirror positions tuned for locking and photodiode demod phases tuned to give a sensing matrix model that wasn't too far from an actual measurement (within factor of a few). So the model seems okay for a first pass at estimating this coupling.

Measuring transfer functions in Finesse is straightforward - use the fsig command to modulate some quantity (in this case the input beam intensity), and use the pd2 detector to demodulate the effect of this modulation at the port of interest (in this case AS55_Q).

**Note that to apply a modulation to an input beam (i.e. Laser) in Finesse, the keyword for the "type" argument given to fsig is "amp" and not "amplitude" as the manual would have had me believe. In fact, there seem to be quite a few such caveats. The best way to figure this out is to go to the pykat installation directory, find the file components.py, and look for the fsig_name for the component of interest. It is also indicated in the same file, via the canFsig argument, if that property of the component can be modulated for transfer function measurements.  
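For concreteness, a schematic pykat sketch of the fsig + pd2 recipe (the kat file, component/node names and the demod phase are placeholders for whatever is in the actual DRMI model):

from pykat import finesse

kat = finesse.kat()
kat.loadKatFile("C1_DRMI.kat")    # assumed: pre-tuned DRMI model with laser "l1" and output node "nAS"
kat.parseCommands("""
fsig sig1 l1 amp 1 0              # intensity modulation of the input beam ("amp", not "amplitude")
pd2 AS55_Q_tf 55M 90 $fs nAS      # demodulate at 55MHz (placeholder phase), then at the signal frequency
xaxis sig1 f log 10 10k 200       # sweep the signal frequency from 10Hz to 10kHz
yaxis log abs:deg
""")
out = kat.run()
# out.x is the signal frequency vector, out["AS55_Q_tf"] the coupling TF (magnitude in W/W)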

Attachment #1 shows the result of such a sweep.

To estimate what the actual contribution to the displacement noise is, I used the DQ-ed MC transmission (recorded at 1024Hz) from the DRMI lock, computed the ASD using scipy.signal.welch, divided by the nominal MC transmission of ~15,000 counts to convert to RIN/rtHz. The RIN was then multiplied by the above calculated coupling function, and divided by the sensing matrix element for AS55_Q (in units of W/m) to give the curve shown in Attachment #2. If we believe the simulation, then Laser Intensity Noise shouldn't be the limiting noise between 10Hz-1kHz. 
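A minimal sketch of that projection (file names, the interpolation of the coupling TF, and the sensing matrix value are placeholders):

import numpy as np
from scipy import signal

fs = 1024                                        # MC transmission DQ rate [Hz]
mc_trans = np.loadtxt("mc_trans_lock.txt")       # placeholder: DQ-ed MC transmission during the lock

f, pxx = signal.welch(mc_trans, fs=fs, nperseg=16*fs)
rin = np.sqrt(pxx) / 15000.0                     # ASD / nominal transmission -> RIN [1/rtHz]

# coupling TF magnitude from the Finesse sweep (e.g. out.x, abs(out["AS55_Q_tf"])), saved to a placeholder file
f_tf, tf_mag = np.loadtxt("finesse_intensity_coupling.txt", unpack=True)
coupling = np.interp(f, f_tf, tf_mag)            # [W per unit RIN, modulo the caveat noted at the top]

sensing = 1e6                                    # placeholder AS55_Q sensing element for MICH [W/m]
mich_noise = rin * coupling / sensing            # intensity-noise contribution to MICH [m/rtHz]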

I will of course measure the actual coupling and see how it lines up with Attachment #1 - would be a nice additional validation of the Finesse model. I will also try using the Finesse model to estimate some other coupling transfer functions (e.g. Laser Frequency Noise, Oscillator Noise).

Quote:

The absence of evidence is not evidence of absence.

 

Attachment 1: MICH_intensityNoiseCoupling.pdf
MICH_intensityNoiseCoupling.pdf
Attachment 2: MICH_intensityNoiseASD.pdf
MICH_intensityNoiseASD.pdf
  13355   Tue Oct 3 19:39:10 2017 gautamUpdateCDSslow machine bootfest

Eurocrate key turning reboots for c1susaux, c1auxex, c1auxey, c1iscaux, c1iscaux2 and c1aux. Usual precautions were taken for ITMX. Did burtrestore for c1iscaux and c1iscaux2 in order to restore the LSC PD whitening gains.


Un-related to this work: input pointing into PMC was tweaked as the PMC_REFL spot was pretty bright.

  13356   Wed Oct 4 17:18:15 2017 gautamUpdateCDSFree DAC channels in c1lsc

There are at least 5 free DAC channels (4 if you discount the one channel from these that I am hijacking) available in the 1Y2 electronics rack.

Jamie's nice wiring diagram shows the topology - the actual DAC card sits in 1Y3 inside the c1lsc expansion chassis (while the c1lsc frontend itself is in 1X4). The output of the DAC goes via SCSI to an interface box (D080303) and then to some dewhitening/AI boards (D000316). There are a total of 16 DAC channels available, out of which 8 are used for the TTs, 2 are used for the DAFI model, and one is labelled "From c1ioo 1X2" (I don't know what this one is for). So I'm going to use some of these channels for measuring the coupling of oscillator noise and intensity noise to MICH in the DRMI lock.

The de-whitening/AI board seems to be old - it has 2x 800Hz Butterworth LPFs and no notch for the clock frequency, but maybe this doesn't matter for the tests I have in mind. The AI board available on 1X2 is more modern but routing the DAC channels from 1Y2 to it is going to be some work.

I'm going to add my testpoint to c1daf given that it seems to be the least critical model on c1lsc.

EDIT: testpoints added to c1daf don't show up in the list of available channels - there was some issue with this model while we were getting the new RTCDS going. So I'm moving my temporary testpoint to c1cal instead.

  13357   Wed Oct 4 17:38:25 2017 gautamUpdateLSCFS725 for Marconi stabilization

I've located the Stanford Research FS725 Rb reference unit. The question is where to put it. This afternoon Steve and I put it inside the little electronics rack next to 1X3, but in hindsight, this probably isn't such a great place for a timing reference as there are a bunch of Sorensen power supplies in there (and presumably the accompanying harmonics from these switching supplies). 

The unit itself was repaired in 2015, and powering it on, it locked to the internal reference within a few minutes as prescribed in the manual. 

  13359   Thu Oct 5 02:14:51 2017 gautamUpdateLSCMore DRMI coupling measurements - setup

In the end I decided to access the available spare DAC channels via the C1ASS model - for this purpose, I added a namespace block "TEST" (a SISO block) to the C1ASS simulink model. Inside is just a single CDS filter module. My idea is to use the EXC of this filter module to inject excitations for measuring various couplings. Rather than have a simple testpoint, we also have the option of adding in some filter shapes in the filter module which could possibly allow a more direct read-off of some coupling TF. Recompiling the model went smoothly - there was a crash earlier in the day which required me to hard-reboot c1lsc (and also restart all models on c1sus and c1ioo, but no reboots were necessary for those machines).

Note that to get the newly added channels to show up in the channel lists in DTT/AWGGUI etc, you need to ssh into fb1 and restart the daqd processes via sudo systemctl restart daqd_*. If I remember right, it used to be enough to do telnet fb 8088 followed by shutdown. This is no longer sufficient.

It took me a while to get the DRMI locking going again. The model restarts earlier in the evening had changed a bunch of EPICS channel settings (and our config scripts don't catch all of these settings). In particular, I forgot to re-enable the x3 digital gain for the ITMs, BS and SRM (necessitated by removing an analog x3 gain on the de-whitening boards). I was hesitant to spend time re-adjusting all damping / oplev loop gains because if we change the series resistor on the coil driver board, we will have to do this again. I also didn't want this arbitrary FM to be enabled in the SDF safe.snap. But maybe it's worth doing it anyways - if nothing else, it'll be good practice.

Once I hunted down all the setting diffs and tweaked alignment, the DRMI locks were pretty robust.

I had hoped to make some of these TF measurements tonight. But I realized I needed to look up a bunch of stuff in manuals/datasheets, and think about these measurements a little. I wasn't sure if the DW/AI board could drive a signal over 40m of BNC cabling so I added an SR560 (DC coupled, gain=1, low noise mode, 50ohm output used) to buffer the output. The Marconi's external modulation input is high impedance (100k) but for the AOM driver we want 50ohm. For the Marconi, the external input accepts 1Vrms max, while for the AOM driver, we want to drive a signal between 0V and 1V at most.

The general measurement setup is schematically shown in Fig 1. Questions to address:

  • What happens if we apply a negative voltage to the input of the AOM driver? What is the damage threshold? Do we have to worry about SR560 offset level?
  • Is there a way to dynamically adjust the offset in DTT such that we can have different amplitude signals at different frequencies (usually done by specifying an envelope in DTT) but still satisfy the requirement that the entire signal lie between 0-1V?
  • For the Laser Intensity noise -> MICH coupling TF measurement, I guess we can use the AOM to inject an excitation, and measure the ratio of the response in MC_TRANS and in MICH_IN1. Then we multiply the in-loop MC_TRANS spectrum by the magnitude of this TF to get the Laser Intensity Noise contribution to MICH.
  • The Laser Frequency Noise coupling should be negligible in MICH - but the measurement principle should be the same. Drive the AO input of the Mode Cleaner Servo board from the DAC, look at ratio of response in MICH_IN1 and MC_F. Multiply the DRMI in-lock MC_F spectrum by this TF.
  • The oscillator noise seems more tricky to me (also Finesse modeling suggests this may be the most significant of the 3 couplings described in this elog, though I may just be computing the coupling in Finesse wrongly)
    • I don't understand all the External Modulation options specified in the manual.
    • DC? AC? FM? PM? AM? Need to figure out what is the right settings to use.
    • I'm not sure how independent the various modulations will be - i.e. if I select PM, how much AM is induced as a result of me driving the EXT MOD input?
    • What is the right level of excitation drive? I tried this a bunch of times tonight - set the PM range to 0.1rad (for the full scale 1Vrms sine wave input), but with an excitation of just a few counts, I already saw non-linear coupling in MICH_IN1, which probably means I'm driving this too hard.
    • This measurement needs a bit more algebra. We have an estimate of the Marconi phase noise from Rana (is this the right one to use?). But the "Transfer Function" we'd measure is cts in MICH_IN1 in response to counts to Marconi via the signal chain in Attachment #1. So we'd need to know (and divide out) the AI/DW board TF, and the Marconi's TF, which the datasheet suggests has a lower 3dB frequency of 100Hz (assuming SR560 and cable can be treated as flat).
    • A simpler test may be to just hook up the Marconi to the Rb standard, and the Rb to 1pps from GPS, and look for a change in the MICH noise.

Am I missing something?

Attachment 1: CB4709D0-3FA7-43E3-BC25-3CF4164E6C6A.jpeg
CB4709D0-3FA7-43E3-BC25-3CF4164E6C6A.jpeg
  13360   Thu Oct 5 11:46:15 2017 gautamUpdateCDSslow machine bootfest

MC Autolocker was unhappy because c1iool0 was unresponsive and hence it couldn't write to the "C1:IOO-MC_AUTOLOCK_BEAT" channel. I keyed the crate and IMC locked almost immediately. I'm moving this channel into the RTCDS model as we did for the IFO_STATE EPICS channel so that the autolocker isn't dependent on c1iool0 (which was the whole point of migrating the IFO-STATE variable anyways). I also commented out all of these channels in /cvs/cds/caltech/target/c1iool0/autolocker.db so that there aren't duplicate channels.

Quote:

Eurocrate key turning reboots for c1susaux, c1auxex,c1auxey, c1iscaux, c1iscaux2 and c1aux. Usual precautions were taken for ITMX. Did burtrestore for c1iscaux andc1iscaux2  in order to restore the LSC PD whitening gains.


Un-related to this work: input pointing into PMC was tweaked as the PMC_REFL spot was pretty bright.

 

  13361   Thu Oct 5 13:58:26 2017 gautamUpdateCDS40m files backup situation

The 4TB HGST drives have arrived. I've started the FB1 dd backup process. Should take a day or so.

Quote:
Edit: unmounting /frames won't help, since dd makes a bit for bit copy of the drive being cloned. So we need a drive with size that is >= that of the drive we are trying to clone. On FB1, this is /dev/sda, which has a size of 2TB. The HGST drive we got has an advertised size of 2TB, but looks like actually only 1.8TB is available. So I think we need to order a 4TB drive.

 

  13362   Thu Oct 5 18:40:27 2017 gautamUpdateLSCFS725 for Marconi stabilization

[steve, gautam]

  1. We installed the FS725 on the shelf inside the PSL enclosure - see Attachment #1.
  2. We ran a long BNC cable (labelled "GPS 1pps" on both ends) from 1X7 to the PSL enclosure - this was to pipe the 1PPS signal from the GPS timing unit (EndRun Technologies Tempus LX) rear panel (50 ohm output according to the datasheet) to the 1PPS input of the FS725 (high impedance). See Attachment #2. Note that the 1pps output was already tee'd on the rear panel. One port of the tee was unused (this now goes to the FS725) while the other was going to the 1PPS input of the Master Timing Sequencer (D050239), so I decided that there was no need to tee the 1pps input of the FS725 with a 50ohm terminator. In a few minutes, the Rb standard indicated that it was locked to its internal reference, and also to the external 1pps input (see Attachment #1). 
  3. We ran a long BNC cable (labelled "Rb 10MHz" on both ends) from the 10MHz output of the FS725 (50 ohm output impedance) in the PSL enclosure to the rear "FREQ_STD IN/OUT" BNC connector of the Marconi (1kohm input impedance). Changed the frequency reference setting on the Marconi to "External Direct". The FS725 datasheet recommends terminating the load with a 50ohm inline terminator; I have not yet done this (see Attachment #3). Is it appropriate to use a Balun (FTB-1-1) here? This would avoid ground loops between the Marconi and the FS725, and also make the load seen by the FS725 50ohms.
  4. Found that there was an unused long cable from the PSL enclosure to the 1X2 electronics rack. We re-purposed this to drive the AOM driver via the DAC output in 1Y2. The cable is labelled "AOM driver" on both ends. This was to facilitate measurement of the coupling of laser intensity noise to AS55_Q in a DRMI lock.
  5. Removed 2 long cables between 1X7 and 1X2 that weren't connected to anything.
  6. Re-arranged the DC bench supply on the shelf in the PSL enclosure, whose only purpose seems to be to supply 12V to a fan attached to the rear of the PSL NPRO controller. Seems to be a waste of space! The fan was momentarily disconnected but has since been reconnected and is spinning again.
  7. Removed a couple of unused power cables from the mess on the shelf in the PSL enclosure. Also removed an unused Sony Video Sequential Switcher YS-S6 from the PSL enclosure. 
Quote:

I've located the Stanford Research FS725 Rb reference unit. The question is where to put it. This afternoon Steve and I put it inside the little electronics rack next to 1X3, but in hindsight, this probably isn't such a great place for a timing reference as there are a bunch of Sorensen power supplies in there (and presumably the accompanying harmonics from these switching supplies). 

The unit itself was repaired in 2015, and powering it on, it locked to the internal reference within a few minutes as prescribed in the manual. 

 

Attachment 1: IMG_7617.JPG
IMG_7617.JPG
Attachment 2: IMG_7619.JPG
IMG_7619.JPG
Attachment 3: IMG_7618.JPG
IMG_7618.JPG