  40m Log, Page 210 of 341
ID Date Author Type Category Subject
  13221   Wed Aug 16 20:01:03 2017 gautamUpdateGeneralSUS model ASC input weirdness

I'm not sure if this has something to do with the model restarts / new RCG, but while I was re-enabling the MC watchdogs, I noticed the RMS sensor voltage channels on ITMX hovering around ~100mV, even though local damping was on (in which configuration I would expect <1mV if everything is working normally). I was confused by this behaviour, and after staring at the ITMX suspension screen for a while, I noticed that the inputs to the "ASCP" and "ASCY" servos were "-nan", and the outputs were 10^20 cts (see Attachment #1).

Digging a little deeper, I found that the same problem existed on ITMY, ETMX, ETMY, PRM (but not BS or SRM) - reasons unknown for now.

I have to check where this signal is coming from, but for now I just turned the "ASC Input" switch off. More investigation to be done, but in the meantime, ASS dither alignment may not be possible.

After consulting with Jamie, I have just disabled all outputs to the suspensions other than local damping loop outputs. I need to figure out how to get this configuration into the safe.snap file such that until we are sure of what is going on, the models start up in this safer configuration.

gedit 28 Oct 0026: Seems like this problem is seen at the sites as well. I wonder if the problem is related.

Attachment 1: ITMX_ASC.png
  13222   Wed Aug 16 20:24:23 2017 gautamUpdateALSFiber ALS

Today, with Johannes' help, I cleaned the fiber tips of the photodiodes. The effect of the cleaning was dramatic - see Attachments #1-4 (X beat PD with axial illumination, X beat PD with oblique illumination, Y beat PD with axial illumination, Y beat PD with oblique illumination). They look much cleaner now, and the feature that looked like a scratch has vanished.

The cleaning procedure followed was:

  • Blow clean air over the fiber tip
  • First, we tried cleaning with the Q-tip like tool, but the results weren't great. The way to use it is to dip the tip in the cleaning solvent for a few seconds, hold the tip to the fiber taking into account the angled cut, and apply 10 gentle quarter turns.
  • Next, we tried cleaning with the wipes. We peeled out an approximately 5" section of the wipe, and laid it out on the table. We then applied cleaning solvent liberally on the central area where we were sure we hadn't touched the wipe. Then you just drag the fiber tip along the soaked part of the wipe. If you get the angle exactly right, the fiber glides smoothly along the surface, but if you are a little misaligned, you get a scratchy sensation. 
  • Blow dry and inspect.

I will repeat this procedure for all fiber connections once I start putting the box back together - I'm almost done with the new box, just waiting on some hardware to arrive.

 

Quote:

Today, I borrowed the fiber microscope from Johannes and took a look at the fibers coupled to the PDs. The PD labelled "BEAT PD AUX Y" has an end that seems scratched (Attachments #1 and #2). The scratch seems to be on (or at least very close to) the core. The other PD (Attachments #3 and #4) doesn't look very clean either, but at least the area near the core seems undamaged. The two attachments for each PD corresponds to the two available lighting settings on the fiber microscope.

I have not attempted to clean them yet, though I have also borrowed the cleaning supplies to facilitate this from Johannes. I also plan to inspect the ends of all other fiber connections before re-installing them.

 

Attachment 1: IMG_7476.JPG
Attachment 2: IMG_7477.JPG
Attachment 3: IMG_7478.JPG
Attachment 4: IMG_7479.JPG
  13225   Thu Aug 17 11:17:49 2017 gautamUpdateSUSMC1 <--> MC3 switched back

Seems like this modification didn't really work. There were several large MC1 glitches, and one of them misaligned MC1 so much that the IMC didn't relock for the last ~6 hours. I re-aligned MC1 manually, and now it is locked fine.

Quote:

Now that all the CDS overview lights are green, I decided to switch back the coil driver outputs to their original state so that the MC optics could be damped and the IMC relocked. I also restored the static PIT/YAW bias values to their original values.

MC1 has been quiet over the last couple of days, lets see how it behaves in the next few days. In all the glitches I have observed, if the IMC is locked and WFS loops are enabled, the loops are able to correct for the DC misalignment caused by the glitch. But the mcwfs off script is currently set up in such a way that the output history is cleared between IMC locks. I made two copies of the mcwfson/mcwfsoff scripts, called mcwfsunhold/mcwfshold respectively. They live in /opt/rtcds/caltech/c1/scripts/MC/WFS. I've also modified the autolocker script to call these modified scripts, such that when the IMC loses lock, the WFS servo outputs are held, while the input is turned off. The hope is that in this configuration, the autolocker can catch a lock even if there is a glitch on MC1.

I haven't tried locking the arms yet, but I think other IFO work discussed at the meeting (like arm loss estimation / cavity scans etc) can proceed.

 

 

Attachment 1: MC1_misaligned.png
Attachment 2: MC1_glitch.png
  13226   Thu Aug 17 17:33:01 2017 gautamUpdateSUSMC1 <--> MC3 switched back

That's why the Autolocker clears the outputs; we don't want to be holding the offsets from the last ms of lock when it was all messed up. Instead, it would be best to have a slow (~mHz) relief script that takes the WFS control signals and puts them onto the MC SUS sliders. This would then re-align the MC to the input beam rather than the input beam to the MC, which is not the best idea.

Quote:

Seems like this modification didn't really work.

 

  13228   Fri Aug 18 21:58:35 2017 gautamUpdateGeneralSUS model ASC input weirdness

I spent some time today trying to debug this issue.

Jamie and I had opened up the c1sus frontend to try and replace the RFM card before we realized that the problem was in the RCG code generator. During this process, we had disconnected all of the back-panel cabling to this machine (2 ethernet cables, dolphin cable, and RFM cables/fibers). I thought I may have accidentally returned the cables to the wrong positions - but all the status indicator lights indicate that everything is working as it should, and I also confirmed that the cabling is as it is in the pictures of the rack on the wiki page.

Looking at the SimuLink model diagram (see Attachment #1 for example), it looks like (at least some of) these channels are actually on the dolphin network, and not the RFM network (with which we were experiencing problems). This suggests that the problem is something deeper. Although I did see nans in some of the ETMX ASC channels as well, for which the channels are piped over the RFM network. Even more puzzling is that the ASC MEDM screen (Attachment #3) and the SimuLink diagram (Attachment #2) suggest that there is an output matrix in between the input signals and the output angular control signals to the suspensions. As Attachment #4 shows, the rows corresponding to ITMX PIT and YAW are zero (I confirmed using z read <matrixElement>). Attachment #3 shows that the output of all the servo banks except CARM_YAW is zero, but CARM_YAW has no matrix element going to the ITMs (also confirmed with z read <servoOutputChannel>). So 0 x 0 should be 0, but for some reason the model doesn't give this output?

GV Edit: As EricQ just pointed out to me, nan x 0 is still nan, which probably explains the whole issue. Poking a little further, it seems like this is an SDF issue - the SDF table isn't able to catch differences for this hold output channel.
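To make EricQ's point concrete, here is a two-line check (illustrative numpy only, not any of our actual model code) of why a zeroed output matrix element does not protect the downstream SUS channels from a nan servo output:

import numpy as np

out_matrix_element = 0.0   # ITM PIT/YAW elements of the CARM_YAW row are zero
carm_yaw_output = np.nan   # servo output latched at nan because "hold output" was on

# IEEE-754: nan times anything, including zero, is still nan, so the nan
# propagates through the output matrix to the SUS ASC inputs.
print(out_matrix_element * carm_yaw_output)                  # -> nan
print(out_matrix_element * np.nan_to_num(carm_yaw_output))   # -> 0.0, what we'd naively expect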


As I was writing this elog, I noticed that, as mentioned above, the CARM_YAW output was "nan". When I restarted the model (thankfully this didn't crash c1lsc!), it seems to default to this state. Opening up the filter module, I saw that the "hold output" was enabled.

Toggling that switch made the nans in all the SUS ASC channels disappear. Mysterious.

All the points above stand - CARM_YAW output shouldn't have been going anywhere as per the output matrix, but it seems to have been responsible? Seems like a bug in any case if a model restarts with a field as "nan".

Anyways the problem seems to have been resolved so I'm going to try locking and dither aligning the arms now.

Rolf mentioned that a simple update could fix several of the CDS issues we are facing (e.g. inability to open up testpoints), but he didn't seem to have any insight into this particular issue. Jamie will try and recompile all the models and then we have to see if that fixes the remaining problems.

Quote:
 

I have to check where this signal is coming from, but for now I just turned the "ASC Input" switch off. More investigation to be done, but in the meantime, ASS dither alignment may not be possible.

After consulting with Jamie, I have just disabled all outputs to the suspensions other than local damping loop outputs. I need to figure out how to get this configuration into the safe.snap file such that until we are sure of what is going on, the models start up in this safer configuration.

 

Attachment 1: ITMXP.png
Attachment 2: ASC_model_outmatrix.png
Attachment 3: ASC_medm.png
Attachment 4: ASC_outMat.png
  13229   Fri Aug 18 23:59:53 2017 gautamUpdateALSX Arm ALS lock

[ericq, gautam]

  • I was just getting the IFO aligned, and single arm lock going, when EricQ came in and asked if we could get some ALS data.
  • ALS beats seemed fine, in particular the X-Arm. The broad hump around ~70Hz that was present in my previous ALS update was nowhere to be seen - reasons unknown.
  • Copied over /opt/rtcds/caltech/c1/scripts/YARM/Lock_ALS_YARM.py to /opt/rtcds/caltech/c1/scripts/XARM/Lock_ALS_XARM.py. Could be useful when we want to do arm cavity scans.
  • Made appropriate changes to allow ALS locking of the X arm - the testpoint inaccessibility makes things a little annoying, but for tonight we just used DQ channels in their place (or slow channels when DQ channels were not available).
  • Calibration of the X arm error signal seemed off - so we fixed it by driving a line in ETMX and matching up the peaks in the ALS error signal and POX11. We then updated the gain of the filter in the CINV filter bank accordingly.
  • Got some decent data - the X arm stayed locked on ALS for >60 mins, during which time the Y arm stayed locked on POY11, and the Y green also remained locked. There was no evidence of the X arm 00 mode randomly dropping out of lock tonight.
  • EQ will update with a sick comparison plot - today we looked at the ALS noise from the perspective of the Green Locking paper (Izumi et al.).
  • Y arm ALS noise didn't look so hot tonight - to be investigated...

Leaving LSC mode OFF for now while CDS is still under investigation


Not really related to this work: We saw that the safe.snap file for c1oaf seems to have gotten overwritten at some point. I restored the EPICS values from a known good time, and over-wrote the safe.snap file.

  13233   Mon Aug 21 14:53:32 2017 gautamUpdateVACRGA reset

[gautam, steve]

In the aftermath of the accidental vent, it looks like the RGA was shutdown.

We followed the instructions in this elog to restart the RGA.

Seems to be working now, Steve says we just need to wait for it to warm up before we can collect a reliable scan.

Quote:

We have good RGA scan now. There was no scan for 3 months.

 

  13234   Mon Aug 21 16:35:48 2017 gautamUpdateVACUPS checkup

[steve, gautam]

At Rolf/Rich Abbott's request, we performed a check of the UPS today.

Steve believed that the UPS was functioning as it should, and the recent accidental vent was because the UPS batteries were insufficiently charged when the test was performed. Today, we decided to try testing the UPS.

We first closed V1, VM1 and VA6 using the MEDM screen. We prepared to pull power on all these valves by loosening the power connections (but not detaching them). [During this process, I lost the screw holding the power cord fixed to the gate valve V1 - we are looking for a replacement right now but it seems to be an odd size. It is cable tied for now.]

The battery charge indicator LEDs on the UPS indicated that the batteries were fully charged.

Next, we hit the "Test" button on the UPS - it has to be held down for ~3 seconds for the test to actually be initiated, which seems to be a safety feature of the UPS. Once the test is underway, the LED indicators on the UPS show that the load is on the UPS batteries. The test itself lasts for ~5 seconds, after which the UPS automatically reverts to the nominal configuration of supplying power from the main line (no additional user input is required).

In this test, one of the five battery charge indicator LEDs went off (5 ON LEDs indicate full charge).

So on the basis of this test, it would seem that the UPS is functioning as expected. It remains to be investigated if the various hardware/software interlocks in place will initiate the right sequence of valve closures when required.


Quote:
 

Never hit O on the Vacuum UPS !

Note: the " all off " configuration should be all valves closed ! This should be fixed now.

In case of emergency, you can close V1 by disconnecting its actuating power as shown in Atm3, provided you have pneumatic pressure of 60 PSI.

 

  13236   Mon Aug 21 21:26:41 2017 gautamSummaryGeneralLoss measurements plan

In case you want to use it, I had profiled the Lightwave NPRO sometime back, and we were even using it as the AUX X laser for a short period of time. 

As for using the AS laser for mode spectroscopy: don't we want to match the beam into the cavity as best as possible, and then use some technique to disturb the input mode (like the dental tooth scraper technique from Chris Mueller's thesis)? 

Johannes and I did an arm scan of the X arm today (arm controlled with ALS, monitoring IR transmission) - only 2 IR FSRs were scanned, but there should be sufficient information in there to extract the modulation depth and mode matching - can we use Kaustubh's/Naomi's code? The Y arm ALS needs to be touched up, so I don't have a Y arm scan yet. Note that to get a good arm scan measurement, the High Gain Thorlabs PD should be used as the transmission PD.

Quote:
 

Week Aug 21 - Aug 27:

  • Update mode-overlap estimates
  • Obtain current DC refl estimates
  • Spatial profile of auxiliary NPRO
  • Fiber setup concept; purchasing
  • CCD software prep work

 

  13237   Mon Aug 21 23:38:55 2017 gautamUpdateALSALS out-of-loop noise

I worked a little bit on the Y arm ALS today. 

  • Started by locking the Y arm to IR with POY, and then ran the dither alignment script to maximize Y arm transmission.
  • Green TRY DC monitor was around 0.16, whereas I have seen ~0.45 when we were doing DRFPMI locking.
  • So I went to the Y end table and tweaked the steering mirrors a little. I was able to get GTRY to ~0.42. I think this can be tweaked a little further but I decided to push on for tonight.
  • The beat amplitude on the network analyzer in the control room is comparable to the X arm beat now.
  • Adjusted the gain of the phase tracker servos, cleared phase history.
  • Looking at the ALS beat noise with the arms locked to IR and the slow ALS temperature control loops ON (see Attachment #1), the current measurements line up quite well with the reference traces.

I am now going to measure the OLTFs of both green PDH loops to check that the overall loop gain is okay, and also check the measurement against EricQ's LISO model of the (modified) AUX green PDH servos. Results to follow.


Some weeks ago, I had moved some of the Green steering optics on the PSL table around, in order to flip some mirror mounts and try and get angles of incidence closer to ~45deg on some of the steering mirrors. As a result of this work, I can see some light on the GTRY CCD when the X green shutter is open. It is unclear if there is also some scattered light on the RFPDs. I will post pictures + a more detailed investigation of the situation on the PSL table later, there are multiple stray green beams on the PSL table which should probably be dumped.


As I was writing this elog, I saw the X green lock drop abruptly. During this time, the X arm stayed locked to the IR, and the Y arm beat on the control room network analyzer did not jump (at least not by an amount visible to the eye). Toggling the X end shutter a few times, the green TEM00 lock was re-acquired, but the beatnote has moved on the control room analyzer by ~40MHz. On Friday evening however, the X green lock held for >1 hour. Need to keep an eye on this.

Attachment 1: ALS_21082017.pdf
  13238   Tue Aug 22 02:19:11 2017 gautamUpdateALSALS OLTFs

Attachment #1 shows the results of my measurements tonight (SR785 data in Attachment #2). Both loops have a UGF of ~10kHz, with ~55 degrees of phase margin.
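For reference, this is roughly how the UGF and phase margin were read off the downloaded data (a minimal sketch; the file name and column layout are placeholders for whatever is in Attachment #2, and the phase convention may need adjusting depending on how the SR785 wraps it):

import numpy as np

# Placeholder file / columns: frequency [Hz], OLTF magnitude [dB], OLTF phase [deg]
freq, mag_db, phase_deg = np.loadtxt("ALS_X_OLTF.txt", unpack=True)

# UGF: first downward 0 dB crossing, linearly interpolated between the bracketing points
i = np.where((mag_db[:-1] > 0) & (mag_db[1:] <= 0))[0][0]
ugf = freq[i] + (freq[i + 1] - freq[i]) * (0 - mag_db[i]) / (mag_db[i + 1] - mag_db[i])

# Phase margin: 180 deg plus the loop phase at the UGF (phase assumed in (-180, 0) near the UGF)
phase_margin = 180.0 + np.interp(ugf, freq, phase_deg)
print(f"UGF ~ {ugf / 1e3:.1f} kHz, phase margin ~ {phase_margin:.0f} deg")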

Excitation was injected via SR560 at the PDH error point, amplitude was 35mV. According to the LED indicators on these boxes, the low frequency boost stages were ON. Gain knob of the X end PDH box was at 6.5, that of the Y end PDH box was at 4.9. I need to check the schematics to interpret these numbers. GV Edit: According to this elog, these numbers mean that the overall gain of the X end PDH box is approx. 25dB, while that of the Y end PDH box is approx. 15dB. I believe the Y end Lightwave NPRO has an actuator discriminant ~5MHz/V, while the X end Innolight is more like 1MHz/V.

Not sure what to make of the X PDH loop measurement being so much noisier than the Y end, I need to think about this.

More detailed analysis to follow.

Quote:

 

I am now going to measure the OLTFs of both green PDH loops to check that the overall loop gain is okay, and also check the measurement against EricQ's LISO model of the (modified) AUX green PDH servos. Results to follow.

 

Attachment 1: ALS_OLTFs.pdf
Attachment 2: ALS_OLTF_Aug2017.zip
  13240   Tue Aug 22 15:40:06 2017 gautamUpdateComputersOld frames accessible again

[jamie, gautam]

I had some trouble getting the daqd processes up and running again using Jamie's instructions.

With Jamie's help however, they are back up and running now. The problem was that the mx infrastructure didn't come back up on its own. So prior to running sudo systemctl restart daqd_*, Jamie ran sudo systemctl start mx. This seems to have done the trick.

c1iscey was still showing red fields on the CDS overview screen so Jamie did a soft reboot. The machine came back up cleanly, so I restarted all the models. But the indicator lights were still red. Apparently the mx processes weren't running on c1iscey. The way to fix this is to run sudo systemctl start mx_stream. Now everything is green.

Now we are going to work on trying the fix Rolf suggested on c1iscex.

Quote:

It turns out the problem was just a bent pin on the SCSI cable, likely from having to stretch things a bit to reach optimus from the RAID unit.

I hooked it up to megatron, and it was automatically recognized and mounted.

I had to turn off the new FB machine and remove it from the rack to be able to access megatron though, since it was just sitting on top. FB needs a rail to sit on!

At a cursory glance, the filesystem appears intact. I have copied over the achived DRFPMI frame files to my user directory for now, and Gautam is going to look into getting those permanently stored on the LDAS copy of 40m frames, so that we can have some redundancy.

Also, during this time, one of the HDDs in the RAID unit failed its SMART tests, so the RAID unit wanted it replaced. There were some spare drives in a little box directly under the unit, so I've installed one and am currently incorporating it back into the RAID.

There are two more backup drives in the box. We're running a RAID 5 configuration, so we can only lose one drive at a time before data is lost.

 

  13242   Tue Aug 22 17:11:15 2017 gautamUpdateComputersc1iscex model restarts

[jamie, gautam]

We tried to implement the fix that Rolf suggested in order to solve (perhaps among other things) the inability of some utilities like dataviewer to open testpoints. The problem isn't wholly solved yet - we can access actual testpoint data (not just zeros, as was the case) using DTT, and if DTT is used to open a testpoint first, then dataviewer, but DV itself can't seem to open testpoints.

Here is what was done (Jamie will correct me if I am mistaken).

  1. Jamie checked out branch 3.4 of the RCG from the SVN.
  2. Jamie recompiled all the models on c1iscex against this version of RCG.
  3. I shutdown ETMX watchdog, then ran rtcds stop all on c1iscex to stop all the models, and then restarted them using rtcds start <model> in the order c1x01, c1scx and c1asx. 
  4. Models came back up cleanly. I then restarted the daqd_dc process on FB1. At this point all indicators on the CDS overview screen were green.
  5. Tried getting testpoint data with DTT and DV for ETMX Oplev Pitch and Yaw IN1 testpoints. Conclusion as above.

So while we are in a better state now, the problem isn't fully solved. 

Comment: seems like there is an in-built timeout for testpoints opened with DTT - if the measurement is inactive for some time (unsure how much exactly but something like 5mins), the testpoint is automatically closed.

  13243   Tue Aug 22 18:36:46 2017 gautamUpdateComputersAll FE models compiled against RCG3.4

After getting the go ahead from Jamie, I recompiled all the FE models against the same version of RCG that we tested on the c1iscex models.

To do so:

  • I did rtcds make and rtcds install for all the models.
  • Then I ssh-ed into the FEs and did rtcds stop all, followed by rtcds start <model> in the order they are listed on the CDS overview MEDM screen (top to bottom).
  • During the compilation process (i.e. rtcds make), for some of the models, I got some compilation warnings. I believe these are related to models that have custom C code blocks in them. Jamie tells me that it is okay to ignore these warnings and that they will be fixed at some point.
  • c1lsc FE crashed when I ran rtcds stop all - had to go and do a manual reboot.
  • Doing so took down the models on c1sus and c1ioo that were running - but these FEs themselves did not have to be rebooted.
  • Once c1lsc came back up, I restarted all the models on the vertex FEs. They all came back online fine.
  • Then I ssh-ed into FB1, and restarted the daqd processes - but c1lsc and c1ioo CDS indicators were still red.
  • Looks like the mx_stream processes weren't started automatically on these two machines. Reasons unknown. Earlier today, the same was observed for c1iscey.
  • I manually restarted the mx_stream processes, at which point all CDS indicator lights became green (see Attachment #1; a sketch of the check/restart sequence is below).
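Since this keeps recurring, here is a sketch of the check/restart sequence referenced above (assumes python3, passwordless ssh/sudo on the front ends, and the same systemctl command we ran manually; the host list should be taken from the CDS overview screen):

import subprocess

# Front-end hosts, as listed on the CDS overview screen (edit as appropriate)
front_ends = ["c1lsc", "c1sus", "c1ioo", "c1iscex", "c1iscey"]

for fe in front_ends:
    # Is mx_stream running on this front end?
    check = subprocess.run(["ssh", fe, "pgrep -x mx_stream"], capture_output=True)
    if check.returncode != 0:
        print(f"{fe}: mx_stream not running, starting it...")
        subprocess.run(["ssh", fe, "sudo systemctl start mx_stream"])
    else:
        print(f"{fe}: mx_stream OK")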

IFO alignment needs to be redone, but at least we now have an (admittedly roundabout) way of getting testpoints. Did a quick check for "nan-s" on the ASC screen, saw none. So I am re-enabling watchdogs for all optics.

GV 23 August 9am: Last night, I re-aligned the TMs for single arm locks. Before the model restarts, I had saved the good alignment on the EPICS sliders, but the gain of x3 on the coil driver filter banks has to be manually turned on at the moment (i.e. the safe.snap file has them off). ALS noise looked good for both arms, so just for fun, I tried transitioning control of both arms to ALS (in the CARM/DARM basis as we do when we lock DRFPMI, using the Transition_IR_ALS.py script), and was successful.

Quote:

[jamie, gautam]

We tried to implement the fix that Rolf suggested in order to solve (perhaps among other things) the inability of some utilities like dataviewer to open testpoints. The problem isn't wholly solved yet - we can access actual testpoint data (not just zeros, as was the case) using DTT, and if DTT is used to open a testpoint first, then dataviewer, but DV itself can't seem to open testpoints.

Here is what was done (Jamie will correct me if I am mistaken).

  1. Jamie checked out branch 3.4 of the RCG from the SVN.
  2. Jamie recompiled all the models on c1iscex against this version of RCG.
  3. I shutdown ETMX watchdog, then ran rtcds stop all on c1iscex to stop all the models, and then restarted them using rtcds start <model> in the order c1x01, c1scx and c1asx. 
  4. Models came back up cleanly. I then restarted the daqd_dc process on FB1. At this point all indicators on the CDS overview screen were green.
  5. Tried getting testpoint data with DTT and DV for ETMX Oplev Pitch and Yaw IN1 testpoints. Conclusion as above.

So while we are in a better state now, the problem isn't fully solved. 

Comment: seems like there is an in-built timeout for testpoints opened with DTT - if the measurement is inactive for some time (unsure how much exactly but something like 5mins), the testpoint is automatically closed.

 

Attachment 1: CDS_Aug22.png
  13246   Wed Aug 23 17:22:36 2017 gautamUpdateALSFiber ALS - reinstalled

I completed the revamp of the box, and re-installed the box on the PSL table today. I think it would be ideal to install this on one of the electronic racks, perhaps 1X2 would be best. We would have to re-route the fibers from the PSL table to 1X2, but I think they have sufficient length, and this way, the whole arrangement is much cleaner.

Did a quick check to make sure I could see beat notes for both arms. I will now attempt to measure the ALS noise with this revamped box, to see if the improved power supply and grounding arrangement, as well as fiber cleaning, has had any effect.

Photos + power budget + plan of action for using this box to characterize the green PDH locking to follow. 

For quick reference: here is the AM/PM measurement done when we re-installed the repaired Innolight NPRO on the new X endtable.

  13248   Thu Aug 24 00:39:47 2017 gautamUpdateLSCDRMI locking attempt

Since the single arm locking and dither alignment seemed to work alright after the CDS overhaul, I decided to try some recycling cavity locking tonight.

  • First, I locked single arms, ran dither alignment servos, and centered all test mass Oplevs. Note: the X arm dither alignment doesn't seem to work if we use the High-Gain Thorlabs PD as the Transmission PD. The BS loops just seem to pick up large offsets and the alignment actually degrades over a couple of minutes. This needs to be investigated.
  • Next, to get good PRM alignment, I manually moved the EPICS sliders till the REFL spot became roughly centered on the CCD screen.
  • Then I tried locking PRMI on carrier using the usual C1IFOConfigure script - the lock was caught within ~30 seconds.
  • The PRCL and MICH dither servo scripts also ran fine.
    • Centered PRM Oplev.
  • Next, I tried enabling the PRC angular feedforward.
    • OAF model does not automatically revert to its safe.snap configuration on model reboot, so I first manually did this such that the correct filter banks were enabled.
    • I was able to turn on the angular feedforward without disturbing the PRMI carrier lock. The angular motion of the POP spot on the CCD monitor was visibly reduced.
  • At this point I decided to try DRMI locking.
    • I centered the beam on the AS PDs with the simple Michelson.
    • Centered the beam on the REFL PDs with PRM aligned and PRC flashing through resonances.
    • Restored SRM alignment by eye with EPICS sliders.
    • Cavity alignment seemed alright - so I tried to lock DRMI with the old settings (i.e. from DRMI 1f locking a couple of months ago). But I had no success.
    • The behaviour of REFL55 (used for SRCL control) has changed dramatically - the analog whitening gain for this PD used to be +18dB, but at this setting, there are frequent ADC overflows. I had to reduce the whitening gain to +6dB to stop the ADC overflows. I also checked to make sure that the whitening setting was "manual" and not triggered.

Why should this have changed? I was just on the AS table and did re-center the beam onto the REFL 55 RFPD, but I had also done this in April/May when I was last doing DRMI locking. But I can't explain the apparent factor of ~4 increase in light level. I think I have some measurements of the light levels at various PDs from April 2017, I will see how the present levels line up.

Of course dataviewer won't cooperate when I am trying to monitor testpoints.

I may be missing something obvious, but I am quitting for tonight, will look into this more tomorrow.


Unrelated to this work: looking at the GTRY spot on the CCD monitor, there seems to be some excess angular motion. Not sure where this is coming from. In the past, this sort of problem has been symptomatic of something going wonky with the Oplev loops. But I took loop measurements for ITMY and ETMY PIT and YAW, they look normal. I will investigate further when I am doing some more ALS work.

  13249   Thu Aug 24 17:36:11 2017 gautamUpdateCDSFSS Slow Python maintenance

A couple of weeks ago, I was trying to modernize the python version of the FSS Slow temperature control loops, when I accidentally ended up deleting it. There was no svn backup. So the old Perl PID script has been running for the last few days.

Today, I checked out the latest version that Andrew and co. have running in the PSL lab. I had to make some important modifications for the script to work for the 40m setup.

  1. The script is conveniently setup in a way that the channels it needs to read from / write to are read in from an .ini file. I renamed all the channels to match the appropriate 40m ones.
  2. We don't have a soft EPICS channel in which to define the setpoint for our PID servo (which is 0). Rather than poke around with slow machine EPICS records, I simply commented out this line in the script and included the hard-coded value of 0. When we modernize to the Acromag era, we can set up an EPICS channel + MEDM slider for the setpoint.
  3. The way the Perl script was set up, the error signal was pre-scaled by a factor of 0.01, supposedly to make the PID gains be of order 1. For consistency, I re-inserted this scaling, which awade and co. had removed (see the sketch after this list).
  4. Modified the FSSslowPy.init file to call the script in accordance with the new syntax:
python FSSSlow.py -i FSSSlowPy.ini
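As referenced in item 3 above, here is a minimal sketch of what the PID loop does. This is not the actual FSSSlow.py: the channel names and gains are illustrative stand-ins for what lives in FSSSlowPy.ini, and it uses pyepics.

import time
import epics  # pyepics

# Illustrative stand-ins - the real channel names and gains live in FSSSlowPy.ini
ERR_CH  = "C1:PSL-FSS_FAST"      # error signal: fast (PZT) feedback monitor
CTRL_CH = "C1:PSL-FSS_SLOWDC"    # actuator: slow laser temperature offset
KP, KI, KD = 0.0, -0.1, 0.0      # PID gains, of order 1 thanks to the 0.01 prescaling
SETPOINT, PRESCALE, DT = 0.0, 0.01, 1.0

integral, prev_err = 0.0, 0.0
while True:
    # hard-coded setpoint of 0, and the historical factor-of-0.01 prescaling
    # (the real script now also checks the IMC transmission before applying feedback)
    err = PRESCALE * (epics.caget(ERR_CH) - SETPOINT)
    integral += err * DT
    deriv = (err - prev_err) / DT
    correction = KP * err + KI * integral + KD * deriv
    # step the slow actuator by the PID correction
    epics.caput(CTRL_CH, epics.caget(CTRL_CH) + correction)
    prev_err = err
    time.sleep(DT)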

Then I stopped the Perl process on megatron by running

sudo initctl stop FSSslow

and started the Python process by running

sudo initctl start FSSslowPy

I have now committed the files FSSSlow.py and FSSSlowPy.ini to the 40m svn.  Things seem to be stable for the last 20 mins or so, let's keep an eye on this though - although we had been running the Python PID loop for some months, this version is a slightly modified one. 

The initctl stuff still isn't very robust - I think both the Autolocker and the FSS slow servos have to be manually restarted if megatron is shutdown/restarted for whatever reason. It doesn't seem to be a problem with the initctl routine itself - looking at the logs, I can see that init is trying to start both processes, but is failing to do so each time. To be investigated. The wiki procedure to restart this process is up to date.

GV Edit 0000 25 Aug 2017: I had to add a line to the script that checks MC transmission before enabling the PID loop. Change has been committed to svn. Now, when the MC loses lock or if the PSL shutter is kept closed for an extended period of time, the temperature loop doesn't rail.

  13252   Fri Aug 25 01:20:52 2017 gautamUpdateLSCDRMI locking attempt

I tried some DRMI locking again tonight, but had no success. Here is the story.

  • I started out by going to the AS table and measuring the light level on the REFL55 photodiode (with PRM aligned and the PRC flashing, but LSC disabled).
    • The Ophir power meter reads 13mW
    • The DC output of the photodiode shows ~500mV on an oscilloscope.
    • Both of these numbers line up well with measurements I made in April/May.
  • Returned to the control room and aligned the IFO for DRMI locking - but LSC servos remained disabled.
    • At the nominal REFL55 whitening level of +18dB, the REFL 55 signals saturated the ADC (confirmed by looking at the traces on dataviewer).
    • But the signals still looked like PDH error signals.
    • Lowering the whitening gain to 6dB makes the PDH error signal horns peak around 20,000 counts.
    • Could this be indicative of problems with either the analog whitening gain switching or the LSC Demod Boards? To be investigated.
  • Tried enabling LSC servos with same settings with which I had success right up till a couple of months ago, but had no success.
    • If it is true that the REFL55 signal is getting amplified because of some gain stage not being switched correctly, I should still have been able to lock the SRC with a lowered loop gain - but even lowering the gain by a factor of 10 had no effect on the locking success rate.

Looks like I will have to embark on the REFL55 LSC electronics investigation. I was able to successfully lock the PRC on carrier and sideband, and the Michelson lock also seems to work fine, all of which seem to point to a hardware problem with the REFL55 signal chain.

I did a quick check by switching the output of the REFL55 demod board to the inputs normally used by AS55 signals on the whitening board. Setting the whitening gain to +18dB for these channels had the same effect - ADC overflow galore. So looks like the whitening board isn't to blame. I will have to check the demod board out.

 

  13253   Fri Aug 25 11:11:26 2017 gautamUpdateGeneralMC1 kicked again

Looks like MC1 got another big kick just under 4 hours ago. None of the other optics show any evidence of a glitch so it seems unlikely that this was some sort of global event. It's been well behaved for ~2weeks now. IMC was unlocked. I manually re-aligned MC1, at which point the autolocker was able to lock the IMC.

Looking at this plot, the LR and UL coils seem to have seen the largest kicks, while UR barely saw it. Not sure what (if anything) to make of this - apparently the optic moved by ~20urad with the UR magnet approximately at the pivot.

Attachment 1: MC1_glitch.png
  13254   Fri Aug 25 15:54:14 2017 gautamUpdateALSFiber ALS noise measurement

[Kira, gautam]

Attachment #1 - Photo of the revamped beat setup. The top panel has to be installed. New features include:

  • Regulated power supply via D1000217.
  • Single power switch for both PDs.
  • Power indicator LED.
  • Chassis ground isolated from all other electronic grounds. For this purpose, I installed all the electronics on a metal plate which is only connected to the chassis via nylon screws. The TO220 package power regulator ICs have been mounted with the TO220 mounting kits that provide a thin piece of plastic that electrically insulates its ground from the chassis ground.
  • PD outputs routed through 20dB coupler on front panel for diagnostic purposes.
  • Fiber routing has been cleaned up a little. I installed a winding fixture I got from Johannes, but perhaps we can install another one of these on top of the existing one to neaten up the fiber layout further.
  • 90-10 light splitter (meant for diagnostic purposes) has been removed because of space constraints. 

Attachment #2 - Power budget inside the box. Some of these FC/APC connectors seem to not offer good coupling between the two fibers. Specifically, the one on the front panel meant to accept the PSL light input fiber seems particularly bad. Right now, the PSL light is entering the box through one of the front panel connectors marked "PSL + X out". I've also indicated the beat amplitude measured with an RF analyzer. Need to do the math now to confirm if these match the expected amplitudes based on the power levels measured.
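For the record, the arithmetic for checking the beat amplitudes against the measured powers is sketched below. All the numbers are placeholders to be filled in from the power budget in Attachment #2 and the fiber PD datasheet (the responsivity, transimpedance, and mode overlap below are assumptions):

import numpy as np

# Placeholder values - replace with the measured powers and PD datasheet numbers
P1, P2 = 1.0e-3, 0.5e-3   # optical power in each beam at the PD [W]
eta = 0.5                 # heterodyne (mode overlap) efficiency, 0..1
R = 0.7                   # PD responsivity at 1064 nm [A/W]
G = 1.0e3                 # PD transimpedance [V/A]
Z = 50.0                  # load impedance [ohm]

# Beat photocurrent amplitude (peak): i_beat = 2 * eta * R * sqrt(P1 * P2)
i_beat = 2 * eta * R * np.sqrt(P1 * P2)
V_pk = G * i_beat
P_rf_dBm = 10 * np.log10((V_pk**2 / (2 * Z)) / 1e-3)
print(f"Expected beat note amplitude: {P_rf_dBm:.1f} dBm into {Z:.0f} ohm")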

Attachment #3 - We repeated the measurement detailed here. The X arm (locked to IR) was used for this test. The "X" delay line electronics were connected to the X green beat PD, while the "Y" delay line electronics were connected to the X IR beat PD. I divided the phase tracker Hz calibration factor by 2 to get IR Hz for the Y arm channels. IR beat was at ~38MHz, green beat was at ~76MHz. The broadband excess noise seen in the previous test is no longer present. Indeed, below ~20Hz, the IR beat seems less noisy. So seems like the cleaning / electronics revamp did some good. 

Further characterization needs to be done, but the results of this test are encouraging. If we are able to get this kind of out of loop ALS noise with the IR beat, perhaps we can avoid having to frequently fine-tune the green beat alignment on the PSL table. It would also be ideal to mount this whole 1U setup in an electronics rack instead of leaving it on the PSL table.

Quote:

Photos + power budget + plan of action for using this box to characterize the green PDH locking to follow. 

GV Edit: I've added better photos to the 40m Google Photos page. I've also started a wiki page for this box / the proposed IR ALS  system. For the moment, all that is there is the datasheet to the Fiber Couplers used, I will populate this more as I further characterize the setup.

Attachment 1: IMG_7497.JPG
Attachment 2: FOL_schematic.pdf
Attachment 3: 20170825_IR_ALS.pdf
  13262   Mon Aug 28 16:20:00 2017 gautamUpdateCDS40m files backup situation

This elog is meant to summarize the current backup situation of critical 40m files.

What are the critical filesystems? I've also indicated the size of these disks and the volume currently used, and the current backup situation. 

Name | Disk usage | Description / remarks | Current backup status

FB1 root filesystem | 1.7TB / 2TB
  • FB1 is the machine that hosts the diskless root for the front end machines
  • Additionally, it runs the daqd processes which write data from realtime models into frame files
  Backup status: Not backed up

/frames | up to 24TB
  • This is where the frame files are written to
  • Need to setup a wiper script that periodically clears older data so that the disk doesn't overflow
  Backup status: Not backed up. LDAS pulls files from nodus daily via rsync, so there's no cron job for us to manage - we just allow incoming rsync.

Shared user area | 1.6TB / 2TB
  • /home/cds on chiara
  • This is exported over NFS to 40m workstations, FB1 etc.
  • Contains user directories, scripts, realtime models etc.
  Backup status: Local backup on /media/40mBackup on chiara via daily cronjob; remote backup to ldas-cit.ligo.caltech.edu::40m/cvs via daily cronjob on nodus

Chiara root filesystem | 11GB / 440GB
  • This is the root filesystem for chiara
  • Contains nameserver stuff for the martian network, responsible for rsyncing /home/cds
  Backup status: Not backed up

Megatron root filesystem | 39GB / 130GB
  • Boot disk for megatron, which is our scripts machine
  • Runs MC autolocker, FSS loops etc.
  • Also is the nds server for facilitating data access from outside the martian network
  Backup status: Not backed up

Nodus root filesystem | 77GB / 355GB
  • This is the boot disk for our gateway machine
  • Hosts Elog, svn, wikis
  • Supposed to be responsible for sending email alerts for NFS disk usage and vacuum system N2 pressure
  Backup status: Not backed up

JETSTOR RAID Array | 12TB / 13TB
  • Old /frames
  • Archived frames from DRFPMI locks
  • Long term trends
  Backup status: Currently mounted on Megatron, not backed up

Then there is Optimus, but I don't think there is anything critical on it. 

So, based on my understanding, we need to back up a whole bunch of stuff, particularly the boot disks and root filesystems for Chiara, Megatron and Nodus. We should also test that the backups we make are useful (i.e. we can recover current operating state in the event of a disk failure).

Please edit this elog if I have made a mistake. I also don't have any idea about whether there is any sort of backup for the slow computing system code.

  13265   Tue Aug 29 01:52:22 2017 gautamUpdateSUSTest mass actuator calibration

[ericq, gautam]

Tonight, we decided to double-check the POX counts-to-meters conversion.

It is unclear when this was last done, and since I modified the coil driver electronics for the ITMs and BS recently, I figured it would be useful to get this calibration done. The primary motivation was to see if we could resolve the discrepancy between the current ALS noise (using POX as a sensor) compared to the Izumi et. al. plot.

Because we are planning to change the coil driver electronics further soon anyways, we decided to do the calibration at a single frequency for tonight. For future reference, the extension of this method to calibrate the actuator over a wider range of frequencies is here. The procedure followed, and the relevant numbers from tonight, are as follows.

Procedure:

  1. Set dark offsets on all DCPDs and LSC PDs.
  2. Look at the free swinging Michelson signal on ASDC.
    • For tonight's test, ASDC was derived from the AS55 photodiode.
    • The AS110 photodiode actually has more light on it, but we think that the ADC that the DCPD board is interfaced to is running on 0-2V rather than 0-10V, as the signal seemed to saturate around 2000 counts. It is unclear whether the actual photodiode is saturating, to be investigated.
    • So we decided to use ASDC from AS55 photodiode with 15dB whitening gain.
    • There is also some issue with the whitening filter (not whitening gain) on ASDC - engaging the whitening shifts the DC offset. This has to be investigated while we get stuck into the LSC electronics.
  3. Look at the peak-to-peak swing of ASDC. Use the algebraic expression for the reflected power from a Michelson interferometer to calibrate the ASDC slope at the Michelson half-fringe (a worked sketch is after this list). For tonight's test, ASDC_max = 1026 counts, ASDC_min = 2 counts.
  4. Lock the Michelson at half-fringe, with ASDC as the error signal.
    • Zero out the MICH elements in the RFPD input matrix.
    • Set the matrix element from ASDC to MICH in the DCPD LSC input matrix to 1.
    • The servo gain used was +0.005 on the MICH_A servo path.
    • A low-frequency boost was turned on.
  5. Use the sensing matrix infrastructure to drive a line in the optic of interest.
    • Tonight, we looked at ITMX and ITMY.
    • The line was driven at 311.1Hz, and the amplitude was 300 counts.
    • Download 60secs of ASDC data, demodulate at the driven frequency to find the peak height in counts, and using the slope of ASDC (in cts/m) at the Michelson half-fringe, calculate the actuator gain in m/cts.
    • ITMY: 2.55e-9 / f^2 m/count
    • ITMX: 2.65e-9 / f^2 m/count
    • These numbers kind of make sense - the previous numbers were ~5nm/f^2 /ct, but I removed an analog gain of x3 in this path. Presumably there has been some change in the N/A conversion factor - perhaps because of a change in the interaction between the optics' face magnets and the static magnetic field in the OSEMs?
  6. Lock the arms with POX/POY, and drive the newly calibrated ITMs.
    • So we know how many meters we are driving the ITMs by.
    • Looking at POX/POY, we can calibrate these into meters/count.
    • Both POX and POY were whitened.
    • POX whitening gain = +30dB, POY whitening gain = +18dB.
    • ITMX and ITMY were driven at 311.1Hz, with amplitude = 2counts.
    • Download 60 secs of data, demodulate at the drive frequency to find the peak height, and use the known ITM actuator gains to calibrate POX and POY.
    • POX: 7.34e-13 m / count (approx. 5 times less than the number in the Foton filter bank in the C1:CAL-CINV model).
    • POY: 1.325e-13 m / count
    • We did not optimize the demod phases for POX/POY tonight. 
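As referenced in the list above, here is a worked sketch of the arithmetic in steps 3-5. It is illustrative only: the file name and sampling rate are placeholders, and the factor-of-2 / sign conventions for the Michelson fringe should be double-checked against the actual optical configuration.

import numpy as np

lam = 1064e-9                       # laser wavelength [m]
ASDC_max, ASDC_min = 1026.0, 2.0    # counts, from the free-swinging Michelson
f_drive, A_drive = 311.1, 300.0     # drive frequency [Hz] and amplitude [counts]
fs = 16384.0                        # sampling rate of the downloaded ASDC data [Hz] (placeholder)

# Step 3: ASDC slope at the half-fringe.
# For a single-ITM displacement x, P(x) = Pmin + (Pmax - Pmin) * sin^2(2*pi*x/lam),
# so at the half-fringe dP/dx = (Pmax - Pmin) * 2*pi/lam  [counts/m].
slope = (ASDC_max - ASDC_min) * 2 * np.pi / lam
print(f"ASDC slope at half-fringe: {slope:.3g} counts/m")

# Step 5: demodulate 60 s of ASDC data at the drive frequency to get the line height.
asdc = np.loadtxt("asdc_60s.txt")                    # placeholder file of raw ASDC counts
t = np.arange(len(asdc)) / fs
A_line = 2 * np.abs(np.mean(asdc * np.exp(-2j * np.pi * f_drive * t)))   # line height [counts]

# Actuator gain at the drive frequency, quoted as K/f^2 (free mass above the pendulum resonance)
gain_at_f = A_line / (slope * A_drive)               # [m/count] at 311.1 Hz
print(f"Actuator gain: {gain_at_f * f_drive**2:.3g} / f^2  m/count")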

Once these calibrations were updated, we decided to control the arms with ALS, and look at the POX spectrum. Y-arm ALS wasn't so stellar tonight, especially at low frequencies. I can see the GTRY spot moving on the CCD monitor, so something is wonky. To be investigated. But the X arm ALS noise looked pretty good.

Seems like updating the calibration did the job; see the attached comparison plot.

Attachment 1: ALS_comparison.pdf
  13266   Tue Aug 29 02:08:39 2017 gautamUpdateALSFiber ALS noise measurement

I was having a chat with EricQ about this today, just noting some points from our discussion down here so that I remember to look into this tomorrow.

  • I believe that currently, the channels C1:ALS-BEATX_FINE_PHASE_OUT_HZ_DQ and the Y arm analog read out the frequency of the green beat, in Hz.
  • In the comparison I plotted, I WRONGLY divided the spectrum of the IR beat by 2, instead of multiplying it by 2, which is what should actually be done for an apples-to-apples comparison (see the sketch after this list).
  • The deeper question is, what should this channel actually readout?
  • Looking at my codes from past arm scans etc, I see that I am dividing the downloaded data by 2 in order to convert the X-axis of these scans to "IR Hz". But this should really be all we care about.
  • So I think I will have to re-do the cts-to-Hz calibration in the ALS models. It should be possible to do ~10FSR scans with the IR beat, and then we can use the sideband resonances (presumably the sideband frequencies are known with better precision than the arm length, and hence the FSR) to calibrate the phase tracker.
  • I don't think this changes the fact that the Fiber ALS situation has been improved - but I will have to repeat the measurement to be sure. The improvement may not be as stellar as I tried to sell in my previous elog.
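The factor-of-2 point in code form (a trivial sketch; the file names are placeholders), assuming the usual relation that the doubled green (532 nm) beat fluctuates by twice as much as the corresponding IR (1064 nm) beat:

import numpy as np

# Placeholder files, each with columns: frequency [Hz], ASD [Hz/rtHz]
f_g, asd_green = np.loadtxt("green_beat_asd.txt", unpack=True)   # calibrated in "green Hz"
f_ir, asd_ir = np.loadtxt("ir_beat_asd.txt", unpack=True)        # calibrated in "IR Hz"

# df_green = 2 * df_IR for the same underlying laser/arm noise, so to compare apples to
# apples the IR spectrum must be MULTIPLIED by 2 (or the green spectrum divided by 2):
asd_ir_in_green_hz = 2 * asd_ir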

    Other thoughts: 

  • Can we make use of the Jetstor raid array for some kind of consolidated 40m CDS backup system? Once we've gotten everything of interest out of it...

  13267   Tue Aug 29 15:04:59 2017 gautamUpdateSUSETMY Oplev PIT loop gain changed

Last night, while we were working on the ALS, I noticed the GTRY spot moving around (in PITCH) on the CCD monitor in the control room at ~1-2Hz. The operating condition was that the arm was locked to the IR, and the PSL green shutter was closed, so that only the arm transmissions were visible on the CCD screens. There was no such noticeable movement of the GTRX spot. When looking at the out-of-loop ALS noise in this configuration (but now with the PSL green shutter open of course), the Y arm ALS noise at low frequencies was much worse than the X arm.

Today, I looked into this a little more. I first checked that the Y-endtable enclosure was closed off as usual (as I had done some tweaking to the green input pointing some days ago). There are various green ghost beams on the Y-endtable. When time permits, we should make an effort to cleanly dump these. But the enclosure was closed as usual.

Then I looked at the in-loop Oplev error signal spectra for the ITMY and ETMY Oplev loops. There was high coherence between ETMYP Oplev error signal and GTRY. So I took a loop transfer function measurement - the upper UGF was around 3.5Hz. I increased the loop gain such that the upper UGF was around 4.5Hz, with phase margin ~30degrees. Doing so visibly reduced the angular movement of the GTRY spot on the CCD. Attachment #1 shows the Oplev loop TF after the gain increase, while Attachment #2 compares the GTRX and GTRY spectra (DC value is approximately the same for both, around 0.4). GTRY still seems a bit noisier at low frequencies, but the out-of-loop ALS noise for the Y arm now lines up much more closely with its reference trace from a known good time. 
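For the record, the gain bump was set by the usual back-of-the-envelope scaling (a sketch, assuming the open-loop magnitude falls roughly as 1/f around the UGF; the measured loop shape in Attachment #1 is what actually matters):

import numpy as np

f_ugf_old, f_ugf_new = 3.5, 4.5   # Hz: measured upper UGF, and the target after the gain bump

# If |G(f)| ~ 1/f near the UGF, scaling the overall loop gain by k moves the UGF by ~k
gain_scale = f_ugf_new / f_ugf_old
print(f"Gain increase ~ x{gain_scale:.2f} ({20 * np.log10(gain_scale):.1f} dB)")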

Quote:
 

Y-arm ALS wasn't so stellar tonight, especially at low frequencies. I can see the GTRY spot moving on the CCD monitor, so something is wonky. To be investigated.

 

Attachment 1: ETMY_OLPIT.pdf
Attachment 2: GTR_comparison.pdf
  13273   Wed Aug 30 10:54:26 2017 gautamUpdateCDSslow machine bootfest

MC autolocker and FSS loops were stuck because c1psl was unresponsive. I rebooted it and did a burtrestore to enable PSL locking. Then the IMC locked fine.

c1susaux and c1iscaux were also unresponsive so I keyed those crates as well, after taking the usual steps to avoid ITMX getting stuck - but it still got stuck when the Sat. Box. connectors were reconnected after the reboot, so I had to shake it loose with bias slider jiggling. This is annoying and also not very robust. I am afraid we are going to knock the ITMX magnets off at some point. Is this problem indicative of the fact that the ITMX magnets were somehow glued on in a skewed way? Or can we make the situation better by just tweaking the OSEM-holding fixtures on the cage?
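For reference, the "bias slider jiggling" amounts to something like the sketch below (pyepics, not the actual unsticking procedure; the channel names and offset size are illustrative and should be sanity-checked before use, since large bias excursions can kick the optic around):

import time
import epics  # pyepics

# Illustrative alignment bias channels for ITMX - check the actual names before use
PIT, YAW = "C1:SUS-ITMX_PIT_COMM", "C1:SUS-ITMX_YAW_COMM"

pit0, yaw0 = epics.caget(PIT), epics.caget(YAW)
try:
    for i in range(10):
        # apply small alternating offsets about the nominal bias to shake the optic loose
        delta = 0.1 * (-1) ** i
        epics.caput(PIT, pit0 + delta)
        epics.caput(YAW, yaw0 - delta)
        time.sleep(2)
finally:
    # always restore the nominal alignment
    epics.caput(PIT, pit0)
    epics.caput(YAW, yaw0)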

In any case, I've started listing stuff down here for things we may want to do when we vent next.

 

  13275   Wed Aug 30 15:00:06 2017 gautamUpdateGeneralEdgeswitch fiber swap

A couple of minutes ago, Larry W swapped the fibers to our 40m Edgeswitch (BROCADE FWS 648G) to a faster connection. This is the switch to which our gateway machine, NODUS, is connected. The actual swap itself happened at the core router in Bridge, and took only a few seconds. After the switch, I double checked that I was able to ssh into nodus from my laptop, and Larry informed me that everything is working as expected on his end.

Larry also tells us that the other edgeswitch at the 40m (Foundry Networks), to which most of our GC network machines are connected, is a 100MBPS switch, and so we should re-route the connections from this switch to the BROCADE switch at our convenience to take advantage of the faster connection.

  13276   Wed Aug 30 19:49:33 2017 gautamUpdateLSCREFL55 demod board debugging

Summary:

Today I tried debugging the mysterious increase in REFL55 signal levels in the DRMI configuration. I focused on the demod board, because last week, I had tried routing these signals through different channels on the whitening board, and saw the same effect. 

Based on my tests, everything on the Demod board seems to work as expected. I need to think more about what else could be happening here - specifically do a more direct test on the whitening board.

Details:

  • The demod board is a modified D990511 (marked up schematic + high-res photo to follow).
  • Initially, I tried probing the LO signal levels at various points with the board in the eurocrate itself, with the help of an extender card.
  • But this wasn't very convenient, so I pulled the board out to the office area for more testing.
  • The 55MHz LO signal going into the board is ~0dBm (measured with Agilent network analyzer)
  • I used the active probe to check the LO levels at various points along the signal chain, which mostly consists of attenuators, ERA-5SM amplifiers, and some splitters/phase rotators.
  • Everything seemed consistent with the expected levels based on "typical" numbers for gains and insertion losses cited in the datasheets for these devices.
  • I couldn't directly measure the level at the LO input to the mixer, but measuring the input to the ERA-5SM immediately before the mixer, barring problems with this amplifier, the LO input of the mixer is being driven at >17dBm which is what it wants.
  • Next, I decided to check the gain, gain imbalance and orthogonality of the demodulation.
  • For this purpose, I restored the board to the Eurocrate, reconnected the LO input to the board, and used a second Marconi at a slightly offset frequency to drive the PD input at ~0dBm.
  • Attachment #1 - The measured outputs look pretty balanced and orthogonal (a sketch of how the balance/orthogonality numbers can be extracted is after this list). The gain is consistent with an earlier measurement I made some months ago, when things were "normal". More bullets added after Rana's questions:
    • 300 MHz bandwidth oscilloscope used to acquire the data
    • I and Q outputs were from the daughter board
    • Data was acquired via ethernet data download utility
    • 20 MHz low-pass filter turned on on the Oscilloscope while downloading the data
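A sketch of how the gain balance and orthogonality can be pulled out of the downloaded scope traces (the file name, column layout and offset frequency below are placeholders; the I and Q outputs are assumed to be sinusoids at the offset frequency between the two Marconis):

import numpy as np

# Placeholder: scope data with columns time [s], I output [V], Q output [V]
t, i_sig, q_sig = np.loadtxt("refl55_IQ_scope.csv", delimiter=",", unpack=True)
f_beat = 100.0   # placeholder: offset frequency between the two Marconis [Hz]

# Complex demodulation of each output at the beat frequency
lo = np.exp(-2j * np.pi * f_beat * t)
zi = 2 * np.mean(i_sig * lo)
zq = 2 * np.mean(q_sig * lo)

gain_imbalance_db = 20 * np.log10(np.abs(zi) / np.abs(zq))
relative_phase_deg = np.degrees(np.angle(zi / zq))   # ideally +/- 90 deg for orthogonal I/Q
print(f"I/Q gain imbalance: {gain_imbalance_db:.2f} dB, relative phase: {relative_phase_deg:.1f} deg")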
Quote:

I did a quick check by switching the output of the REFL55 demod board to the inputs normally used by AS55 signals on the whitening board. Setting the whitening gain to +18dB for these channels had the same effect - ADC overflow galore. So looks like the whitening board isn't to blame. I will have to check the demod board out.

 


All connections have been restored untill further debugging later in the evening.

Attachment 1: REFL55_demod_check.pdf
  13280   Thu Aug 31 00:52:52 2017 gautamUpdateLSCREFL55 whitening board debugging

[rana,gautam]

We did an ingenious checkup of the whitening board tonight.

  • The board is D990694
  • We made use of a tip-tilt DAC channel for this test (specifically TT1 UL, which is channel 1 on the AI board). We disconnected the cable going from the AI board to the TT coil driver board.
    • as opposed to using a function generator to drive the whitening filter, this approach allows us to not have to worry the changing offsets as we switch the whitening gain.
    • By using the CDS system to generate the signal and also demodulate it, we also don't have to worry about the drive and demod frequencies falling out of sync with each other.
  • The test was done by injecting a low frequency (75.13 Hz, amplitude=0.1) excitation to this DAC channel, and using the LSC sensing matrix infrastructure to demodulate REFL55 I and Q at this frequency. Demod phases in these servos were adjusted such that the Q phase demodulated signal was minimized.
  • An excitation was injected using awggui into TT1 UL exc channel.
  • We then stepped the whitening gains for REFL55_I and REFL55_Q in 3dB steps, waiting 5 seconds for each step. Syntax is z step -s 5 C1:LSC-REFL55_I_WhiteGain +1.0,15 C1:LSC-REFL55_Q_WhiteGain +1.0,15
  • Attachment #1 suggests that the whitening filter board is working as expected (each step is indeed 3dB and all steps are equal to the eye; a numerical check of the step size is sketched after this list).
  • Data + script used to generate this plot is in Attachment #2.
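The numerical check of the step size is trivial (sketch below; the amplitudes are placeholder values standing in for the demodulated REFL55 line height averaged over each 5 s gain step, extracted from the data in Attachment #2):

import numpy as np

# Placeholder: demodulated line amplitude averaged over each 5 s whitening-gain step
amps = np.array([1.00, 1.41, 2.00, 2.83, 3.99])   # arbitrary units

steps_db = 20 * np.log10(amps[1:] / amps[:-1])
print(steps_db)           # each entry should be ~3 dB if the gain switching works
print(np.std(steps_db))   # the spread quantifies "all steps equal to the eye"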

I've restored all connections that we messed with at the LSC rack to their original positions.

The TT alignment seems to be drifting around more than usual after we disconnected one of the channels - when I came in this afternoon, the spot on the AS camera had drifted by ~1 spot diameter, so I had to manually re-align TT1.

Quote:
 

Based on my tests, everything on the Demod board seems to work as expected. I need to think more about what else could be happening here - specifically do a more direct test on the whitening board.

Attachment 1: REFL55_whtCheck.pdf
Attachment 2: REFL55_whtChk.tar.gz
  13281   Thu Aug 31 03:31:15 2017 gautamUpdateLSCDRMI re-locked!

After our Demod/Whitening electronics investigations suggested nothing obviously wrong, I decided to give DRMI locking another go tonight.

Surprisingly, there was no evidence of REFL55 behaving weirdly tonight, and I was able to easily lock the DRMI on 1f error signals using the recipe I've been using in the last few months.

Not sure what to make of all this.

I got in a ~15 minute lock, but I wasn't prepared to do any sort of characterization/ sensing / attempt to turn on coil-dewhitening, and I'm too tired to try again tonight. I was however able to whiten the error signals, as I have been able to do in the past. There is a ~45Hz bump in MICH that I haven't seen in the past.

I'll try and do some characterization tomorrow eve, but it's encouraging to at least get back to the pre-FB-failure state of locking.

Attachment 1: DRMI_1f.png
Attachment 2: DRMI_relocked.pdf
  13282   Thu Aug 31 18:36:23 2017 gautamUpdateCDSrevisiting Acromag

Current status:

  • There is a single Acromag ADC unit installed in 1X4
  • It is presently hooked up to the PSL NPRO diagnostic connector channels
  • I had (re)-started the acquisiton of these channels on August 16 - but for reasons unknown, the tmux session that was supposed to be running the EPICS server on megatron seems to have died on August 22 (judging by the trend plot of these channels, see Attachment #1)
  • I had not set up an upstart job that restarts the server automatically in such an event. I manually restarted it for now, following the same procedure as linked in my previous elog.
  • While I was at it, I also took the opportunity to edit the Acromag channel names to something more appropriate - all channels previously prefixed with C1:ACRO- have now been prefixed with C1:PSL-

Plan of action:

  1. Hardware - we have, in the lab, in addition to the installed ADC unit
    • 3x 8 channel differential input ADC units
    • 2x 8 channel differential output DAC units
    • 1x 16 channel BIO unit
    • 2U chassis + connectors + breakout boards + other misc hardware that I think Johannes and Lydia procured with the original plan to replace the EX slow controls.
    • Some relevant elogs: Panel designs, breakout design, sketch for proposed layout, preliminary channel list.
      So on the hardware side, it would seem that we have everything we need to go ahead with replacing the EX slow controls with an Acromag system, although Johannes probably knows more about our state of readiness from a hardware PoV.
  2. Software
    • We probably want to get a dedicated machine that will handle the EPICS channel serving for the Acromag system
    • Have to figure out the networking arrangement for such a machine
    • Have to figure out how to set up the EPICS server process in such a way that if it drops for whatever reason, it is automatically restarted (a rough watchdog sketch follows below)
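
As a straw man for that last bullet, a simple polling watchdog would do the job; the PV name and restart command below are placeholders (whatever channel the Acromag IOC serves, and however the server ends up being launched), and in the long run something like procServ or a systemd unit is probably the cleaner answer.

import subprocess
import time
import epics   # pyepics

TEST_PV = "C1:PSL-MISC_DIAG_MON"                               # placeholder: any channel served by the Acromag IOC
RESTART_CMD = ["/bin/bash", "/path/to/start_acromag_ioc.sh"]   # placeholder restart script

while True:
    if epics.caget(TEST_PV, timeout=2.0) is None:   # caget returns None if the IOC is unreachable
        subprocess.call(RESTART_CMD)
        print(time.ctime(), "- EPICS server unreachable, restarted the IOC")
    time.sleep(60)                                  # poll once a minute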

 

Attachment 1: Acromag_EPICS.png
Acromag_EPICS.png
  13283   Thu Aug 31 21:40:24 2017 gautamUpdateGeneralMC1 kicked again

There was a pretty large glitch in MC1 about an hour ago. The misalignment was so large that the autolocker wasn't able to lock the IMC. I manually re-aligned MC1 using the bias sliders, and now the IMC locks fine. Attached is a 90 second plot of 2K data from the OSEMs showing the glitch. Judging from the wall StripTool, the IMC was well behaved for ~4 hours before this glitch, and the WFS control signals show no evidence of any misalignment building up.

Attachment 1: MC1_glitch.png
MC1_glitch.png
  13286   Fri Sep 1 16:27:39 2017 gautamUpdateSUSMC1 glitching

I re-enabled the MC SUS damping and IMC locking for some IFO work just now.

Quote:

MC1, MC2 and MC3 damping turned off to see glitching action at 9:57am

 

  13287   Fri Sep 1 16:55:27 2017 gautamUpdateComputersTestpoints now accessible again

Thanks to Jonathan Hanks, it appears we can now access test-points again using dataviewer.

I haven't done an exhaustive check just yet, but I have loaded a few testpoints in dataviewer and run a script that uses testpoint channels (specifically the ALS phase tracker UGF setting script), and all seems good.

So if I remember correctly, the major CDS fix now required is to solve the model unloading issue.

Thanks to Jamie/Jonathan Hanks/KT for getting us back to this point! Here are the details:

After reading logs and code, it was a simple daqdrc config change.

The daqdrc should read something like this:

...
set master_config=".../master";
configure channels begin end;
tpconfig ".../testpoint.par";
...


What had happened was tpconfig was put before the configure channels
begin end.  So when daqd_rcv went to configure its test points it did
not have the channel list configured and could not match test points to
the right model & machine.  Dave and I suspect that this is so that it
can do a request directly to the correct front end instead of a general
broadcast to all awgtpman instances.

Simply reordering the config fixes it.

I tested by opening a test point in dataviewer and verifying that
testpoints had opened/closed by using diag -l.  Xmgr/grace didn't seem
to be able to keep up with the test point data over a remote connection.

You can find this in the logs by looking for entries like the following
while the daqd is starting up.  When we looked we saw that there was an
entry for every model.

Unable to find GDS node 35 system c1daf in INI fiels
  13288   Fri Sep 1 19:15:40 2017 gautamUpdateALSFiber ALS noise measurement

Summary:

I did some work today to see if I could use the IR beat for ALS control. Initial tests were encouraging.

I will now embark on the noise budgeting.

Details:

  • For this test, I used the X arm
  • I hooked up the X-arm + PSL IR beat to the X-arm DFD channel, and used the Y-arm DFD channels to simultaneously monitor the X-arm green beat.
  • I then transitioned to ALS control and used POX as an out-of-loop sensor for the ALS noise.
  • Attachment #1 shows a comparison of the measurements. In red is the IR beat, while the green traces are from the test EricQ and I did a couple of nights ago using the green beat.
  • I also wanted to do some arm cavity scans with the arm under ALS control with the IR beat - but was unsuccessful. The motivation was to fix the ALS model counts->Hz calibration factors.
  • I did manage to do a 10 FSR scan using the green beatnote (the FSR arithmetic is sketched below); however, towards the end of this scan, the green beat frequency (read off the control room analyzer) was ~140 MHz, which I believe is outside (or at least at the edge of) the bandwidth of the green BBPDs. The fiber-coupled IR beat photodiodes have a much larger (1 GHz) spec'd bandwidth.
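
For reference, the frequency-ruler arithmetic behind such a scan (the ~38 m arm length used here is only an approximation, so the numbers are indicative):

\nu_{\mathrm{FSR}} = \frac{c}{2L} \approx \frac{3\times 10^{8}\,\mathrm{m/s}}{2\times 38\,\mathrm{m}} \approx 3.9\,\mathrm{MHz}

A 10 FSR scan therefore sweeps the IR beat by ~39 MHz (and roughly twice that in the green beat, since the doubled frequencies beat against each other), and the counts->Hz factor is then just the known frequency span divided by the corresponding change in the phase tracker output in counts.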

I am leaving the green beat electronics on the PSL table in the switched state for further testing...

 

Attachment 1: IR_ALS_noise.pdf
IR_ALS_noise.pdf
  13289   Mon Sep 4 16:30:06 2017 gautamUpdateLSCOplev loop tweaking

Now that the DRMI locking seems to be repeatable again, I want to see if I can improve the measured MICH noise. Recall that the two dominant sources of noise were

  1. BS Oplev loop A2L - this was the main noise between 30-60Hz.
  2. DAC noise - this dominated between ~60-300Hz, since we were operating with the de-whitening filters off.

In preparation for some locking attempts today evening, I did the following:

  1. Added steeper elliptic roll-off filters for the ITMX and ITMY Oplevs (a design sketch along these lines follows after this list). This is necessary to allow the de-whitening filters to be turned on without railing the DAC.
  2. Modified the BS Oplev loop to also have steeper high-frequency (>30Hz) roll off. The roll-off between 15-30Hz is slightly less steep as a result of this change.
  3. Measured all Oplev loop TFs - UGFs are between 4 Hz and 5 Hz, phase margin is ~30degrees. I did not do any systematic optimization of this for today.
  4. Went into the Foton filter banks for all the coil output filters, and modified the "Output" settings to be on "Input crossing", with a "Tolerance" of 10 and a "Timeout" of 3 seconds. These settings are to facilitate smooth transition between the two signal paths (without and with coil-dewhitening). The parameters chosen were arbitrary and not optimized in any systematic manner.
  5. After making the above changes, I tried engaging the de-whitening filters on ITMX, ITMY and BS with the arms locked. In the past, I was unable to do this because of a number of issues - Oplev loop shapes and Foton settings among them. But today the switching was smooth, and the single-arm locks weren't disturbed when I engaged the coil de-whitening.
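
The snippet below is only meant to show the flavor of the elliptic roll-off design referred to in item 1 - the order, ripple, corner frequency and model rate are all illustrative rather than the values loaded into Foton - but it is a quick way to eyeball a candidate shape in scipy before transcribing it into the Oplev filter banks.

import numpy as np
from scipy import signal

fs = 16384.0    # assumed front-end model rate
fc = 30.0       # illustrative corner frequency for the steeper Oplev roll-off
# 4th-order elliptic low-pass: 1 dB passband ripple, 40 dB stopband attenuation, SOS form
sos = signal.ellip(4, 1, 40, fc / (fs / 2), btype='lowpass', output='sos')
w, h = signal.sosfreqz(sos, worN=4096)
f = w * fs / (2.0 * np.pi)    # frequency axis in Hz, for checking the roll-off against the loop requirements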

Hopefully, I can successfully engage a similar transition tonight with the DRMI locked. The main difference compared to this daytime test is going to be that the MICH control signal is also going to be routed to the BS.

Tasks for tonight, if all goes well:

  1. Lock DRMI.
  2. Use UGF servos to set the overall loop gains for DRMI DoFs.
  3. Reduce PRCL->MICH and SRCL->MICH coupling.
  4. Measure loop shapes of all DRMI DoFs.
  5. Make sensing matrix measurement.
  6. Engage coil-dewhitening, download data, make NB.

Unrelated to this work: the PMC was locked near the upper rail of the PZT, so I re-locked it closer to the middle of the range.

Quote:

Surprisingly, there was no evidence of REFL55 behaving weirdly tonight, and I was able to easily lock the DRMI on 1f error signals using the recipe I've been using in the last few months.

  13291   Tue Sep 5 02:07:49 2017 gautamUpdateLSCLow Noise DRMI attempt

Summary:

Tonight, I was able to lock the DRMI, turn on the whitening filters for the sensing PDs, and also turn on the coil de-whitening filters for ITMX, ITMY and BS. However, I didn't see the expected improvement in the MICH spectrum between ~50-300 Hz. Sad.

Details:

I basically went through the list of tasks I made in the previous elog. Some notes:

  • The UGF servos suggested that I had to lower the SRCL gain. I lowered it from -0.055 to -0.025. OLTF measurement using In1/In2 method suggested UGF ~120Hz. I don't know why this should be. Plot to be uploaded later.
  • Since we aren't actuating on the ITMs, I was able to leave their coils de-whitened all the time.
  • For the BS, it was trickier - I had to play around a little with the "Tolerance" setting in Foton while looking at transients (using DTT, not a scope for now) while switching the filters.
  • This transition isn't so robust yet - but eventually I found a setting that worked, and I was able to successfully turn on the de-whitening thrice tonight (but also failed about the same number of times). [GV Oct 6 2017: Remember that the PD whitening has to be turned on for this transition to be successful - otherwise the RMS from the high frequencies saturates the DAC.]
  • The locks were pretty stable. One was ~10mins, one was ~15mins, and I broke both deliberately because I was out of ideas as to why the part of the MICH error signal spectrum that I thought was due to DAC noise didn't improve.
  • I've made a bunch of shell scripts to help with the various switchings - but now that I think of it, I should make these python scripts (a rough skeleton follows below).
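
Something along these lines is what I have in mind for the python versions. The channel names below are made up for illustration (the real switching goes through the filter-module switch bits and the BIO outputs), so treat this as a skeleton rather than a working script.

import time
import epics   # pyepics

def engage_dewhitening(optic, coils=("UL", "UR", "LL", "LR"), dwell=0.5):
    """Skeleton for the analog de-whitening hand-off on one optic.
    NB: the *_FM9_SWITCH channel names are hypothetical, not the real EPICS records."""
    # PD whitening must already be on before this is called (see the note above)
    for c in coils:
        chan = "C1:SUS-%s_%sCOIL_FM9_SWITCH" % (optic, c)   # hypothetical channel name
        epics.caput(chan, 0)     # FM9 off -> analog de-whitened path engaged
        time.sleep(dwell)        # stagger the coils so any transient is visible in DTT

# engage_dewhitening("BS")       # example usage - don't run blindly, the channel names are placeholders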

Attachment #1: Comparison of MICH_ERR with and without the BS de-whitening. Note that the two ITMs have their coils de-whitened in both sets of traces.

Attachment #2: Spectra of MICH output and one of the BS coil outputs in both states. The DAC RMS increases by ~30x when the de-whitening is engaged, but is still well within limits.

So it looks like the switching of paths is happening correctly. The "CDS BIO STATUS" MEDM screen also shows the appropriate bits toggling when I turn the de-whitening on/off. There is no broadband coherence with MCF between 50-300 Hz so it seems unlikely that this could be frequency noise.

Clearly I am missing something. But anyways I have a good amount of data, may be useful to put together the post CDS/electronics modification DRMI noise budget. More analysis to follow.

 

Attachment 1: MICH_err_comp.pdf
MICH_err_comp.pdf
Attachment 2: deWhitenedCoil.pdf
deWhitenedCoil.pdf
  13293   Tue Sep 5 14:41:58 2017 gautamUpdateCDSNDS2 server restarted on megatron

I was unable to download data using nds2. Gabriele had reported similar problems a week ago but I hadn't followed up on this.

I repeated steps 5-7 from elog 13161, and now it seems that I can get data from the nds2 servers again. Unclear why the nds2 server had to be restarted. I wonder if this is somehow related to the mysterious acromag EPICS server tmux session dropout.

  13297   Tue Sep 5 23:02:37 2017 gautamUpdateCDSslow machine bootfest

MC autolocker was not working - PCdrive was railed at its upper rail for ~2 hours judging by the wall StripTool trace. I tried restarting the init processes on megatron, but that didn't fix the problem. The reason seems to have been related to c1iool0 failing - after keying the crate, autolocker came back fine and MC caught lock almost immediately.

Additionally, c1susaux, c1auxex,c1auxey and c1iscaux are also down. I'm not planning on using the IFO tonight so I am not going to reboot these now.

 

  13300   Wed Sep 6 23:06:30 2017 gautamUpdateLSCCoil de-whitening switching investigation

Summary:

Rana suggested checking if the coil de-whitening switching is actually happening in the analog path. I repeated the test detailed here. Attachments #1 and #2 suggest that all the coils for the BS and ITMs are indeed switching.

Details:

  • The motivation behind this test was the following - the analog path switching is done by applying some logic voltage to a switch, but if this voltage is common among many switches, the hypothesis was that perhaps individual switches were not getting the required voltage to engage the switching.
  • This time, FM9 (simulated de-whitening) and FM10 (inverse de-whitening) in the coil output filter modules were turned off, so as to maintain a flat TF in the digital domain while engaging the de-whitened analog path (turning off FM9 is what is supposed to trigger the analog switch).
  • There is poor coherence in the measurement above 40Hz so the data there should be neglected. It is hard to get a good measurement at higher frequencies because of the pendulum TF + heavy low pass filtering from the analog de-whitening path.
  • But between 10-40Hz, we already see the analog de-whitening TF in the measurement.
  • For comparison, I have plotted the measured pendulum TFs for one of the coils from an earlier test (all the coils were roughly at the same level).

So it would seem that there is some other noise which has a 1/f^2 shape and is at the same level we expected the DAC noise to be at. Rana suggested checking coherence with MC transmission to see if this could be laser intensity noise.

I also want to re-do the actuator calibrations for the vertex optics before re-posting the revised noise budget.

Attachment 1: BScoils.pdf
BScoils.pdf
Attachment 2: ITMcoils.pdf
ITMcoils.pdf
  13312   Fri Sep 15 15:54:28 2017 gautamUpdateCDSFB wiper script

A wiper script is not yet set up for our new Frame-Builder. The disk usage is ~80% now, so I think we should start running a wiper script that manages the overall disk usage by deleting the oldest frame files.

From what I could find on the elog, the way this was done was by running a cron job on FB. There is a perl script, /opt/rtcds/caltech/c1/target/fb/wiper.pl, which, from what I could understand, runs a bunch of du commands on different directories to determine if there is a need to delete any files.

I copied this script over to /opt/rtcds/caltech/c1/target/daqd/wiper.pl. This is the directory in which all the new FB stuff resides. Conveniently, the script has a "dry-run" option, which I tried running on FB1. However, I get the following error message:

Fri Sep 15 15:44:45 PDT 2017
Dry run, will not remove any files!!!
You need to rerun this with --delete argument to really delete frame files
Directory disk usage:
 /frames/trend/minute_rawk
Combined 0k or 0m or 0Gb
Illegal division by zero at ./wiper.pl line 98.


So it would seem that for some reason, the du commands aren't working. From what I could tell, there aren't any directory paths specific to the old FB machine that need to be changed. I believe the script was working prior to the FB disk crash - unfortunately it doesn't look like this script was under version control but I don't think any changes have been made to this script.

Before I go down a Perl rabbit hole, has anyone seen such an error or is aware of some reason why this might not work on the new FB? Am I even using the correct scripts?
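
In case the perl route stays painful, the core of what wiper.pl appears to be doing - summing du over the frame directories and comparing against a keep fraction - is easy to express in python. The sketch below is only that logic (with a guard against the zero-total case that produced the error above), not a drop-in replacement for the deletion machinery.

import subprocess

FRAME_DIRS = ["/frames/full", "/frames/trend/second",
              "/frames/trend/minute", "/frames/trend/minute_raw"]

def du_kb(path):
    """Disk usage of a directory in kB, or 0 if du fails / returns nothing."""
    try:
        out = subprocess.check_output(["du", "-sk", path]).decode()
        return int(out.split()[0])
    except (subprocess.CalledProcessError, IndexError, ValueError):
        return 0

usage = dict((d, du_kb(d)) for d in FRAME_DIRS)
total = sum(usage.values())

if total == 0:
    # this is the corner that makes wiper.pl die with "Illegal division by zero"
    raise SystemExit("du returned nothing sensible - check the frame directory paths")

for d in FRAME_DIRS:
    print("%-28s %14d k  (%5.1f%%)" % (d, usage[d], 100.0 * usage[d] / total))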

  13313   Fri Sep 15 16:00:33 2017 gautamUpdateLSCSensing measurement

I've been working on analyzing the data from the DRMI locks last week.

Here are the results of the sensing measurement.

Details:

  1. The sensing measurement is done by using the existing sensing matrix infrastructure to drive the actuators for the various DoFs at specific frequencies (notches at these frequencies are turned on in the control loops during the measurement).
  2. All the analysis is done offline - I just note down the times at which the sensing lines are turned on and then download the data later. The amplitudes of the oscillators are chosen by looking at the LSC PD error signal spectra "live" in DTT, and by increasing the amplitude until the peak height is ~10x above the nominal level around that frequency. This analysis was done on ~600 seconds of data.
  3. The actual sensing elements in the various PDs are calculated as follows:
    • Calculate the Fourier coefficients at the excitation frequency using the definition of the complex DFT in both the LSC PD signal and the actuator signal (both are in counts). Windowing is "Tukey", and FFT length used is 1 second.
    • Take their ratio
    • Convert to suitable units (in this case V/m) knowing (i) The actuator discriminant in cts/m and (ii) the cts/V ADC calibration factor. Any whitening gain on the PD is taken into account as well.
    • If required, we can convert this to W/m as well, knowing (i) the PD responsivity and (ii) the demodulation chain gain.
    • Most of this stuff has been scripted by EricQ and is maintained in the pynoisesub git repo (a bare-bones sketch of the Fourier-coefficient step follows below).
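
The real machinery lives in the pynoisesub repo; the sketch below is just the Fourier-coefficient step in isolation. The line frequency, Tukey shape parameter and the commented-out channel arrays are placeholders, not the values actually used.

import numpy as np
from scipy import signal

def line_coefficients(x, fs, f_line, t_fft=1.0):
    """Complex amplitude of the line at f_line in x: one single-bin DFT estimate per
    Tukey-windowed segment of length t_fft, as described in the list above."""
    nper = int(t_fft * fs)
    win = signal.get_window(('tukey', 0.25), nper)   # alpha=0.25 is a guess, not the value used
    t = np.arange(nper) / fs
    lo = np.exp(-2j * np.pi * f_line * t)            # digital local oscillator at the line frequency
    nseg = len(x) // nper
    return np.array([np.sum(x[k * nper:(k + 1) * nper] * win * lo) for k in range(nseg)])

# The sensing element is the ratio of the PD line coefficient to the actuator line coefficient;
# the window normalization cancels in the ratio. Frequencies/arrays below are placeholders.
# c_pd  = line_coefficients(refl55_I_cts, 16384.0, 311.1)
# c_act = line_coefficients(prcl_act_cts, 16384.0, 311.1)
# elem  = np.mean(c_pd) / np.mean(c_act)   # cts/cts; scale by actuator cts->m and ADC cts/V to get V/m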

The plotting utility is a work in progress - I've basically adapted EricQ's scripts and added a few features like plotting the uncertainties in magnitude and phase of the calculated sensing elements. Possible further stuff to implement:

  • Only plot those elements which have good coherence in the measurement data. At present, the scripts check the coherence and prompt the user if there is poor coherence in a particular channel, but no vetos are done.
  • The uncertainty calculation is done rather naively now - it is just the standard deviation in the Fourier coefficient determined from various bins. I am told that Bendat and Piersol has the required math. It would be good to also incorporate the uncertainties in the actuator calibration. These are calculated using the python uncertainties package for now.
  • Print a summary of the parameters used in the calculation, as well as sensing elements + uncertainty in cts/m, V/m and W/m, on a separate page.
  • Some aesthetics can be improved - I've had some trouble getting the tick intervals to cooperate so I left it as is for the moment.

Also, the value I've used for the BS actuator calibration is not a measured one - rather, I estimated what it would be by scaling the old value by the same ratio by which the ITMs changed post de-whitening board mods. The ITM actuator coefficients were recently measured here. I will re-do the BS calibrations over the weekend.

Noise budgeting to follow - it looks like I didn't set the AS55 demod phase to the previously determined optimal value of -82 degrees; I had left it at -42 degrees. To be fixed for the next round of locking.

Attachment 1: DRMI1f_Sep5.pdf
DRMI1f_Sep5.pdf
  13314   Fri Sep 15 17:08:58 2017 gautamUpdateLSCCoil de-whitening switching investigation

I downloaded a segment of data from the time when the DRMI was locked with the BS and ITM coil driver de-whitening switched on, and looked at the coherence between MC transmission and the MICH error signal. Attachment #1 doesn't show any broadband high coherence between 60-300 Hz, so intensity noise cannot explain the excess over the full 60-300 Hz range.

The DQ channel for the MC transmission is recorded at 1024 Hz, so to calculate the coherence, I had to decimate the 16K MICH data.

Since we have the AOM installed, I suppose we can actually measure the intensity noise coupling to MICH by driving a line in the AOM. 

I also checked for coherence in the 60-300Hz band between MICH/PRCL and MICH/SRCL, and didn't see any appreciable coherence. Need to think about this more.
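
For the record, a sketch of that coherence calculation with the decimation step included; the synthetic arrays below stand in for the downloaded DQ data, and the rates are the ones quoted above.

import numpy as np
from scipy import signal

fs_mich, fs_mc = 16384.0, 1024.0            # DQ rates of MICH_ERR and MC transmission
# placeholder arrays standing in for the downloaded data (a 300 s stretch)
mich = np.random.randn(int(300 * fs_mich))
mc_trans = np.random.randn(int(300 * fs_mc))

# decimate MICH by 16 (done in two stages of 4, which is gentler on the anti-aliasing filters)
mich_ds = signal.decimate(mich, 4, ftype='fir', zero_phase=True)
mich_ds = signal.decimate(mich_ds, 4, ftype='fir', zero_phase=True)

n = min(len(mich_ds), len(mc_trans))
f, Cxy = signal.coherence(mc_trans[:n], mich_ds[:n], fs=fs_mc, nperseg=int(16 * fs_mc))
# broadband Cxy near 1 between 60-300 Hz would implicate intensity noise; values near zero rule it out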

Quote:

 Rana suggested checking coherence with MC transmission to see if this could be laser intensity noise.

Attachment 1: DRMI_IntensityNoise.pdf
DRMI_IntensityNoise.pdf
  13317   Mon Sep 18 17:17:49 2017 gautamUpdateCDSFB wiper script

After trying to debug this issue using the Perl debugger, I concluded that the problem is in the part of the code that splits the output of the "du" command into directory and disk usage. For whatever reason, this isn't working. The version of perl running on the new FB1 machine is 5.20.2, whereas I suspect the version running on the old FB machine was 5.14.2 (which is the version on all the Ubuntu 12 workstations and megatron). Unclear whether downgrading the Perl version is the right way to go.

The FB1 disk is now getting close to full, the usage is up to 85% today.

Quote:

Before I go down a Perl rabbit hole, has anyone seen such an error or is aware of some reason why this might not work on the new FB? Am I even using the correct scripts?

 

  13319   Mon Sep 18 17:51:26 2017 gautamUpdateCDSFB wiper script

It is a little different - specifically, the way the output of the "du" command is split into disk usage and directory differs (see Attachment #1). Apart from this, some of the parameters (e.g. what percentage to keep free) are different.

I changed the percentages to match what we had here, and edited a couple of other lines to print out the files that will be deleted. The dry run seemed to work okay; it produced the output below. Not sure why "df -h" reports a different use percentage though...

Since the script seems to be working now, I am going to set it up on FB1's crontab. Thanks Chris!

controls@fb1:/opt/rtcds/caltech/c1/target/daqd 0$ ./wiper.pl
Mon Sep 18 17:47:06 PDT 2017
Dry run, will not remove any files!!!
You need to rerun this with --delete argument to really delete frame files
Directory disk usage:
/frames/trend/minute_raw 47126124k
/frames/trend/minute 22900668k
/frames/trend/second 760359168k
/frames/full 19337278516k
Combined 20167664476k or 19694984m or 19233Gb
/frames size 25097525144k at 80.36%
/frames is below keep value of 85.00%
Will not delete any files
df reported usage 80.36%
controls@fb1:/opt/rtcds/caltech/c1/target/daqd 0$ df -h
Filesystem                        Size  Used Avail Use% Mounted on
/dev/sda4                         2.0T  1.7T  152G  92% /
udev                               10M     0   10M   0% /dev
tmpfs                              13G  177M   13G   2% /run
tmpfs                              32G     0   32G   0% /dev/shm
tmpfs                             5.0M     0  5.0M   0% /run/lock
tmpfs                              32G     0   32G   0% /sys/fs/cgroup
/dev/sda2                          19G  3.7G   14G  21% /var
/dev/sda1                         461M   65M  373M  15% /boot
/dev/sdb1                          24T   19T  3.5T  85% /frames
192.168.113.104:/home/cds/rtcds   2.0T  1.6T  291G  85% /opt/rtcds
192.168.113.104:/home/cds/rtapps  2.0T  1.6T  291G  85% /opt/rtapps
tmpfs                             6.3G     0  6.3G   0% /run/user/1001
Quote:

Attached is the version of the wiper script we use on the CryoLab cymac. It works with perl v5.20.2. Is this different from what you have?

 

Attachment 1: perlDiff.png
perlDiff.png
  13320   Mon Sep 18 18:40:34 2017 gautamUpdateCDSFB wiper script

I did a further check on the wiper script by changing the "percent_keep" from 85.0 to 75.0, and running the script in "dry_run" mode again. The script then output to console the names of all the files it would delete in order to free up the required amount of space (but didn't actually delete any files as it was a dry run). Seemed to be sensible.

To set up the cron job, I did the following on FB1:

  • crontab -e opened up the crontab
  • Copied over a script called "wiper.cron" from /opt/rtcds/caltech/c1/target/fb to /opt/rtcds/caltech/c1/target/daqd. This essentially contains a bunch of instructions to run the wiper script with the --delete flag, and write the console output to a log file.
  • Added the following line: 33 3 * * * /opt/rtcds/caltech/c1/target/daqd/wiper.cron. So the cron job should be executed at 3:33AM every day.
  • The cron daemon seems to be running - sudo systemctl status cron.service yields the following output:
    controls@fb1:~ 0$ sudo systemctl status cron.service
    ● cron.service - Regular background program processing daemon
       Loaded: loaded (/lib/systemd/system/cron.service; enabled)
       Active: active (running) since Mon 2017-09-18 18:16:58 PDT; 27min ago
         Docs: man:cron(8)
     Main PID: 30183 (cron)
       CGroup: /system.slice/cron.service
               └─30183 /usr/sbin/cron -f
    Sep 18 18:16:58 fb1 cron[30183]: (CRON) INFO (Skipping @reboot jobs -- not system startup)
    Sep 18 18:17:01 fb1 CRON[30205]: pam_unix(cron:session): session opened for user root by (uid=0)
    Sep 18 18:17:01 fb1 CRON[30206]: (root) CMD (   cd / && run-parts --report /etc/cron.hourly)
    Sep 18 18:17:01 fb1 CRON[30205]: pam_unix(cron:session): session closed for user root
    Sep 18 18:25:01 fb1 CRON[30820]: pam_unix(cron:session): session opened for user root by (uid=0)
    Sep 18 18:25:01 fb1 CRON[30821]: (root) CMD (command -v debian-sa1 > /dev/null && debian-sa1 1 1)
    Sep 18 18:25:01 fb1 CRON[30820]: pam_unix(cron:session): session closed for user root
    Sep 18 18:35:01 fb1 CRON[31515]: pam_unix(cron:session): session opened for user root by (uid=0)
    Sep 18 18:35:01 fb1 CRON[31516]: (root) CMD (command -v debian-sa1 > /dev/null && debian-sa1 1 1)
    Sep 18 18:35:01 fb1 CRON[31515]: pam_unix(cron:session): session closed for user root

     

  • crontab -l on FB1 now shows the following:
    controls@fb1:~ 0$ crontab -l
    # Edit this file to introduce tasks to be run by cron.
    #
    # Each task to run has to be defined through a single line
    # indicating with different fields when the task will be run
    # and what command to run for the task
    #
    # To define the time you can provide concrete values for
    # minute (m), hour (h), day of month (dom), month (mon),
    # and day of week (dow) or use '*' in these fields (for 'any').#
    # Notice that tasks will be started based on the cron's system
    # daemon's notion of time and timezones.
    #
    # Output of the crontab jobs (including errors) is sent through
    # email to the user the crontab file belongs to (unless redirected).
    #
    # For example, you can run a backup of all your user accounts
    # at 5 a.m every week with:
    # 0 5 * * 1 tar -zcf /var/backups/home.tgz /home/
    #
    # For more information see the manual pages of crontab(5) and cron(8)
    #
    # m h  dom mon dow   command
    33 3 * * * /opt/rtcds/caltech/c1/target/daqd/wiper.cron

Let's see if this works.

Quote:

Since the script seems to be working now, I am going to set it up on FB1's crontab. Thanks Chris!

 

  13324   Wed Sep 20 16:14:17 2017 gautamUpdateEquipment loanImpedance test kit borrowed from Downs

I borrowed the HP impedance test kit from Rich Abbott today. The purpose is to profile the impedance of the NPRO PZTs, as part of the AUX PDH servo investigations. It is presently at the X-end. I will do the test in the coming days.
 

  13325   Thu Sep 21 01:32:00 2017 gautamUpdateALSAUX X Innolight AM measurement running

[rana,gautam]

We set up a measurement of the AUX X laser AM today. Some notes:

  • The PDA 55 that was installed as a power monitor for the AUX X laser has been moved into the main green beam path - it is just upstream of the green shutter for this measurement.
  • AUX X laser power into the doubling crystal was adjusted by rotating HWP upstream of IR Faraday (original angle was 100, now it is 120), until the DC level of the PDA 55 output was ~2.5V on a scope (high impedance).
  • BNC-T was installed at the PZT input of the Innolight - one arm of the T is terminated to ground via 50 ohms. The purpose of this is to always have the output of the power splitter from the network analyzer RF source drive a 50 ohm load.
  • The output of the Green PDH servo to the Innolight PZT was disconnected downstream of the summing Pomona box - it is now connected to one output of a power splitter (borrowed from SR function generator used to drive the PZT) connected to the RF source output of the AG4395.
  • Other output of power splitter connected to input R of AG4395.
  • PDA55 output has been disconnected from CH5 of the AA board. It is connected to input A of the AG4395 via DC block.

Attachment #1 shows a preliminary scan from tonight - we looked at the region 10 kHz-10 MHz, with an IF bandwidth of 100 Hz, 16 averages, and 801 log-spaced frequencies. The aim was to get an idea of where some promising notches in the AM response lie, and then do finer scans around those points. Data + code used to generate this plot are in Attachment #2.

Rana points out that some of the AM could also be coming from beam jitter - so to put this hypothesis to the test, we will put a lens in to focus the spot more tightly onto the PD, repeat the measurement, and see if we get different results.

There were a whole bunch of little illegal things Rana spotted on the EX table which he will make a separate post about.

I am running 40 more scans with the same params for some statistics - should be done by the morning.

Quote:

I borrowed the HP impedance test kit from Rich Abbott today. The purpose is to profile the impedance of the NPRO PZTs, as part of the AUX PDH servo investigations. It is presently at the X-end. I will do the test in the coming days.
 


Update 12:00 21 Sep: Attachment #3 shows schematically the arrangement we use for the AM measurement. A similar sketch for the proposed PM measurement strategy to follow. After lunch, Steve and I will lay out a longish BNC cable from the LSC rack to the IOO rack, from where there is already a long cable running to the X end. This is to facilitate the PM measurement.

Update 18:30 21 Sep: Attachment #4 was generated using Craig's nice plotting utility. The TF magnitude plot was converted to RIN/V by dividing by the DC voltage of the PDA 55, ~2.3 V (the assumption is that there isn't a significant difference between the DC gain and the RF transimpedance gain of the PDA 55 in the measurement band). The right-hand columns are generated by calculating the deviation of individual measurements from the mean value. We're working on improving this utility and its aesthetics - specifically, using these statistics to compute coherence; this is a work in progress. Git repo details to follow.

There are only 23 measurements (I was aiming for 40) because of some network connectivity issue due to which the script stalled - this is also something to look into. But this sample already suggests that these measurement parameters give consistent results on repeated measurements above 100kHz.

TO CHECK: PDA 55 is in 0dB gain setting, at which it has a BW of 10MHz (claimed in datasheet).


Some math about relation between coherence \gamma_{xy}(f) and standard deviation of transfer function measurements:

\mathrm{SNR}(f) = \sqrt{\frac{\gamma_{xy}^{2}(f)}{1-\gamma_{xy}^{2}(f)}}

\sigma_{xy}^{2} = \frac{1-\gamma_{xy}^{2}(f)}{2N\gamma_{xy}^{2}(f)}|H(f)|^2  --- relation to variance in TF magnitude. We estimate the variance using the usual variance estimator, and can then back out the coherence using this relation.

\sigma_{\theta_{xy}} = \mathrm{tan}^{-1}\left [ \sqrt{\frac{1-\gamma_{xy}^{2}(f)}{2N\gamma_{xy}^{2}(f)}} \right ] --- relation to the standard deviation of the TF phase. Should give a coherence profile that is consistent with that obtained using the preceding equation.

It remains to code all of this up into Craig's plotting utility (a first pass at the magnitude relation is sketched below).
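
A small helper implementing that inversion for the magnitude relation (the phase relation gives an independent cross-check); this is just the algebra above, not yet wired into the plotting utility.

import numpy as np

def coherence_from_tf_stats(H_mag):
    """Back out gamma(f) from repeated TF magnitude measurements.
    H_mag: array of shape (N, n_freq) of |H(f)| from the N repeated sweeps.
    Inverts sigma^2 = (1 - gamma^2) / (2 N gamma^2) * |H|^2 for gamma."""
    N = H_mag.shape[0]
    Hbar = np.mean(H_mag, axis=0)
    var = np.var(H_mag, axis=0, ddof=1)        # the usual (unbiased) variance estimator
    gamma_sq = Hbar**2 / (Hbar**2 + 2.0 * N * var)
    return np.sqrt(gamma_sq)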

Attachment 1: Innolight_AM.pdf
Innolight_AM.pdf
Attachment 2: Innolight_AM.tar.gz
Attachment 3: IMG_7599.JPG
IMG_7599.JPG
Attachment 4: 20170921_203741_TFAG4395A_21-09-2017_115547_FourSquare.pdf
20170921_203741_TFAG4395A_21-09-2017_115547_FourSquare.pdf
  13327   Thu Sep 21 15:23:04 2017 gautamOmnistructureALSLong cable from LSC->IOO

[steve,gautam]

We laid out a 45m long BNC cable from the LSC rack to the IOO rack via overhead cable trays. There is ~5m excess length on either side, which have been coiled up and cable-tied for now. The ends are labelled "TO LSC RACK" and "TO IOO RACK" on the appropriate ends. This is to facilitate hooking up the output of the DFD for making a PM measurement of the AUX X laser. There is already a long cable that runs from the IOO rack to the X end.

  13328   Fri Sep 22 18:12:27 2017 gautamUpdateLSCDAC noise measurement (again)

I've been working on setting up some scripts for measuring the DAC noise.

In all the DRMI noise budgets I've posted, the coil-driver noise contribution has been based on this measurement, which could be improved in a couple of ways:

  • The measurement was made at the output of the AI board - we can make the measurement at the output of the coil driver board, which will be a closer reflection of the actual current noise at the OSEM coils.
  • The measurement was made by driving the DAC with shaped random noise - but we can record the signal to the coils during a lock and make the noise measurement by using awg to drive the coil with this signal, with elliptic bandstops at appropriate frequencies to reveal the electronics noise.
  1. The IN1 signals to the coils aren't DQ-ed, but ideally this is the signal we want to inject into the coil_EXC channel for this measurement - so I re-locked the DRMI a couple of nights ago and downloaded the coil IN1 channel data for ~5mins for the ITMs and BS.
  2. AWGGUI supposedly has a feature that allows you to drive an EXC channel with an arbitrary signal - but I couldn't figure out how to get this working. I did find some examples of this kind of application using the Python awg packages, so I cobbled together some scripts that allow me to drive some channels and place elliptic bandstop filters as I required.
  3. I wasted quite a bit of time trying to implement these signals in Python using available scipy functions, on account of me being a DSP n00b. When trying to design discrete-time filters, numerical precision errors of course become important. Initially I was trying to do everything in the "Transfer function (numerator-denominator)" basis, but as Rana pointed out, the way to go is using SOSs. Fortunately, this is a simple additional argument to the relevant python functions, after which elliptic bandstop filter design was trivial (a short sketch follows after this list).
  4. The actual test was done as follows:
    • Save EPICS PIT/YAW offsets, set them to 0, disable Oplev servos, and then shut down optic watchdog once the optic is somewhat damped. This is to avoid the optics getting a large kick when disconnecting the DB15 connector from the coil driver board output.
    • Disconnect above-mentioned DB15 connector from the appropriate coil driver board output.
    • Turn off inputs to coils in the filter module EPICS screens. Since the full signal (local damping servo output + Oplev servo output + LSC servo output) to the coil during a DRMI lock will be injected as an excitation, we don't need any other input.
    • Use scripts (which I will upload to a git repo soon) to set up the appropriate excitation.
    • To measure the spectrum, I used a DB15 breakout board with test-points soldered on and some mini-grabber-to-BNC adaptors, in order to interface the SR785 to the coil driver output. We can use the two input channels of the SR785 to simultaneously measure two coil driver board output channels to save some time.
    • Take a measurement of the SR785 noise (at the appropriate "Input Range" setting) with inputs terminated to estimate the analyzer noise floor.
    • Just for kicks, I made the measurement with the de-whitening both OFF/ON.
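
For reference, the flavor of the SOS-based bandstop design mentioned in item 3. The notch bands, ripple/attenuation numbers and the 60 s placeholder time series below are illustrative only; the hand-off to awg is just indicated, not shown.

import numpy as np
from scipy import signal

fs = 16384.0    # front-end model / excitation rate

def bandstop_sos(f_lo, f_hi, rp=1, rs=60, order=4):
    """Elliptic bandstop in second-order-section form (the 'simple additional
    argument' mentioned in item 3 is output='sos')."""
    wn = [f_lo / (fs / 2), f_hi / (fs / 2)]
    return signal.ellip(order, rp, rs, wn, btype='bandstop', output='sos')

# stop bands at the frequencies where the electronics noise is to be revealed (values are illustrative)
sos = np.concatenate([bandstop_sos(55, 65), bandstop_sos(115, 125)])

coil_in1 = np.random.randn(int(60 * fs))         # stands in for the recorded IN1 series from the DRMI lock
excitation = signal.sosfiltfilt(sos, coil_in1)   # band-stopped signal handed off to awg as the excitation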

I only managed to get in measurements for the BS and ITMX today. ITMY to be measured later, and data/analysis to follow.

The ITMX and BS alignments have been restored after this work in case anyone else wants to work with the IFO.


Some slow machine reboots were required today - c1susaux was down, and later, the MC autolocker got stuck because of c1iool0 being unresponsive. I thought we had removed all dependency of the autolocker on c1iool0 when we moved the "IFO-STATE" EPICS variable to the c1ioo model, but clearly there is still some dependency. To be investigated.

  13331   Tue Sep 26 13:40:45 2017 gautamUpdateCDSNDS2 server restarted on megatron

Gabriele reported problems with the nds2 server again. I restarted it again.

Update: I had to do it again at 1730 today - unclear why nds2 is so flaky. The log files don't suggest anything obvious to me...

Quote:

I was unable to download data using nds2. Gabriele had reported similar problems a week ago but I hadn't followed up on this.

I repeated steps 5-7 from elog 13161, and now it seems that I can get data from the nds2 servers again. Unclear why the nds2 server had to be restarted. I wonder if this is somehow related to the mysterious acromag EPICS server tmux session dropout.

 
