40m elog
ID   Date   Author   Type   Category   Subject
  13135   Mon Jul 24 10:45:23 2017   gautam   Update   CDS   c1iscex models died

This morning, all the c1iscex models were dead. Attachment #1 shows the state of the cds overview screen when I came in. The machine itself was ssh-able, so I just restarted all the models and they came back online without fuss.

Quote:

All front ends and model are (mostly) running now

Attachment 1: c1iscexFailure.png
  13137   Mon Jul 24 12:00:21 2017   gautam   Update   PSL   PSL NPRO mysteriously shut off

Summary:

At around 10:30 AM this morning, the PSL mysteriously shut off. Steve and I confirmed that the NPRO controller had the RED "OFF" LED lit up. It is unknown why this happened. We manually turned the NPRO back on and the PMC has been stably locked for the last hour or so.

Details:

So many changes to lab hardware/software have been happening recently that it's not entirely clear to me what exactly the problem was here. But here are the observations:

  1. Yesterday, when I came into the lab, the MC REFL trace on the wall StripTool was 0 for the full 8 hour history - since we don't have data records, I can't go back further than this. I remember the PMC TRANS and REFL cameras looked normal, but there was no MC REFL spot on the CCD monitors. This is consistent with the PSL operating normally, the PMC being locked, and the PSL shutter being closed. Isn't the emergency vacuum interlock also responsible for automatically closing the PSL shutter? If the turbo controller failure happened prior to Jamie/me coming in yesterday, perhaps this was just the interlock doing its job. On Friday evening, the PSL shutter was certainly open and the MC REFL spot was visible on the camera. I also confirmed with Jamie that he didn't close the shutter.
  2. Attachment #1 shows the wall StripTool traces from earlier this morning. It looks like at ~7:40 AM, the MC REFL level went back up. Steve says he didn't manually open the shutter, and in any case, this was before the turbo pump controller failure was diagnosed. So why did the shutter open again?
  3. When I came in at ~10AM, the CCD monitor showed that the PMC was locked, and the MC REFL spot was visible. 
  4. Also on attachment #1, there is a ~10min dip in the MC REFL level. This corresponds to ~10:30AM this morning. Both Steve and I were sitting in the control room at this time. We noticed that the PMC TRANS and REFL CCDs were dark. When we went in to check on the laser, we saw that it was indeed off. There was no one inside the lab area at this time to our knowledge, and as far as I know, the only direct emergency shutoff for the PSL is on the North-West corner of the PSL enclosure. So it is unclear why the laser just suddenly went off.

Steve says that this kind of behaviour is characteristic of a power glitch/surge, but nothing else seems to have been affected (I confirmed that the X and Y end lasers are ON). 

Attachment 1: IMG_7454.JPG
  13139   Mon Jul 24 19:57:54 2017   gautam   Update   CDS   IMC locked, Autolocker re-enabled

Now that all the front end models are running, I re-aligned the IMC, locked it manually, and then tweaked the alignment some more. The IMC transmission now is hovering around 15300 counts. I re-enabled the Autolocker and FSS Slow loops on Megatron as well.

Quote:

MX/OpenMX network running

Today I got the mx/open-mx networking working for the front ends.  This required some tweaking to the network interface configuration for the diskless front ends, and recompiling mx and open-mx for the newer kernel.  Again, this will all be documented.

 

  13141   Tue Jul 25 02:03:59 2017   gautam   Update   Optical Levers   Optical lever tuning thoughts

Summary:

Currently, I am unable to engage the coil-dewhitening filters without destroying cavity locks. One reason is that the present Oplev servos have a roll-off at high frequencies that is not steep enough - engaging the digital whitening + analog de-whitening just causes the DAC output to saturate. Today, Rana and I discussed some ideas about how to approach this problem. This elog collects these thoughts. As I flesh out these ideas, I will update them in a more complete writeup in T1700363 (placeholder for now). Past relevant elogs: 5376, 9680

  1. Why do we need optical levers?
    • To stabilize the low-frequency seismic driven angular motion of the optics.
  2.  In what frequency range can we / do we need to stabilize the angular motion of the optics? How much error signal suppression do we need in the control band? How much is achievable given the current Oplev setup?
    • To answer these questions, we need to build a detailed Oplev noise budget.
    • Ultimately, the Oplev error signal is sensing the differential motion between the suspended optic and the incident laser beam.
    • In what frequency range does laser beam jitter dominate the actual optic motion? What about mechanical drifts of the optical tables the HeNes sit on? And for many of the vertex optics, the Oplev beam has multiple bounces on steering mirrors on the stack. What is the contribution of the stack motion to the error signal?
    • The answers to the above will tell us what lower and upper UGFs we should and can pick. It will also be instructive to investigate if we can come up with a telescope design near the Oplev QPD that significantly reduces beam jitter effects (see elog 10732). Also, can we launch/extract the beam into/from the vacuum chamber in such a way that we aren't so susceptible to motion of the stack?
  3. What are some noises that have to be measured and quantified?
    • Seismic noise
    • Shot noise
    • Electronics noise of the QPD readout chain
    • HeNe intensity noise (does this matter since we are normalizing by QPD sum?)
    • HeNe beam pointing / jitter noise (How? N-corner hat method?)
    • Stack motion contribution to the Oplev error signal
  4. How do we design the Oplev controller?
    • The main problem is to frame the right cost function for this problem. Once this cost function is made, we can use MATLAB's PSO tool (which is what was used for the PR3 coating design optimization, and also successfully for this kind of loop shaping problem by Rana for aLIGO) to find a minimum by moving the controller poles and zeros around within bounds we define (a toy sketch of such a bounded pole/zero search follows this list).
  5. What terms should enter the cost function?

    • In addition to those listed in elog 5376:
    • We need the >10Hz roll-off to be steep enough that turning on the digital whitening will not significantly increase the DAC output RMS or drive it to saturation.
    • We'd like for the controller to be insensitive to 5% (?) errors in the assumed optical plant and noise models i.e. the closed loop shouldn't become unstable if we made a small error in some assumed parameters.
    • Some penalty for using excessive numbers of poles/zeros? Penalty for having too many high-frequency features.
  6. Other things to verify / look into
    • Verify if the counts -> urad calibration is still valid for all the Oplevs. We have the arm-cavity power quadratic dependence method, and the geometry method to do this.
    •  Check if the Oplev error signals are normalized by the quadrant sum.
    • How important is it to balance the individual quadrant gains?
    • Check with Koji / Rich about new QPDs. If we can get some, perhaps we can use these in the setup that Steve is going to prepare, as part of the temperature vs HeNe noise investigations.
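As a toy illustration of the pole/zero search mentioned in item 4 above - this is only a sketch, using scipy's differential_evolution as a stand-in for MATLAB's PSO tool, with made-up plant numbers and placeholder cost terms rather than the real Oplev quantities:

import numpy as np
from scipy import signal
from scipy.optimize import differential_evolution

f = np.logspace(-1, 3, 1000)              # frequency grid [Hz]
w = 2 * np.pi * f

# toy pendulum-like plant: complex pole pair at 1 Hz, Q = 5 (not the real Oplev plant)
w0, Q = 2 * np.pi * 1.0, 5.0
_, P = signal.freqresp(signal.TransferFunction([w0**2], [1, w0 / Q, w0**2]), w)

def controller(params):
    # simple lead filter: overall gain, one zero and one pole (frequencies in Hz)
    k, fz, fp = params
    num = k * np.array([1 / (2 * np.pi * fz), 1])
    den = [1 / (2 * np.pi * fp), 1]
    _, C = signal.freqresp(signal.TransferFunction(num, den), w)
    return C

def cost(params):
    # placeholder scalar cost: in-band suppression plus a penalty on sensitivity peaking
    S = 1.0 / (1.0 + P * controller(params))
    in_band = (f > 0.5) & (f < 2.0)
    suppression = np.mean(20 * np.log10(np.abs(S[in_band])))
    peaking = np.max(20 * np.log10(np.abs(S)))
    return suppression + 5.0 * max(peaking - 10.0, 0.0)

bounds = [(0.1, 100.0), (0.1, 10.0), (10.0, 1000.0)]   # gain, zero [Hz], pole [Hz]
result = differential_evolution(cost, bounds, seed=0, maxiter=50)
print(result.x, result.fun)

The real version would swap in the measured plant, the full set of cost-function terms, and tighter parameter bounds.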

Before the CDS went down, I had taken error signal spectra for the ITMs. I will update this elog tomorrow with these measurements, as well as some noise estimates, to get started.

  13146   Thu Jul 27 22:42:24 2017   gautam   Update   SUS   Seismic noise, DAC noise, and Coil Driver electronics noise

Summary:

Yesterday at the meeting, we talked about how the analog de-whitening filters in the coil driver path may be more aggressive than necessary. I think Attachment #1 shows that this is indeed the case.

Details:

I had done some modeling and measurement of some of these noises while I was putting together the initial DRMI noise budget, but I had never put things together in one plot. In Attachment #1, I've plotted the following:

  1. Quadrature sum of seismic noise (from GWINC calculations) for 3 suspended optics (I'm sticking to the case of 3 optics since I've been doing all the noise-budgeting for MICH - for DARM, it will be 4 suspended optics).
  2. The unfiltered DAC noise estimate. The voltage noise was measured in this elog. To convert this to displacement noise for 3 suspended optics, I've used the value of 1.55e-9/f^2 m/ct as the actuator coefficient (a sketch of this conversion follows the list). This number should be accurate under the assumption that the series resistance on the coil driver board output is 400 ohms (we could increase this - by how much depends on how much actuation range is needed).
  3. Coil driver board and de-whitening board electronics noises (added in quadrature). I've used the LISO model noises, which line up well with the measured noises in elogs 13010 and 13015.
  4. The DAC noise filtered by the de-whitening transfer function, separately for the cases of using one or both of the available biquad stages. This cannot be lower than the preceding trace (electronics noise of de-whitening and coil driver boards), so should be disregarded where it dips below it.
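For the record, a sketch of the conversion behind item 2 above. The 20 V / 2^16 counts DAC calibration is my assumption (16-bit, +/-10 V), and the voltage noise number below is a placeholder rather than the measured value from the referenced elog:

import numpy as np

f = np.logspace(1, 3, 500)             # frequency [Hz]
v_dac = 1e-6                            # [V/rtHz] placeholder DAC voltage noise
volts_per_count = 20.0 / 2**16          # [V/ct] assumed DAC calibration
actuator_coeff = 1.55e-9 / f**2         # [m/ct] actuation coefficient (400 ohm series R)

# displacement noise per optic, then quadrature sum over 3 suspended optics (MICH)
x_per_optic = (v_dac / volts_per_count) * actuator_coeff     # [m/rtHz]
x_three_optics = np.sqrt(3) * x_per_optic

print(x_three_optics[0])    # displacement noise at 10 Hz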

It would seem that the coil driver + de-whitening board electronic noises dominate above ~150Hz. The electronics noise is ~10nV/rtHz at the output of the coil driver board, which is only a factor of 100 below the DAC noise - so the stopband attenuation of ~70dB on the de-whitening boards seems excessive.

We can lower this noise by a factor of 2.5 if we up the series resistance on the coil driver boards from 400ohm to 1kohm, but even so, the displacement noise is ~1e-18 m/rtHz. I need to investigate the electronics noises a little more carefully - I only measured it for the case when both biquad stages were engaged, I will need to do the model for all permutations - to be updated. 

Attachment #2 has an iPython notebook used to generate this plot along with all the data.


Edit 28 Jul 2.30pm: I've added Attachment #3 with traces for different assumed values of the series resistance on the coil driver board - although I have not re-computed the Johnson noise contribution for the various resistances. If we can afford to reduce the actuation range by a factor of 25, then it looks like we get to within a factor of ~5 of the seismic noise at ~150Hz. 

Attachment 1: noiseComparison.pdf
Attachment 2: deWhiteConfigs.zip
Attachment 3: noiseComparison_resistances.pdf
  13147   Fri Jul 28 15:36:32 2017   gautam   Update   Optical Levers   Optical lever tuning thoughts

Attachment #1 - Measured error signal spectrum with the Oplev loop disabled, measured at the IN1 input for ITMY. The y-axis calibration into urad/rtHz may not be exact (I don't know when this was last calibrated).

From this measurement, I've attempted to disentangle what is the seismic noise contribution to the measured plant output.

  • To do so, I first modelled the plant as a pair of complex poles @0.95 Hz, Q=3. This gave the best agreement with measurement by eye; I didn't try to optimize this too carefully.
  • Next, I assumed all the noise between DC-10Hz comes from only seismic disturbance. So dividing the measured PSD by the plant transfer function gives the spectrum of the seismic disturbance (sketched in code after this list). I further assumed this to be flat, and so I averaged it between DC-10Hz.
  • This will be a first seismic noise model for the loop shape optimizer. I can probably get a better model using the GWINC calculations but for a start, this should be good enough.
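A minimal sketch of the procedure described above, assuming the plant is well modelled by the complex pole pair at 0.95 Hz with Q=3 and unity DC gain (the DC gain is my assumption); the "measured" spectrum here is synthetic and should be replaced with the DTT export of the real data:

import numpy as np
from scipy import signal

f = np.logspace(-2, 2, 1000)                   # [Hz]
w = 2 * np.pi * f
meas_asd = 1.0 / (1.0 + (f / 0.95)**2)         # [urad/rtHz] placeholder measurement

f0, Q = 0.95, 3.0
w0 = 2 * np.pi * f0
plant = signal.TransferFunction([w0**2], [1, w0 / Q, w0**2])   # unity DC gain assumed
_, P = signal.freqresp(plant, w)

dist_asd = meas_asd / np.abs(P)                # inferred seismic disturbance ASD
band = f < 10.0
flat_level = np.mean(dist_asd[band])           # flat (white) disturbance approximation
print(flat_level)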

It remains to characterize various other noise sources.

Quote:

Before the CDS went down, I had taken error signal spectra for the ITMs. I will update this elog tomorrow with these measurements, as well as some noise estimates, to get started.


I have also confirmed that the "QPD" Simulink block, which is what is used for Oplevs, does indeed have the PIT and YAW outputs normalized by the SUM (see Attachment #2). This was not clear to me from the MEDM screen.


GV 30 Jul 5pm: I've included in Attachment #3 the block diagram of the general linear feedback topology, along with the specific "disturbances" and "noises" w.r.t. the Oplev loop. The measured (open loop) error signal spectrum of Attachment #1 (call it y) is given by:

y_{meas}(s) = P(s)\sum_{i=1}^{3}d_{i}(s) + \sum_{k=1}^{4}n_{k}(s)

If it turns out that one (or more) term(s) in each of the summations above dominates in all frequency bands of interest, then I guess we can drop the others. An elog with a first pass at a mathematical formulation of the cost-function for controller optimization to follow shortly.
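For completeness (this is just the standard single-loop algebra, not a new measurement): with the Oplev loop closed, the same disturbance and noise terms appear in the in-loop error signal suppressed by the loop, i.e.

y_{inloop}(s) = \frac{1}{1+P(s)C(s)}\left[ P(s)\sum_{i=1}^{3}d_{i}(s) + \sum_{k=1}^{4}n_{k}(s) \right]

which is why the sensitivity function 1/(1+PC) shows up in the cost-function terms discussed later in this thread.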

Attachment 1: errSig.pdf
Attachment 2: QPD_simulink.png
Attachment 3: feedbackTopology.pdf
  13148   Fri Jul 28 16:47:16 2017   gautam   Update   General   PSL StripTool flatlined

About 3.5 hours ago, all the PSL wall StripTool traces "flatlined", as happened during the EPICS freezes in the past - except that all these traces were flat for more than 3 hours. I checked that the c1psl slow machine responded to ping, and I could also telnet into it. I tried opening the StripTool on pianosa and all the traces were responsive. So I simply re-started the PSL StripTool on zita. All traces look responsive now.

  13150   Sat Jul 29 14:05:19 2017   gautam   Update   General   PSL StripTool flatlined

The PMC was unlocked when I came in ~10mins ago. The wall StripTool traces suggest it has been this way for > 8hours. I was unable to get the PMC to re-lock by using the PMC MEDM screen. The c1psl slow machine responded to ping, and I could also telnet into it. But despite burt-restoring c1psl, I could not get the PMC to lock. So I re-started c1psl by keying the crate, and then burt-restored the EPICS values again. This seems to have done the trick. Both the PMC and IMC are now locked.


Unrelated to this work: It looks like some/all of the FE models were re-started. The x3 gain on the coil outputs of the 2 ITMs and BS, which I had manually engaged when I re-aligned the IFO on Monday, were off, and in general, the IMC and IFO alignment seem much worse now than they were yesterday. I will do the re-alignment later as I'm not planning to use the IFO today.

  13152   Mon Jul 31 15:13:24 2017   gautam   Update   CDS   FB ---> FB1

[jamie, gautam]

In order to test the new daqd config that Jamie has been working on, we felt it would be most convenient for the host name "fb" (martian network IP 192.168.113.202) to point to the physical machine "fb1" (martian network IP 192.168.113.201).

I made this change in /var/lib/bind/martian.hosts on chiara, and then ran sudo service bind9 restart. It seems to have done the job. So as things stand, both hostnames "fb" and "fb1" point to 192.168.113.201.

Now, when starting up DTT or dataviewer, the NDS server is automatically found.

More details to follow.

  13156   Tue Aug 1 16:05:01 2017   gautam   Update   Optical Levers   Optical lever tuning - cost function construction

Summary:

I've been trying to put together the cost-function that will be used to optimize the Oplev loop shape. Here is what I have so far.

Details:

All of the terms that we want to include in the cost function can be derived from:

  1. A measurement of the open-loop error signal [using DTT, calibrated to urad/rtHz]. We may want a breakdown of this in terms of "sensing noises" and "disturbances" (see the previous elog in this thread), but just a spectrum will suffice for the optimal controller given the current noises.
  2. A model of the optical plant, P(s) [validated with a DTT swept-sine measurement]. 
  3. A model of the controller, C(s). Some/all of the poles and zeros of this transfer function are what the optimization algorithm will tune to satisfy the design objectives.

From these, we can derive, for a given controller, C(s):

  1. Closed-loop stability (i.e. all poles should be in the left-half of the complex plane), and exactly 2 UGFs. We can use MATLAB's allmargin function for this (see the sketch after this list). An unstable controller can be rejected by assigning it an extremely high cost.
  2. RMS error signal suppression in the frequency band (0.5Hz - 2Hz). We can require this to be >= 15dB (say).
  3. Minimize gain peaking and noise injection - this information will be in the sensitivity function, \left | \frac{1}{1+P(s)C(s)} \right |. We can require this to be <= 10dB (say).
  4. RMS of the control signal between 10 Hz and 200 Hz, multiplied by the digital suspension whitening filter, should be <10% of the DAC range (so that we don't have problems engaging the coil de-whitening).
  5. Smallest gain margin (there will be multiple because of the various notches we have) should be > 10dB (say). Phase margin at both UGFs should be >30 degrees.
  6. Terms 1-5 should not change by more than 10% for perturbations in the plant model parameters (f0 and Q of the pendulum) at the 10% (?) level. 
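A sketch of how terms 1, 3 and 5 could be evaluated for a candidate controller, using the python-control package as a stand-in for MATLAB's allmargin; the plant and controller below are illustrative placeholders, not the real Oplev filters:

import numpy as np
import control
from scipy import signal

# toy plant: pendulum-like pair of complex poles at 0.95 Hz, Q = 3
w0, Q = 2 * np.pi * 0.95, 3.0
P = control.tf([w0**2], [1, w0 / Q, w0**2])

# toy controller: lead filter with a zero at 1 Hz and a pole at 20 Hz, HF gain of 10
C = control.tf([10.0, 10.0 * 2 * np.pi * 1.0], [1, 2 * np.pi * 20.0])

L = P * C                                        # open-loop transfer function
S = control.feedback(control.tf([1], [1]), L)    # sensitivity function 1/(1+PC)

# term 1: closed-loop stability -- all closed-loop poles in the left half plane
clpoles = np.roots(control.feedback(L, 1).den[0][0])
stable = np.all(np.real(clpoles) < 0)

# term 5: gain and phase margins (the allmargin equivalent)
gm, pm, wcg, wcp = control.margin(L)

# term 3: worst-case gain peaking of the sensitivity function
num, den = S.num[0][0], S.den[0][0]
_, h = signal.freqs(num, den, worN=2 * np.pi * np.logspace(-2, 3, 2000))
peaking_db = 20 * np.log10(np.max(np.abs(h)))

print(stable, 20 * np.log10(gm), pm, peaking_db)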

We can add more terms to the cost function if necessary, but I want to get some minimal set working first. All the "requirements" I've quoted above are just numbers out of my head at the moment, I will refine them once I get some feeling for how feasible a solution is for these requirements.

Quote:

An elog with a first pass at a mathematical formulation of the cost-function for controller optimization to follow shortly.


For a start, I attempted to model the current Oplev loop. The modeling of the plant and open-loop error signal spectrum have been described in the previous elogs in this thread.

I am, however, confused by the controller - the MEDM screen (see Attachment #2) would have me believe that the digital transfer function is FM2*FM5*FM7*FM8*gain(10). However, I get much better agreement between the measured and modelled in-loop error signal if I exclude the overall gain of 10 (see Attachments #1 for the models and #3 for measurements).

What am I missing? Getting this right will be important in specifying Term #4 in the cost function...

GV Edit 2 Aug 0030: As another sanity check, I computed the whitened Oplev control signal given the current loop shape (with sub-optimal high-frequency roll-off). In Attachment #4, I converted the y-axis from urad/rtHz to cts/rtHz using the approximate calibration of 240urad/ct (and the fact that the Oplev error signal is normalized by the QPD sum of ~13000 cts), and divided by 4 to account for the fact that the control signal is sent to 4 coils. It is clear that attempting to whiten the coil driver signals with the present Oplev loop shapes causes DAC saturation. I'm going to use this formulation for Term #4 in the cost function, and to solve a simpler optimization problem first - given the existing loop shape, what is the optimal elliptic low-pass filter to implement such that the cost function is minimized? 


There is also the question of how to go about doing the optimization, given that our cost function is a vector rather than a scalar. In the coating optimization code, we converted the vector cost function to a scalar one by taking a weighted sum of the individual components. This worked adequately well.

But there are techniques for vector cost-function optimization as well, which may work better. Specifically, the question is whether we can find the (infinite) solution set for which no one term in the error function can be made better without making another worse (the so-called Pareto front). Then we still have to make a choice as to which point along this curve we want to operate at.

Attachment 1: loopPerformance.pdf
Attachment 2: OplevLoop.png
Attachment 3: OL_errSigs.pdf
Attachment 4: DAC_saturation.pdf
  13160   Wed Aug 2 15:04:15 2017   gautam   Configuration   Computers   control room workstation power distribution

The 4 control room workstation CPUs (Rossa, Pianosa, Donatella and Allegra) are now connected to the UPS.

The 5 monitors are connected to the recently acquired surge-protecting power strips.

Rack-mountable power strip + spare APC Surge Arrest power strip have been stored in the electronics cabinet.

Quote:

this is not the right one; this Ethernet controlled strip we want in the racks for remote control.

Buy some of these for the MONITORS.

 

  13161   Thu Aug 3 00:59:33 2017   gautam   Update   CDS   NDS2 server restarted, /frames mounted on megatron

[Koji, Nikhil, Gautam]

We couldn't get data using python nds2. There seem to have been many problems.

  1. /frames wasn't mounted on megatron, which was the nds2 server. Solution: added /frames 192.168.113.209(sync,ro,no_root_squash,no_all_squash,no_subtree_check) to /etc/exportfs on fb1, followed by sudo exportfs -ra. Using showmount -e, we confirmed that /frames was being exported.
  2. Edited /etc/fstab on megatron to be fb1:/frames/ /frames nfs ro,bg,soft 0 0. Tried to run mount -a, but console stalled.
  3. Used nfsstat -m on megatron. Found out that megatron was trying to mount /frames from old FB (192.168.113.202). Used sudo umount -f /frames to force unmount /frames/ (force was required).
  4. Re-ran mount -a on megatron.
  5. Killed nds2 using /etc/init.d/nds2 stop - didn't work, so we manually kill -9'ed it.
  6. Restarted nds2 server using /etc/init.d/nds2 start.
  7. Waited for ~10mins before everything started working again. Now usual nds2 data getting methods work.

I have yet to figure out how to get trend data via nds2; I can't find the syntax. EDIT: As Jamie mentioned in his elog, the second trend data is being written but is inaccessible over nds (either with dataviewer, which uses fb as the ndsserver, or with python NDS, which uses megatron as the ndsserver). So as of now, we cannot read any kind of trends directly, although the full data can be downloaded from the past either with dataviewer or python nds2. On the control room workstations, this can also be done with cds.getdata.
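For reference, the "usual" python nds2 fetch looks something like the sketch below; the server/port and the channel name are my assumptions (megatron on the default NDS2 port), and the GPS times are placeholders:

import nds2

conn = nds2.connection('megatron', 31200)         # assumed NDS2 server/port
start = 1186000000                                 # placeholder GPS start time
stop = start + 60
bufs = conn.fetch(start, stop, ['C1:SUS-MC1_SENSOR_UL'])   # hypothetical channel name
data = bufs[0].data                                # numpy array of raw samples
print(len(data), bufs[0].channel.sample_rate)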

  13163   Thu Aug 3 11:11:29 2017   gautam   Update   CDS   NDS2 server restarted, /frames mounted on nodus

I added nodus' eth0 IP (192.168.113.200) to the list of allowed nfs clients in /etc/exportfs on fb1, and then ran sudo mount -a on nodus. Now /frames is mounted.

Quote:

needs more debugging - this is the machine that allows us to have backed up frames in LDAS. Permissions issues from fb1 ?

 

  13167   Fri Aug 4 18:25:15 2017   gautam   Update   General   Bilinear noise coupling

[Nikhil, gautam]

We repeated the test that EricQ detailed here today. We have downloaded ~10min of data (between GPS times 11885925523 - 11885926117), and Nikhil will analyze it.

Attachment 1: bilinearTest.pdf
  13168   Sat Aug 5 11:04:07 2017   gautam   Update   SUS   MC1 glitches return

See Attachment #1, which is full (2048Hz) data for a 3 minute stretch around when I saw the MC1 glitch. At the time of the glitch, WFS loops were disabled, so the only actuation on MC1 was via the local damping loops. The oscillations in the MC2 channels are the autolocker turning on the MC2 length tickle.

Nikhil and I tried the usual techniques of squishing cables at the satellite box, and also at 1X4/1X5, but the glitching persists. I will try and localize the problem this weekend. This thread details investigations the last time something like this happened. In the past, I was able to fix this kind of glitching by replacing the (high speed) current buffer IC LM6321M. These are present in a two places: Satellite box (for the shadow sensor LED current drive), and on the coil driver boards. I think we can rule out the slow machine ADCs that supply the static PIT and YAW bias voltages to the optic, as that path is low-passed with a 4th order filter @1Hz, while the glitches that show up in the OSEM sensor channels do not appear to be low-passed, as seen in the zoomed in view of the glitch in Attachment #2 (but there is an LM6321 in this path as well).

Attachment 1: MC1_glitch_Aug42017.png
Attachment 2: MC1_glitch_zoomed.png
  13173   Tue Aug 8 20:48:06 2017   gautam   Update   SUS   ITMX stuck

Somewhere between CDS model restarts and the IFO venting, ITMX got stuck.

I shook it loose using the usual bias slider technique. It appears to be free now, I was able to lock the green beam on a TEM00 mode without touching the green input pointing. The ITMX Oplev spot has also returned to within its MEDM display bounds.

  13174   Wed Aug 9 11:33:49 2017   gautam   Update   Electronics   MC2 de-whitening

Summary:

The analog de-whitening filters for MC2 are different from those on the other optics (i.e. ITMs and ETMs). They have one complex pole pair @7Hz, Q~sqrt(2), one complex zero pair @50Hz, Q~sqrt(2), one real pole at 2.5kHz, and one real zero @250Hz (with a DC gain of 10dB).
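For a quick visual check of this shape, a sketch using scipy (pole/zero frequencies as quoted above; the overall gain normalization to 10 dB at DC is my assumption and should be compared against the LISO fit in Attachment #1):

import numpy as np
from scipy import signal

def complex_pair(f0, Q):
    """Roots of s^2 + (w0/Q)s + w0^2 for a complex pole/zero pair."""
    w0 = 2 * np.pi * f0
    re = -w0 / (2 * Q)
    im = w0 * np.sqrt(1 - 1 / (4 * Q**2))
    return [re + 1j * im, re - 1j * im]

zeros = complex_pair(50.0, np.sqrt(2)) + [-2 * np.pi * 250.0]
poles = complex_pair(7.0, np.sqrt(2)) + [-2 * np.pi * 2500.0]

# set k so that |H(0)| is 10 dB (DC gain assumption)
dc_gain = 10**(10.0 / 20.0)
k = dc_gain * np.abs(np.prod(poles) / np.prod(zeros))

f = np.logspace(0, 4, 1000)
w, h = signal.freqs_zpk(zeros, poles, k, worN=2 * np.pi * f)
print(20 * np.log10(np.abs(h[0])))    # should be close to 10 dB at 1 Hz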

Details:

I took the opportunity last night to measure all 4 de-whitening channel TFs. Measurements and overlaid LISO fits are seen in Attachment #1. 

The motivation behind this investigation was that last week, I was unable to lock the IMC to one of the arms. In the past, this has been done simply by routing the control signal of the appropriate arm filter bank (e.g. C1:LSC-YARM_OUT) to MC2 instead of ETMY via the LSC output matrix (if the matrix element to ETMY is 1, the matrix element to MC2 is -1).

Looking at the coil output filter banks on the MC2 suspension MEDM screen (see Attachment #2), the positions of filters in the filter banks are different from those on the other optics. In general, the BIO outputs of the DAC are wired such that disengaging FM9 on the MEDM screen engages the analog de-whitening path. FM10 then has the inverse of the de-whitening filter, such that the overall TF from DAC to optic is unity. But on MC2, these filters occupy FM7 and FM8, and FM9 was originally a 28Hz Elliptic Low-pass filter.

So presumably, I was unable to lock the IMC to an arm because for either configuration of FM9 (ON or OFF), the signal to the optic was being aggressively low-passed. To test this hypothesis, I simply copied the 28Hz elliptic to FM6, put a gain of 1 on FM9, left it engaged (so that the analog path TF is just flat with gain x3), and tried locking the IMC to the arm again - I was successful. See Attachment #3 for comparison of the control signal spectra of the X-arm control signal, with the IMC locked to the Y-arm cavity.

In this test, I also confirmed that toggling FM9 in the coil output filter banks actually switches the analog path on the de-whitening boards.

Since I now have the measurements for individual channels, I am going to re-configure the filter arrangement on MC2 to mirror that on the other optics. 


Unrelated to this work: the de-whitening boards used for MC1 and MC3 are D000316, as opposed to D000183 used for all other SOS optics. From the D000316 schematic, it looks like the signals from the AI board are routed to this board via the backplane. I will try squishing this backplane connector in the hope it helps with the glitching MC1 suspension.


GV Aug 13 11:45pm - I've made a DCC page for the MC2 dewhitening board. For now, it has the data from this measurement, but if/when we modify the filter shape, we can keep track of it on this page (for MC2 - for the other suspensions, there are other pages). 

Attachment 1: MC2deWhites.pdf
Attachment 2: MC2Coils.png
Attachment 3: MC2stab.pdf
  13177   Wed Aug 9 12:35:47 2017   gautam   Update   ALS   Fiber ALS

Last week, we were talking about reviving the Fiber ALS box. Right now, it's not in great shape. Some changes to be made:

  1. Supply power to the PDs (Menlo FPD310) via a power regulator board. The datasheet says the current consumption per PD is 250 mA. So we need 500mA. We have the D1000217 power regulator board available in the lab. It uses the LM2941 and LM2991 power regulator ICs, both of which are rated for 1A output current, so this seems suitable for our purposes. Thoughts?
  2. Install power decoupling capacitors on the PDs.
  3. Clean up the fiber arrangement inside the box.
  4. Install better switches, plus LED indicators.
  5. Cover the box.
  6. Install it in a better way on the PSL table. Thoughts? e.g. can we mount the unit in some electronics rack and route the fibers to the rack? Perhaps the PSL IR and one of the arm fibers are long enough, but the other arm might be tricky.

Previous elog thread about work done on this box: elog11650

Attachment 1: IMG_3942.JPG
  13178   Wed Aug 9 15:15:47 2017   gautam   Update   SUS   MC1 glitches return

Happened again just now, although the characteristics of the glitch are very different from the previous post; it's less abrupt. The only actuation on MC1 at this point was local damping.

Attachment 1: MC1_glitch.png
  13180   Wed Aug 9 19:21:18 2017   gautam   Update   ALS   ALS recovery

Summary:

Between frequent MC1 excursions, I worked on ALS recovery today. Attachment #1 shows the out-of-loop ALS noise as of this evening (taken with arms locked to IR) - I have yet to check the loop shapes of the ALS servos; it looks like there is some tuning to be done.

On the PSL table:

  • First, I locked the arms to IR, ran the dither alignment servos to maximize transmission.
  • I used the IR beat PDs to make sure a beat existed, at approximately.
  • Then I used a scope to monitor the green beat, and tweaked steering mirror alignment until the beat amplitude was maximized. I was able to improve the X arm beat amplitude, which Koji and Naomi had tweaked last week, by ~factor of 2, and Y arm by ~factor of 10.
  • I used the DC outputs of the BBPDs to center the beam onto the PD.
  • Currently, the beat notes have amplitudes of ~-40dBm on the scopes in the control room (there are various couplers/amplifiers in the path so I am not sure what beatnote amplitude this translates to at the BBPD output - see the conversion sketch after this list). I have yet to do a thorough power budget, but I have in my mind that they used to be ~-30dBm. To be investigated.
  • Removed the fiber beat PD 1U chassis unit from the PSL table for further work. The fibers have been capped and remain on the PSL table. Cleaned the NW corner of the PSL table up a bit.
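For the power budget, a small dBm-to-amplitude conversion sketch (assuming a 50 ohm system, which is my assumption for the scope/BBPD outputs):

import numpy as np

def dbm_to_vpp(p_dbm, R=50.0):
    p_watts = 1e-3 * 10**(p_dbm / 10.0)
    v_rms = np.sqrt(p_watts * R)
    return 2 * np.sqrt(2) * v_rms

for p in (-40, -30):
    print(f"{p} dBm -> {1e3 * dbm_to_vpp(p):.1f} mVpp")

So -40 dBm corresponds to about 6 mVpp and -30 dBm to about 20 mVpp into 50 ohms.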

To do:

  • Optimization of the input pointing of the green beam for X (with PZTs) and Y (manual) arms.
  • ALS PDH servo loop measurement. Attachment #1 suggests some loop gain adjustment is required for both arms (although the hump centered around ~70Hz seems to be coming from the IR lock).
  • Power budgeting on the PSL table to compare to previous such efforts.

Note: Some of the ALS scripts are suffering from the recent inability of cdsutils to pull up testpoints (e.g. the script that is used to set the UGFs of the phase tracker servo). The workaround is to use DTT to open the test points first (just grab 0.1s time series for all channels of interest). Then the cdsutils scripts can read the required channels (but you have to keep the DTT open).

Attachment 1: ALS_oolSpec.pdf
  13185   Thu Aug 10 14:25:52 2017   gautam   Update   CDS   Slow EPICS channels -> Frames re-enabled

I went into /opt/rtcds/caltech/c1/target/daqd, opened the master file, and uncommented the line with C0EDCU.ini (this is the file in which all the slow machine channels are defined). So now I am able to access, for example, the c1vac1 channels.

The location of the master file is no longer in /opt/rtcds/caltech/c1/target/fb, but is in the above mentioned directory instead. This is part of the new daqd paradigm in which separate processes are handling the data transfer between FEs and FB, and the actual frame-writing. Jamie will explain this more when he summarizes the CDS revamp.

It looks like trend data is also available for these newly enabled channels, but thus far, I've only checked second trends. I will update with a more exhaustive check later in the evening.

So, the two major pending problems (that I can think of) are:

  1. Inability to unload models cleanly
  2. Inability of dataviewer (and cdsutils) to open testpoints.

Apart from this, dataviewer frequently hangs on Donatella at startup. I used ipcs -a | grep 0x | awk '{printf( "-Q %s ", $1 )}' | xargs ipcrm to remove all the extra messages in the dataviewer queue.


Restarting the daqd processes on fb1 using Jamie's instructions from earlier in this thread works - but the mx_stream processes do not seem to come back automatically on c1lsc, c1sus and c1ioo (reasons unknown). I've made a copy of the mxstreamrestart.sh script with the new mxstream restart commands, called mxstreamrestart_debian.sh, which lives in /opt/rtcds/caltech/c1/scripts/cds. I've also modified the CDS overview MEDM screen such that the "mxstream restart" calls this modified script. For now, this requires you to enter the controls password for each machine. I don't know what is a secure way to do it otherwise, but I recall not having to do this in the past with the old mxstreamrestart.sh script.

  13187   Thu Aug 10 21:01:43 2017   gautam   Update   SUS   MC1 glitches debugging

I have squished cables in all the places I can think of - but MC1 has been glitching regularly today. Before starting to pull electronics out, I am going to attempt a more systematic debugging in the hope I can localize the cause.

To this end, I've disabled the MC autolocker, and have shutdown the MC1 watchdog. I plan to leave it in this state overnight. From this, I hope to look at the free-swinging optic spectra to see that this isn't a symptom of something funky with the suspension itself.

Some possible scenarios (assuming the free swinging spectra look alright and the various resonances are where we expect them to be):

  1. With the watchdog shutdown, the PIT/YAW bias voltages still go to the coils (low-passed by 4 poles @1Hz). So if the glitching happens in this path, we should see it in both the shadow sensors and the DC spot positions on the WFS.
  2. If the glitching happens in the shadow sensor readout electronics/cabling, we should see it in the shadow sensor channels, but NOT in the DC spot positions on the WFS (as the watchdog is shutdown, so there should be no actuation to the coils based on OSEM signals).
  3. If we don't see any glitches in WFS spot positions or shadow sensors, then it is indicative of the problem being in the coil driver board / dewhitening board/anti-aliasing board.
  4. I am discounting the problem being in the Satellite box, as we have switched around the MC1 satellite box multiple times - the glitches remain on MC1 and don't follow a Satellite Box. Of course there is the possibility that the cabling from 1X5/1X6 to the Satellite box is bad.

MC1 has been in a glitchy mood today, with large (MC-REFL spot shifts by ~1 beam diameter on the CCD monitor) glitches happening ~every 2-3 hours. Hopefully it hasn't gone into an extended quiet period. For reference, I've attached the screen-grab of the MC-QUAD and MC-REFL as they are now.


GV 9.20PM: Just to make sure of good SNR in measuring the pendulum eigenfreqs, I ran /opt/rtcds/caltech/c1/scripts/SUS/freeswing MC1 in a terminal. The result looked rather violent on the camera but it's already settling down. The terminal output:

The following optics were kicked:
MC1
Thu Aug 10 21:21:24 PDT 2017
1186460502
Quote:

Happened again just now, although the characteristics of the glitch are very different from the previous post, its less abrupt. Only actuation on MC1 at this point was local damping.

 

Attachment 1: MC_QUAD_10AUG2017.jpg
Attachment 2: MCR_10AUG2017.jpg
  13189   Fri Aug 11 00:10:03 2017   gautam   Update   CDS   Slow EPICS channels -> Frames re-enabled

Seems like something has failed after I did this - full frames are no longer being written since ~2.30pm PDT on Aug 10. I found out when I tried to download some of the free-swinging MC1 data.

To clarify, I logged into fb1, and ran sudo systemctl restart daqd_*. The only change I made was to uncomment the line quoted below in the master file.

Looking at the log using systemctl, I see the following (I just tried restarting the daqd processes again):

Aug 11 00:00:31 fb1 daqd_fw[16149]: LDASUnexpected::unexpected: Caught unexpected exception      "This is a bug. Please log an LDAS problem report including this message.
Aug 11 00:00:31 fb1 daqd_fw[16149]: daqd_fw: LDASUnexpected.cc:131: static void LDASTools::Error::LDASUnexpected::unexpected(): Assertion `false' failed.
Aug 11 00:00:32 fb1 systemd[1]: daqd_fw.service: main process exited, code=killed, status=6/ABRT
Aug 11 00:00:32 fb1 systemd[1]: Unit daqd_fw.service entered failed state.
Aug 11 00:00:32 fb1 systemd[1]: daqd_fw.service holdoff time over, scheduling restart.
Aug 11 00:00:32 fb1 systemd[1]: Stopping Advanced LIGO RTS daqd frame writer...
Aug 11 00:00:32 fb1 systemd[1]: Starting Advanced LIGO RTS daqd frame writer...
Aug 11 00:00:32 fb1 systemd[1]: daqd_fw.service start request repeated too quickly, refusing to start.
Aug 11 00:00:32 fb1 systemd[1]: Failed to start Advanced LIGO RTS daqd frame writer.
Aug 11 00:00:32 fb1 systemd[1]: Unit daqd_fw.service entered failed state.

Oddly, I am able to access second trends for the same channels from the past (which will be useful for the MC1 debugging). Not sure what's going on.


The live data grabbing using cdsutils still seems to be working though - so I've kicked MC1 again, and am grabbing 2 hours of data live on Pianosa.

Quote:

I went into /opt/rtcds/caltech/c1/target/daqd, opened the master file, and uncommented the line with C0EDCU.ini (this is the file in which all the slow machine channels are defined). So now I am able to access, for example, the c1vac1 channels.

The location of the master file is no longer in /opt/rtcds/caltech/c1/target/fb, but is in the above mentioned directory instead. This is part of the new daqd paradigm in which separate processes are handling the data transfer between FEs and FB, and the actual frame-writing. Jamie will explain this more when he summarizes the CDS revamp.

It looks like trend data is also available for these newly enabled channels, but thus far, I've only checked second trends. I will update with a more exhaustive check later in the evening.

So, the two major pending problems (that I can think of) are:

  1. Inability to unload models cleanly
  2. Inability of dataviewer (and cdsutils) to open testpoints.

Apart from this, dataviewer frequently hangs on Donatella at startup. I used ipcs -a | grep 0x | awk '{printf( "-Q %s ", $1 )}' | xargs ipcrm to remove all the extra messages in the dataviewer queue.


Restarting the daqd processes on fb1 using Jamie's instructions from earlier in this thread works - but the mx_stream processes do not seem to come back automatically on c1lsc, c1sus and c1ioo (reasons unknown). I've made a copy of the mxstreamrestart.sh script with the new mxstream restart commands, called mxstreamrestart_debian.sh, which lives in /opt/rtcds/caltech/c1/scripts/cds. I've also modified the CDS overview MEDM screen such that the "mxstream restart" calls this modified script. For now, this requires you to enter the controls password for each machine. I don't know what is a secure way to do it otherwise, but I recall not having to do this in the past with the old mxstreamrestart.sh script.

 

  13192   Fri Aug 11 11:14:24 2017   gautam   Update   CDS   Slow EPICS channels -> Frames re-enabled

I commented out the line pertaining to C0EDCU again, now full frames are being written again.

But we no longer have access to the slow EPICS records.

I am not sure what the failure mode is here - In the master file, there is a line that says the EDCU list "*MUST* COME *AFTER* ALL OTHER FAST INI DEFINITIONS" which it does. But there are a bunch of lines that are testpoint lists after this EDCU line. I wonder if that is the problem?

Quote:

Seems like something has failed after I did this - full frames are no longer being written since ~2.30pm PDT on Aug 10. I found out when I tried to download some of the free-swinging MC1 data.

 

  13195   Fri Aug 11 12:32:46 2017   gautam   Update   SUS   MC1 glitches debugging

Attachment #1: Free swinging sensor spectra. I haven't done any peak fitting but the locations of the resonances seem consistent with where we expect them to be.

The MC_REFL spot appears to not have shifted significantly (so slow bias voltages are probably not to blame). Now I have to look at trend data to see if there is any evidence of glitching.

I'm not sure I understand the input matrix though - the matrix elements would have me believe that the sensing of POS in UL is ~5x stronger than in UR and LL, but the peak heights don't back that up.

Attachment #3: Second trend over 5hours (since frame writing was re-enabled this morning). Note that MC1 is still free-swinging but there is no evidence of steps of ~30cts which were observed some days ago. Also, from my observations yesterday, MC1 glitched multiple times over a few hours timescale. More data will have to be looked at, but as things stand, Hypothesis #3 below looks the best.

Quote:
 

Some possible scenarios (assuming the free swinging spectra look alright and the various resonances are where we expect them to be):

  1. With the watchdog shutdown, the PIT/YAW bias voltages still go to the coils (low-passed by 4 poles @1Hz). So if the glitching happens in this path, we should see it in both the shadow sensors and the DC spot positions on the WFS.
  2. If the glitching happens in the shadow sensor readout electronics/cabling, we should see it in the shadow sensor channels, but NOT in the DC spot positions on the WFS (as the watchdog is shutdown, so there should be no actuation to the coils based on OSEM signals).
  3. If we don't see any glitches in WFS spot positions or shadow sensors, then it is indicative of the problem being in the coil driver board / dewhitening board/anti-aliasing board.
  4. I am discounting the problem being in the Satellite box, as we have switched around the MC1 satellite box multiple times - the glitches remain on MC1 and don't follow a Satellite Box. Of course there is the possibility that the cabling from 1X5/1X6 to the Satellite box is bad.

 

Attachment 1: MC1_freeswinging.pdf
Attachment 2: MC1_inmatrix.png
Attachment 3: MC1_sensors.png
  13196   Fri Aug 11 17:36:47 2017   gautam   Update   SUS   MC1 <--> MC3

About 30mins ago, I saw another glitch on MC1 - this happened while the Watchdog was shutdown.

In order to further narrow down the cause of the glitch, we switched the Coil Driver Board --> Satellite box DB(15?) connectors on the coil drivers between MC1 and MC3 coil driver boards. I also changed the static PIT/YAW bias voltages to MC1 and MC3 such that MC-REFL is now approximately back to the center of the CCD monitor.

 

Attachment 1: MC1_glitch_watchdog_shutdown.png
  13197   Fri Aug 11 18:53:35 2017   gautam   Update   CDS   Slow EPICS channels -> Frames re-enabled
Quote:

Seems like something has failed after I did this - full frames are no longer being written since ~2.30pm PDT on Aug 10. I found out when I tried to download some of the free-swinging MC1 data.

To clarify, I logged into fb1, and ran sudo systemctl restart daqd_*. The only change I made was to uncomment the line quoted below in the master file.

Looking at the log using systemctl, I see the following (I just tried restarting the daqd processes again):

Aug 11 00:00:31 fb1 daqd_fw[16149]: LDASUnexpected::unexpected: Caught unexpected exception      "This is a bug. Please log an LDAS problem report including this message.
Aug 11 00:00:31 fb1 daqd_fw[16149]: daqd_fw: LDASUnexpected.cc:131: static void LDASTools::Error::LDASUnexpected::unexpected(): Assertion `false' failed.
Aug 11 00:00:32 fb1 systemd[1]: daqd_fw.service: main process exited, code=killed, status=6/ABRT
Aug 11 00:00:32 fb1 systemd[1]: Unit daqd_fw.service entered failed state.
Aug 11 00:00:32 fb1 systemd[1]: daqd_fw.service holdoff time over, scheduling restart.
Aug 11 00:00:32 fb1 systemd[1]: Stopping Advanced LIGO RTS daqd frame writer...
Aug 11 00:00:32 fb1 systemd[1]: Starting Advanced LIGO RTS daqd frame writer...
Aug 11 00:00:32 fb1 systemd[1]: daqd_fw.service start request repeated too quickly, refusing to start.
Aug 11 00:00:32 fb1 systemd[1]: Failed to start Advanced LIGO RTS daqd frame writer.
Aug 11 00:00:32 fb1 systemd[1]: Unit daqd_fw.service entered failed state.

Oddly, I am able to access second trends for the same channels from the past (which will be useful for the MC1 debugging). Not sure what's going on.


The live data grabbing using cdsutils still seems to be working though - so I've kicked MC1 again, and am grabbing 2 hours of data live on Pianosa.

So we tried this again with a fresh build of daqd_fw, and it still fails.  The error message is pointing to an underlying bug in the framecpp library ("LDASTools"), which may be tricky to solve.  I'm rustling the appropriate bushes...

  13199   Sat Aug 12 14:09:36 2017   gautam   Update   SUS   Glitches stay on MC1

Even in the switched state, the glitches stayed on MC1.

The coil driver electronics for MC1, upstream of the Satellite box, was what was previously MC3 electronics.

Attachment #1 shows that there were no glitches in MC3 sensor channels (which are now physically connected to what was previously MC1 coil driver electronics).

Attachment #2 shows the second trends for a 12 hour period for MC1 and MC3 sensor channels. The MC3 channels look well behaved, but there are frequent glitches (at least 9 in the last 12 hours) visible in the MC1 channels.

So to recap:

  • We switched MC1 satellite box - but glitch stayed on MC1, so it would seem the Satellite box is not to blame.
  • We shutdown the watchdog and the glitches persisted.
  • We switched the coil driver electronics for MC1, but glitches remained on MC1, and MC3 doesn't show any evidence of glitching. This and the previous bullet point suggest the coil driver electronics are not to blame.
  • For the glitch posted in Attachment #1, I could see the MC-REFL spot moving around on the CCD monitors, so the glitches aren't just a feature in the shadow sensor readout. 

I need to confirm that the output of the coil driver board goes straight to the Sat. Box, but if there are no intermediate elements, the problem is either in the cable from coil driver to sat. box, or downstream of the Satellite box - i.e. vacuum feedthroughs or the suspension itself? The size of the glitches is roughly the same in all 4 face channels (~60-80cts pp).

Quote:

About 30mins ago, I saw another glitch on MC1 - this happened while the Watchdog was shutdown.

In order to further narrow down the cause of the glitch, we switched the Coil Driver Board --> Satellite box DB(15?) connectors on the coil drivers between MC1 and MC3 coil driver boards. I also changed the static PIT/YAW bias voltages to MC1 and MC3 such that MC-REFL is now approximately back to the center of the CCD monitor.

 


GV addendum 14 Aug 2017, 10.30am: Attachment #3 shows the second trend for the MC sensor channels over the weekend. While there were many glitches on Saturday, it seems that Sunday was quieter.

Attachment 1: MC1_glitch.png
Attachment 2: MC_12hr_trend.png
Attachment 3: MC1_glitches_intermittent.png
  13204   Mon Aug 14 16:24:09 2017   gautam   Update   ALS   Fiber ALS

Today, I borrowed the fiber microscope from Johannes and took a look at the fibers coupled to the PDs. The PD labelled "BEAT PD AUX Y" has an end that seems scratched (Attachments #1 and #2). The scratch seems to be on (or at least very close to) the core. The other PD (Attachments #3 and #4) doesn't look very clean either, but at least the area near the core seems undamaged. The two attachments for each PD corresponds to the two available lighting settings on the fiber microscope.

I have not attempted to clean them yet, though I have also borrowed the cleaning supplies to facilitate this from Johannes. I also plan to inspect the ends of all other fiber connections before re-installing them.

Quote:

Last week, we were talking about reviving the Fiber ALS box. Right now, it's not in great shape. Some changes to be made:

  1. Supply power to the PDs (Menlo FPD310) via a power regulator board. The datasheet says the current consumption per PD is 250 mA. So we need 500mA. We have the D1000217 power regulator board available in the lab. It uses the LM2941 and LM2991 power regulator ICs, both of which are rated for 1A output current, so this seems suitable for our purposes. Thoughts?
  2. Install power decoupling capacitors on the PDs.
  3. Clean up the fiber arrangement inside the box.
  4. Install better switches, plus LED indicators.
  5. Cover the box.
  6. Install it in a better way on the PSL table. Thoughts? e.g. can we mount the unit in some electronics rack and route the fibers to the rack? Perhaps the PSL IR and one of the arm fibers are long enough, but the other arm might be tricky.

Previous elog thread about work done on this box: elog11650

 

Attachment 1: IMG_7471.JPG
Attachment 2: IMG_7472.JPG
Attachment 3: IMG_7473.JPG
Attachment 4: IMG_7474.JPG
  13206   Mon Aug 14 20:01:38 2017   gautam   Update   SUS   Glitches stay on MC1

I don't think we can say for sure. I was just talking to EricQ about this, he said the glitches were often seen when changing the alignment offsets when aligning the arm. I am pretty sure I have seen the ETMX alignment change abruptly since the Ruby Standoff replacement (the Oplev spot just slides across the MEDM display rapidly), but I can't find an elog where I've put in details. I also haven't done a whole lot of work with the arm cavities where I would have noticed this problem. There is this test that Eric did, and it didn't throw up any red flags. But the suspension can be well behaved for weeks at a time before this problem pops up again.

There was also the flaky power connection to the timing card on the ETMX expansion chassis which was fixed only recently, after which there has been no systematic investigation of the status of ETMX.

If it is true that these events are caused by strain building up in the suspension wire, I wonder how we can take systematic steps to avoid it. From what I remember of the SOS assembly procedure, the (unglued) standoff is slid along the optic with the wire under slight tension until the wire slips into the groove on the standoff. Then the tension in the wire is adjusted till the optic is pitch balanced and at the desired height. But it is easy to imagine imprinting some torsional stresses in the (40 um?) wire during this process of looping it around under the optic and placing it in the groove. But perhaps this mechanism makes a negligible contribution to the effect we are seeing, and some other mechanism is responsible in this case.

Quote:

We used to have similar suspension excursion at ETMX. This was the motivation to replace the stand-offs from Al ones to ruby ones. Did the replacement solve the issue at ETMX?

 

  13212   Wed Aug 16 14:54:13 2017   gautam   Update   CDS   PSL monitoring Acromag EPICS server restarted

[johannes, gautam, jamie]

  • Made a directory /opt/rtcds/caltech/c1/scripts/Acromag/PSL where I copied over the files needed by modbusApp to start the server, from Lydia's user directory
  • Edited /ligo/apps/ubuntu12/ligoapps-user-env.sh to export a couple of EPICS variables to facilitate easy startup of the EPICS server
  • Started a tmux session on (soon to be re-christened?) megatron called "acroEPICS"
  • Ran the following command to start up the EPICS server:
${EPICS_MODULES}/modbus/bin/${EPICS_HOST_ARCH}/modbusApp npro_config.cmd

To do:

  1. Make a startup script that runs the above command - eventually this can contain the initialization instructions for all the Acromags
  2. Figure out the initctl/systemctl stuff to make the server automatically restart if it drops for some reason (e.g. power failure)
  13220   Wed Aug 16 19:50:17 2017   gautam   Update   SUS   MC1 <--> MC3 switched back

Now that all the CDS overview lights are green, I decided to switch back the coil driver outputs to their original state so that the MC optics could be damped and the IMC relocked. I also restored the static PIT/YAW bias values to their original values.

MC1 has been quiet over the last couple of days, lets see how it behaves in the next few days. In all the glitches I have observed, if the IMC is locked and WFS loops are enabled, the loops are able to correct for the DC misalignment caused by the glitch. But the mcwfs off script is currently set up in such a way that the output history is cleared between IMC locks. I made two copies of the mcwfson/mcwfsoff scripts, called mcwfsunhold/mcwfshold respectively. They live in /opt/rtcds/caltech/c1/scripts/MC/WFS. I've also modified the autolocker script to call these modified scripts, such that when the IMC loses lock, the WFS servo outputs are held, while the input is turned off. The hope is that in this configuration, the autolocker can catch a lock even if there is a glitch on MC1.

I haven't tried locking the arms yet, but I think other IFO work discussed at the meeting (like arm loss estimation / cavity scans etc) can proceed.

Quote:

In order to further narrow down the cause of the glitch, we switched the Coil Driver Board --> Satellite box DB(15?) connectors on the coil drivers between MC1 and MC3 coil driver boards. I also changed the static PIT/YAW bias voltages to MC1 and MC3 such that MC-REFL is now approximately back to the center of the CCD monitor.

 

 

  13221   Wed Aug 16 20:01:03 2017   gautam   Update   General   SUS model ASC input weirdness

I'm not sure if this has something to do with the model restarts / new RCG, but while I was re-enabling the MC watchdogs, I noticed the RMS sensor voltage channels on ITMX hovering around ~100mV, even though local damping was on (in which configuration I would expect <1mV if everything is working normally). I was confused by this behaviour, and after staring at the ITMX suspension screen for a while, I noticed that the input to the "ASCP" and "ASCY" servos were "-nan", and the outputs were 10^20 cts (see Attachment #1).

Digging a little deeper, I found that the same problem existed on ITMY, ETMX, ETMY, PRM (but not BS or SRM) - reasons unknown for now.

I have to check where this signal is coming from, but for now I just turned the "ASC Input" switch off. More investigation to be done, but in the meantime, ASS dither alignment may not be possible.

After consulting with Jamie, I have just disabled all outputs to the suspensions other than local damping loop outputs. I need to figure out how to get this configuration into the safe.snap file such that until we are sure of what is going on, the models start up in this safer configuration.

GV edit 28 Oct 0026: Seems like this problem is seen at the sites as well. I wonder if the problem is related.
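For future rounds of this debugging, a minimal sketch of scanning the suspension ASC outputs for NaNs with pyepics; the channel-name pattern is a guess at the naming convention and should be checked (e.g. with z read) before trusting it:

import math
from epics import caget

optics = ['ITMX', 'ITMY', 'ETMX', 'ETMY', 'PRM', 'SRM', 'BS']
for opt in optics:
    for dof in ['ASCPIT', 'ASCYAW']:
        chan = f'C1:SUS-{opt}_{dof}_OUTMON'    # hypothetical channel name
        val = caget(chan)
        if val is None or math.isnan(val):
            print(f'{chan}: bad value ({val})')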

Attachment 1: ITMX_ASC.png
  13222   Wed Aug 16 20:24:23 2017   gautam   Update   ALS   Fiber ALS

Today, with Johannes' help, I cleaned the fiber tips of the photodiodes. The effect of the cleaning was dramatic - see Attachments #1-4, which are X Beat PD, axial illumination, X Beat PD, oblique illumination, Y beat PD, axial illumination, Y beat PD, oblique illumination. They look much cleaner now, and the feature that looked like a scratch has vanished.

The cleaning procedure followed was:

  • Blow clean air over the fiber tip
  • First, we tried cleaning with the Q-tip like tool, but the results weren't great. The way to use it is to dip the tip in the cleaning solvent for a few seconds, hold the tip to the fiber taking into account the angled cut, and apply 10 gentle quarter turns.
  • Next, we tried cleaning with the wipes. We peeled out an approximately 5" section of the wipe, and laid it out on the table. We then applied cleaning solvent liberally on the central area where we were sure we hadn't touched the wipe. Then you just drag the fiber tip along the soaked part of the wipe. If you get the angle exactly right, the fiber glides smoothly along the surface, but if you are a little misaligned, you get a scratchy sensation. 
  • Blow dry and inspect.

I will repeat this procedure for all fiber connections once I start putting the box back together - I'm almost done with the new box, just waiting on some hardware to arrive.

 

Quote:

Today, I borrowed the fiber microscope from Johannes and took a look at the fibers coupled to the PDs. The PD labelled "BEAT PD AUX Y" has an end that seems scratched (Attachments #1 and #2). The scratch seems to be on (or at least very close to) the core. The other PD (Attachments #3 and #4) doesn't look very clean either, but at least the area near the core seems undamaged. The two attachments for each PD corresponds to the two available lighting settings on the fiber microscope.

I have not attempted to clean them yet, though I have also borrowed the cleaning supplies to facilitate this from Johannes. I also plan to inspect the ends of all other fiber connections before re-installing them.

 

Attachment 1: IMG_7476.JPG
IMG_7476.JPG
Attachment 2: IMG_7477.JPG
IMG_7477.JPG
Attachment 3: IMG_7478.JPG
IMG_7478.JPG
Attachment 4: IMG_7479.JPG
IMG_7479.JPG
  13225   Thu Aug 17 11:17:49 2017 gautamUpdateSUSMC1 <--> MC3 switched back

Seems like this modification didn't really work. There were several large MC1 glitches, and one of them misaligned MC1 so much that the IMC didn't relock for the last ~6 hours. I re-aligned MC1 manually, and now it is locked fine.

Quote:

Now that all the CDS overview lights are green, I decided to switch back the coil driver outputs to their original state so that the MC optics could be damped and the IMC relocked. I also restored the static PIT/YAW bias values to their original values.

MC1 has been quiet over the last couple of days, let's see how it behaves in the next few days. In all the glitches I have observed, if the IMC is locked and WFS loops are enabled, the loops are able to correct for the DC misalignment caused by the glitch. But the mcwfs off script is currently set up in such a way that the output history is cleared between IMC locks. I made two copies of the mcwfson/mcwfsoff scripts, called mcwfsunhold/mcwfshold respectively. They live in /opt/rtcds/caltech/c1/scripts/MC/WFS. I've also modified the autolocker script to call these modified scripts, such that when the IMC loses lock, the WFS servo outputs are held, while the input is turned off. The hope is that in this configuration, the autolocker can catch a lock even if there is a glitch on MC1.

I haven't tried locking the arms yet, but I think other IFO work discussed at the meeting (like arm loss estimation / cavity scans etc) can proceed.

 

 

Attachment 1: MC1_misaligned.png
MC1_misaligned.png
Attachment 2: MC1_glitch.png
MC1_glitch.png
  13226   Thu Aug 17 17:33:01 2017 gautamUpdateSUSMC1 <--> MC3 switched back

That's why the Autolocker clears the outputs; we don't want to be holding the offsets from the last few ms of lock, when things were already messed up. Instead, it would be best to have a slow (~mHz) relief script that takes the WFS control signals and offloads them onto the MC SUS sliders. This would then re-align the MC to the input beam rather than the input to the MC, which is not the best idea.
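For illustration, here is a minimal sketch of what such a slow relief loop could look like, written in Python against pyepics. The channel names, gain, and calibration below are placeholders (not the real 40m channels), and the sign convention would have to be measured before anything like this is used:

# Hypothetical slow (~mHz) WFS relief loop: bleed the accumulated WFS control
# signal onto the MC suspension alignment sliders so the WFS outputs relax
# toward zero. All channel names and numbers are placeholders.
import time
import epics  # pyepics

PAIRS = [
    # (WFS control output to relieve, SUS alignment slider that absorbs it)
    ("C1:IOO-WFS_MC2_PIT_OUT", "C1:SUS-MC2_PIT_COMM"),
    ("C1:IOO-WFS_MC2_YAW_OUT", "C1:SUS-MC2_YAW_COMM"),
]
STEP_FRACTION = 0.001   # fraction of the WFS output offloaded per step (sets ~mHz bandwidth)
CAL = 1.0e-3            # WFS counts -> slider units; must be measured, placeholder here
PERIOD = 10.0           # seconds between steps

while True:
    for wfs_chan, sus_chan in PAIRS:
        wfs_out = epics.caget(wfs_chan)
        slider = epics.caget(sus_chan)
        # move a small fraction of the WFS correction onto the slider; the WFS
        # loop then re-converges with a smaller output
        epics.caput(sus_chan, slider + STEP_FRACTION * CAL * wfs_out)
    time.sleep(PERIOD)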

Quote:

Seems like this modification didn't really work.

 

  13228   Fri Aug 18 21:58:35 2017 gautamUpdateGeneralSUS model ASC input weirdness

I spent some time today trying to debug this issue.

Jamie and I had opened up the c1sus frontend to try and replace the RFM card before we realized that the problem was in the RCG code generator. During this process, we had disconnected all of the back-panel cabling to this machine (2 ethernet cables, dolphin cable, and RFM cables/fibers). I thought I may have accidentally returned the cables to the wrong positions - but all the status indicator lights indicate that everything is working as it should, and I also confirmed that the cabling is as it is in the pictures of the rack on the wiki page.

Looking at the Simulink model diagram (see Attachment #1 for an example), it looks like (at least some of) these channels are actually on the dolphin network, and not the RFM network (with which we were experiencing problems). This suggests that the problem is something deeper. That said, I did see nans in some of the ETMX ASC channels as well, and those are piped over the RFM network. Even more puzzling, the ASC MEDM screen (Attachment #3) and the Simulink diagram (Attachment #2) suggest that there is an output matrix between the input signals and the angular control signals sent to the suspensions. As Attachment #4 shows, the rows corresponding to ITMX PIT and YAW are zero (I confirmed using z read <matrixElement>). Attachment #3 shows that the outputs of all the servo banks except CARM_YAW are zero, but CARM_YAW has no matrix element going to the ITMs (also confirmed with z read <servoOutputChannel>). So 0 x 0 should be 0, but for some reason the model doesn't give this output?

GV Edit: As EricQ just pointed out to me, nan x 0 is still nan, which probably explains the whole issue. Poking a little further, it seems like this is an SDF issue - the SDF table isn't able to catch differences for this hold output channel.
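For reference, IEEE-754 nan propagates through multiplication by zero, so a zeroed output matrix element does not protect anything downstream of a nan - a quick check in Python:

nan = float("nan")
print(nan * 0.0)         # nan: multiplying by zero does not clear it
print(nan * 0.0 == 0.0)  # False: any comparison involving nan is False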


As I was writing this elog, I noticed that, as mentioned above, the CARM_YAW output was "nan". When I restart the model (thankfully this didn't crash c1lsc!), it seems to default to this state. Opening up the filter module, I saw that the "hold output" was enabled.

Toggling that switch made the nans in all the SUS ASC channels disappear. Mysterious.

All the points above stand - CARM_YAW output shouldn't have been going anywhere as per the output matrix, but it seems to have been responsible? Seems like a bug in any case if a model restarts with a field as "nan".

Anyways the problem seems to have been resolved so I'm going to try locking and dither aligning the arms now.

Rolf mentioned that a simple update could fix several of the CDS issues we are facing (e.g. inability to open up testpoints), but he didn't seem to have any insight into this particular issue. Jamie will try and recompile all the models and then we have to see if that fixes the remaining problems.

Quote:
 

I have to check where this signal is coming from, but for now I just turned the "ASC Input" switch off. More investigation to be done, but in the meantime, ASS dither alignment may not be possible.

After consulting with Jamie, I have just disabled all outputs to the suspensions other than local damping loop outputs. I need to figure out how to get this configuration into the safe.snap file such that until we are sure of what is going on, the models start up in this safer configuration.

 

Attachment 1: ITMXP.png
ITMXP.png
Attachment 2: ASC_model_outmatrix.png
ASC_model_outmatrix.png
Attachment 3: ASC_medm.png
ASC_medm.png
Attachment 4: ASC_outMat.png
ASC_outMat.png
  13229   Fri Aug 18 23:59:53 2017 gautamUpdateALSX Arm ALS lock

[ericq, gautam]

  • I was just getting the IFO aligned, and single arm lock going, when EricQ came in and asked if we could get some ALS data.
  • ALS beats seemed fine, in particular the X-Arm. The broad hump around ~70Hz that was present in my previous ALS update was nowhere to be seen - reasons unknown.
  • Copied over /opt/rtcds/caltech/c1/scripts/YARM/Lock_ALS_YARM.py to /opt/rtcds/caltech/c1/scripts/XARM/Lock_ALS_XARM.py. Could be useful when we want to do arm cavity scans.
  • Made appropriate changes to allow ALS locking of the X arm - the testpoint inaccessibility makes things a little annoying, but for tonight we just used DQ channels in their place (or slow channels when DQ channels were not available)
  • Calibration of the X arm error signal seemed off - so we fixed it by driving a line on ETMX and matching up the peak heights in the ALS error signal and POX11. We then updated the gain of the filter in the CINV filter bank accordingly (see the sketch after this list).
  • Got some decent data - the X arm stayed locked on ALS for >60 mins, during which time the Y arm stayed locked on POY11, and the Y green also remained locked. There was no evidence of the X arm 00 mode randomly dropping out of lock tonight.
  • EQ will update with a sick comparison plot - today we looked at the ALS noise from the perspective of the Green Locking paper (Izumi et al.).
  • Y arm ALS noise didn't look so hot tonight - to be investigated...
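As referenced above, here is roughly the arithmetic behind the calibration fix, as a hedged Python sketch (the function names are mine, and the time series would come from downloaded DQ data):

import numpy as np

def line_amplitude(x, fs, f_line, bw=0.5):
    # Estimate the amplitude of a single driven line at f_line (Hz) in the
    # time series x sampled at fs (Hz), by taking the largest FFT bin in a
    # narrow band around the drive frequency.
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    spec = 2.0 * np.abs(np.fft.rfft(x)) / len(x)
    band = (freqs > f_line - bw) & (freqs < f_line + bw)
    return spec[band].max()

def cinv_gain_correction(pox, als, fs, f_drive):
    # Factor by which to scale the existing ALS (CINV) gain so that the driven
    # ETMX line has the same height in the ALS error signal as in the
    # (already calibrated) POX11 signal.
    return line_amplitude(pox, fs, f_drive) / line_amplitude(als, fs, f_drive)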

Leaving LSC mode OFF for now while CDS is still under investigation


Not really related to this work: We saw that the safe.snap file for c1oaf seems to have gotten overwritten at some point. I restored the EPICS values from a known good time, and over-wrote the safe.snap file.

  13233   Mon Aug 21 14:53:32 2017 gautamUpdateVACRGA reset

[gautam, steve]

In the aftermath of the accidental vent, it looks like the RGA was shutdown.

We followed the instructions in this elog to restart the RGA.

Seems to be working now, Steve says we just need to wait for it to warm up before we can collect a reliable scan.

Quote:

We have good RGA scan now. There was no scan for 3 months.

 

  13234   Mon Aug 21 16:35:48 2017 gautamUpdateVACUPS checkup

[steve, gautam]

At Rolf/Rich Abbott's request, we performed a check of the UPS today.

Steve believed that the UPS was functioning as it should, and the recent accidental vent was because the UPS batteries were insufficiently charged when the test was performed. Today, we decided to try testing the UPS.

We first closed V1, VM1 and VA6 using the MEDM screen. We prepared to pull power on all these valves by loosening the power connections (but not detaching them). [During this process, I lost the screw holding the power cord fixed to the gate valve V1 - we are looking for a replacement right now but it seems to be an odd size. It is cable tied for now.]

The battery charge indicator LEDs on the UPS indicated that the batteries were fully charged.

Next, we hit the "Test" button on the UPS - it has to be held down for ~3 seconds for the test to actually be initiated, which seems to be a safety feature of the UPS. Once the test is underway, the LED indicators on the UPS show that the load is on the UPS batteries. The test itself lasts ~5 seconds, after which the UPS automatically reverts to the nominal configuration of supplying power from the main line (no additional user input is required).

In this test, one of the five battery charge indicator LEDs went off (5 ON LEDs indicate full charge).

So on the basis of this test, it would seem that the UPS is functioning as expected. It remains to be investigated if the various hardware/software interlocks in place will initiate the right sequence of valve closures when required.


Quote:
 

Never hit O on the Vacuum UPS !

Note: the " all off " configuration should be all valves closed ! This should be fixed now.

In case of emergency, you can close V1 by disconnecting its actuating power as shown in Atm3, provided you have pneumatic pressure of 60 PSI 

 

  13236   Mon Aug 21 21:26:41 2017 gautamSummaryGeneralLoss measurements plan

In case you want to use it, I had profiled the Lightwave NPRO sometime back, and we were even using it as the AUX X laser for a short period of time. 

As for using the AS laser for mode spectroscopy: don't we want to match the beam into the cavity as best as possible, and then use some technique to disturb the input mode (like the dental tooth scraper technique from Chris Mueller's thesis)? 

Johannes and I did an arm scan of the X arm today (arm controlled with ALS, monitoring IR transmission) - only 2 IR FSRs were scanned, but there should be sufficient information in there to extract the modulation depth and mode matching - can we use Kaustubh's/Naomi's code? The Y arm ALS needs to be touched up, so I don't have a Y arm scan yet. Note that to get a good arm scan measurement, the High Gain Thorlabs PD should be used as the transmission PD.
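For reference, one way to extract the modulation depth from such a scan (not necessarily what Kaustubh's/Naomi's code does): read off the carrier and first-order sideband peak heights in the IR transmission, and numerically invert P_sb/P_c = (J1(m)/J0(m))^2. The mode matching can be estimated similarly from the ratio of the TEM00 peak to the sum of all carrier peaks within one FSR. A sketch:

from scipy.special import jv
from scipy.optimize import brentq

def modulation_depth(p_sb_over_pc):
    # Invert P_sb/P_c = (J1(m)/J0(m))**2 for the modulation depth m, where
    # p_sb_over_pc is the ratio of one first-order sideband transmission peak
    # to the carrier transmission peak (small-m branch).
    def resid(m):
        return (jv(1, m) / jv(0, m)) ** 2 - p_sb_over_pc
    return brentq(resid, 1e-6, 2.0)

# example: sideband peaks at ~1% of the carrier peak imply m ~ 0.2
print(modulation_depth(0.01))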

Quote:
 

Week Aug 21 - Aug 27:

  • Update mode-overlap estimates
  • Obtain current DC refl estimates
  • Spatial profile of auxiliary NPRO
  • Fiber setup concept; purchasing
  • CCD software prep work

 

  13237   Mon Aug 21 23:38:55 2017 gautamUpdateALSALS out-of-loop noise

I worked a little bit on the Y arm ALS today. 

  • Started by locking the Y arm to IR with POY, and then ran the dither alignment script to maximize Y arm transmission.
  • Green TRY DC monitor was around 0.16, whereas I have seen ~0.45 when we were doing DRFPMI locking.
  • So I went to the Y end table and tweaked the steering mirrors a little. I was able to get GTRY to ~0.42. I think this can be tweaked a little further but I decided to push on for tonight.
  • The beat amplitude on the network analyzer in the control room is comparable to the X arm beat now.
  • Adjusted the gain of the phase tracker servos, cleared phase history.
  • Looking at the ALS beat noise with the arms locked to IR and the slow ALS temperature control loops ON (see Attachment #1), the current measurements line up quite well with the reference traces.

I am now going to measure the OLTFs of both green PDH loops to check that the overall loop gain is okay, and also check the measurement against EricQ's LISO model of the (modified) AUX green PDH servos. Results to follow.


Some weeks ago, I had moved some of the Green steering optics on the PSL table around, in order to flip some mirror mounts and try and get angles of incidence closer to ~45deg on some of the steering mirrors. As a result of this work, I can see some light on the GTRY CCD when the X green shutter is open. It is unclear if there is also some scattered light on the RFPDs. I will post pictures + a more detailed investigation of the situation on the PSL table later, there are multiple stray green beams on the PSL table which should probably be dumped.


As I was writing this elog, I saw the X green lock drop abruptly. During this time, the X arm stayed locked to the IR, and the Y arm beat on the control room network analyzer did not jump (at least not by an amount visible to the eye). After toggling the X end shutter a few times, the green TEM00 lock was re-acquired, but the beatnote had moved on the control room analyzer by ~40MHz. On Friday evening, however, the X green lock held for >1 hour. Need to keep an eye on this.

Attachment 1: ALS_21082017.pdf
ALS_21082017.pdf
  13238   Tue Aug 22 02:19:11 2017 gautamUpdateALSALS OLTFs

Attachment #1 shows the results of my measurements tonight (SR785 data in Attachment #2). Both loops have a UGF of ~10kHz, with ~55 degrees of phase margin.

Excitation was injected via SR560 at the PDH error point, amplitude was 35mV. According to the LED indicators on these boxes, the low frequency boost stages were ON. Gain knob of the X end PDH box was at 6.5, that of the Y end PDH box was at 4.9. I need to check the schematics to interpret these numbers. GV Edit: According to this elog, these numbers mean that the overall gain of the X end PDH box is approx. 25dB, while that of the Y end PDH box is approx. 15dB. I believe the Y end Lightwave NPRO has an actuator discriminant ~5MHz/V, while the X end Innolight is more like 1MHz/V.
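As a rough sanity check (my numbers, not a measurement): folding the quoted box gains into the quoted actuator discriminants gives effective actuation strengths within a factor of ~2 of each other, which is at least consistent with the two loops showing similar UGFs if the optical gains and cavity poles are comparable. Treating the dB figures as simple voltage gains:

def db_to_lin(db):
    return 10.0 ** (db / 20.0)

# X end: ~ +25 dB box gain into a ~1 MHz/V NPRO PZT
# Y end: ~ +15 dB box gain into a ~5 MHz/V NPRO PZT
x_eff = db_to_lin(25) * 1.0   # ~17.8 MHz per volt of error signal
y_eff = db_to_lin(15) * 5.0   # ~28.1 MHz per volt of error signal
print(x_eff, y_eff, y_eff / x_eff)   # ratio ~1.6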

Not sure what to make of the X PDH loop measurement being so much noisier than the Y end's; I need to think about this.

More detailed analysis to follow.

Quote:

 

I am now going to measure the OLTFs of both green PDH loops to check that the overall loop gain is okay, and also check the measurement against EricQ's LISO model of the (modified) AUX green PDH servos. Results to follow.

 

Attachment 1: ALS_OLTFs.pdf
ALS_OLTFs.pdf
Attachment 2: ALS_OLTF_Aug2017.zip
  13240   Tue Aug 22 15:40:06 2017 gautamUpdateComputersOld frames accessible again

[jamie, gautam]

I had some trouble getting the daqd processes up and running again using Jamie's instructions.

With Jamie's help however, they are back up and running now. The problem was that the mx infrastructure didn't come back up on its own. So prior to running sudo systemctl restart daqd_*, Jamie ran sudo systemctl start mx. This seems to have done the trick.

c1iscey was still showing red fields on the CDS overview screen so Jamie did a soft reboot. The machine came back up cleanly, so I restarted all the models. But the indicator lights were still red. Apparently the mx processes weren't running on c1iscey. The way to fix this is to run sudo systemctl start mx_stream. Now everything is green.

Now we are going to work on trying the fix Rolf suggested on c1iscex.

Quote:

It turns out the problem was just a bent pin on the SCSI cable, likely from having to stretch things a bit to reach optimus from the RAID unit.

I hooked it up to megatron, and it was automatically recognized and mounted.

I had to turn off the new FB machine and remove it from the rack to be able to access megatron though, since it was just sitting on top. FB needs a rail to sit on!

At a cursory glance, the filesystem appears intact. I have copied over the archived DRFPMI frame files to my user directory for now, and Gautam is going to look into getting those permanently stored on the LDAS copy of 40m frames, so that we can have some redundancy.

Also, during this time, one of the HDDs in the RAID unit failed its SMART tests, so the RAID unit wanted it replaced. There were some spare drives in a little box directly under the unit, so I've installed one and am currently incorporating it back into the RAID.

There are two more backup drives in the box. We're running a RAID 5 configuration, so we can only lose one drive at a time before data is lost.

 

  13242   Tue Aug 22 17:11:15 2017 gautamUpdateComputersc1iscex model restarts

[jamie, gautam]

We tried to implement the fix that Rolf suggested in order to solve (perhaps among other things) the inability of some utilities like dataviewer to open testpoints. The problem isn't wholly solved yet - we can now access actual testpoint data (not just zeros, as before) using DTT, and dataviewer works if DTT is used to open the testpoint first, but DV on its own still can't open testpoints.

Here is what was done (Jamie will correct me if I am mistaken).

  1. Jamie checked out branch 3.4 of the RCG from the SVN.
  2. Jamie recompiled all the models on c1iscex against this version of RCG.
  3. I shutdown ETMX watchdog, then ran rtcds stop all on c1iscex to stop all the models, and then restarted them using rtcds start <model> in the order c1x01, c1scx and c1asx. 
  4. Models came back up cleanly. I then restarted the daqd_dc process on FB1. At this point all indicators on the CDS overview screen were green.
  5. Tried getting testpoint data with DTT and DV for ETMX Oplev Pitch and Yaw IN1 testpoints. Conclusion as above.

So while we are in a better state now, the problem isn't fully solved. 

Comment: seems like there is an in-built timeout for testpoints opened with DTT - if the measurement is inactive for some time (unsure how much exactly but something like 5mins), the testpoint is automatically closed.

  13243   Tue Aug 22 18:36:46 2017 gautamUpdateComputersAll FE models compiled against RCG3.4

After getting the go ahead from Jamie, I recompiled all the FE models against the same version of RCG that we tested on the c1iscex models.

To do so:

  • I did rtcds make and rtcds install for all the models.
  • Then I ssh-ed into the FEs and did rtcds stop all, followed by rtcds start <model> in the order they are listed on the CDS overview MEDM screen (top to bottom).
  • During the compilation process (i.e. rtcds make), for some of the models, I got some compilation warnings. I believe these are related to models that have custom C code blocks in them. Jamie tells me that it is okay to ignore these warnings and that they will be fixed at some point.
  • c1lsc FE crashed when I ran rtcds stop all - had to go and do a manual reboot.
  • Doing so took down the models on c1sus and c1ioo that were running - but these FEs themselves did not have to be rebooted.
  • Once c1lsc came back up, I restarted all the models on the vertex FEs. They all came back online fine.
  • Then I ssh-ed into FB1, and restarted the daqd processes - but c1lsc and c1ioo CDS indicators were still red.
  • Looks like the mx_stream processes weren't started automatically on these two machines. Reasons unknown. Earlier today, the same was observed for c1iscey.
  • I manually restarted the mx_stream processes, at which point all CDS indicator lights became green (see Attachment #1).

IFO alignment needs to be redone, but at least we now have an (admittedly roundabout) way of getting testpoints. Did a quick check for "nan-s" on the ASC screen, saw none. So I am re-enabling watchdogs for all optics.

GV 23 August 9am: Last night, I re-aligned the TMs for single arm locks. Before the model restarts, I had saved the good alignment on the EPICS sliders, but the x3 gains on the coil driver filter banks have to be manually turned on at the moment (i.e. the safe.snap file has them off). ALS noise looked good for both arms, so just for fun, I tried transitioning control of both arms to ALS (in the CARM/DARM basis, as we do when we lock DRFPMI, using the Transition_IR_ALS.py script), and was successful.
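For context, the basic idea of that handoff (a simplified sketch, not the actual contents of Transition_IR_ALS.py - see the script itself for the real logic and channel names) is to cross-fade the LSC input-matrix elements from the IR error signals to the ALS signals in small steps while the loops stay closed:

import time
import epics  # pyepics

def cross_fade(ir_elem, als_elem, als_target, n_steps=50, dwell=0.2):
    # Linearly ramp one input-matrix element down while ramping the other up.
    # ir_elem / als_elem are EPICS channel names of matrix elements, and
    # als_target is the final ALS element value (pre-scaled to match the IR
    # error signal calibration). All of these are placeholders.
    ir0 = epics.caget(ir_elem)
    for i in range(1, n_steps + 1):
        frac = i / float(n_steps)
        epics.caput(ir_elem, (1.0 - frac) * ir0)
        epics.caput(als_elem, frac * als_target)
        time.sleep(dwell)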

Quote:

[jamie, gautam]

We tried to implement the fix that Rolf suggested in order to solve (perhaps among other things) the inability of some utilities like dataviewer to open testpoints. The problem isn't wholly solved yet - we can access actual testpoint data (not just zeros, as was the case) using DTT, and if DTT is used to open a testpoint first, then dataviewer, but DV itself can't seem to open testpoints.

Here is what was done (Jamie will correct me if I am mistaken).

  1. Jamie checked out branch 3.4 of the RCG from the SVN.
  2. Jamie recompiled all the models on c1iscex against this version of RCG.
  3. I shutdown ETMX watchdog, then ran rtcds stop all on c1iscex to stop all the models, and then restarted them using rtcds start <model> in the order c1x01, c1scx and c1asx. 
  4. Models came back up cleanly. I then restarted the daqd_dc process on FB1. At this point all indicators on the CDS overview screen were green.
  5. Tried getting testpoint data with DTT and DV for ETMX Oplev Pitch and Yaw IN1 testpoints. Conclusion as above.

So while we are in a better state now, the problem isn't fully solved. 

Comment: seems like there is an in-built timeout for testpoints opened with DTT - if the measurement is inactive for some time (unsure how much exactly but something like 5mins), the testpoint is automatically closed.

 

Attachment 1: CDS_Aug22.png
CDS_Aug22.png
  13246   Wed Aug 23 17:22:36 2017 gautamUpdateALSFiber ALS - reinstalled

I completed the revamp of the box, and re-installed the box on the PSL table today. I think it would be ideal to install this on one of the electronic racks, perhaps 1X2 would be best. We would have to re-route the fibers from the PSL table to 1X2, but I think they have sufficient length, and this way, the whole arrangement is much cleaner.

Did a quick check to make sure I could see beat notes for both arms. I will now attempt to measure the ALS noise with this revamped box, to see if the improved power supply and grounding arrangement, as well as fiber cleaning, has had any effect.

Photos + power budget + plan of action for using this box to characterize the green PDH locking to follow. 

For quick reference: here is the AM/PM measurement done when we re-installed the repaired Innolight NPRO on the new X endtable.

  13248   Thu Aug 24 00:39:47 2017 gautamUpdateLSCDRMI locking attempt

Since the single arm locking and dither alignment seemed to work alright after the CDS overhaul, I decided to try some recycling cavity locking tonight.

  • First, I locked single arms, ran dither alignment servos, and centered all test mass Oplevs. Note: the X arm dither alignment doesn't seem to work if we use the High-Gain Thorlabs PD as the Transmission PD. The BS loops just seem to pick up large offsets and the alignment actually degrades over a couple of minutes. This needs to be investigated.
  • Next, to get good PRM alignment, I manually moved the EPICS sliders till the REFL spot became roughly centered on the CCD screen.
  • Then I tried locking PRMI on carrier using the usual C1IFOConfigure script - the lock was caught within ~30 seconds.
  • The PRCL and MICH dither servo scripts also ran fine.
    • Centered PRM Oplev.
  • Next, I tried enabling the PRC angular feedforward.
    • OAF model does not automatically revert to its safe.snap configuration on model reboot, so I first manually did this such that the correct filter banks were enabled.
    • I was able to turn on the angular feedforward without disturbing the PRMI carrier lock. The angular motion of the POP spot on the CCD monitor was visibly reduced.
  • At this point I decided to try DRMI locking.
    • I centered the beam on the AS PDs with the simple Michelson.
    • Centered the beam on the REFL PDs with PRM aligned and PRC flashing through resonances.
    • Restored SRM alignment by eye with EPICS sliders.
    • Cavity alignment seemed alright - so I tried to lock DRMI with the old settings (i.e. from DRMI 1f locking a couple of months ago). But I had no success.
    • The behaviour of REFL55 (used for SRCL control) has changed dramatically - the analog whitening gain for this PD used to be +18dB, but at this setting, there are frequent ADC overflows. I had to reduce the whitening gain to +6dB to stop the ADC overflows. I also checked to make sure that the whitening setting was "manual" and not triggered.

Why should this have changed? I was just on the AS table and did re-center the beam onto the REFL 55 RFPD, but I had also done this in April/May when I was last doing DRMI locking. But I can't explain the apparent factor of ~4 increase in light level. I think I have some measurements of the light levels at various PDs from April 2017; I will see how the present levels line up.
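For what it's worth, the numbers are at least self-consistent: the whitening gain had to come down from +18dB to +6dB, i.e. by 12dB, which is a factor of ~4 in amplitude - the same factor by which the signal appears to have grown:

print(10.0 ** ((18 - 6) / 20.0))   # ~3.98, i.e. a 12 dB change is a factor of ~4 in voltage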

Of course dataviewer won't cooperate when I am trying to monitor testpoints.

I may be missing something obvious, but I am quitting for tonight, will look into this more tomorrow.


Unrelated to this work: looking at the GTRY spot on the CCD monitor, there seems to be some excess angular motion. Not sure where this is coming from. In the past, this sort of problem has been symptomatic of something going wonky with the Oplev loops. But I took loop measurements for ITMY and ETMY PIT and YAW, and they look normal. I will investigate further when I am doing some more ALS work.

  13249   Thu Aug 24 17:36:11 2017 gautamUpdateCDSFSS Slow Python maintenance

A couple of weeks ago, I was trying to modernize the python version of the FSS Slow temperature control loops, when I accidentally ended up deleting it. There was no svn backup. So the old Perl PID script has been running for the last few days.

Today, I checked out the latest version that Andrew and co. have running in the PSL lab. I had to make some important modifications for the script to work for the 40m setup.

  1. The script is conveniently setup in a way that the channels it needs to read from / write to are read in from an .ini file. I renamed all the channels to match the appropriate 40m ones.
  2. We don't have a soft EPICS channel in which to define the setpoint for our PID servo (which is 0). Rather than poke around with slow machine EPICS records, I simply commented out this line in the script and included the hard-coded value of 0. When we modernize to the Acromag era, we can set up an EPICS channel + MEDM slider for the setpoint.
  3. The way the Perl script was set up, the error signal was pre-scaled by a factor of 0.01, supposedly to make the PID gains be of order 1. For consistency, I re-inserted this scaling, which awade and co. had removed. (Both of these choices are reflected in the sketch after this list.)
  4. Modified the FSSslowPy.init file to call the script in accordance with the new syntax:
python FSSSlow.py -i FSSSlowPy.ini
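As referenced above, here is a minimal sketch of the structure of the script - not the actual FSSSlow.py, just an illustration of the .ini-driven channel setup, the hard-coded zero setpoint, and the 0.01 error pre-scaling. The section/option names, gains, and sign convention are placeholders:

import time
import configparser
import epics  # pyepics

cfg = configparser.ConfigParser()
cfg.read("FSSSlowPy.ini")                  # channel names live in the .ini
err_chan = cfg["channels"]["error"]        # placeholder: FSS fast feedback monitor
ctrl_chan = cfg["channels"]["control"]     # placeholder: laser crystal temperature offset

SETPOINT = 0.0     # hard-coded for now; no soft EPICS channel yet
PRESCALE = 0.01    # keeps the PID gains of order 1
KP, KI, KD = 0.3, 0.01, 0.0   # placeholder gains
DT = 1.0           # loop period in seconds

integral, last_err = 0.0, 0.0
while True:
    err = PRESCALE * (epics.caget(err_chan) - SETPOINT)
    integral += err * DT
    deriv = (err - last_err) / DT
    last_err = err
    correction = KP * err + KI * integral + KD * deriv
    # sign convention is a placeholder and must match the real actuator
    epics.caput(ctrl_chan, epics.caget(ctrl_chan) - correction)
    time.sleep(DT)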

Then I stopped the Perl process on megatron by running

sudo initctl stop FSSslow

and started the Python process by running

sudo initctl start FSSslowPy

I have now committed the files FSSSlow.py and FSSSlowPy.ini to the 40m svn.  Things seem to be stable for the last 20 mins or so, let's keep an eye on this though - although we had been running the Python PID loop for some months, this version is a slightly modified one. 

The initctl stuff still isn't very robust - I think both the Autolocker and the FSS slow servos have to be manually restarted if megatron is shutdown/restarted for whatever reason. It doesn't seem to be a problem with the initctl routine itself - looking at the logs, I can see that init is trying to start both processes, but is failing to do so each time. To be investigated. The wiki procedure to restart this process is up to date.

GV Edit 0000 25 Aug 2017: I had to add a line to the script that checks MC transmission before enabling the PID loop. Change has been committed to svn. Now, when the MC loses lock or if the PSL shutter is kept closed for an extended period of time, the temperature loop doesn't rail.
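Conceptually, the added check is just a guard around the loop update, something like the following (the channel name and threshold are placeholders):

import epics  # pyepics

MC_TRANS_CHAN = "C1:IOO-MC_TRANS_SUM"   # placeholder channel name
MC_LOCK_THRESHOLD = 5000.0              # placeholder threshold, in counts

def mc_locked():
    return epics.caget(MC_TRANS_CHAN) > MC_LOCK_THRESHOLD

# inside the servo loop: skip the update (and hold the integrator) whenever
# mc_locked() is False, so the temperature loop can't rail while the MC is
# unlocked or the PSL shutter is closed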

  13252   Fri Aug 25 01:20:52 2017 gautamUpdateLSCDRMI locking attempt

I tried some DRMI locking again tonight, but had no success. Here is the story.

  • I started out by going to the AS table and measuring the light level on the REFL55 photodiode (with PRM aligned and the PRC flashing, but LSC disabled).
    • The Ophir power meter reads 13mW
    • The DC output of the photodiode shows ~500mV on an oscilloscope.
    • Both of these numbers line up well with measurements I made in April/May.
  • Returned to the control room and aligned the IFO for DRMI locking - but LSC servos remained disabled.
    • At the nominal REFL55 whitening level of +18dB, the REFL 55 signals saturated the ADC (confirmed by looking at the traces on dataviewer).
    • But the signals still looked like PDH error signals.
    • Lowering the whitening gain to 6dB makes the PDH error signal horns peak around 20,000 counts.
    • Could this be indicative of problems with either the analog whitening gain switching or the LSC Demod Boards? To be investigated.
  • Tried enabling LSC servos with same settings with which I had success right up till a couple of months ago, but had no success.
    • If it is true that the REFL55 signal is getting amplified because of some gain stage not being switched correctly, I should still have been able to lock the SRC with a lowered loop gain - but even lowering the gain by a factor of 10 had no effect on the locking success rate.

Looks like I will have to embark on the REFL55 LSC electronics investigation. I was able to successfully lock the PRC on carrier and sideband, and the Michelson lock also seems to work fine, all of which seem to point to a hardware problem with the REFL55 signal chain.

I did a quick check by switching the output of the REFL55 demod board to the inputs normally used by AS55 signals on the whitening board. Setting the whitening gain to +18dB for these channels had the same effect - ADC overflow galore. So looks like the whitening board isn't to blame. I will have to check the demod board out.

 
