ID | Date | Author | Type | Category | Subject
  13430 | Thu Nov 16 00:45:39 2017 | gautam | Update | SUS | SOS Sapphire Prism design

 

Quote:
 
  • If I could get pictures of the lower mirror clamp (document D960008), it would be helpful in making the SolidWorks model. The document is unclear. Same for the sensor/actuator head assembly.

If you go through this thread of elogs, there are lots of pictures of the SOS assembly with the optic in it from the vent last year. I think there are many different perspectives, close ups of the standoffs, and of the OSEMs in their holders in that thread.

This elog has a measurement of the pendulum resonance frequencies with ruby standoffs - although the ruby standoff used was cylindrical, and the newer generation will be in the shape of a prism. There is also a link in there to a document that tells you how to calculate the suspension resonance frequencies using analytic equations.

  13431 | Thu Nov 16 00:53:26 2017 | gautam | Update | LSC | DRMI noise sub-budgets

I've incorporated the functionality to generate sub-budgets for the various grouped traces in the NBs (e.g. the "A2L" trace is really the quadrature sum of the A2L coupling from 6 different angular servos).

For now, I'm only doing this for the A2L coupling, and the AUX length loop coupling groups. But I've set up the machinery in such a way that doing so for more groups is easy.

Here are the sub-budget plots for last night's lock - for the OL plot, there are only 3 lines (instead of 6) because I group the PIT and YAW contributions in the function that pulls the data from the nds server, and don't ever store these data series individually. This should be rectified, because part of the point of making these sub-budgets is to see if there is a particularly bad offender in a given group.

I'll do a quick OL loop noise budget for the ITM loops tomorrow.

I also wonder whether it is necessary to measure the Oplev A2L coupling from lock to lock. This coupling depends on the spot position on the optic, and though I run the dither alignment servos to minimize REFL_DC and AS_DC, I don't have any intuition for how the offset from the center of the optic varies from lock to lock, and whether it is at all significant. I've been using a number from a measurement made in May. Need to do some algebra...

Attachment 1: C1NB_a2l_40m_MICH_NB_2017-11-15.pdf
Attachment 2: C1NB_aux_40m_MICH_NB_2017-11-15.pdf
  13432 | Thu Nov 16 13:57:01 2017 | gautam | Update | Optical Levers | Optical lever noise

I disabled the OL loops for ITMX, ITMY and BS at GPStime 1194897655 to come up with an Oplev noise budget. OL spots were reasonably well centered - by that, I mean that the PIT/YAW error signals were less than 20urad in absolute value.

Attachment #1 is a first look at the DTT spectra - I wonder why the BS Oplev signals don't agree with the ITMs at ~1Hz? Perhaps the calibration factor is off? The sensing noise is not really flat above 100Hz - I wonder what all those peaky features are. Recall that the ITM OLs have analog whitening filters before the ADC, but the BS doesn't...

In Attachment #2, I show a comparison of the error signal spectra for ITMY and SRM - they're on the same stack, but the SRM channels don't have analog whitening before the ADC.

For some reason, DTT won't let me save plots with latex in the axes labels...

Attachment 1: VertexOLnoise.pdf
Attachment 2: ITMYvsSRM.pdf
  13436 | Tue Nov 21 11:21:26 2017 | gautam | Update | CDS | RFM network down

I noticed yesterday evening that I wasn't able to engage the single arm locking servos - turned out that they weren't getting triggered, which in turn pointed me to the fact that the arm transmission channels seemed dead. Poking around a little, I found that there was a red light on the CDS overview screen for c1rfm.

  • The error seems to be in the receiving model only, i.e. c1rfm, all the sending models (e.g. c1scx) don't report any errors, at least on the CDS overview screen.
  • Judging by dataviewer trending of the c1rfm status word, seems like this happened on Sunday morning, around 11am.
  • I tried restarting both sender and receiver models, but error persists.
  • I got no useful information from the dmesg logs of either c1sus (which runs c1rfm), or c1iscex (which runs c1scx).
  • There are no physical red lights in the expansion chassis that I could see - in the past, when we have had some timing errors, this would be a signature.

Not sure how to debug further...

* Fix seems to be to restart the sender RFM models (c1scx, c1scy, c1asx, c1asy).
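A minimal sketch of the fix, assuming the usual rtcds workflow on the sending front ends (exact invocation may differ):

    # on c1iscex
    rtcds restart c1scx
    rtcds restart c1asx
    # on c1iscey
    rtcds restart c1scy
    rtcds restart c1asy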

Attachment 1: RFMerrors.png
  13437 | Tue Nov 21 11:37:29 2017 | gautam | Update | Optical Levers | BS OL calibration updated

I calibrated the BS oplev PIT and YAW error signals as follows:

  1. Locked X-arm, ran dither alignment servos to maximize transmission.
  2. Applied an offset to the ASC PIT/YAW filter banks. Set the ramp time to something long, I used 60 seconds.
  3. Monitored the X arm transmission while the offset was being ramped, and also the oplev error signal with its current calibration factor.
  4. Fit the data (oplev error signal vs arm transmission) with a gaussian, and extracted the scaling factor (i.e. the number by which the current Oplev error signals have to be multiplied for the error signal to correspond to urad of angular misalignment, as per the overlap of the beam axis with the cavity axis). A sketch of this fit is given after the numbers below.
  5. Fits are shown in Attachment #1 and #2.
  6. I haven't done any error analysis yet, but the open loop OL spectra for the BS now line up better with the other optics, see Attachment #3 (although their calibration factors may need to be updated as well...). Need to double check against OSEM readout during the sweep.
  7. New numbers have been SDF-ed.

The numbers are:

BS Pitch     15  /  130    (old/new)     urad/counts

BS Yaw       14  /  170    (old/new)     urad/counts
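A minimal sketch of the fitting step (item 4 above), assuming the swept Oplev error signal x (in urad with the old calibration) and the normalized arm transmission T have already been extracted as arrays, and that sigma_true - the angular misalignment in urad at which the transmission drops to exp(-1/2) of its peak, set by the cavity geometry - is supplied separately:

    import numpy as np
    from scipy.optimize import curve_fit

    def gaussian(x, a, x0, sigma, c):
        # arm transmission vs small angular misalignment is approximately Gaussian
        return a * np.exp(-(x - x0)**2 / (2 * sigma**2)) + c

    popt, pcov = curve_fit(gaussian, x, T, p0=[1.0, 0.0, 50.0, 0.0])
    sigma_fit = abs(popt[2])          # fitted width, in "old" urad
    scale = sigma_true / sigma_fit    # multiply the existing calibration factor by this
    # e.g. BS pitch: 15 urad/count (old) x ~8.7 -> 130 urad/count (new)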

Quote:

I bet the calibration is out of date; probably we replaced the OL laser for the BS and didn't fix the cal numbers. You can use the fringe contrast of the simple Michelson to calibrate the OLs for the ITMs and BS.

 

Attachment 1: OL_calib_BS_PERROR.pdf
Attachment 2: OL_calib_BS_YERROR.pdf
Attachment 3: VertexOLnoise_updated.pdf
  13439 | Tue Nov 21 16:28:23 2017 | gautam | Update | Optical Levers | BS OL calibration updated

The numbers I have from the fitting don't agree very well with the OSEM readouts. Attachment #1 shows the Oplev pitch and yaw channels, and also the OSEM ones, while I swept the ASC_PIT offset. The output matrix is the "naive" one of (+1,+1,-1,-1). SUSPIT_IN1 reports ~30urad of motion, while SUSYAW_IN1 reports ~10urad of motion.

From the fits, the BS calibration factors were ~x8 for pitch and x12 for yaw - so according to the Oplev channels, the applied sweep was ~80urad in pitch, and ~7urad in yaw.

Seems like either (i) neither the Oplev channels nor the OSEMs are well diagonalized and that their calibration is off by a factor of ~3 or (ii) there is some significant imbalance in the actuator gains of the BS coils...

Quote:
 

Need to double check against OSEM readout during the sweep.

 

Attachment 1: BS_oplev_sweep.png
  13441 | Tue Nov 21 23:04:12 2017 | gautam | Update | Optical Levers | Oplev "noise budget"

Per our discussions in the meetings over the last week, I've tried to put together a simple Oplev noise budget. The only two terms in this for now are the dark noise and a model for the seismic noise, and are plotted together with the measured open-loop error signal spectra.

  1. Dark noise
    • Beam was taken off the OL QPD
    • A small DC offset was added to all the oplev segment input filters to make the sum ~20-30 cts [call this testSum] (usually it varies from 4000-13000 for the BS/ITMs, call this nominalSum).
    • I downloaded 20mins of dq-ed error signal data, and computed the ASD, dividing by a factor of nominalSum / testSum to account for the usual light intensity on the QPD.
  2. Seismic noise
    • This is a very simplistic 1/f^2 pendulum TF with a pair of Q=2 poles at 1Hz (written out explicitly just after this list).
    • I adjusted the overall gain such that the 1Hz peak roughly lines up in measurement and model.
    • The stack isn't modelled at all.
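Written out, the seismic model as I've described it above is

\[ H(f) = \frac{g\,f_0^2}{f_0^2 - f^2 + i f f_0 / Q}, \qquad f_0 = 1\ \mathrm{Hz},\ Q = 2, \]

with the overall gain g set by hand; this falls off as 1/f^2 well above the resonance.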

Some remarks:

  • The BS oplev doesn't have any whitening electronics, and so has a higher electronics noise floor compared to the ITMs. But it doesn't look like we are limited by this noise floor anywhere...
  • I wonder what all those high frequency features seen in the ITM error signal spectra are - mechanical resonances of steering optics? They are definitely above the dark noise floor, so I am inclined to believe this is real beam motion on the QPD, but surely this can't be test-mass motion? If it were, the measured A2L would be much higher than the level it is currently estimated to be at.
  • The seismic displacement @100Hz per the GWINC model is ~1e-19 m/rtHz. Assuming the model A2L = d_rms * theta(f), where d_rms is the rms offset of the beam spot from the optic center and theta(f) is the angular control signal to the optic, for a 5mm rms offset of the spot from the center, theta(f) must be ~2e-17 rad/rtHz @100Hz. This sets a requirement on the low pass needed - I will look into adding this to the global optimization cost.

 

Attachment 1: vertexOL_noises.pdf
  13442 | Tue Nov 21 23:47:51 2017 | gautam | Configuration | Computers | nodus post OS migration admin

I restored the nodus crontab (copied over from the Nov 17 backup of the same at /opt/rtcds/caltech/c1/scripts/crontab/crontab_nodus.20171117080001). There wasn't a crontab, so I made one using sudo crontab -e.

This crontab is supposed to execute some backup scripts, send pizza emails, check chiara disk usage, and backup the crontab itself.
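The self-backup entry is presumably something like the following (a hypothetical sketch, not the actual crontab; the observed backup filenames suggest a daily 8am job that appends the hostname and a timestamp):

    0 8 * * * /usr/bin/crontab -l > /opt/rtcds/caltech/c1/scripts/crontab/crontab_$(hostname).$(date +\%Y\%m\%d\%H\%M\%S)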

I've commented out the backup of nodus' /etc and /export for now, while we get back to fully operational nodus (though we also have a backup of /cvs/cds/caltech/nodus_backup on the external LaCie drive), they can be re-enabled by un-commenting the appropriate lines in the crontab.

Quote:

The post OS migration admin for nodus (apache, elogd, svn, iptables, etc.) can be found in https://wiki-40m.ligo.caltech.edu/NodusUpgradeNov2017

Update: The svn dump from the old svn was done, and it was imported to the new svn repository structure. Now the svn command line and (simple) web interface is running. "websvn" is not installed.

 

  13445 | Wed Nov 22 11:51:38 2017 | gautam | Configuration | Computers | nodus post OS migration admin

Confirmed that this crontab is running - the daily backup of the crontab seems to have successfully executed, and there is now a file crontab_nodus.ligo.caltech.edu.20171122080001 in the directory quoted below. The $HOSTNAME seems to be "nodus.ligo.caltech.edu" whereas it was just "nodus", so the file names are a bit longer now, but I guess that's fine...

Quote:

I restored the nodus crontab (copied over from the Nov 17 backup of the same at /opt/rtcds/caltech/c1/scripts/crontab/crontab_nodus.20171117080001). There wasn't a crontab, so I made one using sudo crontab -e.

This crontab is supposed to execute some backup scripts, send pizza emails, check chiara disk usage, and backup the crontab itself.

I've commented out the backup of nodus' /etc and /export for now, while we get back to fully operational nodus (though we also have a backup of /cvs/cds/caltech/nodus_backup on the external LaCie drive), they can be re-enabled by un-commenting the appropriate lines in the crontab.

 

 

  13448 | Wed Nov 22 15:29:23 2017 | gautam | Update | Optical Levers | Oplev "noise budget"

[steve, gautam]

What is the best way to set this test up?

I think we need a QPD to monitor the spot rather than a single element PD, to answer this question about the sensor noise. Ideally, we want to shoot the HeNe beam straight at the QPD - but at the very least, we need a lens to size the beam down to the same size as we have for the return beam on the Oplevs. Then there is the power - Steve tells me we should expect ~2mW at the output of these HeNes. Assuming 100kohm transimpedance gain for each quadrant and Si responsivity of 0.4A/W at 632nm, this corresponds to 10V (ADC limit) for 250uW of power - so it would seem that we need to add some attenuating optics in the way.
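A quick check of the numbers above (assumed values as stated: 100 kohm transimpedance per quadrant, 0.4 A/W responsivity, +/-10 V ADC range), to estimate the attenuation needed for a centered ~2 mW beam:

    R_t, resp, V_max = 100e3, 0.4, 10.0            # ohm, A/W, V
    P_sat_quadrant = V_max / (resp * R_t)          # ~250 uW saturates one quadrant
    P_quadrant = 2e-3 / 4.0                        # ~500 uW per quadrant for a centered 2 mW beam
    atten_needed = P_quadrant / P_sat_quadrant     # >= ~2x attenuation, more for headroom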

Also, does anyone know of spare QPDs we can use for this test? We considered temporarily borrowing one of the vertex OL QPDs (mark out its current location on the optics table, and move it over to the SP table), but decided against it as the cabling arrangement would be too complicated. I'd like to use the same DAQ electronics to acquire the data from this test as that would give us the most direct estimate of the sensor noise for supposedly no motion of the spot, although by adding 3 optics between the HeNe and the QPD, we are introducing possible additional jitter couplings...

Quote:

For the OL NB, probably don't have to fudge any seismic noise, since that's a thing we want to suppress. More important is "what the noise would be if the suspended mirrors were not moving w.r.t. inertial space".

For that, we need to look at the data from the OL test setup that Steve is putting on the SP table.

 

Attachment 1: OplevTest.jpg
  13450 | Wed Nov 22 17:52:25 2017 | gautam | Update | Optimal Control | Visualizing cost functions

I've attempted to visualize the various components of the cost function as defined in the current iteration of the Oplev optimal control loop design code. For each term in the cost function, the cost depends on the ratio of the abscissa value to some threshold value (set by hand for now) - if this ratio is >1, the cost is the logarithm of the ratio, whereas if the ratio is <1, the cost is the square of the ratio. Continuity is enforced at the point where this transition happens. I've plotted the cost function for some of the terms entering the code right now - the dashed red lines indicate the approximate value of each of these costs for our current Oplev loop. The weights were chosen so that each of the costs is O(10) for the current controller, the idea being that the optimizer could drive these down to hopefully O(1), but I've not yet gotten that to happen.
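A minimal sketch of one such cost term (my reading of the description above - in particular, I assume continuity is enforced by offsetting the logarithmic branch so the two branches meet at the threshold; the actual MATLAB implementation may differ):

    import numpy as np

    def cost_term(value, threshold):
        # cost as a function of the ratio of the abscissa value to its threshold
        r = abs(value) / threshold
        if r <= 1:
            return r**2             # quadratic branch below threshold
        return 1.0 + np.log(r)      # slower, monotonic growth above threshold

    # cost_term(0.5, 1.0) = 0.25, cost_term(1.0, 1.0) = 1.0, cost_term(np.e, 1.0) = 2.0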

Based on the meeting yesterday, some possible ideas:

  1. For minimizing the control noise injection - we know the transfer function from the Oplev control signal to MICH from measurements, and we also have a model for the seismic noise. So one term could be a weighted integral of (coupling - seismic) - the weight can give more importance to f>30Hz, and even more importance to f>100Hz. Right now, I don't have any such frequency-dependent requirement on the control signal.
  2. Try a simpler problem first - pendulum local damping. The position damping controller for example has fewer roots in the complex plane. Although it too has some B/R notches, which account for 16 complex roots, and hence, 32 parameters, so maybe this isn't really a simpler problem?
  3. How do we pick the number of excess poles compared to zeros in the overall transfer function? The OL loop low-pass filters are elliptic filters, which achieve the fastest transition between the passband and stopband, but for the Oplev loop roll-off, perhaps it's better to just have some poles to roll off the HF response?
Attachment 1: globalCosts.pdf
  13452 | Wed Nov 22 23:56:14 2017 | gautam | Update | Optical Levers | Oplev "noise budget"

Do not turn on the BS/ITMY/SRM/PRM Oplev servos without reading this elog and undoing the changes described below as needed!

I've setup a test setup on the ITMY Oplev table. Details + pics to follow, but for now, be aware that

  1. I've turned off the HeNe that is used for the SRM and ITMY Oplev.
  2. Moved one of the HeNe's Steve setup on the SP table to the ITMY Oplev table.
  3. Output power was 2.5mW, whereas normal power incident on this PD was ~250uW.
  4. So I changed all transimpedance gains on the ITMY Oplev QPD from 100k to 10k thin film - these should be changed back when we want to use this QPD for Oplev purposes again. Note that I did not change the compensation capacitors C3-C6, as with 10k transimpedance, and assuming they are 2.2nF, we get a corner frequency of 6.7kHz. The original schematic recommends 0.1uF. In hindsight, I should have changed these to 22nF to keep roughly the same corner frequency of ~700Hz.
    I've implemented this change as of ~5pm Nov 23 2017 - C3-C6 are now 22nF, so the corner frequency is 676Hz, as opposed to 723Hz before... This should also be undone when we use this QPD as an Oplev QPD again... (see the quick check of these corner frequencies just after this list).
  5. I marked the position of the ITMY Oplev QPD with sharpie and also took pics so it should be easy enough to restore when we are done with this test.
  6. I couldn't get the HeNe to turn on with any of the power supplies I found in the cabinet, so I borrowed the one used to power the BS/PRM. So these oplevs are out of commission until this test is done.
  7. There is a single steering mirror in a Polaris mount which I used to center the spot on the QPD.
  8. The specular reflection (~250uW, i.e. 10% of the power incident on the QPD) is dumped onto a clean razor blade stack. Steve can put in a glass beam dump on Monday.
  9. Just in case someone accidentally turns on some servos - I've disabled the inputs to the BS, PRM and SRM oplevs, and set the limiter on the ITMY servo to 0.
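A quick check of the corner frequencies quoted in item 4, using f_c = 1/(2*pi*R*C) and the ~10.7 kohm actual transimpedance measured in the next elog:

    import numpy as np
    f_c = lambda R, C: 1.0 / (2 * np.pi * R * C)
    f_c(100e3, 2.2e-9)    # ~723 Hz : original 100k + 2.2nF configuration
    f_c(10.7e3, 2.2e-9)   # ~6.8 kHz: after the resistor swap, before the capacitor swap
    f_c(10.7e3, 22e-9)    # ~676 Hz : after swapping C3-C6 to 22nF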

Here are some pics of the setup: https://photos.app.goo.gl/DHMINAV7aVgayYcf1. None of the existing Oplev input/output steering optics were touched. Steve can make modifications as necessary - perhaps we can make similar mods to the SRM Oplev QPD and the BS one to run the HeNe test for a few days...

Quote:

too complex; just shoot straight from the HeNe to the QPD. We lower the gain of the QPD by changing the resistors; there's no sane reason to keep the existing 100k resistors for a 2 mW beam. The specular reflection of the QPD must be dumped on a black glass V dump (not some flimsy anodized aluminum or dirty razor stack)

 

  13453 | Thu Nov 23 18:03:52 2017 | gautam | Update | Optical Levers | Oplev "noise budget"

Here are a couple of preliminary plots of the noise from a 20minute stretch of data - the new curve is the orange one, labelled sensing, which is the spectrum of the PIT/YAW error signal from the HeNe beam single bounce off a single steering mirror onto the QPD, normalized to account for the difference in QPD sum. The peaky features that were absent in the dark noise are present here.

I am a bit confused about the total sum though - there is ~2.5mW of light incident on the PD, and the transimpedance gain is 10.7kohm. So I would expect 2.5e-3 W * 0.4 A/W * 10.7 kV/A ~ 10.7V summed over the 4 quadrants. The ADC is 16 bit and has a range of +/- 10V, so 10.7 V should be ~35,000 cts. But the observed QPD sum is ~14,000 counts. The reflected power was measured to be ~250uW, so ~10% of the total input power. Not sure if this is factored into the photodiode responsivity value of 0.4A/W. I guess there is some fraction of the QPD that doesn't generate any photocurrent (i.e. the grooves defining the quadrants), but is it reasonable that when the Oplev beam is well centered, ~50% of the power is not measured? I couldn't find any sneaky digital gains between the quadrant channels and the sum channel either... But in the Oplev setup, the QPD had ~250uW of power incident on it and was reporting a sum of ~13,000 counts with a transimpedance gain of 100kohm, so at least the scaling seems to hold...
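For reference, the expectation above in numbers (assuming 0.4 A/W responsivity and a 16-bit ADC spanning +/-10 V):

    P, resp, R_t = 2.5e-3, 0.4, 10.7e3     # W, A/W, V/A
    V_sum = P * resp * R_t                 # ~10.7 V summed over the quadrants
    cts = V_sum * 2**16 / 20.0             # ~35,000 ADC counts, vs ~14,000 observed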

I guess we want to monitor this over a few days to see how stationary the noise profile is, etc. I didn't look at the spectrum of the intensity noise during this time.

Quote:

I've setup a test setup on the ITMY Oplev table. Details + pics to follow, but for now, be aware that

Here are some pics of the setup: https://photos.app.goo.gl/DHMINAV7aVgayYcf1. None of the existing Oplev input/output steering optics were touched. Steve can make modifications as necessary - perhaps we can make similar mods to the SRM Oplev QPD and the BS one to run the HeNe test for a few days...

 

 

Attachment 1: ITMY_P_noise.pdf
Attachment 2: ITMY_Y_noise.pdf
  13461 | Sun Dec 3 05:25:59 2017 | gautam | Configuration | Computers | sendmail installed on nodus

Pizza mail didn't go out last weekend - looking at the logfile, it seems like the "sendmail" service was missing. I installed sendmail following the instructions here: https://tecadmin.net/install-sendmail-server-on-centos-rhel-server/

Except that to start the sendmail service, I used systemctl and not init.d. i.e. I ran systemctl start sendmail.service (as root). Test email to myself works. Let's see if it works this weekend. Of course this isn't so critical, more important are the maintenance emails that may need to go out (e.g. disk usage alert on chiara / N2 pressure check, which looks like nodus' responsibilities). 

  13476 | Thu Dec 14 19:33:20 2017 | gautam | Frogs | ASS | c1ass slow channel offloading scripts with small

I don't think this is really a problem - we offload to the fast channels and not to the slow (although we really should offload to the slow channels). I think the best approach is to use the ezcaservo utility to offload the DC part of the ASS control signals to the slow channels, so as to not waste fast channel DAC counts on DC offsets. In principle, this approach should be somewhat immune to the slow channel calibration not being perfect.
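A conceptual sketch of such an offload (in Python with pyepics rather than the ezcaservo command line, whose options I won't reproduce here; the channel names and gain below are hypothetical placeholders, not the real records):

    import time
    import epics   # pyepics

    FAST = 'C1:SUS-ITMY_ASCPIT_OUT16'   # hypothetical: DC part of the fast ASS/ASC output
    SLOW = 'C1:SUS-ITMY_PIT_COMM'       # hypothetical: slow PIT bias slider
    GAIN = -1e-3                        # hypothetical: fast counts -> slider units, sign matters

    # slow integrator: nudge the slow bias until the fast DC output is driven to ~0
    for _ in range(300):
        dc = epics.caget(FAST)
        epics.caput(SLOW, epics.caget(SLOW) + GAIN * dc)
        time.sleep(1.0)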

Quote:

While staring at epics records all day I noticed something about the PIT/YAW offset sliders and ASS offset offloading to slow channels scripts that I'm not sure others are aware off, so I'll briefly discuss it in this post.

The PIT and YAW sliders directly control soft channels that are hosted on the slow machine. Secondary epics records disentangle them for the individual coils:

  • UL = PIT+YAW
  • LL = -PIT+YAW
  • UR = PIT-YAW
  • LR = -PIT-YAW

These channels are the direct input for the physical output channels that generate the control voltage.

The fast channels for PIT and YAW have a numerical correction factor built in that accounts for differences between the OSEMs, but the slow channels don't. This means that the slow PIT/YAW controls are not entirely orthogonal but have crosstalk on the order of 10 percent. This in itself is not that dramatic, however the offload offsets scripts for the dither alignment use the fast PIT/YAW values as inputs, which represent the necessary adjustments to the OSEMs only after the individual correction factors have been applied. The offloading to slow knows nothing of this calibration difference between the OSEMs. The result is that there is a ~10 percent of the offset correction error on the mirror alignment AFTER offloading. This will of course converge after a few iterations, but in any case it is recommendable to run the dither alignment again after offloading and not offload the new offsets to the fast channels.

 

  13477 | Thu Dec 14 19:41:00 2017 | gautam | Update | CDS | CDS recovery, NFS woes

[Koji, Jamie(remote), gautam]

Summary: The CDS system seems to be back up and functioning. But there seems to be some pending problems with the NFS that should be looked into.

We locked the Y-arm, hand aligned the transmission to ~1. Some pending problems with the ASS model (possibly symptomatic of something more general). Didn't touch the Xarm because we don't know what exactly the status of ETMX is.

Problems raised in elogs in the thread of 13474 and also 13436 seem to be solved.


I would make a detailed post with how the problems were fixed, but unfortunately, most of what we did was not scientific/systematic/repeatable. Instead, I note here some general points (Jamie/Koji can add to/correct me):

  1. There is a "known" problem with unloading models on c1lsc. Sometimes, running rtcds stop <model> will kill the c1lsc frontend.
  2. Sometimes, when one machine on the dolphin network goes down, all 3 go down.
  3. The new FB/RCG means that some of the old commands no longer work. Specifically, telnet fb 8087 followed by shutdown (to fix DC errors) no longer works. Instead, ssh into fb1, and run sudo systemctl restart daqd_*.
  4. Timing error on c1sus machine was linked to the mx_stream processes somehow not being automatically started. The "!mxstream restart" button on the CDS overview MEDM screen should run the necessary commands to restart it. However, today, I had to manually run sudo systemctl start mx_stream on c1sus to fix this error. It is a mystery why the automatic startup of this process was disabled in the first place. Jamie has now rectified this problem, so keep an eye out.
  5. c1oaf persistently reported DC errors (0x2bad) that couldn't be addressed by running mxstream restart or restarting the daqd processes on FB1. Restarting the model itself (i.e. rtcds restart c1oaf) fixed this issue (though of course I took the risk of having to go into the lab and hard-reboot 3 machines).
  6. At some point, we thought we had all the CDS lights green - but at that point, the END FEs crashed, necessitating Koji->EX and Gautam->EY hard reboots. This is a new phenomenon. Note that the vertex machines were unaffected.
  7. At some point, all the DC lights on the CDS overview screen went white - at the same time, we couldn't ssh into FB1, although it was responding to ping. After ~2mins, the green lights came back and we were able to connect to FB1. Not sure what to make of this.
  8. While trying to run the dither alignment scripts for the Y-arm, we noticed some strange behaviour:
    • Even when there was no signal (looking at EPICS channels) at the input of the ASS servos, the output was fluctuating wildly by ~20cts-pp.
    • This is not simply an EPICS artefact, as we could see corresponding motion of the suspension on the CCD.
    • A possible clue is that when I run the "Start Dither" script from the MEDM screen, I get a bunch of error messages (see Attachment #2).
    • Similar error messages show up when running the LSC offset script for example. Seems like there are multiple ports open somehow on the same machine?
    • There are no indicator lights on the CDS overview screen suggesting where the problem lies.
    • Will continue investigating tomorrow.

Some other general remarks:

  1. ETMX watchdog remains shutdown.
  2. ITMY and BS oplevs have been hijacked for HeNe RIN / Oplev sensing noise measurement, and so are not enabled.
  3. Y arm trans QPD (Thorlabs) has large 60Hz harmonics. These can be mitigated by turning on a 60Hz comb filter, but we should check if this is some kind of ground loop. The feature is much less evident when looking at the TRANS signal on the QPD.

UPDATE 8:20pm:

Koji suggested trying to simply restart the ASS model to see if that fixes the weird errors shown in Attachment #2. This did the trick. But we are now faced with more confusion - during the restart process, the various indicators on the CDS overview MEDM screen froze up, which is usually symptomatic of the machines being unresponsive and requiring a hard reboot. But we waited for a few minutes, and everything mysteriously came back. Over repeated observations and looking at the dmesg of the frontend, the problem seems to be connected with an unresponsive NFS connection. Jamie had noted some time ago that the NFS seems unusually slow. How can we fix this problem? Is it feasible to have a dedicated machine that is not FB1 do the NFS serving for the FEs?

Attachment 1: CDS_14Dec2017.png
Attachment 2: CDS_errors.png
  13481 | Fri Dec 15 11:19:11 2017 | gautam | Update | CDS | CDS recovery, NFS woes

Looking at the dmesg on c1iscex for example, at least part of the problem seems to be associated with FB1 (192.168.113.201, see Attachment #1). The "server" can be unresponsive for O(100) seconds, which is consistent with the duration for which we see the MEDM status lights go blank, and the EPICS records get frozen. Note that the error timestamped ~4000 was from last night, which means there have been at least 2 more instances of this kind of freeze-up overnight.

I don't know if this is symptomatic of some more widespread problem with the 40m networking infrastructure. In any case, all the CDS overview screen lights were green today morning, and MC autolocker seems to have worked fine overnight.

I have also updated the wiki page with the updated daqd restart commands.

Unrelated to this work - Koji fixed up the MC overview screen such that the MC autolocker button is now visible again. The problem seems to have to do with me migrating some of the c1ioo EPICS channels from the slow machine to the fast system, as a result of which the EPICS variable type changed from "ENUM" to something that was not "ENUM". In any case, the button exists now, and the MC autolocker blinky light is responsive to its state.

Quote:

I don't think the problem is fb1.  The fb1 NFS is mostly only used during front end boot.  It's the rtcds mount that's the one that sees all the action, which is being served from chiara.

 

Attachment 1: NFS.png
Attachment 2: MCautolocker.png
  13482 | Fri Dec 15 17:05:55 2017 | gautam | Update | PEM | Trillium seismometer DC offset

Yesterday, while we were bringing the CDS system back online, we noticed that the control room wall StripTool traces for the seismic BLRMS signals did not come back to the levels we are used to seeing even after restarting the PEM model. There are no red lights on the CDS overview screen indicative of DAQ problems. Trending the DQ-ed seismometer signals (these are the calibrated (?) seismometer signals, not the BLRMS) over the last 30 days, it looks like

  1. On ~1st December, the signals all went to 0 - this is consistent with signals in the other models, I think this is when the DAQ processes crashed.
  2. On ~8 December, all the signals picked up a DC offset of a few 100s (counts? or um/s? this number is after a cts2vel calibration filter). I couldn't find anything in the elog on 8 December that may be related to this problem.

I poked around at the electronics rack (1X5/1X6) which houses the 1U interface box for these signals - on its front panel, there is a switch that has selectable positions "UVW" and "XYZ". It is currently set to the latter. I am assuming the former refers to velocities in the xyz directions, and the latter is displacement in these directions. Is this the nominal state? I didn't spend too much time debugging the signal further for now.

 

Attachment 1: Trillium.png
  13485 | Fri Dec 15 19:09:49 2017 | gautam | Update | IOO | IMC lockloss correlated with PRM alignment?

Motivation:

To test the hypothesis that the IMC lock duty cycle is affected by the PRM alignment. Rana pointed out today that the input faraday has not been tuned to maximize the output->input isolation in a while, so the idea is that perhaps when the PRM is aligned, some of the reflected light comes back towards the PSL through the Faraday and hence, messes with the IMC lock.

A script to test this hypothesis is running over the weekend (in case anyone was thinking of doing anything with the IFO over the weekend).

Methodology:

I've made a simple script - the pseudocode is the following:

  • Align PRM
  • For the next half hour, look for downward transitions greater than 5000 cts in the EPICS record for MC TRANS - this is a proxy for an MC lockloss
  • At the end of 30 minutes, record number of locklosses in the last 30 minutes
  • Misalign PRM, repeat the above 3 bullets

The idea is to keep looping the above over the weekend, so we can expect ~100 datapoints, 50 each for PRM misaligned/aligned. The times at which PRM was aligned/misaligned are also being logged, so we can make some spectrograms of PC drive RMS (for example) with PRM aligned/misaligned. The script lives at /opt/rtcds/caltech/c1/scripts/SUS/FaradayIsolationTest/FaradayIsolCheck.py. The script is being run inside a tmux session on pianosa - hopefully the machine doesn't crash over the weekend and MC1/CDS stays happy.

A more direct measurement of the input Faraday isolation can be made by putting a photodiode in place of the beam dump shown in Attachment #1 (borrowed from this elog). I measured ~100uW of power leaking through this mirror with the PRM misaligned (but IMC locked). I'm not sure what kind of SNR we can expect for a DC measurement, but if we have a chopper handy, we could put a chopper (in the leaked beam just before the PD so as to allow the IMC to be locked) and demodulate at that frequency for a cleaner measurement? This way, we could also measure the contribution from prompt reflections (up to the input side of the Faraday) by simply blocking the beam going into the vacuum. The window itself is wedged so that shouldn't be a big contributor.

Attachment 1: PSL_layout.JPG
  13486 | Mon Dec 18 16:45:44 2017 | gautam | Update | IOO | IMC lockloss correlated with PRM alignment?

I stopped the test earlier today morning around 11:30am. The log file is located at /opt/rtcds/caltech/c1/scripts/SUS/FaradayIsolationTest/PRM_stepping.txt. It contains the times at which the PRM was aligned/misaligned for lookback, and also the number of MC unlocks during every 30 minute period that the PRM alignment was toggled. This was computed by:

  • continuously reading the current value of the EPICS record for MC Trans.
  • comparing its current value to its values 3 seconds ago.
  • If there is a downward step in this comparison greater than 5000 counts, increment a counter variable by 1.
  • Reset counter at the end of 30 minute period.

I think this method is a pretty reliable proxy, because the MC autolocker certainly takes >3 seconds to re-acquire the lock (it has to run mcdown, wait for the next cavity flash, and run mcup in the meantime).
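A minimal sketch of this counting logic (hypothetical channel name, not the actual FaradayIsolCheck.py; I've also added a 30 s hold-off so a single lockloss isn't counted more than once):

    import time
    import epics   # pyepics

    MC_TRANS = 'C1:IOO-MC_TRANS_SUM'    # hypothetical EPICS record for MC transmission
    history, locklosses, last_hit = [], 0, 0.0
    t0 = time.time()

    while time.time() - t0 < 30 * 60:               # one 30-minute segment
        history.append(epics.caget(MC_TRANS))
        if len(history) > 3:
            step_down = history[-4] - history[-1]   # compare to the value ~3 s ago
            if step_down > 5000 and time.time() - last_hit > 30:
                locklosses += 1
                last_hit = time.time()
        time.sleep(1.0)
    print(locklosses)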

Preliminary analysis suggests no obvious correlation between MC lock duty cycle and PRM alignment.

I leave further analysis to those who are well versed in the science/art of PRM/IMC statistical correlations.

  13488 | Mon Dec 18 20:37:18 2017 | gautam | Update | PSL | PMC MEDM cleanup

There are fewer lies on this screen now. For reference, the details of the electronics modifications made are in this elog.

  1. Error and control signals are now in units of nm, the appropriate filter switches have been SDF'ed.
  2. I think it's useful to see the control voltage to the PZT in volts as well, so I've made two readbacks available at the control point, one in V and one in nm.
  3. Indicated that the on-board LO mon readback, which reads "nan", is no longer meaningful, as the mixer is off the demod board.
  4. Indicated that the PMC Trans readback of "0" is because of a dead ADC.
Quote:

I think many of the readbacks on the PMC MEDM screen are now bogus and misleading since the PMC RF upgrade that Gautam did awhile ago. We ought to fix the screen and clearly label which readbacks and actuators are no longer valid.

 

Attachment 1: PMC_revamped.png
  13493 | Thu Dec 28 17:22:02 2017 | gautam | Update | General | power outage - CDS recovery
  1. I had to manually reboot c1lsc, c1sus and c1ioo.
  2. I edited the line in /etc/rt.sh (specifically, on FB /diskless/root.jessie/etc/rt.sh) that lists models running on a given frontend, to exclude c1dnn and c1oaf, as these are the models that have been giving us most trouble on startup. After this, I was able to bring back all models on these three machines using rtcds restart --all. The original line in this file has just been commented out, and can be restored whenever we wish to do so.
  3. mx_stream processes are showing failed status on all the frontends. As a result, the daqd processes are still not working. Usual debugging methods didn't work.
  4. Restored all sus dampings.
  5. Slow computers all seem to be responsive, so no action was required there.
  6. Burtrestored c1psl to solve the "sticky slider" problem, relocked PMC. I didn't do anything further on the PSL table w.r.t. the manual beam block Steve has placed there till the vacuum situation returns to normal.

@Steve: I noticed that we are down to our final bottle of N2, not sure if it will last till 2 Jan which is presumably when the next delivery will come in. Since V1 is closed and the PSL beam is blocked, perhaps this doesn't matter.

from Steve: there are spare full N2 bottles at the south end outside and inside. I replaced the N2 on Sunday night. So the system should be Ok as is.

I also hard-rebooted megatron and optimus as these were unresponsive to ping.

*Seems like the mx_stream errors were due to the mx process not being started on FB. I could fix this by running sudo systemctl start mx on FB, after which I ran sudo systemctl restart daqd_*. But the DC errors persist - not sure how to fix this. Past elogs suggest that "0x4000" errors are connected to timing problems on FB, but restarting the ntp service on FB (which is the suggested fix in said elogs) didn't fix it. Also unsure if the mx process is supposed to automatically start on FB at startup.

Attachment 1: 28.png
  13496 | Tue Jan 2 16:24:29 2018 | gautam | Update | safety | Projector periodically shuts itself off

I have noticed this behaviour since ~Dec 20th, before the power failure. The bulb itself seems to work fine, but the projector turns itself off within <1 minute of being manually turned on with the power button. AFAIK, no changes were made to the projector/Zita. Perhaps this is some kind of built-in mechanism signalling that the bulb is at the end of its lifetime? It has been ~4.5 months (3240 hours) since the last bulb replacement (according to the little sticker on the back, which says the last bulb replacement was on 15 Aug 2017).

  13497 | Tue Jan 2 16:37:26 2018 | gautam | Update | Optimal Control | Oplev loop tuning

I've made various changes to the optimal loop design approach, but am still not having much success. A summary of changes made:

  1. Parametrization of filter - enforcing uniqueness
    • Previously, the input to the particle swarm was a vector of root frequencies and associated Q-factors.
    • This parametrization is not unique - permuting the order of the roots yields the same filter, but particles traversing the high (65) dimensional parameter space may have to go over very expensive regions in order to converge with the global minimum / best performing particle.
    • One way around this is to parametrize the filter by the highest pole/zero frequency, and then specify the remaining roots by their cumulative separation from this highest root. This guarantees that a unique vector input to the particle swarm function specifies a unique filter (see the sketch just after this list).
    • To avoid negative frequencies, I manually set a particular element of the vector to 0 if the cumulative sum yields a negative frequency. I believe this is how MATLAB's particle swarm implements the "constraints" in the constrained optimization routines.
  2. Cost function - I've reformulated this into something that makes more sense to me, but probably can be improved further.
    • Term #1 - integral of the area (evaluated with MATLAB's trapz utility) between the in-loop (i.e. suppressed) error signal and the sensing noise spectrum (for the latter, I use the orange curve from this plot). This is a signed number, so that suppression below the sensing noise is penalized. Target value is 1 urad rtHz. One problem I see with this approach is that if we believe the sensing noise measurement, then even at 10mHz, it looks like sensing noise is below the out-of-loop error signal level. So the optimizer doesn't seem to want to make the loop AC coupled.
    • Term #2 - stability margin. I'm using the distance of closest approach to the point -1 in the Nyquist plot, instead of gain and phase margins, as this yields a more conservative robustness measure. Target value is 0.65.
    • Term #3 - A2L contribution of in-loop control signal. This contribution is calculated using measurements of A2L coupling for the DRMI. The actual term that goes into the cost function is the ratio of the area under the in-loop control signal to that under the seismic noise curve above 35Hz. Further, f>100Hz is given 10x the weight of 35Hz<f<100Hz (I've not really played around with this weighting function). The goal is to be as close to the seismic curve as possible, at which point this term becomes 1.
    • Terms #4 and #5 - the maximum open loop gain evaluated in a 1Hz wide bin centered around the bounce and roll resonances. The aim is to not exceed -40dB in these bins. Perhaps this needs to be reformulated, as the optimizer seems to be giving this term too much importance - the optimized loops have extremely deep bandstops around the BR resonances.
    • To normalize each term, I divide by the "target" value mentioned above, so as to make the various terms comparable.
    • Each term in the cost function has two regimes - one where it is rapidly varying close to the desired operating point, and one far away where the cost still increases monotonically, but slower (see Attachment #2).
    • A scalar cost function is evaluated by taking a weighted sum of the above terms. The weights are chosen so as to make each term ~10 for the controller currently implemented.
    • All of the above are only applicable if the resulting loop is stable - else, a large cost is assigned (exponential of sum of real parts of poles of OLTF).
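A minimal sketch of the unique parametrization described in item 1 above (illustrative Python - the actual optimizer is in MATLAB, and the variable names are mine):

    import numpy as np

    def vector_to_root_freqs(x):
        """x = [f_highest, df_1, df_2, ...]; returns root frequencies in descending order."""
        f = x[0] - np.cumsum(np.concatenate(([0.0], x[1:])))
        return np.clip(f, 0.0, None)    # negative frequencies are clamped to 0, as described

    # e.g. vector_to_root_freqs([100., 30., 30., 50.]) -> [100., 70., 40., 0.]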

Attachment #1 shows the outcome of a typical optimization run - so while I am having some more success with this than before, where the PSO algorithm was stalling and terminating before any actual optimization was done, it seems like I need to re-think the cost function yet again...

Attachment #2 shows the current terms entering the cost function, and their "desired" values.

The current version of the code I am using is here: although I may not have included some of the data files required to run it - to be fixed...

Attachment 1: loopOpt_180102_1706.pdf
Attachment 2: globalCosts.pdf
  13501 | Wed Jan 3 18:00:46 2018 | gautam | Update | PonderSqueeze | plan of action

Notes of stuff we discussed @ today's meeting, and afterwards, towards measuring ponderomotive squeezing at the 40m.

  1. Displacement noise requirements
    • Kevin is going to see if we can measure any kind of squeezing on a short timescale by tuning various parameters.
    • Specifically, without requiring crazy ultra low current noise level for the coil driver noise.
  2. Investigate how much actuation range we need for lock acquisition and maintaining lock.
    • Specifically, for DARM.
    • We will measure this by having the arms controlled with ALS in the CARM/DARM basis.
    • Build up a noise budget for this, see how significant the laser noise contribution is.
  3. RC folding mirrors
    • In the present configuration, these are introducing ~2.5% RT loss in the RCs.
    • This affects PRG, and on the output side, measurable squeezing.
    • We want to see if we can relax the requirements on the RC folding mirrors such that we don't have to spend > 20 k$.
    • Specifically, consider spec'ing the folding mirror coatings to only have HR @1064 nm, and take what we get at 532 nm.
    • But still demand tolerances on RoC driven by mode-matching between the RCs and the arm cavities.
  4. ALS with Beat Mouth
    • Use the fiber coupled light from the ends to make the ALS signals.
    • Gautam will update diagram to show the signal chain from end-to-end (i.e. starting at AUX laser, ending at ADC input).
    • Make a noise budget for the same - preliminary analysis suggests a sensing noise floor of ~10 mHz/rtHz.

RXA:

  • For the ALS-DARM budget the idea is that we can do lock acquisition better, so we don't need to care about the acquisition reqs. i.e. we just need to set the ETM coil driver current range based on the DARM in-lock values.
    • To get the coil driver noise to be low enough to detect squeezing we need to use a ~10-15 kOhm series resistor.
    • We assume that all DAC and coil driver input noises can be sufficiently filtered.
    • We are assuming that we don't change the magnet sizes or the number of coil windings in the OSEMs.
    • The noise in the ITMs doesn't matter because we don't use them for any locking activity, so we can easily set the coil driver series resistors to 15 kOhm.
    • We will do the bias for the ETMs and ITMs using some HV circuit (not the existing ones on the coil driver boards), doing the summation after the main coil driver series resistor. This HV bias module needs to handle the ~(2 V / 400 Ohm) = 5 mA which is now used. This would require (5 mA) x (15 kOhm) = 75 V, i.e. 60+ V drivers.
  • IF we can get away with doing the ALS beat note with just red (still using GREEN light from the end laser to lock to the arms from the ends), we will not have any requirements for the 532 nm transmission of any optics in the DRMI area.
    • Get some quotes for the new PR/SR mirrors having tight RoC tolerance, high R for 1064, and no spec for 532.
    • Check that the 1-way fiber noise for 1064 nm is < 100 mHz/rtHz in the 50-1000 Hz band. If it's more, explore putting better acoustic foam around the fiber run.
    • Improve the mode-matching of the IR beam into the fibers at the ends. We want >80% to reduce the noise do to scattering; we don't really care about the amount of light available in the PSL - this is just to reduce the IR-ALS noise.
  13502 | Thu Jan 4 12:46:27 2018 | gautam | Update | ALS | Fiber ALS assay

Attachment #1 is the updated diagram of the Fiber ALS setup. I've indicated part numbers, power levels (optical and electrical). For the light power levels, numbers in green are for the AUX lasers, numbers in red are for the PSL.

I confirmed that the output of the power splitter is going to the "RF input" and the output of the delay line is going to the "LO input" of the demodulator box. Shouldn't this be the other way around? Unless the labels are misleading and the actual signal routing inside the 1U chassis is correctly done :/

  • Mode-matching into the fibers is rather abysmal everywhere.
  • In this diagram, only the power levels measured at the lasers and inputs of the fiber couplers are from today's measurements. I just reproduced numbers for inside the beat mouth from elog13254.
  • Inside the beat mouth, the PD output actually goes through a 20dB coupler, which is omitted from this diagram for brevity. Both the direct and coupled outputs are available at the front panel of the beat mouth. The latter is meant for diagnostic purposes. The number of -8dBm of beat @30MHz is quoted using the direct output, and not the coupled output.

Still facing some CDS troubles, will start ALS recovery once I address them.

Attachment #2 is the svg file of Attachment #1, which we can update as we improve things. I'll put it on the DCC 40m tree eventually.

Attachment 1: FiberALS.pdf
Attachment 2: FiberALS.svg.zip
  13503 | Thu Jan 4 14:39:50 2018 | gautam | Update | General | power outage - timing error

As mentioned in my previous elog, the CDS overview screen "DC" indicators are all RED (everything else is green). Opening up the displays for individual CPUs, the error message shown is "0x4000", which is indicative of some sort of timing error. Indeed, it seems to me that on the FB machine, the gpstime command shows a gps time that is ~1 second ahead of the times on other FE machines.

Running gpstime on other FE machines throws up an error, saying that it cannot connect to the network to update leap second data. Not sure what this is about...

I double checked the GPS timing module, we had some issues with this in the recent past. But judging by its front panel display, everything seems to be in order...

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/bin/gpstime", line 9, in <module>
    load_entry_point('gpstime==0.2', 'console_scripts', 'gpstime')()
  File "/usr/lib/python3/dist-packages/pkg_resources.py", line 356, in load_entry_point
    return get_distribution(dist).load_entry_point(group, name)
  File "/usr/lib/python3/dist-packages/pkg_resources.py", line 2476, in load_entry_point
    return ep.load()
  File "/usr/lib/python3/dist-packages/pkg_resources.py", line 2190, in load
    ['__name__'])
  File "/usr/lib/python3/dist-packages/gpstime/__init__.py", line 41, in <module>
    LEAPDATA = ietf_leap_seconds.load_leapdata(notify=True)
  File "/usr/lib/python3/dist-packages/ietf_leap_seconds.py", line 158, in load_leapdata
    fetch_leapfile(leapfile)
  File "/usr/lib/python3/dist-packages/ietf_leap_seconds.py", line 115, in fetch_leapfile
    r = requests.get(LEAPFILE_IETF)
  File "/usr/lib/python3/dist-packages/requests/api.py", line 60, in get
    return request('get', url, **kwargs)
  File "/usr/lib/python3/dist-packages/requests/api.py", line 49, in request
    return session.request(method=method, url=url, **kwargs)
  File "/usr/lib/python3/dist-packages/requests/sessions.py", line 457, in request
    resp = self.send(prep, **send_kwargs)
  File "/usr/lib/python3/dist-packages/requests/sessions.py", line 569, in send
    r = adapter.send(request, **kwargs)
  File "/usr/lib/python3/dist-packages/requests/adapters.py", line 407, in send
    raise ConnectionError(err, request=request)
requests.exceptions.ConnectionError: ('Connection aborted.', OSError(101, 'Network is unreachable'))

 

 

  13507 | Fri Jan 5 22:19:53 2018 | gautam | Update | General | power outage - timing error

Just putting the relevant line from email from Rolf which at least identifies the problem here:

Looks like FB time is actually off by 1 year, as your timing system does not get year info.

There still seems to be something funky with the X arm transmission PDs - I can't seem to get the triggering to switch between the QPD and the Thorlabs PD, and the QPD signal seems to be wildly fluctuating by several orders of magnitude from 0.01-100. The c1iscex FE was pulled out, and it seemed to me like someone was doing some cable re-arrangement at the X end.

I will look into this tomorrow. 

Quote:

Rolf came here in the morning, but not sure what he did or if Jamie remotely did something. But the screen is green.

 

  13510 | Sat Jan 6 18:27:37 2018 | gautam | Update | General | power outage - IFO recovery

Mostly back to nominal operating conditions now.

  1. EX TransMon QPD is not giving any sensible output. Seems like only one quadrant is problematic, see Attachment #1. I blame team EX_Acromag for bumping some cabling somewhere. In any case, I've disabled output of the QPD, and forced the LSC servo to always use the Thorlabs "High Gain" PD for now. Dither alignment servo for X arm does not work so well with this configuration - to be investigated.
  2. BS Seismometer (Trillium) is still not giving any sensible output.
    • I looked under the can, the little spirit level on the seismometer is well centered.
    • I jiggled all the cabling to rule out any obvious loose connections - found none at the seismometer, or at the interface unit (labelled D1002694 on the front panel) in 1X5/1X6.
    • All 3 axes are giving outputs with DC values of a few hundred - I guess there could've been some big earthquake in early December which screwed up the internal alignment of the sensing mass in the seismometer. I don't know how to fix this.
    • Attachment #2 = spectra for the 3 channels. Can't say they look very seismic. I've assumed the units are in um/sec.
    • This is mainly bothering me in the short term because I can't use the angular feedforward on PRC alignment, which is usually quite helpful in DRMI locking.
    • But I think the PRM Oplev loop is actually poorly tuned, in which case perhaps the feedforward won't really be necessary once I touch that up.

What I did today (may have missed some minor stuff but I think this is all of it):

  1. At EX:
    • Toggled power to Thorlabs trans monitoring PD, checked that it was actually powered, squished some cables in the e- rack.
    • Removed PDA55 in the green path (put there for EX laser AM/PM measurement). So green beam can now enter the X arm cavity.
    • Re-connected ALS cabling.
    • Turned on HV supply for EX Green PZT steering mirrors (this has to be done every time there is a power failure).
  2. At ITMY table:
    • Removed temporary HeNe RIN/ Oplev sensing noise measurement setup. HeNe + 1" vis-coated steering mirror moved to SP table.
    • Turned on ITMY/SRM Oplev HeNe.
    • Undid changes on ITMY Oplev QPD and returned it to its original position.
    • Centered ITMY reflected beam on this QPD.
  3. At vertex area
    • Looked under Trillium seismometer can - I've left the clamps undone for now while we debug this problem.
    • Power-cycled Trillium interface box.
    • Touched up PMC alignment.
  4. Control room
    • Recover IFO alignment using combination of IR and Green beams.
    • Single arm locking recovered, dither alignment servos run to maximize arm transmission. Single arm locks holding for hours, that's good.
    • The X arm dither alignment isn't working so well, the transmission never quite hits 1 and it undergoes some low frequency (T~30secs) oscillations once the transmission reaches its peak value.
    • Had to do the usual ipcrm thing to get dataviewer to run on pianosa.

Next order of business:

  1. Recover ALS:
    • aim is to replace the vertex area ALS signals derived from 532nm with their 1064nm counterparts.
    • Need to touch up end PDH servos, alignment/MM into arms, and into Fibers at ends etc.
    • Control the arms (with RMs misaligned) in the CARM/DARM basis using the revised ALS setup.
    • Make a noise budget - specifically, we are interested in how much actuation range is required to maintain DARM control in this config.
  2. Recover DRMI locking
    • Continue NBing.
    • Do a statistical study of actuation range required for acquiring and maintaining DRMI locking.
Attachment 1: EX_QPD_Quad1_Faulty.pdf
Attachment 2: Trillium_faulty.pdf
  13514 | Sun Jan 7 17:27:13 2018 | gautam | Update | PonderSqueeze | Displacement requirements for short-term squeezing

Maybe you've accounted for this already in the Optickle simulations - but in Finesse (software), the "tuning" corresponds to the microscopic (i.e. at the nm level) position of the optics, whereas the macroscopic lengths, which determine which fields are resonant inside the various cavities, are set separately. So it is possible to change the microscopic tuning of the SRC, which need not necessarily mean that the correct resonance conditions are satisfied. If you are using the Finesse model of the 40m I gave you as a basis for your Optickle model, then the macroscopic length of the SRC in that was ~5.38m. In this configuration, the f2 (i.e. 55MHz sideband) field is resonant inside the SRC while the f1 and carrier fields are not.

If we decide to change the macroscopic length of the SRC, there may also be a small change to the requirements on the RoCs of the RC folding mirrors. Actually, come to think of it, the difference in macroscopic cavity lengths explains the slight differences in mode-matching efficiencies between the arms and RCs that I was seeing before.

Quote:

Yes, this SRC detuning is very close to extreme signal recycling (0° in this convention), and the homodyne angle is close to the amplitude quadrature (90° in this convention).

For T(SRM) = 5% at the optimal angles (SRC detuning of -0.01° and homodyne angle of 89°), we can see 0.7 dBvac at 210 Hz.

 

  13518 | Tue Jan 9 11:52:29 2018 | gautam | Update | CDS | slow machine bootfest

Eurocrate key-turning reboots this morning for c1susaux, c1auxey and c1iscaux. These were responding to ping but not telnet-able. The usual precautions were taken to minimize the risk of ITMX getting stuck.

 

  13519 | Tue Jan 9 21:38:00 2018 | gautam | Update | ALS | ALS recovery
  • Aligned IFO to IR.
    • Ran dither alignment to maximize arm transmission.
    • Centered Oplev reflections onto their respective QPDs for ITMs, ETMs and BS, as DC alignment reference. Also updated all the DC alignment save/restore files with current alignment. 
  • Undid the first 5 bullets of elog13325. The AUX laser power monitor PD remains to be re-installed and re-integrated with the DAQ.
    • I stupidly did not refer to my previous elog of the changes made to the X end table, and so spent ages trying to convince Johannes that the X end green alignment had shifted - it turned out that the green locking wasn't working because of the 50ohm terminator added to the X end NPRO PZT input. I am sorry for the hours wasted.
    • GTRY and GTRX at levels I am used to seeing (i.e. ~0.25 and ~0.5) now. I tweaked input pointing of green and also movable MM lenses at both ends to try and maximize this. 
    • Input green power into X arm after re-adjusting previously rotated HWP to ~100 degrees on the dial is ~2.2mW. Seems consistent with what I reported here.
    • Adjusted both GTR cameras on the PSL table to have the spots roughly centered on the monitors.
    • Will update shortly with measured OLTFs for both end PDH loops.
    • X end PDH seems to have UGF ~9kHz, Y end has ~4.5kHz. Phase margin ~60 degrees in both cases. Data + plotting code attached. During the measurement, GTRY ~0.22, GTRX~0.45.
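
For reference, one way to pull the UGF and phase margin out of swept-sine OLTF data like that in Attachment 3 is sketched below - the filename and column ordering are placeholders (the attached plotting code is the authoritative version), and it assumes |G| is falling through unity at the crossing.

import numpy as np

# Sketch only: freq [Hz], |G| (linear), phase [deg] - the real file format may differ
freq, mag, phase = np.loadtxt('ALS_OLTF_data.txt', unpack=True)

order = np.argsort(freq)
freq, mag, phase = freq[order], mag[order], phase[order]
logmag = 20 * np.log10(mag)

# UGF: 0 dB crossing (assumes the magnitude is decreasing through unity)
ugf = np.interp(0.0, logmag[::-1], freq[::-1])
phase_at_ugf = np.interp(ugf, freq, phase)     # deg, expected in (-180, 0)
phase_margin = 180.0 + phase_at_ugf

print("UGF ~ %.1f kHz, phase margin ~ %.0f deg" % (ugf / 1e3, phase_margin))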

Next, I will work on commissioning the BEAT MOUTH for ALS beat generation. 

Note: In the ~40mins that I've been typing out these elogs, the IR lock has been stable for both the X and Y arms. But the X green has dropped lock twice, and the Y green has been fluctuating rather more, but has managed to stay locked. I think the low frequency Y-arm GTRY fluctuations are correlated with the arm cavity alignment drifting around. But the frequent X arm green lock dropouts - not sure what's up with that. Need to look at IR arm control signals and ALS signals at lock drop times to see if there is some info there.

Attachment 1: GreenLockStability.png
GreenLockStability.png
Attachment 2: ALS_OLTFs_20180109.pdf
ALS_OLTFs_20180109.pdf
Attachment 3: ALS_OLTF_data_20180109.tar.bz2
  13520   Tue Jan 9 21:57:29 2018 gautamUpdateOptimal ControlOplev loop tuning

After some more tweaking, I feel like I may be getting closer to a cost-function definition that works.

  • The main change I made was to effectively separate the BR-bandstop filter poles/zeros and the rest of the poles and zeros.
  • So now the input vector is still a list of highest pole frequency followed by frequency separations, but I can specify much tighter frequency bounds for the roots of the part of the transfer function corresponding to the Bounce/Roll bandstops.
  • This in turn considerably reduces the swarming area - at the moment, half of the roots are for the notches, and in the (f0,Q) basis, I see no reason for the bounds on f0 to be wider than [10,30]Hz.
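
Roughly, the bandstop part of the parameterization looks like the sketch below. This is illustrative only - it is not the actual cost function code, and the notch frequencies, Qs and bounds are placeholders. Keeping the notch roots in the (f0,Q) basis is what lets the swarm bounds stay physical (frequencies in [10,30]Hz, Qs in a sane range) instead of roaming over the whole band.

import numpy as np
from scipy import signal

def bandstop_zpk(params):
    # params = [f0_1, Q_1, f0_2, Q_2, ...]; each (f0, Q) pair gives a complex
    # zero pair on the jw axis at f0 (deep notch) and a complex pole pair at
    # the same frequency with the specified Q (sets the notch width).
    zeros, poles = [], []
    for f0, Q in zip(params[0::2], params[1::2]):
        w0 = 2 * np.pi * f0
        zeros += [1j * w0, -1j * w0]
        re = -w0 / (2 * Q)
        im = w0 * np.sqrt(max(1 - 1 / (4 * Q**2), 0.0))
        poles += [re + 1j * im, re - 1j * im]
    return np.array(zeros), np.array(poles)

# Bounds that would be handed to the swarm: notch f0s confined to [10, 30] Hz,
# Qs to some sensible range.
bounds = [(10.0, 30.0), (1.0, 50.0)] * 2          # two notches (bounce and roll)

z, p = bandstop_zpk([16.5, 10.0, 23.5, 10.0])     # hypothetical bounce/roll frequencies
w, h = signal.freqs_zpk(z, p, k=1.0, worN=2 * np.pi * np.logspace(0, 2, 500))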

Some things to figure out:

  1. How to "force" the loop to be AC coupled without explicitly requiring it to be so? What should the AC coupling frequency be? From the (admittedly cursory) sensing noise measurement, it would seem that the Oplev error signal is above sensing noise even at frequencies as low as 10mHz.
  2. In general, the loops seem to do well in reducing sensing noise injection - but they seem to do this at the expense of the loop gain at ~1Hz, which is not what we want.
    • I am going to try and run the optimizer with an excess of poles relative to zeros
    • Currently, n(Poles) = n(Zeros), and this is the condition required for elliptic low pass filters, which achieve fast transition between the passband and stopband - but we could just as well use a less rapid, but more monotonic roll-off. So the gain at 50Hz might be higher, but at 200Hz, we could perhaps do better with this approach (see the sketch after this list).
  3. The loop shape between 10 and 30Hz that the optimizer outputs seems a bit weird to me - it doesn't really converge to a bandstop. Need to figure that out.
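
Regarding point 2: a quick way to see the tradeoff between an elliptic low-pass (n poles = n zeros, flat stopband) and a monotonic all-pole roll-off. The corner frequency and filter orders below are purely illustrative - they are not the actual Oplev loop values.

import numpy as np
from scipy import signal

f_eval = np.array([50.0, 200.0])        # Hz, the two frequencies discussed above
w_eval = 2 * np.pi * f_eval
w_c    = 2 * np.pi * 30.0               # illustrative corner frequency

# Elliptic: sharp transition, but the stopband attenuation flattens out at ~rs
b_ell, a_ell = signal.ellip(4, 1, 40, w_c, btype='low', analog=True)
# All-pole (Butterworth): slower transition, but keeps rolling off at -n*20 dB/decade
b_but, a_but = signal.butter(4, w_c, btype='low', analog=True)

for name, (b, a) in [('elliptic', (b_ell, a_ell)), ('butterworth', (b_but, a_but))]:
    _, h = signal.freqs(b, a, worN=w_eval)
    print(name, ['%.1f dB' % g for g in 20 * np.log10(np.abs(h))])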
Attachment 1: loopOpt_180108_2232.pdf
loopOpt_180108_2232.pdf
  13522   Wed Jan 10 12:24:52 2018 gautamUpdateCDSslow machine bootfest

MC autolocker got stuck (judging by wall StripTool traces, it has been this way for ~7 hours) because c1psl was unresponsive so I power cycled it. Now MC is locked.

  13523   Wed Jan 10 12:42:27 2018 gautamUpdateSUSETMX DC alignment

I've been observing this for a few days: ETMX's DC alignment seems to drift by so much that the previously well aligned X arm cavity is now totally misaligned.

The wall StripTool trace shows that both the X and Y arms were locked with arm transmissions around 1 till c1psl conked out - so in the attached plot, around 1400 UTC, the arm cavity was well aligned. So the sudden jump in the OSEM sensor signals is the time at which LSC control to the ETM was triggered OFF. But as seen in the attached plot, after the lockloss, the Oplev signals seem to show that the mirror alignment drifted by >50urad. This level of drift isn't consistent with the OSEM sensor signals - of course, the Oplev calibration could be off, but the tension in values is almost an order of magnitude. The misalignment seems real - the other Oplev spots have stuck around near the (0,0) points where I recentered them last night, only ETMX seems to have undergone misalignment.

Need to think about what's happening here. Note that this kind of "drift" behaviour seems to be distinct from the infamous ETMX "glitching" problem that was supposed to have been fixed in the 2016 vent.

 

Attachment 1: ETMXdrift.png
ETMXdrift.png
  13527   Wed Jan 10 18:53:31 2018 gautamUpdateSUSETMX DC alignment

I should've put in the SUSPIT and SUSYAW channels in the previous screenshot. I re-aligned ETMX till I could see IR flashes in the arm, and also was able to lock the green beam on a TEM00 mode with reasonable transmission. As I suspected, this brought the Oplev spot back near the center of its QPD. But the answer to the question "How much did I move the ETM by?" still varies by ~1 order of magnitude, depending on whether you believe the OSEM SUSPIT and SUSYAW signals or the Oplev error signals - I don't know which, if any, of these is calibrated.
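
For the record, a minimal sketch of the kind of comparison I have in mind, pulling both signals around the lockloss with the nds2 python client. The server/port, GPS window and channel names below are placeholders from memory, not verified.

import nds2

conn = nds2.connection('fb', 8088)       # 40m framebuilder (hostname/port assumed)
chans = ['C1:SUS-ETMX_SUSPIT_IN1_DQ',    # OSEM-derived pitch (channel name assumed)
         'C1:SUS-ETMX_OPLEV_PERROR']     # Oplev pitch error  (channel name assumed)
start, stop = 1199600000, 1199601800     # hypothetical GPS window spanning the lockloss

bufs = conn.fetch(start, stop, chans)
for name, buf in zip(chans, bufs):
    d = buf.data
    n = max(len(d) // 10, 1)
    print('%s: apparent DC shift = %.3g (last 10%% mean minus first 10%% mean)'
          % (name, d[-n:].mean() - d[:n].mean()))

With the respective urad/ct calibrations applied to the two shifts, the order-of-magnitude tension would be directly visible.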

Attachment 1: ETMXdrift.png
ETMXdrift.png
  13531   Thu Jan 11 14:22:40 2018 gautamUpdateALSFiber ALS assay

I did a cursory check of the ALS signal chain in preparation for commissioning the IR ALS system. The main elements of this system are shown in my diagram in the previous elog in this thread.

Questions I have:

  1. Does anyone know what exactly is inside the "Delay Line" box? I can't find a diagram anywhere.
    • Jessica's SURF report would suggest that there are just 2 50m cables in there.
    • There are two power splitters taped to the top of this box.
    • It is unclear to me if there are any active components in the box.
    • It is unclear to me if there is any thermal/acoustic insulation in there.
    • For completeness, I'd like to temporarily pull the box out of the LSC rack, open it up, take photos, and make a diagram unless there are any objections.
  2. If you believe the front panel labeling, then currently, the "LO" input of the mixer is being driven by the part of the ALS beat signal that goes through the delay line. The direct (i.e. non delayed) output of the power splitter goes to the "RF" input of the mixer. The mixer used, according to the DCC diagram, is a PE4140. Datasheet suggests the LO power can range from -7dBm to +20dBm. For a -8dBm beat from the IR beat PDs, with +24dB gain from the ZHL3A but -3dB from the power splitter, and assuming 9dB loss in the cable (I don't know what the actual loss is, but according to a Frank Seifert elog, the optimal loss is 8.7dB and I assume our delay line is close to optimal), this means that we have ~4dBm at the "LO" input of the demod board. The schematic says the nominal level the circuit expects is 10dBm. If we use the non-delayed output of the power splitter, we would have, for a -8dBm beat, (-8+24-3)dBm ~13dBm, plus probably some cabling loss along the way which would be closer to 10dBm. So should we use the non-delayed version for the LO signal? Is there any reason why the current wiring is done in this way?
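
The bookkeeping in point 2, spelled out - the ~9dB delay line loss is assumed, everything else is from the text / datasheets:

beat      = -8.0     # dBm, IR beat note from the BeatMouth PD
amp_gain  = +24.0    # dB, ZHL-3A
splitter  = -3.0     # dB, power splitter
delay     = -9.0     # dB, assumed loss of the delay line cable

lo_delayed     = beat + amp_gain + splitter + delay   # current wiring (LO via delay line)
lo_not_delayed = beat + amp_gain + splitter           # if we swap to the direct output

print('LO via delay line: %+.0f dBm' % lo_delayed)        # ~ +4 dBm
print('LO direct        : %+.0f dBm' % lo_not_delayed)    # ~ +13 dBm, minus some cabling loss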

 

  13533   Thu Jan 11 18:50:31 2018 gautamUpdateIOOMCautolocker getting stuck

I've noticed this a couple of times today - when the autolocker runs the mcdown script, sometimes it doesn't seem to actually change the various gain sliders on the PSL FSS. There is no handshaking built in to the autolocker at the moment. So the autolocker thinks that the settings are correct for lock re-acquisition, but they are not. The PCdrive signal is often railing, as is the PZT signal. The autolocker just gets stuck waiting to re-acquire lock. This has happened today ~3 times, and each time, the Autolocker has tried to re-acquire lock unsuccessfully for ~1hour.

Perhaps I'll add a line or two to check that the signal levels are indicative of mcdown being successfully executed.
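
Something along these lines - sketch only: the channel list and target values below are illustrative placeholders, and it assumes pyepics is available wherever the autolocker runs.

from epics import caget

MCDOWN_SETTINGS = {
    'C1:IOO-MC_VCO_GAIN': -15,      # lock-acquisition value (illustrative entry)
    # ... other FSS / MC servo board gains that mcdown is supposed to set ...
}

def mcdown_took_effect(tol=0.1):
    # Re-read the sliders after mcdown runs and complain if any did not take
    # the value we asked for - crude handshaking for the autolocker.
    ok = True
    for chan, target in MCDOWN_SETTINGS.items():
        actual = caget(chan)
        if actual is None or abs(actual - target) > tol:
            print('WARNING: %s is %s, expected %s' % (chan, actual, target))
            ok = False
    return ok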

  13534   Thu Jan 11 20:51:20 2018 gautamUpdateALSFiber ALS assay

After labeling cables I would disconnect, I pulled the box out of the LSC rack. Attachment #1 is a picture of the insides of the box - looks like it is indeed just two lengths of cabling. There was also some foam haphazardly stuck around inside - presumably an attempt at insulation/isolation.

Since I have the box out, I plan to measure the delay in each path, and also the signal attenuation. I'll also try and neaten the foam padding arrangement - Steve was showing me some foam we have, I'll use that. If anyone has comments on other changes that should be made / additional tests that should be done, please let me know.

20180111_2200: I'm running some TF measurements on the delay line box with the Agilent in the control room area (script running in tmux sesh on pianosa). Results will be uploaded later.
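
For reference, the delay and attenuation can be backed out from the swept-sine TF roughly as sketched below. The filename/column layout is assumed, and the ~2/3 c propagation speed used to convert delay to cable length is a typical coax value, not a measurement.

import numpy as np

freq, mag_db, phase_deg = np.loadtxt('delayline_tf.txt', unpack=True)   # Hz, dB, deg (assumed)

# Group delay from the slope of the unwrapped phase: tau = -(1/2pi) * dphi/df
phase_rad = np.unwrap(np.deg2rad(phase_deg))
slope, _  = np.polyfit(freq, phase_rad, 1)
delay_ns  = -slope / (2 * np.pi) * 1e9

# Cable attenuation, assuming it is roughly flat over the measured band
atten_db = np.mean(mag_db)

print('delay ~ %.1f ns (~%.1f m of coax at 2/3 c)' % (delay_ns, delay_ns * 1e-9 * 2e8))
print('attenuation ~ %.1f dB' % atten_db)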

Quote:

For completeness, I'd like to temporarily pull the box out of the rack, open it up, take photos, and make a diagram unless there are any objections.

 

Attachment 1: IMG_5112.JPG
IMG_5112.JPG
  13535   Thu Jan 11 20:59:41 2018 gautamUpdateDAQetmx slow daq chassis

Some suggestions of checks to run, based on the rightmost column in the wiring diagram here - I guess some of these have been done already, just noting them here so that results can be posted.

  1. Oplev quadrant slow readouts should match their fast DAQ counterparts.
  2. Confirm that EX Transmon QPD whitening/gain switching are working as expected, and that quadrant spectra have the correct shape.
  3. Watchdog tripping under different conditions.
  4. Coil driver slow readbacks make sense - we should also confirm which of the slow readbacks we are monitoring (there are multiple on the SOS coil driver board) and update the MEDM screen accordingly.
  5. Confirm that shadow sensor PD whitening is working by looking at spectra.
  6. Confirm de-whitening switching capability - both to engage and disengage - maybe the procedure here can be repeated.
  7. Monitor DC alignment of ETMX - we've seen the optic wander around (as judged by the Oplev QPD spot position) while sitting in the control room, would be useful to rule out that this is because of the DC bias voltage stability (it probably isn't).
  8. Confirm that burt snapshot recording is working as expected - this is not just for c1auxex, but for all channels, since, as Johannes pointed out, the 2018 directory was totally missing and hence no snapshots were being made.
  9. Confirm that systemd restarts IOC processes when the machine currently called c1auxex2 gets restarted for whatever reason.

 

  13536   Thu Jan 11 21:09:33 2018 gautamUpdateCDSrevisiting Acromag

We'd like to setup the recording of the PSL diagnostic connector Acromag channels in a more robust way - the objective is to assess the long term performance of the Acromag DAQ system, glitch rates etc. At the Wednesday meeting, Rana suggested using c1ioo to run the IOC processes - the advantage being that c1ioo has the systemd utility, which seems to be pretty reliable in starting up various processes in the event of the computer being rebooted for whatever reason. Jamie pointed out that this may not be the best approach however - because all the FEs get the list of services to run from their common shared drive mount point, it may be that in the event of a power failure for example, all of them try and start the IOC processes, which is presumably undesirable. Furthermore, Johannes reported the necessity for the procServ utility to be able to run the modbusIOC process in the background - this utility is not available on any of the FEs currently, and I didn't want to futz around with trying to install it.

One alternative is to connect the PSL Acromag also to the Supermicro computer Johannes has set up at the Xend - it currently has systemd setup to run the modbusIOC, so it has all the utilities necessary. Or else, we could use optimus, which has systemd, and all the EPICS dependencies required. I feel less wary of trying to install procServ on optimus too. Thoughts?

 

  13539   Fri Jan 12 12:31:04 2018 gautamConfigurationComputerssendmail troubles on nodus

I'm having trouble getting the sendmail service going on nodus since the Christmas day power failure - for some reason, it seems like the mail server that sendmail uses to send out emails on nodus (mx1.caltech.iphmx.com, IP=68.232.148.132) is on a blacklist! Not sure how exactly to go about remedying this.

Running sudo systemctl status sendmail.service -l also shows a bunch of suspicious lines:

Jan 12 10:15:27 nodus.ligo.caltech.edu sendmail[6958]: STARTTLS=client, relay=cluster6a.us.messagelabs.com., version=TLSv1/SSLv3, verify=FAIL, cipher=DHE-RSA-AES256-GCM-SHA384, bits=256/256
Jan 12 10:15:45 nodus.ligo.caltech.edu sendmail[6958]: w0A7QThE032091: to=<umakant.rapol@iiserpune.ac.in>, ctladdr=<controls@nodus.ligo.caltech.edu> (1001/1001), delay=2+10:49:16, xdelay=00:00:39, mailer=esmtp, pri=5432408, relay=cluster6a.us.messagelabs.com. [216.82.251.230], dsn=4.0.0, stat=Deferred: 421 Service Temporarily Unavailable
Jan 12 11:15:23 nodus.ligo.caltech.edu sendmail[10334]: STARTTLS=client, relay=cluster6a.us.messagelabs.com., version=TLSv1/SSLv3, verify=FAIL, cipher=DHE-RSA-AES256-GCM-SHA384, bits=256/256
Jan 12 11:15:31 nodus.ligo.caltech.edu sendmail[10334]: w0A7QThE032091: to=<umakant.rapol@iiserpune.ac.in>, ctladdr=<controls@nodus.ligo.caltech.edu> (1001/1001), delay=2+11:49:02, xdelay=00:00:27, mailer=esmtp, pri=5522408, relay=cluster6a.us.messagelabs.com. [216.82.251.230], dsn=4.0.0, stat=Deferred: 421 Service Temporarily Unavailable
Jan 12 12:15:25 nodus.ligo.caltech.edu sendmail[13747]: STARTTLS=client, relay=cluster6a.us.messagelabs.com., version=TLSv1/SSLv3, verify=FAIL, cipher=DHE-RSA-AES256-GCM-SHA384, bits=256/256
Jan 12 12:15:42 nodus.ligo.caltech.edu sendmail[13747]: w0A7QThE032091: to=<umakant.rapol@iiserpune.ac.in>, ctladdr=<controls@nodus.ligo.caltech.edu> (1001/1001), delay=2+12:49:13, xdelay=00:00:33, mailer=esmtp, pri=5612408, relay=cluster6a.us.messagelabs.com. [216.82.251.230], dsn=4.0.0, stat=Deferred: 421 Service Temporarily Unavailable

 

Why is nodus attempting to email umakant.rapol@iiserpune.ac.in?

  13541   Fri Jan 12 18:08:55 2018 gautamUpdateGeneralpip installed on nodus

After much googling, I figured out how to install pip on SL7:

sudo easy_install pip

Next, I installed git:

sudo yum install git

Turns out, actually, pip can be installed via yum using

sudo yum install python-pip
  13542   Fri Jan 12 18:22:09 2018 gautamConfigurationComputerssendmail troubles on nodus

Okay I will port awade's python mailer stuff for this purpose.

gautam 14Jan2018 1730: Python mailer has been implemented: see here for the files. On shared drive, the files are at /opt/rtcds/caltech/c1/scripts/general/pizza/pythonMailer/

gautam 11Feb2018 1730: The python mailer had never once worked successfully in automatically sending the message. I realized this may be because I had put the script on the root user's crontab, but had set up the authentication keyring with the password for the mailer as the controls user. So I have now set up a controls user crontab, which for now just runs the pizza mailing. Let's see if this works next Sunday...

Quote:

I personally don't like the idea of having sendmail (or something similar like postfix) on a personal server as it requires a lot of maintenance cost (like security update, configuration, etc). If we can use external mail service (like gmail) via gmail API on python, that would easy our worry, I thought.

 

  13547   Mon Jan 15 11:53:57 2018 gautamUpdateIOOMCautolocker getting stuck

Looks like this problem persisted over the weekend - Attachment #1 is the wall StripTool trace for PSL diagnostics. It seems like the control signals to the NPRO PZT and the FSS EOM were all over the place, and saturated for the most part.

I traced down the problem to an unresponsive c1iool0. So looks like for the IMC autolocker to work properly (on the software end), we need c1psl, c1iool0 and megatron to all be running smoothly. c1psl controls the FSS box gains through EPICS channels, c1iool0 controls the MC servo board gains through EPICS channels, and megatron runs the various scripts to setup the gains for either lock acquisition or in lock states. In this specific case, the autolocker was being foiled because the mcdown script wasn't running properly - it was unable to set the EPICS channel C1:IOO-MC_VCO_GAIN to its lock acquisition value of -15dB, and was stuck at its in-lock value of +7dB. Curiously, the other EPICS channels on c1iool0 seemed readable and were reset by mcdown. Anyways, keying the c1iool0 crate seems to have fixed the problem.

Quote:

I've noticed this a couple of times today - when the autolocker runs the mcdown script, sometimes it doesn't seem to actually change the various gain sliders on the PSL FSS. There is no handshaking built in to the autolocker at the moment. So the autolocker thinks that the settings are correct for lock re-acquisition, but they are not. The PCdrive signal is often railing, as is the PZT signal. The autolocker just gets stuck waiting to re-acquire lock. This has happened today ~3 times, and each time, the Autolocker has tried to re-acquire lock unsuccessfully for ~1hour.

Perhaps I'll add a line or two to check that the signal levels are indicative of mcdown being successfully executed.

 

Attachment 1: MCautolkockerStuck.png
MCautolkockerStuck.png
  13548   Mon Jan 15 17:36:03 2018 gautamHowToOptical LeversOplev calibration

Summary:

I checked the calibration of the Oplevs for both ITMs, both ETMs and the BS. The table below summarizes the old and new cts->urad conversion factors, as well as the factor describing the scaling applied. Attachment #1 is a zip file of the fits performed to calculate these calibration factors (GPS times of the sweeps are in the titles of these plots). Attachment #2 is the spectra of the various Oplev error signals (open loop, so a measure of seismic induced angular motion for a given optic, and DoF) after the correction. Loop TF measurements post calibration factor update and loop gain adjustment to be uploaded tomorrow.

Optic, DoF     Old calib [urad/ct]   New calib [urad/ct]   Correction factor [new/old]
ETMX, Pitch    200                   175                   0.88
ETMX, Yaw      222                   175                   0.79
ITMX, Pitch    122                   134                   1.1
ITMX, Yaw      147                   146                   1
BS, Pitch      130                   136                   1.05
BS, Yaw        170                   176                   1.04
ITMY, Pitch    239                   254                   1.06
ITMY, Yaw      226                   220                   0.97
ETMY, Pitch    140                   164                   1.17
ETMY, Yaw      143                   169                   1.18

Motivation:

We'd like for the Oplev calibration to be a reliable readback of the optic alignment. For example, a calibrated Oplev would be a useful diagnostic to analyze the drifting (?) ETMX.

Details:

  1. I locked and dither aligned the individual arms.
  2. I then used a 60 second ramp time to misalign <optic> in {ITMX, ITMY, BS, ETMX, ETMY} one at a time, and looked at the appropriate arm cavity transmission while the misalignment was applied. The amplitude of the misalignment was chosen such that in the maximally misaligned state, the arm cavity was still locked to a TEM00 mode, with arm transmission ~40% of the value when the cavity transmission was maximized using the dither alignment servos. The CDS ramp is not exactly linear, it looks more like a sigmoid near the start and end, but I don't think that really matters for these fits.
  3. I used the script OLcalibFactor.py (located at /opt/rtcds/caltech/c1/scripts/OL) to fit the data and extract calibration factors. This script downloads the arm cavity transmission and the OL error signal during the misalignment period, and fits a Gaussian profile to the data (X=oplev error signal, Y=arm transmission). Using geometry and mode overlap considerations, we can back out the misalignment in physical units (urad).
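
Schematically, the fit in step 3 looks something like the sketch below. The real OLcalibFactor.py also handles the data download and the geometry / mode-overlap conversion to urad; the filename and column layout here are assumptions.

import numpy as np
from scipy.optimize import curve_fit

# OL error signal [cts] and arm transmission [norm.] during the slow misalignment ramp
ol_err, arm_trans = np.loadtxt('misalign_sweep.txt', unpack=True)

def gaussian(x, a, x0, sigma, offset):
    return a * np.exp(-0.5 * ((x - x0) / sigma) ** 2) + offset

p0 = [arm_trans.max(), ol_err[np.argmax(arm_trans)], np.std(ol_err), 0.0]
popt, pcov = curve_fit(gaussian, ol_err, arm_trans, p0=p0)
a, x0, sigma, offset = popt

# sigma is the misalignment, in OL counts, over which the TEM00 buildup falls off;
# comparing it to the width expected from the cavity geometry / mode overlap (in urad)
# gives the urad-per-count calibration.
print('fitted width = %.2f OL counts' % sigma)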

Comments:

  1. For the most part, the correction was small, of the order of a few percent. The largest corrections were for the ETMs. I believe the last person to do Oplev calibration for the TMs was Yutaro in Dec 2015, and since then, we have certainly changed the HeNes at the X and Y ends (but not for the ITMs), so this seems consistent.
  2. From attachment #2, most of the 1Hz resonances line up quite well (around 1-3urad/rtHz), so gives me some confidence in this calibration.
  3. I haven't done a careful error analysis yet - but the fits are good to the eye, and the residuals look randomly distributed for the most part. I've assumed precision to the level of 1 urad/ct in setting the new calibration factors.
  4. I think the misalignment period of 60 seconds is sufficiently long that the disturbance applied to the Oplev loop is well below the lower loop UGF of ~0.2Hz, and so the in loop Oplev error signal is a good proxy for the angular (mis)alignment of the optic. So no loop correction factor was applied.
  5. I've not yet calibrated the PRM and SRM oplevs.

Now that the ETMX calibration has been updated, let's keep an eye out for a wandering ETMX.

Attachment 1: OLcalib_20180115.tar.bz2
Attachment 2: Oplevs.pdf
Oplevs.pdf Oplevs.pdf
  13549   Tue Jan 16 11:05:51 2018 gautamUpdatePSL PSL shelf - AOM power connection interrupted

While moving the RefCav to facilitate the PSL shelf install, I bumped the power cable to the AOM driver. I will re-solder it in the evening after the shelf installation. PMC and IMC have been re-locked. Judging by the PMC refl camera image, I may also have bumped the camera as the REFL spot is now a little shifted. The fact that the IMC re-locked readily suggests that the input pointing can't have changed significantly because of the RefCav move.

 

  13551   Tue Jan 16 21:46:02 2018 gautamUpdatePSL PSL shelf - AOM power connection interrupted

Johannes informed me that he touched up the PMC REFL camera alignment. I am holding off on re-soldering the AOM driver power as I could use another pair of hands getting the power cable disentangled and removed from the 1X2 rack rails, so that I can bring it out to the lab and solder it back on.

Is anyone aware of a more robust connector solution for the type of power pins we have on the AOM driver?

Quote:

While moving the RefCav to facilitate the PSL shelf install, I bumped the power cable to the AOM driver. I will re-solder it in the evening after the shelf installation. PMC and IMC have been re-locked. Judging by the PMC refl camera image, I may also have bumped the camera as the REFL spot is now a little shifted. The fact that the IMC re-locked readily suggests that the input pointing can't have changed significantly because of the RefCav move.

 

 

  13552   Tue Jan 16 21:50:53 2018 gautamUpdateALSFiber ALS assay

With Johannes' help, I re-installed the box in the LSC electronics rack. In the end, I couldn't find a good solution to thermally insulate the inside of the box with foam - the 2U box is already pretty crowded with ~100m of cabling inside of it. So I just removed all the haphazardly placed foam and closed the box up for now. We can evaluate if thermal stability of the delay line is limiting us anywhere we care about and then think about what to do in this respect. This box is actually rather heavy with ~100m of cabling inside, and is right now mounted just by using the ears on the front - probably should try and implement a more robust mounting solution for the box with some rails for it to sit on.

I then restored all the cabling - but now, the delayed part of the split RF beat signal goes to the "RF in" input of the demod board, and the non-delayed part goes to the back-panel "LO" input. I also re-did the cabling at the PSL table, to connect the two ZHL3-A amplifier inputs to the IR beat PDs in the BeatMouth instead of the green BBPDs.

I didn't measure any power levels today; my plan was to try and get a quick ALS error signal spectrum - but it looks like there is too much beat signal power available at the moment, as the ADC inputs for both arm beat signals are often overflowing. The flat gain on the AS165 (=ALS X) and POP55 (=ALS Y) channels has been set to 0dB, but the input signals still seem way too large. The signals on the control room spectrum analyzer come from the "RF mon" ports on the demod board, which are marked as -23dBm. I looked at these peak heights with the end green beams locked to the arm cavities, as per the proposed new ALS scheme. I'm not sure how much cable loss we have from the LSC rack to the control room analyzer, but assuming 3dB (which is the Google value for 100ft of RG58) and reading off the peak heights, I figure that we have ~0dBm of RF signal in the X arm => I would expect ~3dBm of signal at the LO input. Both these numbers seem well within the range the demod board is designed to handle, so I'm not sure why we are saturating.

Note that the nominal (differential) I and Q demodulated outputs from the demod board come out of a backplane connector - but we seem to be using the front panel (single-ended) "MON" channels to acquire these signals. I also need to update my Fiber ALS diagram to indicate the power loss in cabling from the PSL table to the LSC electronics rack, expect it to be a couple of dB.

 

Quote:

After labeling cables I would disconnect, I pulled the box out of the LSC rack. Attachment #1 is a picture of the insides of the box - looks like it is indeed just two lengths of cabling. There was also some foam haphazardly stuck around inside - presumably an attempt at insulation/isolation.

Since I have the box out, I plan to measure the delay in each path, and also the signal attenuation. I'll also try and neaten the foam padding arrangement - Steve was showing me some foam we have, I'll use that. If anyone has comments on other changes that should be made / additional tests that should be done, please let me know.

20180111_2200: I'm running some TF measurements on the delay line box with the Agilent in the control room area (script running in tmux sesh on pianosa). Results will be uploaded later.

 

 

  13553   Wed Jan 17 14:32:51 2018 gautamUpdateDAQAcromag checks
  1. I take back what I said about the OSEM PD mon at the meeting - there does seem to be some overall calibration factor (Attachment #1) that has scaled the OSEM PD readback channels, by a factor of (20000/2^15), which Johannes informs me is some strange feature of the ADC that he will explain in a subsequent post (see the numerical sketch after this list).
  2. The coil readback fields on the MEDM screen have a "30Hz HPF" text field below them - I believe this is misleading. Judging by the schematic, what we are monitoring on the backplane (which is what these channels are reading back from) is the voltage to the coil with a gain of 0.5. We can reconfirm by checking the ETMX coil driver board, after which we should remove the misleading label on the MEDM screens.
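
For concreteness, my reading of the scaling in point 1 (both the factor's direction and interpretation are my guess, pending Johannes' explanation) is just:

# The factor relating the new Acromag readbacks to the old values (Attachment 1):
scale = 20000.0 / 2**15                      # ~ 0.6104
# e.g. a value that used to read 2**15 = 32768 cts would now read 20000:
print('32768 * %.4f = %.0f' % (scale, 32768 * scale))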
Quote:

Some suggestions of checks to run, based on the rightmost column in the wiring diagram here - I guess some of these have been done already, just noting them here so that results can be posted.

  1. Oplev quadrant slow readouts should match their fast DAQ counterparts.
  2. Confirm that EX Transmon QPD whitening/gain switching are working as expected, and that quadrant spectra have the correct shape.
  3. Watchdog tripping under different conditions.
  4. Coil driver slow readbacks make sense - we should also confirm which of the slow readbacks we are monitoring (there are multiple on the SOS coil driver board) and update the MEDM screen accordingly.
  5. Confirm that shadow sensor PD whitening is working by looking at spectra.
  6. Confirm de-whitening switching capability - both to engage and disengage - maybe the procedure here can be repeated.
  7. Monitor DC alignment of ETMX - we've seen the optic wander around (as judged by the Oplev QPD spot position) while sitting in the control room, would be useful to rule out that this is because of the DC bias voltage stability (it probably isn't).
  8. Confirm that burt snapshot recording is working as expected - this is not just for c1auxex, but for all channels, since, as Johannes pointed out, the 2018 directory was totally missing and hence no snapshots were being made.
  9. Confirm that systemd restarts IOC processes when the machine currently called c1auxex2 gets restarted for whatever reason.

 

 

 

Attachment 1: OSEMPDmon_Acro.png
OSEMPDmon_Acro.png