ID   Date   Author   Type   Category   Subject
  12672   Wed Dec 7 11:52:48 2016   ericq   Update   IMC   Partial IMC ringdowns

The transients are likely due to Doppler interference caused by the input laser frequency sloshing around, driven by errant control signals after the IMC unlocks. I performed a few "partial" ringdowns by reducing the power by about 80% while keeping the IMC servo locked. (Function generator: 0.5 Vpp square wave with a 0.25 V offset. IMC boosts were turned off to increase the stable range of the servo.)

I still need to work out how to extract the loss from this; having only a partial ringdown may change the calculations somewhat, since the time constants in the trans and refl signals are not identical.
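
A minimal sketch of extracting the transmission time constant from one of these traces, assuming the scope data has been saved as two columns of time and transmitted power; the file name, the fit starting values, and the single-pole-plus-offset model are illustrative only (the refl signal would need a more careful model because of the interference with the promptly reflected field):

import numpy as np
from scipy.optimize import curve_fit

# Single-pole decay toward a new steady state: for a *partial* ringdown the
# cavity settles to a nonzero level, hence the offset term P0.
def trans_model(t, A, tau, P0):
    return A * np.exp(-t / tau) + P0

# Hypothetical data file: time [s], transmitted power [V].
t, p_trans = np.loadtxt("imc_partial_ringdown_trans.txt", unpack=True)

mask = t > 0   # fit only the points after the power step (assumed at t = 0)
popt, pcov = curve_fit(trans_model, t[mask], p_trans[mask],
                       p0=[p_trans[mask][0] - p_trans[mask][-1],
                           50e-6,                # guess: tens of us storage time
                           p_trans[mask][-1]])
A, tau, P0 = popt
print(f"fitted storage time tau = {tau*1e6:.1f} us")
# The loss would then follow from tau, the IMC round-trip length, and the
# known mirror transmissions (not done here).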

Thanks to Gautam's nice setup, it was very easy to take these measurements. Thanks! Code and data attached.

Attachment 2: IMCpartial.zip
  12673   Thu Dec 8 07:56:05 2016   Steve   Update   PEM   EQ6.5m Northern CA

No damage. ITMY is glitching, so it has not been damped.

 

Attachment 1: eq6.5FerndaleCA.png
Attachment 2: 16d_glitching-_trend.png
Attachment 3: EQ6.5_&4.7mFerndaleCa.png
  12674   Thu Dec 8 10:13:43 2016   Steve   Update   LSC   glitching ITMY_UL has a history

 

 

Attachment 1: glitching__ITMY-UL_2007.png
  12675   Thu Dec 8 19:01:21 2016   rana   Update   IMC   Partial IMC ringdowns

Mach Zucker on how to do ringdowns:  https://dcc.ligo.org/LIGO-T900007

  12676   Tue Dec 13 17:26:42 2016   Koji   Update   IOO   IMC WFS whitening filter investigation

Rana pointed out that this modification (removal of the 900 Ohm resistors) leaves the input impedance as low as 100 Ohm.
As the OP284 can only drive up to 10 mA, the input can span only +/-1 V, with some nonlinearity.

Rather than reinstalling the 900 Ohm resistors, Rana will investigate the old fix for the whitening filter, which may involve the removal of the AD602s.
Until that solution is in hand, the IMC WFS project is suspended.

  12677   Wed Dec 14 19:16:57 2016   Lydia   Update   CDS   Acromag Binary I/O testing

I looked into converting the QPD whitening switches for the X end to Acromag.

  • To test this out and be able to freely toggle filters without messing anything up, I added a temporary dummy cdsFiltCtrl module (ACROMAG_BIO_TEST) to the c1scx model.
  • The filters can be toggled from the automatically generated medm screen medm/c1scx/C1SCX_ACROMAG_BIO_TEST.adl
  • The control output of the dummy filter bank is sent to a channel named C1:SCX-ACROMAG_SWCTRL.
  • I was able to read in the appropriate bits from there and send them to the appropriate Acromag channel using a calcout channel.
    • I couldn't get individual binary output (bo) channels to work. This Acromag module is configured to write to 4 channels at a time, so I set that up with an analog output channel. The calcout channel shifts each relevant bit from C1:SCX-ACROMAG_SWCTRL to the right place for writing to the Acromag (see the sketch after this list). 
  • I connected the Acromag XT1111 Binary I/O unit to a temporary power supply and verified that toggling the filters on and off changed the output appropriately. This is a sinking output model so the output pin is connected to the return if the switch is on. 
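
As a rough illustration of the bit manipulation the calcout channel performs, here is a Python/pyepics sketch that reads the filter-bank control word and packs the switch bits into one word for the 4-channel Acromag write. The bit positions and the Acromag output channel name are placeholders, not the real mapping:

from epics import caget, caput   # pyepics

SWCTRL = "C1:SCX-ACROMAG_SWCTRL"            # control word from the dummy filter bank
ACROMAG_OUT = "C1:SCX-ACROMAG_XT1111_OUT"   # hypothetical 4-bit output channel

# Bit positions of the whitening switches within the SWCTRL word
# (placeholders -- the real mapping depends on the cdsFiltCtrl layout).
SWITCH_BITS = [0, 1, 2, 3]

def update_acromag():
    ctrl = int(caget(SWCTRL))
    word = 0
    for out_bit, ctrl_bit in enumerate(SWITCH_BITS):
        if (ctrl >> ctrl_bit) & 1:
            word |= 1 << out_bit
    # One write covers all four sinking outputs of the XT1111 at once.
    caput(ACROMAG_OUT, word)

if __name__ == "__main__":
    update_acromag()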

The plan from here:

  • Determine how to configure these outputs to be compatible with the QPD whitening board.
  • Modify the SUS PD whitening board to always use the analog filter and remove digital option in models.
  • Test DACs 
  • Verify that the QPD whitening gain switches aren't doing anything
  • Assemble new Acromag box for X end and connect to QPD whitening, SUS PD whitening and SOS driver boards
  12678   Thu Dec 15 03:46:19 2016   rana   Update   IOO   IMC WFS whitening filter investigation

https://dcc.ligo.org/LIGO-D1400414

As it turns out, it's not as old as I thought. Jenne and I reworked these in 2014-2015. The QPD whitening is the same as the IMC WFS whitening, so we can just repeat those fixes here for the IMC.

Quote:

Rana pointed out that this modification (removal of the 900 Ohm resistors) leaves the input impedance as low as 100 Ohm.
As the OP284 can only drive up to 10 mA, the input can span only +/-1 V, with some nonlinearity.

Rather than reinstalling the 900 Ohm resistors, Rana will investigate the old fix for the whitening filter, which may involve the removal of the AD602s.
Until that solution is in hand, the IMC WFS project is suspended.

 

  12681   Thu Dec 22 09:37:20 2016   Steve   Update   VAC   RGA scan at day 63

Valve configuration: vacuum normal

RGA head temp: 43.5 C

Vac envelope temp: 23 C

 

Attachment 1: pd80-d63.png
Attachment 2: pd80-580Hz-d63.png
  12687   Thu Dec 29 10:24:56 2016   Steve   Update   PEM   EQ5.7m Hawthorne NV

Sus damping restored.

 

Attachment 1: eq5.7HawthorneNV.png
  12691   Thu Dec 29 21:48:32 2016   rana   Update   IOO   MC AutoLocker hung because c1iool0 asleep again

MC unlocked, Autolocker waiting for c1iool0 EPICS channels to respond. c1iool0 was responding to ping, but not to telnet. Keyed the crate and it's coming back now.

There are many mentions of c1iool0 in the recent past, so it seems like its demise must be imminent. Good thing we have an Acromag team on top of things!

Also, the beam on WFS2 is too high and the autolocker is tickling the Input switch on the servo board too much: this is redundant / conflicting with the MC2 tickler.

  12692   Fri Dec 30 10:27:46 2016   rana   Update   DetChar   summary pages dead again

Dead again. No outputs for the past month. We really need a cron job to check this out rather than wait for someone to look at the web page.

  12693   Thu Jan 5 21:43:16 2017   rana   Update   DetChar   summary pages dead again

Max tells us that some conf files were bad and that he did something and now some pages are being made. But the PEM and MEDM pages are blank. Also, the ASC tab looks bogus to me.

  12695   Sun Jan 8 12:47:06 2017   rana   Update   General   Optical Layout in DCC

Manasa pointed me to the CAD drawings in the 40m SVN and I've now uploaded them to the 40m DCC Tree so that EricG and SteveV can convert them into SolidWorks.

  12696   Mon Jan 9 09:18:47 2017   Steve   Update   PEM   power glitch

There was a power glitch last night around 1:15 am.

The vacuum was not affected.

PSL laser turned on, PMC locked, PSL shutter opened and MC locked.

IR lasers at the ends turned on.

East arm air cond turned on.

The computers are all done.

The last power glitch was on Nov 3, 2016.

 

 

Attachment 1: MondayMorning.png
  12697   Mon Jan 9 16:12:30 2017   Steve   Update   General   Optical Layout in DCC

Caltech Facilities promised to email the 40m facility drawings in CAD format.

I organized the old optical, vacuum, and facility layout drawings (on paper) in the old cabinet.

Quote:

Manasa pointed me to the CAD drawings in the 40m SVN and I've now uploaded them to the 40m DCC Tree so that EricG and SteveV can convert them into SolidWorks.

 

Attachment 1: drawings_on_paper.jpg
  12698   Tue Jan 10 14:24:09 2017   Steve   Update   VAC   RGA scan at day 82

Valve configuration: vacuum normal

Vacuum envelope temp: 23 C

RGA head temp: 44 C

 

Attachment 1: pd80VNd82.png
  12699   Tue Jan 10 16:20:11 2017   Steve   Update   CDS   power glitch......Raid is rebuilding

Jamie started rebuilding the fm40m RAID. It has been beeping since the power outage.

The summary pages have had no data since the power glitch.

 

Attachment 1: rebuilding_in_progress.png
  12700   Tue Jan 10 21:47:00 2017   rana   Update   CDS   power glitch

Does "done" mean they are OK or they are somehow damaged? Do you mean the workstations or the front end machines?

The computers are all done.

megatron and optimus are not responding to ping commands or ssh -- please power them up if they are off; we need them to get data remotely

  12701   Tue Jan 10 22:55:43 2017   gautam   Update   CDS   power glitch - recovery steps

Here is a link to an elog with the steps I had to follow the last time there was a similar power glitch.

The RAID array was also restarted not too long ago; we should do a data consistency check as detailed here, if not already done.

If someone hasn't found the time to do this, I can take care of it tomorrow afternoon after I am back.

Quote:

Does "done" mean they are OK or they are somehow damaged? Do you mean the workstations or the front end machines?

The computers are all done.

megatron and optimus are not responding to ping commands or ssh -- please power them up if they are off; we need them to get data remotely

 

  12702   Wed Jan 11 16:35:03 2017   gautam   Update   CDS   power glitch - recovery progress

[lydia, ericq, gautam]

We set about following the instructions linked in the previous elog. A few notes/remarks:

  1. It is important to run the ntpdate commands before restarting the models. Sometimes, multiple restarts of the models were required to turn all the indicator blocks on the MEDM screen green.
  2. There was also an issue of multiple ntpd processes running on the same machine, which obviously caused all sorts of timing havoc. EricQ helped us diagnose and fix these. At the moment, all the lights are green on the CDS status MEDM screen
  3. On the hardware side, apart from the usual suspects of frontends/megatron/optimus/fb needing to be rebooted, I noticed that the ETMX OSEM lights were off on the control room monitors. Investigation pointed to the two 20 V Sorensens at the X end outputting 0 V, 0 A after the power glitch. We turned down both dials, and then gradually ramped them up again. Both Sorensens now read +/-20 V, 0.3 A, which is in agreement with the label stuck onto them.
  4. Restarted MC autolocker and FSS Slow scripts on megatron. I have not yet looked at the status of the nds2 server on megatron.
  5. The 11 MHz Marconi has yet to be restarted - but I am unable to get even the IMC locked at the moment. For some reason, the RMS of the MC1 and MC3 coils is way higher than usual (~5 mV rms, compared to the <1 mV rms I am used to seeing for a damped optic). I will investigate further. Leaving MC autolocker disabled for now.
  12703   Wed Jan 11 19:20:23 2017   Max Isi   Update   Summary Pages   December outage

The summary pages were not successfully generated for a long period of time at the end of 2016 due to syntax errors in the PEM and Weather configuration files.

These errors caused the INI parser to crash and brought down the whole gwsumm system. It seems that changes in the configuration of the Condor daemon at the CIT clusters may have made our infrastructure less robust against these kinds of problems (which would explain why there wasn't a better error message/alert), but this requires further investigation.

In any case, the solution was as simple as correcting the typos in the config files (on the nodus side) and restarting the cron jobs (on the cluster side, by doing `condor_rm 40m && condor_submit DetectorChar/condor/gw_daily_summary.sub`). Producing pages for the missing days will take some time (how to do so for a particular day is explained in the wiki: https://wiki-40m.ligo.caltech.edu/DailySummaryHelp).

RXA: later, Max sent us this secret note:

However, I realize it might not be clear from the page which are the key steps. These are just running:

1) ./DetectorChar/bin/gw_daily_summary --day YYYYMMDD --file-tag some_custom_tag

This creates pages for day YYYYMMDD (the file-tag option is not strictly necessary but will prevent conflict with other instances of the code running simultaneously).

2) sync those days back to nodus by doing, eg: ./DetectorChar/bin/pushnodus 20160701 20160702

This must all be done from the cluster using the 40m shared account.
  12704   Thu Jan 12 02:45:53 2017   Johannes   Update   General   Next armloss steps

As stated in elog 12618, using an oscilloscope to average the reflected powers and thus circumventing all filtering yielded much better results than before:

XARM: 21 +/- 35 ppm
YARM: 69 +/- 45 ppm

We can probably decrease the measurement uncertainty further by using a larger photodiode that is more suited for DC measurements. It will be placed in the AS path temporarily. If we get below 10 ppm, systematic errors will begin to matter. To get those under control I will have to re-determine the visibility in the arm cavities and the modulation indices. The numbers to match, from an estimate via the power recycling gain, are <= 50 ppm arm average from elog 12586. Once the measurement scheme is up and running, we can proceed to generate ETM loss maps. ITM will still be tricky, but let's see what we can do.
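
For reference, a sketch of how the DC reflection measurement can be mapped to a round-trip loss number, using the usual two-mirror cavity model where the locked/misaligned reflected power ratio is compared to |r_cav/r_ITM|^2. The ITM and ETM transmissions below are assumed nominal 40m values and should be replaced with the measured ones:

import numpy as np
from scipy.optimize import brentq

T1 = 1.384e-2   # ITM power transmission (assumed nominal value)
T2 = 13.7e-6    # ETM power transmission (assumed nominal value)

def power_refl_ratio(loss):
    """P_refl(locked) / P_refl(ETM misaligned) for round-trip loss `loss`."""
    r1 = np.sqrt(1 - T1)
    r2 = np.sqrt(1 - T2 - loss)        # lump the round-trip loss in with the ETM
    r_cav = (r1 - r2) / (1 - r1 * r2)  # cavity amplitude reflectivity on resonance
    return (r_cav / r1) ** 2           # misaligned arm just reflects R1 = r1**2

def loss_from_ratio(measured_ratio):
    """Numerically invert the model for the round-trip loss."""
    return brentq(lambda L: power_refl_ratio(L) - measured_ratio, 1e-7, 1e-3)

if __name__ == "__main__":
    for ratio in (1.008, 1.005, 1.002):   # example measured power ratios
        print(ratio, f"-> {loss_from_ratio(ratio)*1e6:.0f} ppm")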

Following Yutaro's approach, we can move the beams on the optics in a deterministic way by several mm on the ETMs. Moving the beam is achieved by introducing offsets into the ASS auto-alignment. As an example, the yaw dither for ETMY is shown:

Each of the 8 test mass rotational degrees of freedom is driven by a particular frequency, and 2 signals are digitally demodulated in the real-time system: The arm transmission ("T") and the LSC arm length feedback signal to the ETM (L). The T signal feeds back to the input pointing, aka Tip Tilts and BS. This maximizes the transmission for a given test mass orientation. The L feedback controls the beam position on the mirrors in the arms. It minimizes the coupling of the dither to the length feedback, which is achieved when the beam goes through the axis of the rotational motion. This is where we introduce the offset:

The signal C1:ASS-YARM_ETM_YAW_L_DEMOD_I_OFFSET (for this example) moves the locking point of the dither-to-length coupling and thus moves the beam around on the ETM. This is true for the PIT and YAW of all test masses except ITMX. In the current configuration the TTs optimize the alignment into the YARM, and for the X arm we only have the BS, which is why the beam spot on ITMX cannot be independently controlled as-is. We could, however, for the sake of this measurement, temporarily give TT authority to the XARM feedback to control the ITMX beam position. I imagine something like dither-aligning with ASS the normal way, and then running a customized script in which the XARM is treated as the YARM, feedback to the BS is cut, and the YAW signals are inverted due to the reflection off the BS.

Knowing the angle of the offset gives us a way to calculate the beam spot displacement from the cavity geometry. For best results I want to make sure our OpLev calibration is still good (the laser power decays, although the last time this was done was only about a year ago), which would be analogous to elog 11831.
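
A sketch of the geometry part, using the standard two-mirror-cavity relation between mirror tilt and cavity-axis motion. The arm length and ETM radius of curvature below are only rough 40m numbers (ITMs taken as flat) and should be replaced with the measured values:

import numpy as np

# Rough 40m arm parameters (assumptions -- use the measured values).
L_arm = 37.8     # arm length [m]
R_itm = np.inf   # ITM radius of curvature [m] (flat)
R_etm = 57.4     # ETM radius of curvature [m]

g1 = 1 - L_arm / R_itm   # = 1 for a flat ITM
g2 = 1 - L_arm / R_etm

def spot_shift(theta_itm, theta_etm):
    """Beam-spot displacement [m] on (ITM, ETM) for small mirror tilts [rad]."""
    denom = 1 - g1 * g2
    dx_itm = L_arm * (g2 * theta_itm + theta_etm) / denom
    dx_etm = L_arm * (theta_itm + g1 * theta_etm) / denom
    return dx_itm, dx_etm

if __name__ == "__main__":
    # Example: 10 urad of ETM yaw, as might be read off the calibrated OpLev.
    dx_itm, dx_etm = spot_shift(0.0, 10e-6)
    print(f"spot moves {dx_etm*1e3:.2f} mm on the ETM, {dx_itm*1e3:.2f} mm on the ITM")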

As for ITM beam position, this scheme only works partially, because it would require the beam to steer further off its axis than in the ETM case. This is problematic because of the spacing between tip tilts and ITMs. I summarize:

  1. Place larger DCPD in AS path
  2. Confirm mode-matching and mod-indices
  3. Assess loss in center with zero offsets
  4. Uncertainty low enough? If not get better.
  5. Calibrate OpLevs
  6. Introduce calibrated offsets in dither alignment
  7. Wander beam on test masses, recording arm losses
  8. ???
  9. Profit
Attachment 1: ass_illustration.pdf
  12708   Thu Jan 12 17:31:51 2017   gautam   Update   CDS   DC errors

The IFO is more or less back to an operational state. Some details:

  1. The IMC mirror excess motion alluded to in the previous elog was due to some timing issues on c1sus. The "DAC" and "DK" blocks in the c1x02 diag word were red instead of green. Restarting all the models on c1sus fixed the problem.
  2. When c1ioo was restarted, all of Koji's (digital) changes to the MC WFS servo were lost, as they had not been committed to the SDF. Eric suggested that I could just restore them from burt snapshots, which is what I did. I used the c1iooepics.snap file from 12:19 PM PST on 26 December 2016, which was a time when the WFS servo was working well as per this elog by Koji. I have also committed all the changes to the SDF. IMC alignment has been stable for the last 4 hours.
  3. Johannes aligned and locked the arms today. There was a large DC offset on POX11, which was zeroed out by closing the PSL shutter and running LSC offsets. Both arms lock and stay aligned now.
  4. The doubling oven controller at the Y end was switched off. Johannes turned it on.
  5. Eric and I started a data consistency check on the RAID array yesterday, it has completed today and indicated no issues
  6. NDS2 is now running again on megatron so channel access from outside should(???) be possible again.

One error persists - the "DC" indicator (data concentrator?) on the CDS MEDM screen for the various models spontaneously goes red and returns to green often. Is this a known issue with an easy fix?

  12709   Thu Jan 12 23:22:34 2017   rana   Update   Summary Pages   December outage

Pages still not working: PEM and MEDM blank.

  • Committed existing MEDM grabbing scripts to SVN. Ran the cron job on megatron by hand. It grabs PNG files, but somehow they're not getting into the summary pages.
  • Changed the MEDM grabbing scripts to use '/usr/bin/env'.
  • GW summary log files were numbering in the many thousands, so I moved everything over 320 days old into the OLD/ sub-directory using 'find . -type f -mtime +320 -exec mv {} OLD/ \;' (the semi-colon is needed)
  • Did apt-get upgrade on Megatron.
  • pinged Max
  • Stared at GWsumm docs to see if there's a clue about what (if anything) is wrong with the .ini file.
  12710   Fri Jan 13 08:54:32 2017   Johannes   Update   General   DC PD installed

I installed a DC PD (Thorlabs PDA 520) in the beam path to AS55. I placed a 2" 90/10 BS on a flip mount that picks off enough light for the PD to spit out ~8 V when the port is bright. The single-arm continuous signal will be ~2 V. While most of the light still continues towards AS55, the displacement from the BS moves the beam off AS55, so I used the flip mount in case anyone needs to use AS55. The current configuration is UP.

When we're done with loss investigations the flip mount should be removed from the bench.

I hooked the PD up to an ethernet-enabled scope and started scripting the loss map measurement (scope can receive commands via http so we can automate the data acquisition). The scope that was present at the bench and had been used for the MC ringdown measurements had a 'scrambled' screen that I couldn't fix so I had to retrieve another scope ("scope1"). I'll try to find out what's wrong with it but we may have to send it in for repair.
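
Since the scope's actual http interface isn't documented here, the following is only a schematic sketch of the kind of automation meant above; the IP address, URL endpoint, and returned data format are hypothetical placeholders for whatever the scope really provides:

import numpy as np
import requests

SCOPE = "http://192.168.113.25"   # hypothetical scope address

def grab_average(channel=1, npts=1000):
    """Fetch one averaged DC trace from the scope and return its mean [V]."""
    # Hypothetical endpoint returning a comma-separated list of voltages.
    r = requests.get(f"{SCOPE}/getwave", params={"ch": channel, "n": npts}, timeout=10)
    r.raise_for_status()
    data = np.array([float(x) for x in r.text.split(",")])
    return data.mean(), data.std() / np.sqrt(len(data))

if __name__ == "__main__":
    mean, err = grab_average(channel=1)
    print(f"reflected DC level: {mean:.4f} +/- {err:.4f} V")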

 

  12711   Fri Jan 13 10:53:03 2017   Steve   Update   PEM   doors are fixed

Control room to outside door was realigned.

It is self-closing now.

The control room to IFO door lock was optimized for soft closing.

All other doors lubricated by Alex of the key shop.

  12712   Fri Jan 13 14:18:28 2017   Steve   Update   PEM   air conditioning fixed

The old control room AC had been stuck in heating mode for about 2 months. Its thermostat and fan belt were finally replaced. It was calibrated and set to 71 F (just behind 1X6 on the west wall) around 1 pm.

Out belt; sad inside 

at 4 pm Rana cried

It must be too tight.

Attachment 1: PEM_120d.png
  12713   Fri Jan 13 14:33:00 2017   MAX (not Rana)   Update   Summary Pages   December outage

PEM config file was also lacking a section named "summary", which is necessary for all parent tabs; this has now been solved. I have deactivated the MEDM pages because Praful's screencap script seemed to be broken (I should have logged this, I apologize).

Quote:

Pages still not working: PEM and MEDM blank.

  • Committed existing MEDM grabbing scripts to SVN. Ran the cron job on megatron by hand. It grabs PNG files, but somehow they're not getting into the summary pages.
  • Changed the MEDM grabbing scripts to use '/usr/bin/env'.
  • GW summary log files were numbering in the many thousands, so I moved everything over 320 days old into the OLD/ sub-directory using 'find . -type f -mtime +320 -exec mv {} OLD/ \;' (the semi-colon is needed)
  • Did apt-get upgrade on Megatron.
  • pinged Max
  • Stared at GWsumm docs to see if there's a clue about what (if anything) is wrong with the .ini file.

 

  12715   Fri Jan 13 21:41:23 2017   Koji   Update   CDS   DC errors

I think I fixed the DC error issue.

1. I added the leap second (leapsecond?) entry for 2016/12/31, 23:59:60 UTC to daqdrc


[OLD]
set gps_leaps = 820108813 914803214 1119744016;
[NEW]
set gps_leaps = 820108813 914803214 1119744016 1167264018;

2. Restarted FB and all realtime models

Now I don't see any RED light.
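
As a quick cross-check of the new entry, the GPS time of the 2016/2017 boundary (just after the inserted leap second) can be computed, for example with astropy, assuming it is installed; it should come out to 1167264018:

from astropy.time import Time

# GPS seconds at 2017-01-01 00:00:00 UTC, i.e. just after the leap second
# inserted at the end of 2016-12-31.
t = Time("2017-01-01 00:00:00", scale="utc")
print(int(t.gps))   # expected: 1167264018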

  12716   Fri Jan 13 23:39:46 2017   gautam   Update   General   ETMX suspension electronics problems?

[Koji,gautam]

After Koji's leap second fix, we were playing around with the X arm locking. In particular, we were playing around with the limit value on the X arm LSC filter bank - the nominal value is 4000, we wanted to see if we could increase this without kicking the optic while acquiring arm lock. We initially increased it to 8000, and then turned it off altogether. Then we rapidly turned the output of the servo ON/OFF, and looked at the arm transmission to see if it came back to the level before unlocking, as an indication of whether the optic was kicked.

These trials suggested a value of 8000 for the limiter was OK, so we left the LSC mode on with the limiter set to 8000. But just as we were about to leave for the night, I noticed on the wall Striptool that the X arm was unlocked. Investigating, we found that the green wasn't even locking to a HOM. Further investigation of the Oplev spot showed that ETMX had received a large kick (both pitch and yaw errors were ~200 urad). ITMX was unaffected.

We initially tried lowering the LSC limit value back to 4000, then used first the Oplev spot and then the green to align the arm. But turning on LSC misaligned the arm after acquiring lock. So we decided to leave LSC off, thinking that the notorious ETMX suspension problems had resurfaced. As a diagnostic, we figured we'd leave the watchdog tripped, and use the Oplev to see if the optic was getting kicked. But the act of turning the watchdog off kicked the optic again (WHY?!).

Looking at the ETMX sus screen, turning off all the damping and LSC (but watchdog on) still leaves a non-zero offset in the "Vmon" field, between 0.02-0.05V depending on the coil. Turning the watchdog OFF takes all these to 0.009V, although I can see the LR value fluctuating between 0.004V and 0.009V. I went to the Xend and squished all the cables on the Sat. Box, but the problem persisted.

At this time, I can't think of any explanation, so I am giving up for the night. To avoid unnecessarily kicking the optic, I am going to unplug the suspension from the Sat. Box and leave one of our tester boxes plugged in, lets see if that sheds any light on the situation...


Notes:

  1. The +/-20V sorensens at this end were "tripped" for a few days after the power glitch until they were reset and turned back on yesterday. But this should not affect Vmon, as these Sorensens only supply the DC voltage for the coil bias, which is a slow machine channel?
  2. The X arm was staying locked and well aligned for hours on end earlier this afternoon - in fact it was locked for about 2 hours 6-8 hours ago; I can still see the trace on the wall StripTool....
  12718   Sat Jan 14 12:12:03 2017   rana   Update   DAQ   minute trends missing

Did we turn off minute trend writing in one of the recent FrameBuilder debug sessions? It seems we only have second trends in 2016. Maybe this explains why it's so slow to get minute trends? Dataviewer has to rebuild them from the second trends.

controls@nodus|frames > l
total 64
drwx------   2 root     root     16384 Jun  8  2009 lost+found/
drwxr-xr-x   2 controls controls  4096 Jul 14  2015 tmp/
-rw-r--r--   1 controls controls     0 Jul 14  2015 test-file
drwxr-xr-x   5 controls controls  4096 Apr  7  2016 trend/
drwxr-xr-x   4 root     root      4096 Apr 11  2016 archive/
drwxr-xr-x 789 controls controls 36864 Jan 13 19:34 full/
controls@nodus|frames > cd trend
controls@nodus|trend > l
total 3340
drwxr-xr-x 258 controls controls 3342336 Jul  6  2015 minute_raw/
drwxr-xr-x 387 controls controls   36864 Nov  5  2015 minute/
drwxr-xr-x 969 controls controls   36864 Jan 13 19:49 second/

  12719   Sat Jan 14 12:36:57 2017   ericq   Update   DAQ   minute trends missing

Yes, writing minute trends causes hourly FB crashes in the current state of things. The "raw" minute trending is turned on, but I think that these are unknown to nds.

  12722   Mon Jan 16 18:54:01 2017   rana   Update   SUS   BS: whitening re-engaged

Found that the BS whitening was off. Gautam says that "it has always been that way" and "there's nothing in the elog about this" and "I have no special relationship with Putin".

I looked at DV and DTT while turning the OSEM whitening back on. As expected, the sensor noise improved by 10x above 10 Hz. The time series shows no problems - it's just less fuzzy now.

All OSEM spectra after the switch are shown on the upper panel of the plot. The lower panel shows a comparison of BS UL before/after. To rotate the DTT PDF landscape output I typed this:

pdftk BS-white.pdf cat 1N output BSwhite.pdf

"if you see something, do something"

Attachment 1: BSwhite.pdf
  12725   Mon Jan 16 23:25:07 2017   gautam   Update   SUS   MC1 SUS electronics investigation

[rana,gautam]

Summary:

  • MC1 glitchy behaviour is back
  • Found a broken LEMO cable, left unplugged for the night -> to be repaired tomorrow
  • Further diagnosis to follow

During the course of Rana's inspection of the general state of the IFO, he commented that there seemed to be several seismic-related IMC lock losses in the time that he had been observing it. This issue looked suspiciously like the MC1 glitches I had noticed sometime late last year, especially since each time the IMC would unlock, we could see significant amounts of motion on MC REFL. To diagnose, we did the following:

  1. Closed PSL shutter
  2. Ramped down the gains of the MC1 damping loops by a factor of 1000 in ~4 secs using z step
  3. Shut down the watchdog for MC1
  4. Observed dataviewer traces for glitches

Sure enough, there were several glitches that occurred in all 5 sensor channels. These glitches varied in size from a few counts (the smaller ones) to 60-70 counts for the bigger ones. In the past, squishing the LEMO connector on the front of the PD whitening board (D000210) had apparently made the glitching go away. So tonight, for starters, we squished everything else - Sat. Box connectors, the breakout board from Sat. Box to whitening board on the back of 1X6, and the DB connector on the front of the whitening board. This had no effect - the glitching remained consistent.

Next, Rana pulled out two of the three 4pin LEMOs, and left only those corresponding to UL/LL plugged in - but the glitching persisted in these two channels. We then pulled out the board. It was installed in 1998, but has a sticker on it that says "fixed in 2003". Not sure what the fix was. Visual inspection of the circuit didn't show anything obviously faulty, but it did look like the two MAX333A quad switches (these control whether the whitening is bypassed or not) had been replaced at some point. There are other undesirable features, such as the use of thick film resistors, but nothing that would explain the glitchy behaviour.

Next, we re-inserted the whitening board back into its original slot in the Eurocrate, but switched the cables (both D sub and LEMO, but only on the whitening board end) between the boards for MC1 and MC3 (i.e. MC1 cables were routed through the whitening board that was originally used for MC3, and vice-versa). But the glitches remained consistent on the MC1 channels. So it looks like the board is not a likely culprit.

Finally, we went in and squished all the cables from the PD whitening board to the ADC (via an AA filter board). For some of the LEMO cables from the whitening board, the LEMO backshells were not properly tightened. Rana fixed these before putting them back in. Some of the connectors were also not pushed in tightly enough; Rana heard the click when he pushed them in. The cables from the adaptor board to the ADC itself looked fine - they were screwed on at both ends, and all these connections looked snug enough. In the interest of completeness, Rana also pushed in the backplane connectors on the Eurocrate (these supply the signals from the BIO cards to switch the whitening ON/OFF). The one corresponding to MC1 was indeed a little loose.

Coming back to the control room, we saw that the MC1 LR sensor was dead. After some investigation, Rana found that on the AA filter board end, one of the 4pin LEMOs from the whitening board had one of its wires come unstuck from where it was soldered (this presumably happened while we were squishing cables tonight, as the LR channel was fine before that). Also, there was no heat shrink used on any of the solder joints. Could this explain the glitchy behaviour? Perhaps, but the glitches remained in the 3 channels that were still connected. Anyway, I will repair this cable tomorrow, and we can see if this has fixed the problem or not.


Some misc points:

  1. Regarding the adaptor boards that take the PD signals from the satellite box and route it to the whitening board, there are some clamps that hold the IDE connectors in place for MC1, MC2 and MC3 boards, but not for the others (see attached picture). Steve, can we install clamps for all of the boards? [taken care of, see here]
  2. The whitening boards are not screwed in place into the Eurocrate. This should be rectified.

PSL shutter is closed, MC1 watchdog is shutdown for the night.

Attachment 1: 20170116_231625.png
Attachment 2: IMG_7175.JPG
Attachment 3: IMG_7174.JPG
  12726   Tue Jan 17 20:39:30 2017   rana   Update   Computer Scripts / Programs   nodus web apache symlinks too soft

I tried to follow these instructions today to make the Simulink Webview accessible:

controls@nodus|public_html > ln -sfn /users/public_html/FE /export/home/

Quote:

The story is: we currently don't expose the whole /users/public_html folder. Instead, we are symlinking the folders from public_html to /export/home/ on nodus, which is where apache looks for things

So, I fixed the links on the Core Optics page by running:

controls@nodus|~ > ln -sfn /users/public_html/40m_phasemap /export/home/

But...I got a "403 Forbidden" message. What is the secret handshake to get this to work? And why have we added this extra step of security?

  12727   Tue Jan 17 20:47:23 2017   rana   Update   CDS   Simulink Webview updated

Seems like this stops working every ~2 years. It's been busted since early 2016 according to cron, so I fixed up the paths, restored some missing files, and committed things to the SVN (with comments!), and now it's working and grabbing the web-viewable versions of the front end models. Just need to restore its viewability and then the world can watch our models any time.

Quote:

Back in 2011, JoeB wrote some entries on how to automatically update the Simulink webview stuff.

Somehow, the cron broke down over the years. I reran the matlab file by hand today and it worked fine, so now you can see the up to date models using the internet.

https://nodus.ligo.caltech.edu:30889/FE/

 

  12728   Tue Jan 17 21:29:52 2017   gautam   Update   SUS   MC1 SUS electronics investigation

 

Quote:
 

After some investigation, Rana found that on the AA filter board end, one of the 4pin LEMOs from the whitening board had one of its wires come unstuck from where it was soldered (this presumably happened while we were squishing cables tonight, as the LR channel was fine before that). Also, there was no heat shrink used on any of the solder joints.

The faulty cable has been re-soldered (with heat shrink) and replaced. All 5 sensor signals appear normal on dataviewer now. I am leaving things in this state for the night, let us see if the glitches return overnight.

PSL shutter remains closed

  12729   Tue Jan 17 21:31:57 2017   gautam   Update   General   ETMX suspension electronics problems?

Last night, I plugged the ETMX suspension coils back into the satellite box. Tonight, we turned on the damping loops for ETMX. Rana centered the Oplev so we can use that as an additional diagnostic to see if the optic gets kicked around overnight. We will re-assess the situation tomorrow.

Sometime earlier today, Lydia noticed that the +/- 5V Sorensens at the X end were not displaying their nominal voltage/current values (as per the stickers on them). She corrected this.

  12730   Wed Jan 18 10:41:14 2017   gautam   Update   General   ETMX suspension electronics problems?

Summary pages show no kicking in the ETMX watchdogs from midnight to 6 AM (0800 - 1400 UTC):

https://nodus.ligo.caltech.edu:30889/detcharsummary/day/20170118/sus/watchdogs/

  12731   Wed Jan 18 11:40:54 2017   gautam   Update   SUS   MC1 SUS electronics investigation

After the repair of the faulty LEMO cable, I left MC1 with its watchdog off overnight. Unfortunately, it looks like the problem still persists. The first attachment shows a second-trend plot for the past 15 hours. Towards the left end of the plot, you can see where I re-connected the LEMO cable for the LR/UR channels.

A couple of months ago, I added a BLRMS block for the IMC optics that calculates BLRMS for the shadow sensor output as well as the coil output. Looking at this trend overnight, I noticed that the glitches appear in the coil outputs as well, as shown in the plot below, which is for a 1 hour stretch last night (I used the full data from a 16Hz coil output channel and not the BLRMS, I am not sure if there is a DQ'ed version of the coil outputs?).
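
For reference, a rough sketch of the >30 Hz band-limited RMS computation on one of the 2048 Hz sensor channels; the data here is a placeholder, and the filter order and averaging time are only illustrative of what the front-end BLRMS block does:

import numpy as np
from scipy import signal

fs = 2048.0   # sample rate of the sensor channels [Hz]

def blrms(x, fs, f_lo=30.0, t_avg=1.0):
    """Band-limited RMS above f_lo, averaged over t_avg-second blocks."""
    # 4th-order Butterworth high-pass, applied forwards and backwards.
    sos = signal.butter(4, f_lo / (fs / 2), btype="highpass", output="sos")
    y = signal.sosfiltfilt(sos, x)
    nblk = int(t_avg * fs)
    nseg = len(y) // nblk
    y = y[:nseg * nblk].reshape(nseg, nblk)
    return np.sqrt(np.mean(y**2, axis=1))   # one RMS value per block

if __name__ == "__main__":
    # Placeholder data: white noise with one injected 'glitch'.
    x = np.random.randn(int(600 * fs))
    x[int(300 * fs)] += 80.0
    print(blrms(x, fs).max())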

Zooming in further to one of these glitches, we can see that the glitches in the coil and shadow sensor signals are in fact coincident.

But given that the watchdog was turned off all this time, the only voltage going to the coils should be the DC bias voltages. So does this not support the hypothesis that the problem lies in the part of the signal chain that supplies the bias voltage to the coils?

Never mind - the "coil output" channel isn't a true readback of the voltage to the coil, but is the calculated damping output (which is not sent to the coils when the watchdog is shut down).

  12733   Wed Jan 18 12:46:47 2017   ericq   Update   Computer Scripts / Programs   nodus web apache symlinks too soft
Quote:

I tried to follow these instructions today to make the Simulink Webview accessible:

controls@nodus|public_html > ln -sfn /users/public_html/FE /export/home/

But...I got a "403 Forbidden" message. What is the secret handshake to get this to work? And why have we added this extra step of security?

This link works for me: https://nodus.ligo.caltech.edu:30889/FE/c1als_slwebview.html. The problem with just /FE/ is that there is no index.html, and we have turned off automatic directory listings.

IIRC, this arrangement was due to the fact that authentication of some of the folders (maybe the wikis) was broken during the nodus upgrade, so there was sensitive information being publicly displayed. This setup gives us discretion over what gets exposed.

  12734   Wed Jan 18 14:23:47 2017   gautam   Update   SUS   MC1 SUS electronics investigation

As part of the ongoing debugging, I've switched the MC1 and MC3 satellite boxes. Both MC1 and MC3 have their watchdogs shut down for the moment.

  12735   Wed Jan 18 15:17:38 2017   rana   Update   Computer Scripts / Programs   nodus web apache symlinks too soft

I suppose before directory listings were turned off we should have fixed the script to make an index.html, but that's how it goes with "up"-grades. How about re-allowing directory listings so that our old links for webview work again?


EQ: https://nodus.ligo.caltech.edu:30889/FE is live

  12736   Wed Jan 18 18:44:53 2017   gautam   Update   SUS   MC1 SUS electronics investigation
Quote:

As part of the ongoing debugging, I've switched the MC1 and MC3 satellite boxes. Both MC1 and MC3 have their watchdogs shut down for the moment.

In the last 3.5 hours, there has been nothing conclusive - no evidence of any glitching in either MC1 or MC3 sensor channels. I am going to hold off on doing the LEMO cable swap test for a few more hours, to see if we can rule out the satellite box.

  12737   Thu Jan 19 08:25:12 2017   Steve   Update   SUS   MC1 SUS electronics investigation
Quote:
Quote:

As part of the ongoing debugging, I've switched the MC1 and MC3 satellite boxes. Both MC1 and MC3 have their watchdogs shut down for the moment.

In the last 3.5 hours, there has been nothing conclusive - no evidence of any glitching in either MC1 or MC3 sensor channels. I am going to hold off on doing the LEMO cable swap test for a few more hours, to see if we can rule out the satellite box.

No change.

Attachment 1: MC1_MC3_ITMY_ETMX_sensors.png
Attachment 2: sensors_UL.png
  12738   Thu Jan 19 10:21:54 2017   Ashley   Update   General   Preliminary Microphone Data

Brief Summary: I am currently looking at the acoustic noise around both arms to see if there are any frequencies from machinery around the lab that stand out and to see what we can remove/change. I am using a Bluebird microphone suspended with surgical tubing from the cable trays to isolate it from vibrations. I am also using a preamp and the SR785 spectrum analyzer, taking 6 sets of data every 1.5 meters (0 to 200 Hz, 200 Hz to 400 Hz, 400 Hz to 800 Hz, 800 Hz to 3.2 kHz, 3.2 kHz to 12 kHz, 12 kHz to 100 kHz).

 

  • Attachment 1 is a PSD of the first 3 measurements (from 0 to 12 kHz) that I took every 1.5 meters along the X arm with the preamp and spectrum analyzer.

  • Attachment 2 is a BLRMS color map of the first 6 sets of data I took (from 2.4 m to 9.9 m); see the sketch after this list.

  • Attachment 3 is a picture of the microphone setup with the surgical tubing.
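
A minimal sketch of how a band-limited RMS value can be formed from one of the analyzer spectra, assuming each data set has been exported as columns of frequency and PSD; the file name and band edges are placeholders:

import numpy as np

# Hypothetical export of one analyzer span: frequency [Hz], PSD [V^2/Hz].
f, psd = np.loadtxt("mic_span_0-200Hz.txt", unpack=True)

bands = [(1, 10), (10, 30), (30, 100), (100, 200)]   # example band edges [Hz]

for f_lo, f_hi in bands:
    m = (f >= f_lo) & (f < f_hi)
    # RMS in the band = sqrt of the PSD integrated over the band.
    rms = np.sqrt(np.trapz(psd[m], f[m]))
    print(f"{f_lo:5.0f}-{f_hi:5.0f} Hz : {rms:.3e} Vrms")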

Problems that occurred: settings on the preamp made the first set of data I took significantly smaller than the data I took with the 0 dB button off, and the spectrum analyzer was reading only from -50 to -50 dBVpk.

 

 

Attachment 1: xend_psd.png
Attachment 2: xblrms.png
Attachment 3: IMG_3734.JPG
  12739   Thu Jan 19 12:00:10 2017   gautam   Update   SUS   MC1 SUS electronics investigation

Going through the last ~20 hours of data, the MC1 sensor channels look glitch free the entire period. However, there is a ~10min period around 1PM UTC today when there were a couple of glitches ~80 counts in size in all the MC3 sensor channels. The attached shows the full 2k data from all 10 channels (MC1 and MC3 sensors) around this time.

Is this sufficient evidence to conclude that the Satellite boxes are to blame? It's hard to explain why the glitches come and go in this fashion, and also the apparent difference in the length of time for which the glitches persist. Here, in almost 24 hours, there is one incidence of glitching, but in yesterday's trend plot, the glitching remains present over several hours... The amplitude of the glitches, and their coincidence in all 5 channels, seems consistent with what we have been seeing though...
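
One way to make the "coincident in all channels" statement more quantitative would be something like the sketch below, which high-passes each sensor channel and flags samples where every channel deviates beyond a threshold at the same time; the data-loading step, filter corner, and threshold are placeholders:

import numpy as np
from scipy import signal

fs = 2048.0        # sensor channel sample rate [Hz]
threshold = 40.0   # counts; placeholder, ~half the observed glitch size

# Placeholder: `data` should be a (5, N) array of the five sensor signals.
data = np.load("mc3_sensors_2k.npy")

sos = signal.butter(4, 10.0 / (fs / 2), btype="highpass", output="sos")
hp = signal.sosfiltfilt(sos, data, axis=1)

# Samples where *every* channel deviates by more than the threshold.
coincident = np.all(np.abs(hp) > threshold, axis=0)
times = np.flatnonzero(coincident) / fs
print(f"{coincident.sum()} coincident samples, first few at t = {times[:5]} s")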

 

  12740   Thu Jan 19 16:36:35 2017   ericq   Update   Computer Scripts / Programs   nodus web apache symlinks too soft
Quote:

EQ: https://nodus.ligo.caltech.edu:30889/FE is live

This was done by adding "Options +Indexes" to /etc/apache/sites-available/nodus

I've added a little more info about the apache configuration on the wiki: ApacheOnNodus

  12741   Thu Jan 19 19:56:09 2017   rana   Update   SUS   MC1 SUS electronics investigation

Might be. Or it might be in the satellite box cabling. Hard to tell without a tester. I recommend you squish the cables on there and lock the MC and get back to the usual business. I'll check on sat. box with Ben.

Quote:

 

Is this sufficient evidence to conclude that the Satellite boxes are to blame? It's hard to explain why the glitches come and go in this fashion, and also the apparent difference in the length of time for which the glitches persist. Here, in almost 24 hours, there is one incidence of glitching, but in yesterday's trend plot, the glitching remains present over several hours... The amplitude of the glitches, and their coincidence in all 5 channels, seems consistent with what we have been seeing though...

 

  12742   Fri Jan 20 11:16:30 2017   gautam   Update   SUS   MC1 SUS electronics investigation

Both suspensions have been relatively well behaved for the best part of the last two days, since I effected the satellite box swap. This morning, I set about re-enabling the damping and locking the MC. Judging by the wall StripTool, it stayed locked for about 30 mins or so, after which the glitching returned.

Attached is a screenshot of the sensor signals from MC1 and MC3 (second trend), and also the highest band (>30Hz) BLRMS output for the same 10 channels (full data sampled at 16Hz). Note that MC1 and MC3 satellite boxes remain swapped. So the glitches now have migrated to the MC3 channels.

I need to think about whether this is just coincidence, or if me re-enabling the damping has something to do with the re-occurrence of the glitching...


Addendum 4:30 pm: I've also re-aligned the Y arm. Its alignment has been stable over the last few hours; despite several mode cleaner lock losses in between, it recovers good IR transmission. The X arm has been re-aligned to green, but I can't get it locked to the IR - every time I turn the LSC to ETMX on, there seems to be some large misalignment applied to it. c1iscaux was dead; I restarted it by keying the crate. I haven't had time to investigate the X arm locking in detail; I will continue to debug this.
