40m elog
ID Date Author Type Category Subject
  14471   Wed Feb 27 21:34:21 2019 gautamUpdateGeneralSuspension diagnosis

In my effort to understand what's going on with the suspensions, I've kicked all the suspensions and shut down the watchdogs at 1235366912. The PSL shutter is closed to avoid trying to lock to the swinging cavity. The primary aims are

  1. To see how much the resonant peaks have shifted w.r.t. the database, if at all - I claim that the ETMY resonances have shifted by a large amount and that one of the resonant peaks has been lost.
  2. To check the status of the existing diagonalization.

All the tests I have done so far (looking at free swinging data, resonant frequencies in the Oplev error signals, etc.) seem to suggest that the problem is mechanical rather than electrical. I'll do a quick check of the OSEM PD whitening unit in 1Y4 to be sure. But the fact that the same three peaks appear in the OSEM and Oplev spectra suggests to me that the problem is not electrical.
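For reference, this kind of peak identification from the free-swinging spectra can be scripted. Below is a minimal numpy sketch; the 0.4-1.5 Hz band, the sampling rate, and the synthetic data are illustrative assumptions, not the actual OSEM data path.

```python
import numpy as np

def peak_frequencies(data, fs, fmin=0.4, fmax=1.5, npeaks=3):
    """Return the npeaks strongest local spectral maxima (Hz) within
    [fmin, fmax] of a free-swinging time series, via a simple windowed
    periodogram."""
    x = (data - np.mean(data)) * np.hanning(len(data))
    psd = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(data), d=1.0 / fs)
    # Keep local maxima only, so leakage sidebands of one strong line
    # don't mask a weaker neighbouring resonance.
    ispk = np.where((psd[1:-1] > psd[:-2]) & (psd[1:-1] > psd[2:]))[0] + 1
    ispk = ispk[(freqs[ispk] >= fmin) & (freqs[ispk] <= fmax)]
    best = ispk[np.argsort(psd[ispk])[::-1][:npeaks]]
    return np.sort(freqs[best])

# Synthetic check: two "pendulum modes" at 0.75 Hz and 0.95 Hz
fs = 16.0
t = np.arange(0, 512, 1.0 / fs)
x = np.sin(2 * np.pi * 0.75 * t) + 0.5 * np.sin(2 * np.pi * 0.95 * t)
print(peak_frequencies(x, fs, npeaks=2))
```

Comparing the recovered peak list against the suspension database values would then show directly which resonances have moved or vanished.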

Watchdogs restored at 10 AM PST

  14725   Thu Jul 4 10:54:21 2019 KojiSummarySUSSuspension damping recovered, ITMX stuck

SoCal earthquake. All suspension watchdogs tripped.

Tried to recover the OSEM damping. 

=> The watchdogs for all suspensions except for ITMX were restored. ITMX seems to be stuck. No further action by me for now.

  176   Thu Dec 6 19:19:47 2007 AndreyConfigurationSUSSuspension damping Gain was restored

Suspension damping gain was disabled for some reason (all the indicators in the rightmost part of the screen C1SUS_ETMX.adl were red); it has now been restored.
  16597   Wed Jan 19 14:41:23 2022 KojiUpdateBHDSuspension Status

Is this the correct status? Please directly update this entry.

LO1 [Glued] [Suspended] [Balanced] [Placed In Vacuum] [OSEM Tuned] [Damped]
LO2 [Glued] [Suspended] [Balanced] [Placed In Vacuum] [OSEM Tuned] [Damped]
AS1 [Glued] [Suspended] [Balanced] [Placed In Vacuum] [OSEM Tuned] [Damped]

AS4 [Glued] [Suspended] [Balanced] [Placed In Vacuum] [OSEM Tuned] [Damped]
PR2 [Glued] [Suspended] [Balanced] [Placed In Vacuum] [OSEM Tuned] [Damped]
PR3 [Glued] [Suspended] [Balanced] [Placed In Vacuum] [OSEM Tuned] [Damped]
SR2 [Glued] [Suspended] [Balanced] [Placed In Vacuum] [OSEM Tuned] [Damped]

Last updated: Fri Jan 28 10:34:19 2022

  2642   Fri Feb 26 01:00:07 2010 JenneUpdateCOCSuspension Progress

This is going to be a laundry list of the mile markers achieved so far:

* Guiderod and wire standoff glued to each of ITMX and ITMY

* Magnets glued to dumbbells (4 sets done now).  ITMX has 244 +- 3 Gauss, ITMY has 255 +- 3 Gauss.  The 2 sets for SRM and PRM are 255 +- 3 G and 264 +- 3 G.  I don't know which set will go with which optic yet.

* Magnets glued to ITMX.  There were some complications removing the optic from the magnet gluing fixture.  The optic is left overnight for the glue to dry with "pickle picker" type grippers holding the magnets to the optic.  After the epoxy had cured, Kiwamu and I took the grippers off, in preparation to remove the optic from the fixture.  The side magnet (thankfully on the side where we won't have an OSEM) and dumbbell assembly snapped off.  Also, on the UL magnet, the magnet came off of the dumbbell (the dumbbell was still glued to the glass).  We left the optic in the fixture (to maintain the original alignment), and used one of the grippers to glue the magnet back to the UL dumbbell.  The gripper in the fixture has very little slop in where it places the magnet/dumbbell, so the magnet was reglued with very good axial alignment.  Since the side magnet and dumbbell broke apart after coming off the glass, we did not glue them back onto the optic.  They were reattached to each other, so that we can put the extra side magnet on in the future, but I don't think that will be necessary, since we already know which side the OSEM will be on.

* Magnets glued to ITMY.  This happened today, so it's drying overnight.  Hopefully the grippers won't be sticky and jerky like last time when we were removing them from the fixture, so hopefully we won't lose any magnets when I take the optic out of the fixture.

* ITMX has been placed in its suspension cage.  The first step, before getting out the wire, is to set the optic on the bottom EQ stops, and get the correct height and get the optic leveled, to make things easier once the wire is in place.  Koji and I did this step, and then we clamped all of the EQ stops in place to leave it for the night.

* The HeNe laser has been leveled, to a beam height of 5.5 inches, in preparation for the final leveling of the optics, beginning tomorrow.  The QPD with the XY decoder is also in place at the 5.5 inch height for the oplev readout.  The game plan is to leave this set up for the entire time that we're hanging optics.  This is kind of a pain to set up, but now that it's there, it can stay out of the way huddled on the side of the flow bench table, ready for whenever we get the ETMs in, and the recoated PRM. 

* Koji and Steve got the ITMX OSEMs out of the vacuum system, and they're ready for the hanging and balancing of the optic tomorrow.  Also, they got out the satellite box, and ran the crazy-long cable to control the OSEMs while they're on the flow bench in the clean room.


Koji and I discovered a problem with the small EQ stops, which will be used in all of the SOS suspensions for the bottom EQ stops.  They're too big.  :(  The original document (D970312-A-D) describing the size for these screws was drawn in 1997, and it calls for 4-40 screws.  The updated drawing, from 2000 (D970312-B-D) calls for 6-32 screws.  I naively trusted that updated meant updated, and ordered and prepared 6-32 screws for the bottom EQ stops for all of the SOSes.  Unfortunately, the suspension towers that we have are tapped for 4-40.  Thumbs down to that.  We have a bunch of vented 4-40 screws in the clean room cabinets, which I can drill, and have Bob rebake, so that Zach and Mott can make viton inserts for them, but that will be a future enhancement.  For tonight, Koji and I put in bare vented 4-40 screws from the clean room supply of pre-baked screws.  This is consistent with the optics in our chambers having bare screws for the bottom EQ stops, although it might be nicer to have cushy viton for emergencies when the wire might snap.  The real moral of this story is: don't trust the drawings.  They're good for guidelines, but I should have confirmed that everything fit and was the correct size.

  14499   Thu Mar 28 23:29:00 2019 KojiUpdateSUSSuspension PD whitening and I/F boards modified for susaux replacement

Now the sus PD whitening boards are ready to move the backplane connectors to the lower row and to plug the Acromag interface board into the upper row.

Sus PD whitening boards on the 1X5 rack (D000210-A1) had slow and fast channels mixed in a single DIN96 connector. As we are going to use the rear-side backplane connector for Acromag access, we wanted to migrate the fast channels elsewhere. For this purpose, the boards were modified to duplicate the fast signals to the lower DIN96 connector.

The modification was done on the back layer of the board (Attachment 1).
The 28A~32A and 28C~32C of P1 are connected to the corresponding pins of P2 (Attachment 2). The connections were thoroughly checked with a multimeter.

After the modification the boards were returned to the same place of the crate. The cables, which had been identified and noted before disconnection, were returned to the connectors.

The functionality of the 40 (8 sus × 5 ch) whitening switches was confirmed one by one with DTT, by looking at the transfer functions from SUS LSC EXC to the PD input filter IN1. All the switches showed the proper whitening in the measurements.

The PD slow mon channels (like C1:SUS-XXX_xxPDMon) were also checked, and they returned to their pre-modification values, except for the BS UL PD. As the fast version of that signal did return to its previous value, the monitor circuit was suspect. Therefore the op-amp of the monitor channel (LT1125) was replaced, and the value came back to the previous reading (Attachment 3).


  9982   Wed May 21 13:18:47 2014 ericqUpdateCDSSuspension MEDM Bug

I fixed a bug in the SUS_SINGLE screen, where the total YAW output was incorrectly displayed (TO_COIL_3_1 instead of TO_COIL_1_3). I noticed this by seeing that the yaw bias slider had no effect on the number that claimed to be the yaw sum. The first time I did this, I accidentally changed the screen size a bit, which smushed things together, but that's fixed now.

I committed it to the svn, along with some uncommitted changes to the oplev servo screen.

  3527   Mon Sep 6 20:38:58 2010 KojiUpdateCDSSuspension model reviewed

I have reviewed the suspension model of C1SUS and refined it.

It is compatible with the current one but has minor additions.

  3528   Mon Sep 6 21:08:44 2010 ranaUpdateCDSSuspension model reviewed

We must remember that we are using the Rev.B SOS Coil Drivers and not the Rev. A. 

The main change from A->B was the addition of the extra path for the bias inputs. These inputs were previously handled by the slow EPICS system and not a part of the front end. So we used to have a separate bias screen for these than the bias which is in the front end. The slow bias is what was used for the alignment to avoid overloading the range of the main coil driver path.

  11377   Thu Jun 25 15:07:50 2015 SteveUpdatesafetySurfs Safety 2015

Jessica Pena, Megan Kelly, Eve Chase and Ignacio Magana received 40m-specific basic safety training today.

  3833   Mon Nov 1 10:28:41 2010 steveBureaucracySAFETYSuresh received 40m safety training

Our new postdoc Suresh Doravari received 40m specific safety training last week.

  14820   Wed Jul 31 14:44:11 2019 gautamUpdateComputersSupermicro inventory

Chub brought the replacement Supermicro we ordered to the 40m today. I stored it at the SW entrance to the VEA, along with the other Supermicro. At the time of writing, we have, in hand, two (unused) Supermicro machines. One is meant for EY and the other is meant for c1psl/c1iool0. DDR3 RAM and 120 GB SSD drives have also been ordered, but have not yet arrived (I think, Chub, please correct me if I'm wrong).

Update 20190802: The DDR3 RAM and 120 GB SSD drives arrived, and are stored in the FE hardware cabinet along the east arm. So at the time of writing, we have 2 sets of (Supermicro + 120GB HD + 4GB RAM).


We should ask Chub to reorder several more SuperMicro rackmount machines, SSD drives, and DRAM cards. Gautam has the list of parts from Johannes' last order.

  1344   Mon Mar 2 03:57:44 2009 YoichiUpdateLockingSunday night locking
Tonight's locking started with a boot fest of the FE computers which were all red when I came in.
It also took me some time to realize that C1:IOO-MC_F was always returning zero to tdsavg, causing the offloadMCF script to do nothing.
I fixed this by rebooting c1iovme and c1iool0.

Like Rob on Thursday night, I was only able to reach arm power around 10.
This time, I turned down the MC WFS gain to 0.02 (from 0.3).
I also checked gains of most of the loops (MICH, PRC, SRC, DARM, CARM-MCL, CARM-AO).
All the loops looked fine until the lock was lost suddenly. Also the spectrum of MC_F did not change as the arm power was ramped up.
Actually, I was able to reach arm power=10 only once because I spent a long time checking the loop gains and spectrum at fine steps of the arm power.
So it is quite possible that this loss of lock was just caused by a seismic kick.
  11269   Sun May 3 19:40:51 2015 ranaUpdateASCSunday maintenance: alignment, OL center, seismo, temp sensors

X arm was far out in yaw, so I reran the ASS for Y and then X. Ran OK; the offload from ASS outputs to SUS bias is still pretty violent - needs smoother ramping.

After this I recentered the ITMX OL - it was off by 50 microradians in pitch. Just like the BS/PRM OLs, this one has a few badly assembled & flimsy mounts. Steve, please prepare to replace the ITMX OL mirror mounts with the proper base/post/Polaris combo. I think we need ~3 of them. Pit/yaw loop measurements attached.

Based on the PEM-SEIS summary page, it looked like GUR1 was oscillating (and thereby saturating and suppressing the Z channel). So I power cycled both Guralps by turning off the interface box for ~30 seconds and then powering back on. Still not fixed; looks like the oscillations at 110 and 520 Hz have moved, but GUR2_X/Y are suppressed above 1 Hz, and GUR1_Z is suppressed below 1 Hz. We need Jenne or Zach to come and use the Gur Paddle on these things to make them OK.

From the SUS-WatchDog summary page, it looked like the PRM tripped during the little 3.8 EQ at 4AM, so I un-tripped it.

Caryn's temperature sensors look like they're still plugged in. Does anyone know where they're connected?

  11654   Wed Sep 30 15:44:06 2015 SteveUpdateGeneralSun Fire X4600

Gautam and Steve,

The decommissioned server from LDAS has been retired to the 40m, with 32 cores and 128 GB of memory, in rack 1X7: http://docs.oracle.com/cd/E19121-01/sf.x4600/

  4956   Fri Jul 8 09:53:49 2011 Nicole SummarySUSSummer Progress Report 1

A copy of my Summer Progress Report 1 was uploaded to the LIGO DCC on 7/7/11, and I have just added a copy to the TTsuspension wiki


PDF copy of Summer Progress Report

  6833   Tue Jun 19 20:26:50 2012 JenneHowToLockingSummer Plan

Jenne and Yuta's Summer Plan

These are the things that we'd like to accomplish, hopefully before Yuta leaves in mid-July

* Yarm mode scan

  ~ Measure residual motion of Yarm cavity when ALS is engaged

* Xarm mode scan

  ~ Align Xarm IR

  ~ Align Xarm green to cavity

  ~ Do mode scan (similar to Yarm)

  ~ Measure residual motion of Xarm cavity when ALS is engaged

* Hold both arms on IR resonance simultaneously (quick proof that we can)

  ~ Modify beatbox so we can use both X and Y at the same time (Jamie will do this Wednesday morning - we've already discussed)

* PRMI + Arms

  ~ Lock the PRMI (which we already know we can do) holding arms off resonance, bring both arms into resonance using ALS

* PRC mode matching - figure out what the deal is

  ~ Look at POP camera with video capture - use software that Eric the Tall wrote with JoeB to measure spot size

* DRMI glitches

  ~ Why can't we keep the DRMI locked stably?

* DRMI + Arms

  ~ Full lock!!

  ~ Make lots of useful diagnostics for aLIGO, measure sensing matrices, etc.

  5428   Thu Sep 15 22:31:44 2011 ManuelUpdateSUSSummary screen

I changed some colors on the Summary of Suspension Sensors screen, using my Italian creativity.

I wrote a script in Python to change the thresholds for the "alarm mode" of the screen.

The script takes a GPS-format start time as the 1st argument and a duration time as the second argument.

For every channel shown on the screen, it computes the mean value during this time.

The 3rd argument is the ratio between the mean and the LOW threshold. The 4th argument is the ratio between the mean and the LOLO threshold.

Then it sets the HIGH and HIHI thresholds symmetrically about the mean.

It does that for all channels, skipping the gains and offsets, because those data are not stored.

For example, if the ratios are 0.9 and 0.7 and the mean is 10, the thresholds will be LOLO=7, LOW=9, HIGH=11, HIHI=13.

You can run the script on pianosa writing on a terminal '/opt/rtcds/caltech/c1/scripts/SUS/set_thresholds.py' and the arguments.

I already ran the program with these arguments: 1000123215 600 0.9 0.7

The start time corresponds to 5:00 this morning, and the duration is 10 minutes.


This is the help I wrote

HELP: This program set the thresholds for the "alarm mode" of the C1SUS_SUMMARY.adl medm screen.

 Written by Manuel Marchiò, visiting student from the University of Pisa - INFN, at LIGO-Caltech for the 2011 summer. Thursday, 15th September 2011.

The 1st argument is the time in gps format when you want to START the mean

The 2nd argument is the DURATION

The 3rd argument is the ratio of the LOW and the HIGH thresholds. It must be in the range [0,1]

The 4th argument is the ratio of the LOLO and the HIHI thresholds. It must be in the range [0,1]

Example: path/set_thresholds.py 1000123215 600 0.9 0.7

and if the mean is 10, the thresholds will be set as LOLO=7, LOW=9, HIGH=11, HIHI=13
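The threshold arithmetic described above can be summarized in a few lines of Python. This is a sketch of the logic, not the actual set_thresholds.py source:

```python
def set_thresholds(mean, low_ratio, lolo_ratio):
    """Compute alarm thresholds symmetrically about the channel mean.

    low_ratio and lolo_ratio must lie in [0, 1]; the HIGH/HIHI values
    mirror LOW/LOLO about the mean.
    """
    assert 0 <= lolo_ratio <= low_ratio <= 1
    return {
        "LOLO": mean * lolo_ratio,
        "LOW":  mean * low_ratio,
        "HIGH": mean * (2 - low_ratio),
        "HIHI": mean * (2 - lolo_ratio),
    }

print(set_thresholds(10, 0.9, 0.7))
```

With the worked example above (mean 10, ratios 0.9 and 0.7) this reproduces LOLO=7, LOW=9, HIGH=11, HIHI=13.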


  5467   Mon Sep 19 18:05:27 2011 ranaUpdateSUSSummary screen


I changed some colors on the Summary of Suspension Sensors screen, using my Italian creativity.

I wrote a script in Python to change the thresholds for the "alarm mode" of the screen.

I've started to fix up the script somewhat (as a way to teach myself some more python):

* moved all of the SUS Summary screen scripts into SUS/SUS_SUMMARY/

* removed the hardcoded channel names (a list of 190 hand-typed names !!!!!!!)

* fixed it to use NDS2 instead of trying to use the NDS2 protocol on fb:8088 (which is an NDS1-only machine)

* it was trying to set alarms for the SUS gains, WDs, Vmons, etc. using the same logic as for the OSEM PD values. This is nonsensical. We'll need different logic for each type of channel.

New script is called setSensors.py. There are also different scripts for each of the different kinds of fields (gains, sensors, vmons, etc.)

Some Examples:

pianosa:SUS_SUMMARY 0> ./setDogs.py 3 5
Done writing new values.


  5500   Wed Sep 21 16:22:14 2011 ranaUpdateSUSSummary screen

The SUS SUMMARY screen is now fully activated. You should keep it open at all times as a diagnostic of the suspensions.

No matter how cool you think you are, you are probably doing something bad when trying to lock, measure any loop gains, set matrices, etc. Use the screen.


This is the link to the automatic snapshot of the SUS SUMMARY screen. You can use it to check the Suspensions status with your jPhone.

Auto SUS SUMMARY Snapshot

When the values go yellow, they're near the bad level. When red, it means the optic is misaligned or not damped or has the wrong gain, etc.

So don't ignore it, Steve! If you think the thresholds are set too low, then change them to the appropriate level with the scripts in SUS/

  12394   Wed Aug 10 17:30:26 2016 Max IsiUpdateGeneralSummary pages status
Summary pages are currently empty due to a problem with the code responsible for locating frame files in the cluster. This should be fixed soon and the
pages should go back to normal automatically at that point. See Dan Kozak's email below for details.

Date: Wed, 10 Aug 2016 13:28:50 -0700
From: Dan Kozak <dkozak@ligo.caltech.edu>

> Dan, maybe it's a gw_data_find problem?

Almost certainly that's the problem. The diskcache program that finds
new data died on Saturday and no one noticed. I couldn't restart it,
but fortunately its author just returned from several weeks of vacation
today. He's working on it and I'll let you know when it's back up.

Dan Kozak
  12399   Thu Aug 11 11:09:52 2016 Max IsiUpdateGeneralSummary pages status
This problem has been fixed.

> Summary pages are currently empty due to a problem with the code responsible for locating frame files in the cluster. This should be fixed soon and the
> pages should go back to normal automatically at that point. See Dan Kozak's email below for details.
> Date: Wed, 10 Aug 2016 13:28:50 -0700
> From: Dan Kozak <dkozak@ligo.caltech.edu>
> > Dan, maybe it's a gw_data_find problem?
> Almost certainly that's the problem. The diskcache program that finds
> new data died on Saturday and no one noticed. I couldn't restart it,
> but fortunately its author just returned from several weeks of vacation
> today. He's working on it and I'll let you know when it's back up.
> --
> Dan Kozak
> dkozak@ligo.caltech.edu
  11434   Tue Jul 21 21:33:22 2015 Max IsiUpdateGeneralSummary pages moved to 40m LDAS account

The summary pages are now generated from the new 40m LDAS account. The nodus URL (https://nodus.ligo.caltech.edu:30889/detcharsummary/) is the same and there are no changes to the way the configuration files work. However, the location on LDAS has changed to https://ldas-jobs.ligo.caltech.edu/~40m/summary/ and the config files are no longer version-controlled on the LDAS side (this was redundant, as they are under VCS in nodus).

I have posted a more detailed description of the summary page workflow, as well as instructions to run your own jobs and other technical minutiae, on the wiki: https://wiki-40m.ligo.caltech.edu/DailySummaryHelp

  12432   Tue Aug 23 09:50:17 2016 Max IsiUpdateGeneralSummary pages down due to cluster maintenance

Summary pages are down today due to scheduled LDAS cluster maintenance. The pages will be back automatically once the servers are back (by tomorrow).

  12440   Thu Aug 25 08:19:25 2016 Max IsiUpdateGeneralSummary pages down due to cluster maintenance

The system is back from maintenance and the pages for last couple of days will be filled retroactively by the end of the week.


Summary pages are down today due to scheduled LDAS cluster maintenance. The pages will be back automatically once the servers are back (by tomorrow).


  11401   Fri Jul 10 17:57:38 2015 Max IsiUpdateGeneralSummary pages down

The summary pages are currently unstable due to priority issues on the cluster*. The plots had been empty ever since the CDS upgrade started anyway. This issue will (presumably) disappear once the jobs are moved to the new 40m shared LDAS account by the end of next week.

*namely, the jobs are put on hold (rather, status "idle") because we have low priority in the processing queue, making the usual 30min latency impossible.

  12910   Mon Mar 27 20:29:05 2017 ranaSummaryDetCharSummary pages broken again

Going to the summary pages and looking at 'Today' seems to break it and crash the browser. Other tabs are OK, but 'summary' is our default page.

I've noticed this happening for a couple of days now. Today, I moved the .ini files which define the config for the pages from the old chans/ location into the /users/public_html/detcharsummary/ConfigFiles/ dir. Somehow, we should be maintaining version control of detcharsummary, but I think right now it's loose and free.

  11546   Sun Aug 30 13:55:09 2015 IgnacioUpdateIOOSummary pages MCF

The summary pages show the effect of the MCL FF on MCF (left Aug 26, right Aug 30):


I'm not too sure what you meant by plotting the X & Y arm control signals with only the MCL filter ON/OFF. Do you mean plotting the control signals with ONLY the T-240Y MCL FF filter on/off? The one that reduced noise at 1Hz?



  11382   Mon Jun 29 17:40:56 2015 Max IsiUpdateGeneralSummary pages "Code status" page fixed

It was brought to my attention that the "Code status" page (https://nodus.ligo.caltech.edu:30889/detcharsummary/status.html) had been stuck showing "Unknown status" for a while.
This was due to a sync error with LDAS and has now been fixed. Let me know if the issue returns.

  11257   Sun Apr 26 20:10:10 2015 max isiHowToGeneralSummary pages

I have set up new summary pages for the 40m: http://www.ligo.caltech.edu/~misi/summary/
This website shows plots (time series, spectra, spectrograms, Rayleigh statistics) of relevant channels and is updated with new data every 30 min.

The content and structure of the pages is determined by configuration files stored in nodus:/users/public_html/gwsumm-ini/ . The code looks at all files in that directory matching c1*.ini. You can look at the c1hoft.ini file to see how this works. Besides, a quick guide to the format can be found here http://www.ligo.caltech.edu/~misi/iniguide.pdf

Please look at the pages and edit the config files to make them useful to you. The files are under version control, so don’t worry about breaking anything.

Do let me know if you have any questions (or leave a comment in the pages).

  11261   Mon Apr 27 21:42:07 2015 ranaUpdateVACSummary pages

We want to have a VAC page in the summaries, so Steve - please put a list of important channel names for the vacuum system into the elog so that we can start monitoring for trouble.

Also, anyone that has any ideas can feel free to just add a comment to the summary pages DisQus comment section with the 40m shared account or make your own account.

  11279   Mon May 11 12:17:19 2015 max isiHowToGeneralSummary pages

I have created a wiki page with introductory info about the summary page configuration: https://wiki-40m.ligo.caltech.edu/Daily summary help

We can also use that to collect tips for editing the configuration files, etc.


I have set up new summary pages for the 40m: http://www.ligo.caltech.edu/~misi/summary/
This website shows plots (time series, spectra, spectrograms, Rayleigh statistics) of relevant channels and is updated with new data every 30 min.

The content and structure of the pages is determined by configuration files stored in nodus:/users/public_html/gwsumm-ini/ . The code looks at all files in that directory matching c1*.ini. You can look at the c1hoft.ini file to see how this works. Besides, a quick guide to the format can be found here http://www.ligo.caltech.edu/~misi/iniguide.pdf

Please look at the pages and edit the config files to make them useful to you. The files are under version control, so don’t worry about breaking anything.

Do let me know if you have any questions (or leave a comment in the pages).


  15370   Wed Jun 3 11:20:19 2020 gautamUpdateDetCharSummary pages


The 40m summary pages have been revived. I've not had to make any manual interventions in the last 5 days, so things seem somewhat stable, but I'm sure there will need to be multiple tweaks made. The primary use of the pages right now are for vacuum, seismic and PSL diagnostics.



  • Intermittent failures of cron jobs
    • The status page relies on the condor_q command executing successfully on the cluster end. I have seen this fail a few times, so the status page may say the pages are dead whereas they're actually still running.
    • Similarly, the rsync of the pages to nodus where they're hosted can sometimes fail.
    • Usually, these problems are fixed on the next iteration of the respective cron jobs, so check back in ~half hour.
  • I haven't really looked into it in detail, but I think our existing C1:IFO-STATE word format is not compatible with what gwsumm wants - I think it expects single bits that are either 0 or 1 to indicate a particular state (e.g. MC locked, POX and POY locked etc). So if we want to take advantage of that infrastructure, we may need to add a few soft EPICS channels that take care of some logic checking (several such bits could also be and-ed together) and then assume either 0 or 1 value. Then we can have the nice duty cycle plots for the IMC (for example).
  • I commented out the obsolete channels (e.g. PEM MIC channels). We can add them back later if we so desire.
  • For some reason, the jobs that download the data are pretty memory-heavy: I have to request machines on the cluster with >100 GB (yes, 💯 GB!) of memory for the jobs to execute to completion. The frame data certainly isn't so large, so I wonder what's going on here - is GWPy/GWsumm so heavy? The site summary pages run on a dedicated cluster, so probably the code isn't built for efficiency...
  • Weather tab in PEM is still in there but none of those channels mean anything right now.
  • The MEDM screenshot stuff is commented out for now too. This should be redone in a better way with some modern screen grab utilities, I'm sure there are plenty of python based ones.
  • There seems to be a problem with the condor .dag lockfile / rescue file not being cleared correctly sometimes - I am looking into this.
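On the C1:IFO-STATE point above, the single-bit unpacking could look something like the sketch below. The bit assignments here are made up for illustration; writing the resulting flags to soft EPICS channels would be a separate caput step, not shown.

```python
# Hypothetical bit assignments for C1:IFO-STATE -- illustrative only
STATE_BITS = {
    "MC_LOCKED": 0,
    "POX_LOCKED": 1,
    "POY_LOCKED": 2,
}

def state_flags(word, bits=STATE_BITS):
    """Unpack a state word into 0/1 flags, one per named bit, of the
    kind gwsumm wants for duty-cycle plots."""
    return {name: (word >> bit) & 1 for name, bit in bits.items()}

# word 0b101: MC and POY locked, POX not
print(state_flags(0b101))
```

Several such bits could also be and-ed together (e.g. POX and POY) before being written out, as the entry suggests.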
  15812   Wed Feb 17 13:59:35 2021 gautamUpdateDetCharSummary pages

The summary pages had failed because of a conda env change. We are dependent on detchar's conda environment setup to run the scripts on the cluster. However, for some reason, when they upgraded to python3.9, they removed the python3.7 env, which was the cause of the original failure of the summary pages a couple of weeks ago. Here is a list of thoughts on how the pipeline can be made better.

  1. The status checking is pretty hacky at the moment.
    • I recommend not using shell via python to check if any condor jobs are "held".
    • Better is to use the dedicated python bindings. I have used this to plot the job durations, and it has worked well.
    • One caveat is that sometimes there is a long delay between making a request via a Python command and condor actually returning the status. So you may have to experiment with execution times and build some try/except logic to catch "failures" that are just the condor command timing out rather than an actual failure of the summary jobs.
  2. The status check should also add a mailer which emails the 40m list when the job is held. 
    • With htcondor and python, I think it's easy to also get the "hold reason" for the job and add that to the mailer.
  3. The job execution time command is not executing correctly anymore - for whatever reason, the condor_history command can't seem to apply the constraint of finding only jobs run by "40m", although running it without the constraint reveals that these certainly exist. Probably has to do with some recent upgrade of condor version or something. This should be fixed.
  4. We should clear the archive files regularly. 
    • The 40m home directory on the cluster was getting full. 
    • The summary page jobs generate a .h5 archive of all the data used to generate the plots. Over ~1 year, this amounts to ~1TB.
    • I added the cleanArchive job to the crontab, but it should be checked.
    • Do we even need these archives beyond 1 day? I think they make the plotting faster by saving already-downloaded data locally, but maybe we should have the cron job delete archive files older than a day.
  5. Can we make our own copy of the conda env and not be dependent on detchar conda env? The downside is that if something dramatic changes in gwsumm, we are responsible for debugging ourselves.
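A sketch of the retry logic suggested in item 1 above. The query callable and timings are placeholders; a real version would wrap the htcondor Python bindings (e.g. a Schedd query) rather than the stub used here.

```python
import time

def check_condor_status(query, retries=3, wait=10.0):
    """Poll a condor status query, retrying on timeout so a slow condor
    response isn't misreported as dead summary pages.

    `query` is any callable returning a list of job ads; it should raise
    TimeoutError when condor is slow to answer.
    """
    for attempt in range(retries):
        try:
            ads = query()
        except TimeoutError:
            time.sleep(wait)   # condor was just slow -- retry, don't alarm
            continue
        held = [ad for ad in ads if ad.get("JobStatus") == 5]  # 5 == held
        return {"alive": True, "held": held}
    return {"alive": False, "held": []}  # persistent timeouts: flag it

# Stub standing in for a real schedd query
print(check_condor_status(
    lambda: [{"JobStatus": 2}, {"JobStatus": 5, "HoldReason": "memory"}],
    wait=0))
```

The returned "held" ads would carry the hold reason, which the mailer in item 2 could include in its message.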

Remember that all the files are to be edited on nodus and not on the cluster.
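For the archive cleanup in item 4, something like the following would do; the directory layout and the one-day retention are assumptions, and the actual cleanArchive cron job may differ.

```python
import os
import tempfile
import time

def clean_archives(directory, max_age_days=1.0, suffix=".h5"):
    """Delete summary-page archive files older than max_age_days.
    Returns the list of removed paths."""
    cutoff = time.time() - max_age_days * 86400
    removed = []
    for name in os.listdir(directory):
        path = os.path.join(directory, name)
        if (name.endswith(suffix) and os.path.isfile(path)
                and os.path.getmtime(path) < cutoff):
            os.remove(path)
            removed.append(path)
    return removed

# Demo in a scratch directory: one stale archive, one fresh one
d = tempfile.mkdtemp()
old = os.path.join(d, "old.h5"); open(old, "w").close()
os.utime(old, (time.time() - 2 * 86400,) * 2)  # pretend it's 2 days old
new = os.path.join(d, "new.h5"); open(new, "w").close()
print(clean_archives(d))  # removes only the stale file
```

Run daily from the crontab, this would keep the local-caching speedup for the current day while capping the disk usage.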

  11411   Tue Jul 14 16:47:18 2015 EveUpdateSummary PagesSummary page updates continue during upgrade

I've continued to make changes to the summary pages on my own environment, which I plan on implementing on the main summary pages when they are back online.



I created my own summary page environment and manipulated data from June 30 to make additional plots and change already-existing plots. The main summary pages (https://nodus.ligo.caltech.edu:30889/detcharsummary/ or https://ldas-jobs.ligo.caltech.edu/~max.isi/summary/) are currently down due to the CDS upgrade, so my own summary page environment acts as a temporary playground to continue working on my SURF project. My summary pages can be found here (https://ldas-jobs.ligo.caltech.edu/~eve.chase/summary/day/20150630/); they contain identical plots to the main summary pages, except for the Summary tab. I'm open to suggestions, so I can make the summary pages as useful as possible.


What I did:

  • SUS OpLev: For every already existing optical lever timeseries, I created a corresponding spectrum, showing all channels present in the original timeseries. The spectra are now placed to the right of their corresponding timeseries. I'm still playing with the axes to make sure I set the best ranges.
  • SUSdrift: I added two new timeseries, DRMI SUS Pitch and DRMI SUS Yaw, to the four already-existing timeseries in this tab. These plots represent channels not previously displayed on the summary pages.
  • Minor changes
    • Added axis labels on IOO plot 6
    • Changed axis ranges of IOO: MC2 Trans QPD and IOO: IMC REFL RFPD DC
    • Changed axis label on PSL plot 6



So far, all of these changes have been properly implemented into my personal summary page environment. I would like some feedback as to how I can improve the summary pages.



  11375   Thu Jun 25 12:03:42 2015 Max IsiUpdateGeneralSummary page status

The summary pages have been down due to incompatibilities with a software update and problems with the LDAS cluster. I'm working at the moment to fix the former and the LDAS admins are looking into the latter. Overall, we can expect the pages will be fully functional again by Monday.

  11376   Thu Jun 25 14:18:46 2015 Max IsiUpdateGeneralSummary page status

The pages are live again. Please allow some time for the system to catch up and process missed days. If there are any further issues, please let me know.
URL reminder: https://nodus.ligo.caltech.edu:30889/detcharsummary/


The summary pages have been down due to incompatibilities with a software update and problems with the LDAS cluster. I'm working at the moment to fix the former and the LDAS admins are looking into the latter. Overall, we can expect the pages will be fully functional again by Monday.


  15309   Wed Apr 22 13:52:05 2020 gautamUpdateDetCharSummary page revival

COVID-19 motivated me to revive the summary pages. With Alex Urban's help, the infrastructure was modernized and the wiki is now up to date. I ran a test job for 2020 March 17th, just for the IOO tab, and it works, see here. The LDAS rsync of our frames is still catching up; once it does, we can restart the old condor jobs and have these pages updated on a more regular basis.

  15696   Wed Dec 2 18:35:31 2020 gautamUpdateDetCharSummary page revival

The summary pages were in a sad state of disrepair - the daily jobs hadn't been running for > 1 month. I only noticed today because Jordan wanted to look at some vacuum trends and I thought the summary pages are nice for long-term lookback. I rebooted the jobs just now and they seem to be running. @Tega, maybe you want to set up some kind of scripted health check that also sends an alert.
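A minimal sketch of such a health check could look like the following. This is not an existing script: the output directory, the two-hour threshold, and the alert stub are all assumptions to be adjusted.

```python
import os
import time

# Hypothetical location of the generated pages; adjust to the real output dir.
SUMMARY_DIR = "/home/40m/public_html/detcharsummary"
MAX_AGE_S = 2 * 3600  # alert if nothing has been updated for 2 hours (assumed threshold)

def is_stale(last_update, now, max_age=MAX_AGE_S):
    """Return True if the pages have not been touched within max_age seconds."""
    return (now - last_update) > max_age

def check_pages(directory=SUMMARY_DIR):
    """Find the newest file under the output tree and alert if it is too old."""
    newest = max(
        os.path.getmtime(os.path.join(root, f))
        for root, _, files in os.walk(directory)
        for f in files
    )
    if is_stale(newest, time.time()):
        send_alert("summary pages stale since %s" % time.ctime(newest))

def send_alert(message):
    # Stub: hook this up to mail, Slack, or whatever alerting channel is preferred.
    print(message)
```

Run from cron alongside the other jobs, this would catch a silent multi-week outage like the one above within hours.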

  12135   Wed May 25 14:21:29 2016 Max IsiUpdateGeneralSummary page configuration

I have modified the c1summary.ini and c1lsc.ini configuration files slightly to avoid overloading the system and remove the errors that were preventing plots from being updated after certain time in the day.

The changes made are the following:
1- all high-resolution spectra from the Summary and LSC tabs are now computed for each state (X-arm locked, Y-arm locked, IFO locked, all);
2- I've removed MICH, PRCL & SRCL from the summary spectrum (those can still be found in the LSC tab);
3- I've split LSC into two subtabs.

The reason for these changes is that having high resolution (raw channels, 16kHz) spectra for multiple (>3) channels on a single tab requires a *lot* of memory to process. As a result, those jobs were failing in a way that blocked the queue, so even other "healthy" tabs could not be updated.

My changes, reflected from May 25 on, should hopefully fix this. As always, feel free to reorganize the ini files to make the pages more useful to you, but keep in mind that we cannot support multiple high-resolution spectra on a single tab, as explained above.

  11431   Mon Jul 20 16:45:15 2015 Max IsiConfigurationGeneralSummary page c1sus.ini error corrected

Syntax errors in the c1sus.ini config file were causing the summary pages to crash: no plot type had been specified for plots 5 and 6, so I've made these "timeseries."
In the future, please remember to always specify a plot type, e.g.:




       C1:SUS-ITMY_SUSPIT_INMON.mean timeseries

By the way, the pages will continue to be unavailable while I transfer them to the new shared account.
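A quick sanity check for this class of error could scan each channel definition line for a trailing plot type before the pages are regenerated. This is a hypothetical helper, not part of the summary-page code, and the set of plot types shown is an illustrative subset:

```python
# Illustrative subset of plot types; extend with whatever the config actually supports.
KNOWN_PLOT_TYPES = {"timeseries", "spectrum", "spectrogram"}

def has_plot_type(line):
    """True if a channel definition line ends with a recognized plot type."""
    parts = line.split()
    return len(parts) >= 2 and parts[-1] in KNOWN_PLOT_TYPES
```

Lines such as "C1:SUS-ITMY_SUSPIT_INMON.mean timeseries" pass, while a bare channel name with no type, like the ones that crashed the pages, would be flagged.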

  17372   Thu Dec 22 15:52:48 2022 KojiSummaryDetCharSummary on the summary pages

[Tega Koji]

Last week, Tega gave me a brief introduction to the configuration of the summary pages. So this is the summary of the conversation.

1. General info resources

40m wiki https://wiki-40m.ligo.caltech.edu/DailySummaryHelp
40m wiki https://wiki-40m.ligo.caltech.edu/40mLDASaccount
40m git lab https://git.ligo.org/40m/40m-summary-pages

2. How the frame files are transferred

The frame files are created in /frames on fb1. ==> If the frame files are not found on fb1, it's the problem of FB/CDS.
This directory is exported to nodus (as /frames) via NFS. ==> If the frame files are not found on nodus (i.e. /frames is not found), it's the NFS problem between fb1 and nodus.
rsync executed on LDAS comes to nodus to pull the files to /hdfs/40m on LDAS. ==> If the frame files are not found on LDAS, it's an rsync problem. We are not controlling this file transfer; contact Dan Kozak.
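The decision tree above can be condensed into a small helper. The function name and messages are illustrative, not an existing script; each argument is simply whether a given frame file is visible at that stage of the chain:

```python
def diagnose(on_fb1, on_nodus, on_ldas):
    """Given whether a frame file is visible at each stage, name the broken link.

    on_fb1:   file exists in /frames on fb1
    on_nodus: file visible in /frames on nodus (NFS export from fb1)
    on_ldas:  file present under /hdfs/40m on LDAS (rsync pull)
    """
    if not on_fb1:
        return "FB/CDS problem: frames not being written on fb1"
    if not on_nodus:
        return "NFS problem between fb1 and nodus"
    if not on_ldas:
        return "rsync problem on the LDAS side: contact Dan Kozak"
    return "frame transfer chain OK"
```

Checking the stages in this order matters: a missing file downstream is only meaningful once the upstream copies are confirmed to exist.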

3. How to get into LDAS

Look at the LDAS account wiki page on the 40m wiki. For me the "ssh -J" approach didn't work, so I used the "multi-step login", starting from "ssh albert.einstein@ssh.ligo.org"

4. How the process is running

This summarizes the workflow very well: https://wiki-40m.ligo.caltech.edu/DailySummaryHelp#Technical_info

I am still not sure how the main script "gw_daily_summary_40m" (and gw_daily_summary_40m_rerun) gets launched every 30 min.
Apart from that, the following scripts run via the crontab of the 40m account:
0,30 * * * * /home/40m/DetectorChar/bin/plot-summary-job-duration ~/public_html/detcharsummary/summaryDiag.pdf
5,35 * * * * /home/40m/DetectorChar/bin/checkstatus
6,36 * * * * /home/40m/DetectorChar/bin/plot-temperature
7,37 * * * * /home/40m/DetectorChar/bin/pushnodus
27 18 * * * /home/40m/DetectorChar/bin/cleanLogs

If the produced files are not in /home/40m/public_html/detcharsummary, the data processing has a problem.
If the files are there but not pushed to the 40m (by pushnodus), the file transfer has an issue.

  5127   Fri Aug 5 20:37:34 2011 jamieSummaryGeneralSummary of today's in-vacuum work

[Jamie, Suresh, Jenne, Koji, Kiwamu]

After this morning's hiccup with the east end crane, we decided to go ahead with work on ETMX.

Took pictures of the OSEM assemblies, we laid down rails to mark expected new position of the suspension base.

Removed two steering mirrors and a windmill that were on the table but were not being used at all.

Clamped the test mass and moved the suspension to the edge of the table so that we could more easily work on repositioning the OSEMs.  Then leveled the table and released the TM.

Rotated each OSEM so that the parallel LED/PD holder plates were oriented in the vertical direction.  We did this in the hopes that this orientation would minimize SD -> POS coupling.

For each OSEM, we moved it through its full range, as read out by the C1:SUS-ETMX_{UL,UR,LL,LR,SD}PDMon channels, and adjusted the positions so that the readout was in the center of the range (the measured ranges, mid values, and final positions will be noted in a follow-up post).  Once we were satisfied that all the OSEMs were in good positions, we photographed them all (pictures also to follow).
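The centering criterion is just the midpoint of the measured readout range. As a sketch (the function names and the example count range are made up for illustration; the real readouts come from the PDMon channels above):

```python
def center_target(pd_min, pd_max):
    """Midpoint of the OSEM PD readout range: the aim point for half-light centering."""
    return 0.5 * (pd_min + pd_max)

def offset_from_center(reading, pd_min, pd_max):
    """Signed distance of the current readout from the half-light point."""
    return reading - center_target(pd_min, pd_max)

# e.g. for a hypothetical OSEM whose PDMon swings from 0 to 1200 counts,
# the target is center_target(0, 1200) = 600 counts.
```

The OSEM is then walked in until offset_from_center is as close to zero as the mechanics allow.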

Re-clamped the TM and moved it into its final position, using the rails as a reference and a ruler to measure as precisely as possible:

ETMX position change: -0.2056 m = -20.56 cm = -8.09 in (away from vertex)

Rebalanced the table.

Repositioned the mirror for the ETMX face camera.

Released TM clamps.

Rechecked OSEM centering.

Unblocked the green beam, only to find that it was displaced horizontally on the test mass about half an inch to the west (-y).  Koji determined that this was because the green beam is incident on the TM at an angle due to the TM wedge.  This presents a problem, since the green beam can no longer be used as a reference for the arm cavity.  After some discussion we decided to go with the TM position as is, and to realign the green beam to the new position and relock the green beam to the new cavity.  We should be able to use the spot position of the green beam exiting the vacuum at the PSL table as the new reference.  If the green X beam exiting at the PSL table is severely displaced, we may decide to go back in and move ETMX to tweak the cavity alignment.

At this point we decided that we were done for the day.  Before closing up, we put a piece of foil with a hole in it in front of the the TM face, to use as an alignment aperture when Kiwamu does the green alignment.

Kiwamu will work on the green alignment over the weekend.  Assuming everything works out, we'll try the same procedure on ETMY on Monday.

  5130   Sat Aug 6 03:10:05 2011 SureshSummaryGeneralSummary of today's in-vacuum work

The table below gives the OSEM positions as seen on the slow chanels C1:SUS-ETMX_{UL,UR,LL,LR,SD}PDMon


Note that the side OSEM has the fast channel (OUTPUT) available and we used that to locate it.

When we began work, the OSEMs were photographed so that we have a record of their locations up to now.  It was difficult to get an accurate estimate of the magnet offset inside each OSEM because we could not see the camera screen while taking the pictures.  We then took some pictures after finishing the work. These are given below.


          Before_osem_adj.JPG         After_osem_adjustment.JPG


The picture on the left is from before the OSEMs were moved. It can be seen that the OSEMs were rotated to make sure that the magnets avoid touching the teflon sheets which hold the shadow sensors.  The picture on the right shows the positions of the OSEMs after we adjusted them.  This time we kept the teflon sheets vertical, as shown, to minimise the coupling between the side and axial directions.

We needed to reposition them once again after we moved the tower to the center of the table.

Pictures with more detail will be posted to the wiki later.





  5131   Sat Aug 6 13:38:02 2011 ranaSummaryGeneralSummary of today's in-vacuum work

This OSEM placement is just the OPPOSITE of what the proper placement is.

Usually, we want to put them in so that the LED beam is vertical. This makes the OSEM immune to the optic's vertical mode.

The orientation with the horizontal LED beam makes the immunity to the side mode better, but may spoil the vertical.

In reality, neither of these assumptions is quite right. The LED beam doesn't come out straight. That's why Osamu and I found that we have to put in some custom orientations.

Also, the magnet gluings relative to the OSEM bracket centers are not perfectly aligned. So...I am saying that the OSEMs have to be oriented empirically to reduce the couplings which we want to reduce.


  6521   Wed Apr 11 17:56:26 2012 JenneUpdateGeneralSummary of things to figure out with the IFO


Power recycling gain

   * It should be ~40, but we observe/measure it to be ~7.  Even if mode matching of ~50% is assumed, the gain is calculated to be ~15.

   * Would like to measure PR gain independent of mode matching, if possible

Power recycling cavity mode matching

   * Reflectivity of PRMI was measured to be ~50%.  That's pretty high.  What's going on?

   * Even if we're mode matched to the arm, are we appropriately mode matched to the PRC?

Is beam from MC clipped in the Faraday?

   * We had to use MC axis for input pointing since PZTs aren't totally working.

   * Need to measure IPPOS beam for different MC alignments to see if horizontal waist measurement stays constant.

PRM flipped?

   * Not likely, but it can't hurt to confirm for sure.

   * Want to know, since it could give us a different plan for MMT moving than if the PRM is correct.

Thick optic non-normal incidence in IPPO - does this exaggerate astigmatism, which would help explain IPPOS measurement?

Is PRC waist same size / position as arm cavity waist, given the current "known" positions of all the optics?

   * How is this affected by moving the PRM?


Measurements to take and what information they give us:

IPPOS beam scan, with MC as-is

   * Confirm (or not) IPPOS measurements from last week

IPPOS beam scan with different MC alignments

   * Will tell us about Faraday clipping, if any

AS beam scan, misaligned PRM, misaligned SRM, misaligned ITMX, single bounce from ITMY

   * Can only take this measurement if beam is bright enough, so we'll just have to try

   * Will confirm IPPOS measurement, but includes going through the thick PRM, so can compare to calculated intra-PRC mode

REFL beam scan (already done....is the data satisfactory? If so, no need to redo), single bounce off of PRM

   * Will tell us about the potential PRM flipping

   * Need to compare with calculated mode at REFL port for flipped or non-flipped PRM

Look at POP camera, see 2nd pass through cavity

   * Try to match 1st and 2nd pass.  If they don't match, we're not well matched to PRC mode

Look at beam directly on ETMY cage, then beam from ETM, bounce off ITM, back to ETM cage

   * If the beams are the same size, we're well matched to arm cavity mode

   * Use fancy new frame-grabber.


MMT code things to calculate, and what information it gives us:

REFL beam path, for PRM flipping comparison

Thick IPPO non-normal incidence - I'm not sure how to do this yet, since I only know how non-normal incidence changes effective radii of curvature, and this is a flat optic, so *cos(theta) or /cos(theta)  won't do anything to an infinite RoC

Compare PRC waist to arm cavity waist, using "known" optic positions

Mode matching sensitivity to MC waist measurements

Mode matching sensitivity to PRM position

  3738   Mon Oct 18 18:33:46 2010 KojiSummaryCOCSummary of the main mirrors & their phasemap measurement

I have made a summary web page for the 40m upgrade optics.


I made a bunch of RoC calculations along with the phase maps we measured.
Those are also accommodated under this directory structure.

Probably.... I should have used the wiki and copy/paste the resultant HTML?

  1783   Thu Jul 23 10:05:38 2009 AlbertoUpdatePSLSummary of the latest adventures with the alignment of the mode cleaner

Alberto, Koji,

Summary of the latest adventures with the alignment of the mode cleaner

Prior events.

  • Last week, on July 12th the RFM network crashed (elog entry 1736). I don't know for sure which one was the cause and which one the effect, but also C1DAQADW was down and it didn't want to restart. Alex fixed it the day after.
  • On the evening of Sunday July 20th I noticed that the mode cleaner was unlocked. A closer inspection showed me that MCL was frozen at -32768 counts. To fix that I rebooted C1DCUEPICS and burtrestored to snapshots from the day before.
  • On Tuesday July 21st another failure of the RFM network made it necessary to reboot the frame builder and all front-end computers (entry 1772). As a consequence, the mode cleaner couldn't be locked anymore, even though the mirror sliders in the MC-Align MEDM screen were in the proper positions. At that time I failed to check the MC suspension positions as a way to verify that the MC alignment hadn't really changed. As it turned out, that would have been useless anyway, since all data recorded prior to that day's computer crash had somehow been corrupted (entry 1774). Neither the MC2 LSC control nor the MC ASC control would engage, so I (erroneously) thought that some tuning of the periscope might help. I tried, but since the mode cleaner itself was misaligned, this only spoiled the previously good matching of the periscope beam to the MC cavity.
  • Yesterday, Wednesday July 22nd, I found out about the sticky slider effect (entry 1776). At that point we didn't have anymore a way to know that the MC optics were actually in their proper original alignment state because of the lack of a reference for those in the data record (as I wrote above). I had to go back to the periscope and fix the alignment.

Chronicles of periscope and MC alignment

Yesterday morning I started aligning the periscope but it turned out to be trickier than usual. With the ASC (Alignment Sensing Control) off and only the length controls on, the Mode Cleaner didn't lock easily, although I knew I wasn't very far from the sweet spot.

In the afternoon the struggle continued and the matching of the beam to the MC cavity only got worse. At some point I noticed that the ASC inputs had somehow turned on - although the ASC still looked disabled from the MClock MEDM main screen. So I was actually working against the wavefront sensors and further worsening the periscope alignment.

That hurled me to the weeds. After hours of rowing across the stormy waters of a four-dimensional universe I got to have occasional TEM00 flashes at the transmission but still, surprisingly, no MC locking. Confused, I kept tuning the periscope but that just kicked me off road again.

Then at about 7pm Koji came to my rescue and suggested a more clever and systematic way to solve the problem: keep a record of the MC mirrors' alignment state and re-align the cavity to the periscope. Then we would gradually bring the cavity back to its original good position, changing the periscope alignment at the same time.


That would have worked straight away, if we hadn't been fighting against a subtle and cruel enemy: the 40m computer network. But I (as John Connor), and Koji (as the Terminator) didn't pull back.

Here's a short list of the kinds of weapons that the computers threw to us:

  1. After a while the FSS entered a funny state. We had light at the MC (and even flashes), but the MEDM readout of the FSS transmitted power after the cavity was low (~0.019). The spot on the monitor also showed a slightly different pattern from how I remembered it, and the transmission camera didn't show its typical halo.
  2. MCL was frozen at 32768. I ran the MCDown and MCUp script a couple of times and that unstuck it.
  3. On op340m we found that the MC autolocker script wasn't running, so I restarted it. Still nothing changed: bright and sharp flashes appeared on the monitor (a sign of not-too-bad alignment) but no lock.
  4. I rebooted C1IOO. No change.
  5. I rebooted C1DCUEPICS and burtrestored the EPICS computers to Jul 19th. No change.
  6. Then I burtrestored the c1psl.snapshot and that finally did something. The FSS reflected spot changed and the halo appeared again at its transmission camera. Soon after the MC got locked.

We then proceeded with Koji's plan. In an iterative process, we aligned the MC cavity to maximize the transmission and tuned the periscope to match the beam to the Faraday input of the interferometer. We did the latter by looking at the camera pointing at the Faraday isolator.

We found that we didn't have to tune the periscope much. That means that all afternoon I hadn't really drifted far; the autolocker just wasn't working properly, or wasn't working at all.

Then we ran the alignment script for the X arm, but it didn't work until we had aligned the steering mirrors.

We then ran it three times but could not get more than 0.87 at TRX. That means we still have to work on the alignment into the Faraday. That's the job for today in the trenches of the lab.


  17824   Tue Sep 5 04:48:20 2023 HirokiSummaryGeneralSummary of the late submitted entries

After I came back to Japan, I wrote and revised some Elog entries that I was not able to finish during my stay.
I am sorry for the late submission and revision.

Toward locking PRFPMI

Flow sensor

  • elog #17765
    I additionally attached the schematic of the wiring to this existing entry.
  • elog #17779
    Calibration of the flow sensors.


  • elog #17822
    Location of the beam chopper used in T&R measurement.
  • elog #17823
    Location of the diode laser (iFLEX-1000) that might be used as the replacement of the He-Ne laser for OPLEV.
  6422   Thu Mar 15 08:48:40 2012 RyanSummaryCDSSummary of Syracuse Visit to 40m Mar 5-9 2012

JIMS Channels in PEM Model

The PEM model has been modified now to include the JIMS(Joint Information Management System) channel processing. Additionally Jim added test points at the outputs of the BLRMS.

For each seismometer channel, five bands are compared to threshold values to produce boolean results.  Bands with RMS below threshold produce bits of value 1; RMS above threshold produces 0.  These bits are combined to produce one output channel that contains all of the results.

A persistent version of the channel is generated by a new library block called "persist", which holds the value at 0 for a number of time steps (set by an EPICS variable) from the time the boolean first drops to zero. The persist allows excursions shorter than the time step of a downsampled timeseries to be seen reliably.
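The bit-packing and persist behaviour described above can be sketched in Python as follows. The names, band ordering, and thresholds are illustrative only; the real logic lives in the PEM front-end model, not in this snippet:

```python
def pack_jims_word(band_rms, thresholds):
    """Compare each band RMS to its threshold and pack the booleans into one word.

    Bit = 1 means the band is below threshold (quiet), 0 means above threshold,
    with bits ordered from the lowest to the highest frequency band.
    """
    word = 0
    for i, (rms, thr) in enumerate(zip(band_rms, thresholds)):
        if rms < thr:
            word |= 1 << i
    return word

class Persist:
    """Hold a bit at 0 for n_steps samples after it drops to zero,
    so excursions shorter than a downsampled time step remain visible."""
    def __init__(self, n_steps):
        self.n_steps = n_steps
        self.counter = 0

    def step(self, bit):
        if bit == 0:
            self.counter = self.n_steps  # (re)start the hold window
        if self.counter > 0:
            self.counter -= 1
            return 0
        return bit
```

With five bands per seismometer channel, one word per seismometer carries all the threshold results, and a Persist instance per bit produces the _CH1-style persistent channel.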

The EPICS variables for the thresholds are of the form (in order of increasing frequency):
The EPICS variables for the persist step size are of the form:

The JIMS Channels are being recorded and written to frames:

The two JIMS channels, recorded at 2048 Hz:
[C1:PEM-JIMS_CH1_DQ] Persistent version of the JIMS channel. When a bit drops to zero, indicating that something bad happened (a BLRMS threshold was exceeded), the bit stays at zero for >= the value of the persist EPICS variable.
[C1:PEM-JIMS_CH2_DQ] Non-persistent version of the JIMS channel.

And all of the BLRMS channels, at 256 Hz:
Names are of the form:

For additional details about the JIMS Channels and the implementation, please see the previous elog entries by Jim.



I have a working aLIGO Conlog/EPICS Log installed and running on megatron.

Please see this wiki page for the details of use:

I also edited this page with restart instructions for megatron:

Please see Ryan's previous elog entries for installation details.

Future Work

  • Determine useful thresholds for each band
  • Generate MEDM Screens for JIMS Channels
  • Add a decimation option to channels
  • Add EPICS Strings in PEM model to describe bits in JIMS Channels
  • Add additional JIMS Channels: Testing additional characterization methods
  • Implement a State Log on Megatron: Will Provide a 1Hz index into JIMS Channels
  • Generate a single web page that allows access to aLIGO Conlog/EPICS Log and State Log

