  40m Log, Page 238 of 344
ID   Date   Author   Type   Category   Subject
  6412   Wed Mar 14 05:26:39 2012   interferometer task force   Update   General   daytime tasks

The following tasks need to be done in the daytime tomorrow.

  • Hook up the DC output of the Y green BBPD on the PSL table to an ADC channel (Jamie / Steve)
  • Install fancy suspension matrices on PRM and ITMX [#6365] (Jenne)
  • Check if the REFL165 RFPD is healthy or not (Suresh / Koji)
    • According to a simulation, the REFL165 demod signal should show a signal level similar to that of REFL33.
    • But right now it is showing super tiny signals [#6403]
  6416   Wed Mar 14 14:09:01 2012   interferometer task force   Update   General   daytime tasks



 For ITMX, I used the values from the conlog:

2011/08/12,20:10:12 utc 'C1:SUS[-_]ITMX[-_]INMATRIX'
These are the latest values in the conlog that aren't the basic matrices.  Even though we did a round of diagonalization in September, and the 
matrices are saved in a .mat file, it doesn't look like we used the ITMX matrix from that time.

For PRM, I used the matrices that were saved in InputMatricies_16Sept2011.mat, in the peakFit folder, since I couldn't find anything in the conlog other than the basic matrices.


UPDATE:  I didn't actually count the number of oscillations until the optics were damped, so I don't have an actual number for the Q, but having kicked POS on both ITMX and PRM and watched the sensors, I feel good about the damping.
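For reference, the Q could be pulled from a kick ringdown without counting cycles by hand: the envelope of a damped mode decays as exp(-pi*f0*t/Q), so a linear fit to the log of the envelope gives Q directly. A minimal sketch on synthetic data (the sampling rate, mode frequency, and Q below are made up, not channel data):

```python
import numpy as np

def estimate_q(x, fs, f0):
    """Estimate Q from a ringdown x sampled at fs (Hz), mode at f0 (Hz).

    The envelope decays as exp(-pi*f0*t/Q), so a linear fit to
    log(envelope) has slope -pi*f0/Q.
    """
    t = np.arange(len(x)) / fs
    spc = max(1, int(round(fs / f0)))   # samples per oscillation cycle
    n = len(x) // spc
    env_t = np.empty(n)
    env = np.empty(n)
    for i in range(n):
        # crude envelope: the largest |x| in each cycle
        j = i * spc + np.argmax(np.abs(x[i * spc:(i + 1) * spc]))
        env_t[i], env[i] = t[j], abs(x[j])
    slope = np.polyfit(env_t, np.log(env), 1)[0]
    return -np.pi * f0 / slope

# synthetic check: 1 Hz pendulum mode with Q = 300, kicked at t = 0
fs, f0, q = 256.0, 1.0, 300.0
t = np.arange(0.0, 600.0, 1.0 / fs)
x = np.exp(-np.pi * f0 * t / q) * np.cos(2.0 * np.pi * f0 * t)
print(estimate_q(x, fs, f0))   # ~300
```

The same fit applied to a damped sensor channel would give a number to quote instead of "I feel good about the damping".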

  407   Mon Mar 31 14:01:40 2008   jamie   Summary   LSC   Summary of DC readout PD non-linearity measurements
From March 21-26, I conducted some measurements of the response non-linearity of some mock-up DC readout photodetectors. The detectors are simple:
Vbias ---+
         |-------- output
This is a description of the final measurement.

The laser current modulation input was driven with a 47 Hz, 20 mV sine wave. A constant small fraction of the beam was shone onto the reference detector, while a beam with variable DC power was incident on the test detector. Spectra were taken from both detectors simultaneously, with 0.25 Hz bandwidth and 100 averages.

At each incident power level on the test detector, the Vpk at all multiples of the modulation frequency was measured (i.e. V[i*w]). The difference between the 2f/1f ratios in the test and reference detectors was then calculated, i.e.:
V_test[2*w]/V_test[1*w] - V_ref[2*w]/V_ref[1*w]
This is the solid black line in the plot ("t21-r21_v_power.png").

The response of a simulated non-linear detector was also calculated, based on the Vpk measured at each harmonic in the reference detector and assuming that the reference detector has a purely linear response, i.e.:
V_nl[beta,2*w]/V_nl[beta,1*w] - V_l[2*w]/V_l[1*w]
These are the dashed colored lines in the plot ("t21-r21_v_power.png").

The result of the measurement seems to indicate that the non-linearity in the test detector is less than beta=-1.
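The 2f/1f arithmetic above is easy to express in code. A minimal sketch with made-up Vpk values (illustration only, not measurement data):

```python
def harmonic_ratio_difference(v_test, v_ref):
    """Difference of 2f/1f peak-voltage ratios between test and reference PD.

    v_test, v_ref: sequences of Vpk at the modulation harmonics,
    index 0 -> 1*w, index 1 -> 2*w, etc.
    """
    return v_test[1] / v_test[0] - v_ref[1] / v_ref[0]

# example: the test PD shows a slightly enhanced second harmonic
v_test = [1.00e-3, 4.0e-6]   # Vpk at 1f, 2f (hypothetical values)
v_ref  = [1.00e-3, 2.0e-6]
print(harmonic_ratio_difference(v_test, v_ref))
```

A nonzero difference at a given incident power is the excess non-linearity of the test detector relative to the reference, which is exactly what the solid black curve in the plot traces versus power.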

The setup that was on the big optics table south of the laser, adjacent to the mode cleaner, is no longer needed.
  4549   Wed Apr 20 23:20:49 2011   jamie   Summary   Computers   installation of CDS tools on pianosa

This is an overview of how I got (almost) all the CDS tools running on pianosa, the new Ubuntu 10.04 control room workstation.

This machine is an experiment in minimizing the amount of custom configuration and source-code compiling: I am attempting to install as many tools as possible from existing packages.

available packages

I was able to install a number of packages directly from the ubuntu archives, including fftw, grace, and ROOT:

apt-get install \
  libfftw3-dev \
  grace \
  root-system    # ROOT, as packaged for Ubuntu


I installed all needed LSCSOFT packages (framecpp, libframe, metaio) from the well-maintained UWM LSCSOFT repository.

$ cat /etc/apt/sources.list.d/lscsoft.list
deb http://www.lsc-group.phys.uwm.edu/daswg/download/software/debian/ squeeze contrib
deb-src http://www.lsc-group.phys.uwm.edu/daswg/download/software/debian/ squeeze contrib
sudo apt-get install lscsoft-archive-keyring
sudo apt-get update
sudo apt-get install ldas-tools-framecpp-dev libframe-dev libmetaio-dev lscsoft-user-en

You then need to source /opt/lscsoft/lscsoft-user-env.sh to use these packages.


There appear to be a couple of projects trying to provide debs of EPICS. I was able to get EPICS working from one of them, but it didn't include some of the other needed packages (such as MEDM and BURT), so I fell back to using Keith's pre-built binary tarball.


apt-get install \
libmotif-dev \
libxt-dev \
libxmu-dev \
libxprintutil-dev \
libxpm-dev \
libz-dev \
libxaw7-dev \
libpng-dev \
libgd2-xpm-dev \
libbz2-dev \
libssl-dev \
liblapack-dev \

Pulled Keith's prebuilt binary:

cd /ligo/apps
wget https://llocds.ligo-la.caltech.edu/daq/software/binary/apps/ubuntu/epics-3.14.10-ubuntu.tar.gz
tar zxf epics-3.14.10-ubuntu.tar.gz


I built GDS from svn, after I fixed some broken stuff [0]:

cd ~controls/src/gds
svn co https://redoubt.ligo-wa.caltech.edu/svn/gds/trunk
cd trunk
#fixed broken stuff [0]
source /opt/lscsoft/lscsoft-user-env.sh
export GDSBUILD=online
export ROOTSYS=/usr
./configure --prefix=/ligo/apps/gds --enable-only-dtt --with-epics=/ligo/apps/epics-3.14.10
make install


I installed dataviewer from source:

cd ~controls/src/advLigoRTS
svn co https://redoubt.ligo-wa.caltech.edu/svn/advLigoRTS/trunk
cd trunk/src/dv
#fix stupid makefile /opt/rtapps --> /ligo/apps
make install

I found that the actual dataviewer wrapper script was also broken, so I made a new one:

$ cat /ligo/apps/dv/dataviewer
#!/bin/bash
export DVPATH=/ligo/apps/dv
ID=$$                         # NOTE: ID and DCDIR definitions are a guess --
DCDIR=/tmp/dataviewer-$ID     # the script as posted used them without defining them
mkdir $DCDIR
trap "rm -rf $DCDIR" EXIT
$DVPATH/dc3 -s ${NDSSERVER} -a $ID -b $DVPATH "$@"


Finally, I made an environment definer file:

$ cat /ligo/apps/cds-user-env.sh
# source the lscsoft environment
. /opt/lscsoft/lscsoft-user-env.sh

# source the gds environment
. /ligo/apps/gds/etc/gds-user-env.sh

# special local epics setup
export LD_LIBRARY_PATH=${EPICS}/base/lib/linux-x86_64:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH=${EPICS}/extensions/lib/linux-x86_64:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH=${EPICS}/modules/seq/lib/linux-x86_64:$LD_LIBRARY_PATH
export PATH=${EPICS}/base/bin/linux-x86_64:$PATH
export PATH=${EPICS}/extensions/bin/linux-x86_64:$PATH
export PATH=${EPICS}/modules/seq/bin/linux-x86_64:$PATH

# dataviewer path
export PATH=/ligo/apps/dv:${PATH}

# specify the NDS server
export NDSSERVER=fb

[0] GDS was not compiling, because of what looked like bugs. I'm not sure why I'm the first person to catch these things. Stricter compiler?

To fix the following compile error:

TLGExport.cc:1337: error: ‘atoi’ was not declared in this scope

I made the following patch:

Index: /home/controls/src/gds/trunk/GUI/dttview/TLGExport.cc
--- /home/controls/src/gds/trunk/GUI/dttview/TLGExport.cc (revision 6423)
+++ /home/controls/src/gds/trunk/GUI/dttview/TLGExport.cc (working copy)
@@ -31,6 +31,7 @@
#include <iomanip>

#include <string.h>

#include <strings.h>

+#include <stdlib.h>

namespace ligogui {
using namespace std;

To fix the following compile error:

TLGPrint.cc:264: error: call of overloaded ‘abs(Int_t&)’ is ambiguous

I made the following patch:

Index: /home/controls/src/gds/trunk/GUI/dttview/TLGPrint.cc
--- /home/controls/src/gds/trunk/GUI/dttview/TLGPrint.cc (revision 6423)
+++ /home/controls/src/gds/trunk/GUI/dttview/TLGPrint.cc (working copy)
@@ -22,6 +22,7 @@
#include <fstream>

#include <map>
#include <cmath>

+#include <cstdlib>

namespace ligogui {
using namespace std;

  4732   Tue May 17 17:01:22 2011   jamie   Configuration   CDS   Update LSC channels from _DAQ to _DQ

As of RCG version 2.1, recorded channels use the suffix "_DQ", instead of "_DAQ".  I just rebuilt and installed the c1lsc model, which changed the channel names, therefore hosing the old daq channel ini file.  Here's what I did, and how I fixed it:

$ ssh c1lsc
$ cd /opt/rtcds/caltech/c1/core/trunk
$ make c1lsc
$ make install-c1lsc
$ cd /opt/rtcds/caltech/c1/scripts
$ ./startc1lsc
$ cd /opt/rtcds/caltech/c1/chans/daq
$ cat archive/C1LSC_110517_152411.ini | sed "s/_DAQ/_DQ/g" >C1LSC.ini
$ telnet fb 8087
daqd> shutdown
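One caveat about the sed step worth recording: s/_DAQ/_DQ/g rewrites _DAQ anywhere it appears in a line, not just as the channel-name suffix. A slightly safer version of the same transformation, sketched in Python (the example channel name is hypothetical):

```python
import re

def daq_to_dq(line):
    """Rewrite the recorded-channel suffix _DAQ -> _DQ (RCG 2.1 naming).

    Anchored so _DAQ is only replaced when followed by ']', whitespace,
    or end of line, leaving an embedded '_DAQ' elsewhere untouched.
    """
    return re.sub(r'_DAQ(?=\]|\s|$)', '_DQ', line)

print(daq_to_dq('[C1:LSC-DARM_ERR_DAQ]'))   # [C1:LSC-DARM_ERR_DQ]
```

For our channel names the plain sed was fine, but an anchored pattern is the safer habit if this ever gets scripted across all the ini files.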


  5049   Wed Jul 27 15:49:13 2011   jamie   Configuration   CDS   dataviewer now working on pianosa

Not exactly sure what the problem was, but I updated to the head of the SVN and rebuilt and it seems to be working fine now.

  5060   Fri Jul 29 12:39:26 2011   jamie   Update   CDS   c1iscex mysteriously crashed

c1iscex was behaving very strangely this morning.  Steve earlier reported that he was having trouble pulling up some channels from the c1scx model.  I went to investigate and noticed that indeed some channels were not responding.

While I was in the middle of poking around, c1iscex stopped responding altogether.  I walked down there and did a hard reset.  Once it rebooted and I did a burt restore from early this morning, everything appeared to be working again.

The fact that problems were showing up before the machine crashed worries me.  I'll try to investigate more this afternoon.

  5094   Tue Aug 2 16:43:23 2011   jamie   Update   CDS   NDS2 server on mafalda restarted for access to new channels

In order to get access to new DQ channels from the NDS2 server, the NDS2 server needs to be told about the new channels and restarted.  The procedure is as follows:

ssh mafalda
cd /users/jzweizig/nds2-mafalda
pkill nds2
# wait a few seconds for the process to quit and release the server port

This procedure needs to be run every time new _DQ channels are added.

We need to set this up as a proper service, so the restart procedure is more elegant.

An additional comment from John Z.:

    The --end-gps parameter in ./build_channel_history seems to be causing
    some trouble. It should work without this parameter, but there is a
    directory with a gps time of 1297900000 (evidently a test for GPS1G)
    that might screw up the channel list generation. So, it appears that
    the end time requires a time for which data already exists. This
    wouldn't seem to be a big deal, but it means that it has to be modified
    by hand before running. I haven't fixed this yet, but I think that I
    can probably pick out the most recent frame and use that as an end-time
    point. I'll see if I can make that work...

  5127   Fri Aug 5 20:37:34 2011   jamie   Summary   General   Summary of today's in-vacuum work

[Jamie, Suresh, Jenne, Koji, Kiwamu]

After this morning's hiccup with the east end crane, we decided to go ahead with work on ETMX.

We took pictures of the OSEM assemblies and laid down rails to mark the expected new position of the suspension base.

Removed two steering mirrors and a windmill that were on the table but were not being used at all.

Clamped the test mass and moved the suspension to the edge of the table so that we could more easily work on repositioning the OSEMs.  Then leveled the table and released the TM.

Rotated each OSEM so that the parallel LED/PD holder plates were oriented in the vertical direction.  We did this in the hopes that this orientation would minimize SD -> POS coupling.

For each OSEM, we moved it through its full range, as read out by the C1:SUS-ETMX_{UL,UR,LL,LR,SD}PDMon channels, and attempted to adjust the positions so that the readout was in the center of the range (the measured ranges, mid values, and final positions will be noted in a follow-up post).  Once we were satisfied that all the OSEMs were in good positions, we photographed them all (pictures also to follow).

Re-clamped the TM and moved it into its final position, using the rails as reference and a ruler to measure as precisely as possible:

ETMX position change: -0.2056 m = -20.56 cm = -8.09 in (away from vertex)

Rebalanced the table.

Repositioned the mirror for the ETMX face camera.

Released TM clamps.

Rechecked OSEM centering.

Unblocked the green beam, only to find that it was displaced horizontally on the test mass about half an inch to the west (-y).  Koji determined that this was because the green beam is incident on the TM at an angle due to the TM wedge.  This presents a problem, since the green beam can no longer be used as a reference for the arm cavity.  After some discussion we decided to go with the TM position as is, and to realign the green beam to the new position and relock the green beam to the new cavity.  We should be able to use the spot position of the green beam exiting the vacuum at the PSL table as the new reference.  If the green X beam exiting at the PSL table is severely displaced, we may decide to go back in and move ETMX to tweak the cavity alignment.

At this point we decided that we were done for the day.  Before closing up, we put a piece of foil with a hole in it in front of the TM face, to use as an alignment aperture when Kiwamu does the green alignment.

Kiwamu will work on the green alignment over the weekend.  Assuming everything works out, we'll try the same procedure on ETMY on Monday.

  5128   Fri Aug 5 20:44:26 2011   jamie   Metaphysics   Treasure   Film crew here Monday morning

Just a reminder that a film crew will be here Monday morning, filming Christian Ott for some Discovery channel show.

They are slated to be here from 8am to 12:30pm or so.  They will take a couple of shots inside the lab, and the rest of the filming should be of Christian in the control room (which they will "clean up" and fit with "sexy lighting").  I will try to be here the whole time to oversee everything.

  5143   Mon Aug 8 19:45:27 2011   jamie   Update   CDS   activateDQ script run; SUS channels being acquired again

> Also the BS is missing its DAQ channels again (JAMIE !) so we can't diagnose it with the free swinging method.

I'm not sure why the BS channels were not being acquired.  I reran the activateDQ script, which seemed to fix everything.  The BS DQ channels are now there.

I also noticed that for some reason there were SUS-BS-ASC{PIT,YAW}_IN1_DQ channels, even though they had their acquire flags set to 0.  This means that they were showing up like test point channels, but not being written to frames by the frame builder.  This is pretty unusual, so I'm not sure why they were there.  I removed them.

  5161   Wed Aug 10 00:11:39 2011   jamie   Update   SUS   check of input diagonalization of ETMX after OSEM tweaking

Suresh and I tweaked the OSEM angles on ETMX yesterday.  Last night the ETMs were left free swinging, and today I ran Rana's peakFit scripts on ETMX to check the input diagonalization:


It's well inverted, but the matrix elements are not great:

       pit       yaw       pos       side      butt
UL    0.3466    0.4685    1.6092    0.3107    1.0428
UR    0.2630   -1.5315    1.7894   -0.0706   -1.1859
LR   -1.7370   -1.5681    0.3908   -0.0964    0.9392
LL   -1.6534    0.4319    0.2106    0.2849   -0.8320
SD    1.0834   -2.6676   -0.9920    1.0000   -0.1101

The magnets are all pretty well centered in the OSEMs, and we worked at rotating the OSEMs so that the bounce mode was minimized.

Rana and Koji are working on ETMY now.  Maybe they'll come up with a better procedure.
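For context, the core of this kind of input-matrix calculation can be sketched in a few lines: the free-swing spectra tell you how strongly each OSEM sees each eigenmode, and the input matrix is the (pseudo-)inverse of that sensing matrix. This is an idealized illustration with made-up sensing elements, not the actual peakFit code:

```python
import numpy as np

# Idealized sensing matrix: rows are OSEMs (UL, UR, LR, LL, SD), columns
# are how strongly each sensor sees each mode (pit, yaw, pos, side).
# In practice these numbers come from fitted peak heights in the spectra.
sensing = np.array([
    [ 1.0,  1.0, 1.0, 0.0],   # UL
    [ 1.0, -1.0, 1.0, 0.0],   # UR
    [-1.0, -1.0, 1.0, 0.0],   # LR
    [-1.0,  1.0, 1.0, 0.0],   # LL
    [ 0.0,  0.0, 0.0, 1.0],   # SD
])

# Input matrix: maps the 5 sensor signals back to the 4 degrees of freedom.
input_matrix = np.linalg.pinv(sensing)

# Sanity check: applying it to the sensing matrix recovers the identity.
print(np.allclose(input_matrix @ sensing, np.eye(4)))   # True
```

The messiness in the real matrices comes from the measured sensing elements deviating from these ideal +/-1 patterns.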

  5162   Wed Aug 10 00:21:10 2011   jamie   Update   CDS   updates to peakFit scripts

I updated the peakFit routines to make them a bit more user friendly:

  • modified so that any subset of optics can be processed at a time, instead of just all
  • broke out tweakable fit parameters into a separate parameters.m file
  • added a README that describes use

These changes were committed to the 40m svn.

  5176   Wed Aug 10 15:39:33 2011   jamie   Update   SUS   current SUS input diagonalization overview

Below is an overview of all the core IFO suspension input diagonalizations.

Summary: ITMY, PRM, BS are really bad (in that order) and are our top priorities.


I had originally put the condition number of the calculated input matrix (M) in the last column.  However, after some discussion we decided that this is not in fact what we want to look at: the condition number is unity only for a (scaled) orthogonal matrix, and even our ideal input matrix is neither diagonal nor orthogonal, so the "best" condition number for the input matrix is unclear.

What we do know is that the matrix B relating the calculated input matrix M to the ideal input matrix M0,

M = M0 B

should be the identity, so its condition number should ideally be 1.  In practice we calculate B^-1, since it can be computed directly from the pre-inverted input matrix:

B^-1 = M^-1 * M0

From that we calculate cond(B) == cond(B^-1).

cond(B) is our new measure of the "badness" of the OSEMs.
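The badness figure is a couple of lines of linear algebra. A numpy sketch with a made-up ideal matrix M0 and a slightly perturbed "measured" M (illustrative values only, not any of the matrices below):

```python
import numpy as np

# Idealized 5x5 input-matrix pattern (rows: UL, UR, LR, LL, SD;
# cols: pit, yaw, pos, side, butt) -- illustrative, not the real M0.
M0 = np.array([
    [ 1.0,  1.0, 1.0, 0.0,  1.0],
    [ 1.0, -1.0, 1.0, 0.0, -1.0],
    [-1.0, -1.0, 1.0, 0.0,  1.0],
    [-1.0,  1.0, 1.0, 0.0, -1.0],
    [ 0.0,  0.0, 0.0, 1.0,  0.0],
])

# A "measured" matrix that deviates a little from the ideal one.
rng = np.random.default_rng(0)
M = M0 + 0.05 * rng.normal(size=M0.shape)

# M = M0 B  =>  B^-1 = M^-1 M0, and cond(B) == cond(B^-1).
B_inv = np.linalg.inv(M) @ M0
badness = np.linalg.cond(B_inv)
print(badness)   # near 1 for a near-ideal M; large when the diagonalization is bad
```

A badness near 1 means the measured matrix differs from ideal only by a well-conditioned transformation; the huge numbers in the table below (e.g. ITMY) mean B is nearly singular.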

new summary: ITMY, PRM, BS are really bad (in that order) and are our top priorities.


 PRM (PRM.png)      pit     yaw     pos     side    butt
UL   -2.000  -2.000  -2.000  -0.345   2.097 
UR   -0.375  -0.227  -0.312  -0.060   0.247 
LR    1.060   1.075   0.971   0.143  -0.984 
LL   -0.565  -0.698  -0.717  -0.141   0.672 
SD    1.513   1.485   1.498   1.000  -1.590
 cond(M) = 75.569, cond(B) = 106.756

 SRM (SRM.png)      pit     yaw     pos     side    butt
UL    0.791   1.060   1.114  -0.133   1.026 
UR    1.022  -0.940   1.052  -0.061  -1.027 
LR   -0.978  -0.987   0.886  -0.031   0.903 
LL   -1.209   1.013   0.948  -0.103  -1.043 
SD    0.286   0.105   1.249   1.000   0.030

 cond(M) = 2.6501, cond(B) = 3.90776

 BS (BS.png)        pit     yaw     pos     side    butt
UL    1.420   0.818  -0.069   0.352   1.038 
UR    0.276  -1.182   1.931  -0.217  -0.905 
LR   -1.724  -0.274   1.940  -0.254   0.862 
LL   -0.580   1.726  -0.060   0.315  -1.194 
SD    0.560   0.171  -3.535   1.000   0.075 

cond(M) = 9.8152, cond(B) = 7.28516

ITMX (ITMX.png)     pit     yaw     pos     side    butt
UL    0.437   1.015   1.050  -0.065   0.714 
UR    0.827  -0.985   1.129  -0.221  -0.957 
LR   -1.173  -1.205   0.950  -0.281   1.245 
LL   -1.563   0.795   0.871  -0.125  -1.084 
SD   -0.581  -0.851   2.573   1.000  -0.171 
 cond(M) = 4.08172, cond(B) = 4.69811

 ITMY (ITMY.png)    pit     yaw     pos     side    butt
UL    0.905  -0.884  -0.873   0.197   0.891 
UR   -1.095   1.088   1.127  -0.252  -1.115 
LR   -0.012  -0.028   0.002   0.001   0.030 
LL    1.988  -2.000  -1.998   0.451   1.964 
SD    4.542  -4.608  -4.621   1.000   4.517 

 cond(M) = 801.453, cond(B) = 774.901

 ETMX (ETMX.png)    pit     yaw     pos     side    butt
UL    0.344   0.475   1.601   0.314   1.043 
UR    0.283  -1.525   1.786  -0.071  -1.181 
LR   -1.717  -1.569   0.399  -0.102   0.938 
LL   -1.656   0.431   0.214   0.283  -0.837 
SD    0.995  -2.632  -0.999   1.000  -0.110 
 cond(M) = 4.26181, cond(B) = 4.33518

 ETMY (ETMY.png)    pit     yaw     pos     side    butt
UL   -0.212   1.272   1.401  -0.127   0.941 
UR    0.835  -0.728   1.534  -0.101  -1.054 
LR   -0.953  -1.183   0.599  -0.066   0.827 
LL   -2.000   0.817   0.466  -0.092  -1.177 
SD   -0.172   0.438   2.238   1.000  -0.008 
 cond(M) = 4.04847, cond(B) = 4.33725


  5207   Fri Aug 12 15:16:56 2011   jamie   Update   SUS   today's SUS overview

Here's an update on the suspensions, after yesterday's in-vacuum work and OSEM tweaking:

  • PRM and ETMY are completely messed up.  The spectra are so bad that I'm not going to bother posting anything.   ETMY has extreme sensor voltages that suggest it may be stuck to one of the OSEMs.  PRM voltages look nominal, so I have no idea what's going on there.
  • ITMY is much improved, but it could still use some work.
  • SRM is a little worse than it was yesterday, but we've done a lot of work on the ITMY table, such as moving the ITMY suspension and rebalancing the table.
  • BS for some reason looks slightly better than it did yesterday.
TM   M   cond(B)

SRM        pit     yaw     pos     side    butt
UL    0.828   1.041   1.142  -0.135   1.057 
UR    1.061  -0.959   1.081  -0.063  -1.058 
LR   -0.939  -0.956   0.858  -0.036   0.849 
LL   -1.172   1.044   0.919  -0.108  -1.035 
SD    0.196  -0.024   1.861   1.000   0.043

ITMY  ITMY.png       pit     yaw     pos     side    butt
UL    1.141   0.177   1.193  -0.058   0.922 
UR    0.052  -1.823   0.766  -0.031  -0.974 
LR   -1.948  -0.082   0.807  -0.013   1.147 
LL   -0.859   1.918   1.234  -0.040  -0.957 
SD   -1.916   2.178   3.558   1.000   0.635 
BS  BS.png       pit     yaw     pos     side    butt
UL    1.589   0.694   0.182   0.302   1.042 
UR    0.157  -1.306   1.842  -0.176  -0.963 
LR   -1.843  -0.322   1.818  -0.213   0.957 
LL   -0.411   1.678   0.158   0.265  -1.038 
SD    0.754   0.298  -3.142   1.000   0.053


  5240   Mon Aug 15 17:23:55 2011   jamie   Update   SUS   freeswing script updated

I have updated the freeswing scripts, combining all of them into a single script that takes arguments to specify the optic to kick:

pianosa:SUS 0> ./freeswing
usage: freeswing SET
usage: freeswing OPTIC [OPTIC ...]

Kick and free-swing suspended optics.
Specify optics (i.e. 'MC1', 'ITMY') or a set:
'mc'  = (MC1 MC2 MC3)
pianosa:SUS 0>

I have removed all of the old scripts, and committed the new one to the SVN.

  5241   Mon Aug 15 17:36:10 2011   jamie   Update   SUS   Strangeness with ETMY (was: Monday SUS update)

For some reason ETMY has changed a lot.  Not only does it now have the worst "badness" (B matrix condition number) at ~10, but the frequencies of all the modes have shifted, some considerably.  I did accidentally bump the optic when Jenne and I were adjusting the OSEMs last week, but I didn't think it was by that much.  The only thing I can think of that would cause the modes to move so much is that the optic has somehow been reseated in its suspension.  I really don't know how that would have happened, though.

Jenne and I went in to investigate ETMY, to see if we could see anything obviously wrong.  Everything looks to be ok.  The magnets are all well centered in the OSEMs, and the PDMon levels look ok.

We rechecked the balance of the table, and tweaked it a bit to make it more level.  We then tweaked the OSEMs again to put them back in the center of their range.  We also checked the response by using the lockin method to check the response to POS and SIDE drive in each of the OSEMs (we want large POS response and minimal SIDE response).  Everything looked ok.

We're going to take another freeswing measurement and see how things look now.  If there are any suggestions what should be done (if anything), about the shifted modes, please let us know.

  5242   Mon Aug 15 17:38:07 2011   jamie   Update   General   Foil aperture placed in front of ETMY

We have placed a foil aperture in front of ETMY, to aid in aligning the Y-arm, and then the PRC.  It obviously needs to be removed before we close up.

  5247   Tue Aug 16 10:59:06 2011   jamie   Update   SUS   SUS update

Data taken from: 997530498+120

Things are actually looking ok at the moment.  "Badness" (cond(B)) is below 6 for all optics.

  • We don't have results from PRM since its spectra looked bad, as if it's being clamped by the earthquake stops.
  • The SRM matrix definitely looks the nicest, followed by ITMX.  All the other matrices have some abnormally high or low elements.
  • cond(B) for ETMY is better than that for SRM, even though the ETMY matrix doesn't look as nice.  Does this mean that cond(B) is not necessarily the best figure of merit, or is there something else that our naive expectation for the matrix doesn't catch?

We still need to go through and adjust all the OSEM ranges once the IFO is aligned and we know what our DC biases are.  We'll repeat this one last time after that.

TM   M cond(B)
BS  BS.png       pit     yaw     pos     side    butt
UL    1.456   0.770   0.296   0.303   1.035 
UR    0.285  -1.230   1.773  -0.077  -0.945 
LR   -1.715  -0.340   1.704  -0.115   0.951 
LL   -0.544   1.660   0.227   0.265  -1.070 
SD    0.612   0.275  -3.459   1.000   0.046
SRM  SRM.png       pit     yaw     pos     side    butt
UL    0.891   1.125   0.950  -0.077   0.984 
UR    0.934  -0.875   0.987  -0.011  -0.933 
LR   -1.066  -1.020   1.050   0.010   1.084 
LL   -1.109   0.980   1.013  -0.056  -0.999 
SD    0.257  -0.021   0.304   1.000   0.006 
ITMX  ITMX.png       pit     yaw     pos     side    butt
UL    0.436   1.035   1.042  -0.068   0.728 
UR    0.855  -0.965   1.137  -0.211  -0.969 
LR   -1.145  -1.228   0.958  -0.263   1.224 
LL   -1.564   0.772   0.863  -0.120  -1.079 
SD   -0.522  -0.763   2.495   1.000  -0.156
ITMY  ITMY.png       pit     yaw     pos     side    butt
UL    1.375   0.095   1.245  -0.058   0.989 
UR   -0.411   1.778   0.975  -0.022  -1.065 
LR   -2.000  -0.222   0.755   0.006   1.001 
LL   -0.214  -1.905   1.025  -0.030  -0.945 
SD    0.011  -0.686   0.804   1.000   0.240 
ETMX  ETMX.png       pit     yaw     pos     side    butt
UL    0.714   0.191   1.640   0.404   1.052 
UR    0.197  -1.809   1.758  -0.120  -1.133 
LR   -1.803  -1.889   0.360  -0.109   0.913 
LL   -1.286   0.111   0.242   0.415  -0.902 
SD    1.823  -3.738  -0.714   1.000  -0.130 
ETMY  ETMY.png       pit     yaw     pos     side    butt
UL    1.104   0.384   1.417   0.351   1.013 
UR   -0.287  -1.501   1.310  -0.074  -1.032 
LR   -2.000   0.115   0.583  -0.045   0.777 
LL   -0.609   2.000   0.690   0.380  -1.179 
SD    0.043  -0.742  -0.941   1.000   0.338 


  5260   Thu Aug 18 00:58:40 2011   jamie   Update   SUS   optics kicked and left free swinging

ALL optics (including MC) were kicked and left free swinging at:


The "opticshutdown" script was also run, which should turn the watchdogs back on in 5 hours (at 6am).


  5263   Thu Aug 18 12:22:37 2011   jamie   Update   SUS   suspension update

Most of the suspensions look ok, with "badness" levels between 4 and 5.  I'm just posting the ones that look slightly less ideal below.

  • PRM, SRM, and BS in particular show a lot of little peaks that look like some sort of intermodulations.
  • ITMY has a lot of elements with imaginary components
  • The ETMY POS and SIDE modes are *very* close together, which severely degrades the diagonalization
 PRM  PRM.png        pit     yaw     pos     side    butt
UL    0.466   1.420   1.795  -0.322   0.866  
UR    1.383  -0.580   0.516  -0.046  -0.861  
LR   -0.617  -0.978   0.205   0.011   0.867  
LL   -1.534   1.022   1.484  -0.265  -1.407  
SD    0.846  -0.632  -0.651   1.000   0.555


SRM SRM.png       pit     yaw     pos     side    butt
UL    0.783   1.046   1.115  -0.149   1.029 
UR    1.042  -0.954   1.109  -0.060  -1.051 
LR   -0.958  -0.926   0.885  -0.035   0.856 
LL   -1.217   1.074   0.891  -0.125  -1.063 
SD    0.242   0.052   1.544   1.000   0.029 
 BS BS.png        pit     yaw     pos     side    butt
UL    1.536   0.714   0.371   0.283   1.042  
UR    0.225  -1.286   1.715  -0.084  -0.927  
LR   -1.775  -0.286   1.629  -0.117   0.960  
LL   -0.464   1.714   0.285   0.250  -1.070  
SD    0.705   0.299  -3.239   1.000   0.023 
 ITMY  ITMY.png        pit     yaw     pos     side    butt
UL    1.335   0.209   1.232  -0.071   0.976  
UR   -0.537   1.732   0.940  -0.025  -1.068  
LR   -2.000  -0.268   0.768   0.004   1.046  
LL   -0.129  -1.791   1.060  -0.043  -0.911  
SD   -0.069  -0.885   1.196   1.000   0.239 
ETMY ETMY.png       pit     yaw     pos     side    butt
UL    1.103   0.286   1.194  -0.039   0.994 
UR   -0.196  -1.643  -0.806  -0.466  -1.113 
LR   -2.000   0.071  -0.373  -0.209   0.744 
LL   -0.701   2.000   1.627   0.217  -1.149 
SD    0.105  -1.007   3.893   1.000   0.290 


  5265   Thu Aug 18 22:24:08 2011   jamie   Omnistructure   VIDEO   Updated 'videoswitch' script

I have updated the 'videoswitch' program that controls the video MUX.  It now includes the ability to query the video mux for the channel mapping:

controls@pianosa:~ 0$ /opt/rtcds/caltech/c1/scripts/general/videoswitch -h
videoswitch [options] [OUT]      List current output/input mapping [for OUT]
videoswitch [options] OUT IN     Set output OUT to be input IN

  -h, --help            show this help message and exit
  -i, --inputs          List input channels and exit
  -o, --outputs         List output channels and exit
  -l, --list            List all input and output channels and exit
  -H HOST, --host=HOST  IP address/Host name
controls@pianosa:~ 0$

  5286   Tue Aug 23 10:38:27 2011   jamie   Update   SUS   SUS update

SUS update before closing up:

  • MC1, MC2, ITMX look good
  • MC3, PRM look ok
  • SRM pos and side peaks are too close together to distinguish, so the matrix is not diagonalizable.  I think with more data it should be ok, though.
  • all ITMY elements have imaginary components
  • ITMY, ETMX, ETMY appear to have modes that swapped position:
    • ITMY: pit/yaw
    • ETMX: yaw/side
    • ETMY: pos/side
  • MC3, ETMX, ETMY have some very large/small elements

Not particularly good.  We're going to work on ETMY at least, since that one is clearly bad.

OPTIC   M cond(B)
MC1 MC1.png       pit     yaw     pos     side    butt
UL    0.733   1.198   1.168   0.050   1.057 
UR    1.165  -0.802   0.896   0.015  -0.925 
LR   -0.835  -1.278   0.832  -0.002   0.954 
LL   -1.267   0.722   1.104   0.032  -1.064 
SD    0.115   0.153  -0.436   1.000  -0.044
MC2 MC2.png        pit     yaw     pos     side    butt
UL    1.051   0.765   1.027   0.128   0.952  
UR    0.641  -1.235   1.089  -0.089  -0.942  
LR   -1.359  -0.677   0.973  -0.097   1.011  
LL   -0.949   1.323   0.911   0.121  -1.096  
SD   -0.091  -0.147  -0.792   1.000  -0.066 
MC3  MC3.png        pit     yaw     pos     side    butt
UL    1.589   0.353   1.148   0.170   1.099  
UR    0.039  -1.647   1.145   0.207  -1.010  
LR   -1.961  -0.000   0.852   0.113   0.896  
LL   -0.411   2.000   0.855   0.076  -0.994  
SD   -0.418   0.396  -1.624   1.000   0.019
PRM  PRM.png        pit     yaw     pos     side    butt
UL    0.532   1.424   1.808  -0.334   0.839  
UR    1.355  -0.576   0.546  -0.052  -0.890  
LR   -0.645  -0.979   0.192   0.015   0.881  
LL   -1.468   1.021   1.454  -0.267  -1.391  
SD    0.679  -0.546  -0.674   1.000   0.590 
BS  BS.png        pit     yaw     pos     side    butt
UL    1.596   0.666   0.416   0.277   1.037  
UR    0.201  -1.334   1.679  -0.047  -0.934  
LR   -1.799  -0.203   1.584  -0.077   0.952  
LL   -0.404   1.797   0.321   0.247  -1.077  
SD    0.711   0.301  -3.397   1.000   0.034 
ITMX  ITMX.png        pit     yaw     pos     side    butt
UL    0.458   1.025   1.060  -0.065   0.753  
UR    0.849  -0.975   1.152  -0.199  -0.978  
LR   -1.151  -1.245   0.940  -0.243   1.217  
LL   -1.542   0.755   0.848  -0.109  -1.052  
SD   -0.501  -0.719   2.278   1.000  -0.153
ITMY  ITMY.png        pit     yaw     pos     side    butt
UL    0.164   1.320   1.218  -0.086   0.963  
UR    1.748  -0.497   0.889  -0.034  -1.043  
LR   -0.252  -2.000   0.782  -0.005   1.066  
LL   -1.836  -0.183   1.111  -0.058  -0.929  
SD   -0.961  -0.194   1.385   1.000   0.239 
ETMX ETMX.png        pit     yaw     pos     side    butt
UL    0.623   1.552   1.596  -0.033   1.027  
UR    0.194  -0.448   1.841   0.491  -1.170  
LR   -1.806  -0.478   0.404   0.520   0.943  
LL   -1.377   1.522   0.159  -0.005  -0.860  
SD    1.425   3.638  -0.762   1.000  -0.132 
ETMY ETMY.png        pit     yaw     pos     side    butt
UL    0.856   0.007   1.799   0.241   1.005  
UR   -0.082  -1.914  -0.201  -0.352  -1.128  
LR   -2.000   0.079  -0.104  -0.162   0.748  
LL   -1.063   2.000   1.896   0.432  -1.119  
SD   -0.491  -1.546   2.926   1.000   0.169 


  5291   Tue Aug 23 17:45:22 2011   jamie   Update   SUS   ITMX, ITMY, ETMX clamped and moved to edge of tables

In preparation for tomorrow's drag wiping and door closing, I have clamped ITMX, ITMY, and ETMX with their earthquake stops and moved the suspension cages to the door-edge of their respective tables.  They will remain clamped through drag wiping.

ETMY was left free-swinging, so we will clamp and move it directly prior to drag wiping tomorrow morning.

  5293   Tue Aug 23 18:25:56 2011 jamieUpdateSUSSRM diagonalization OK

By looking at a longer data stretch for the SRM (6 hours instead of just one), we were able to get enough extra resolution to make fits to the very close POS and SIDE peaks.  This allowed us to do the matrix inversion.  The result is that SRM looks pretty good, and agrees with what was measured previously:

SRM        pit     yaw     pos     side    butt
UL    0.869   0.975   1.140  -0.253   1.085  
UR    1.028  -1.025   1.083  -0.128  -1.063  
LR   -0.972  -0.993   0.860  -0.080   0.834  
LL   -1.131   1.007   0.917  -0.205  -1.018  
SD    0.106   0.064   3.188   1.000  -0.011 


  5294   Wed Aug 24 09:11:19 2011 jamieUpdateSUSETMY SUS update: looks good. WE'RE READY TO CLOSE

We ran one more free swing test on ETMY last night, after the last bit of tweaking on the SIDE OSEM.  It now looks pretty good:

ETMY       pit     yaw     pos     side    butt
UL   -0.323   1.274   1.459  -0.019   0.932 
UR    1.013  -0.726   1.410  -0.050  -1.099 
LR   -0.664  -1.353   0.541  -0.036   0.750 
LL   -2.000   0.647   0.590  -0.004  -1.219 
SD    0.021  -0.035   1.174   1.000   0.137 


  5297   Wed Aug 24 12:08:56 2011 jamieUpdateSUSITMX, ETMX, ETMY free swinging

ITMX: 998245556

ETMX, ETMY: 998248032

  5317   Mon Aug 29 12:05:32 2011 jamieUpdateCDSRe : fb down

fb was requiring a manual fsck on its disks because it was sensing filesystem errors.  The errors had to do with the filesystem timestamps being in the future.  It turned out that fb's system date was set to something in 2005.  I'm not sure what caused the date to be so off (motherboard battery problem?), but I did determine after I got the system booting that the NTP client on fb was misconfigured and was therefore incapable of setting the system date.  It seems that it was configured to query a non-existent ntp server.  Why the hell it would have been set like this I have no idea.

In any event, I did a manual check on /dev/sdb1, which is the root disk, and postponed a check on /dev/sda1 (the RAID mounted at /frames) until I had the system booting.  /dev/sda1 is being checked now, since there are filesystems errors that need to be corrected, but it will probably take a couple of hours to complete.  Once the filesystems are clean I'll reboot fb and try to get everything up and running again.

  5319   Mon Aug 29 18:16:10 2011 jamieUpdateCDSRe : fb down

fb is now up and running, although the /frames raid is still undergoing an fsck which will likely take another day.  Consequently there is no daqd and no frames are being written to disk.  It's running and providing the diskless root to the rest of the front-end systems, so the rest of the IFO should be operational.

I burt restored the following (which I believe is everything that was rebooted), from Saturday night:



  5320   Mon Aug 29 18:24:11 2011 jamieUpdateSUSITMY stuck to OSEMs?

ITMY, which is supposed to be fully free-swinging at the moment, is displaying the tell-tale signs of being stuck to one of its OSEMs.  This is indicated by the PDMon values, one of which is zero while the others are at max:

UL: 0.000
UR: 1.529
LR: 1.675
LL: 1.949
SD: 0.137

Do we have a procedure for remotely getting it unstuck?  If not, we need to open up ITMYC and unstick it before we pump.
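A quick way to spot this condition programmatically: one face-sensor PDMon reads near zero while the rest sit near their open-light maximum. Below is a minimal sketch of such a check; the `is_stuck` helper and its thresholds are hypothetical, not an existing 40m script:

```python
# Hypothetical sketch: flag a suspended optic as "stuck to an OSEM" when one
# face-sensor PDMon reads ~0 while the others sit near their open-light max.
# Thresholds are illustrative, not calibrated values.

def is_stuck(pdmon, zero_thresh=0.05, max_thresh=1.0):
    """pdmon: dict of sensor name -> PDMon value (volts)."""
    face = {k: v for k, v in pdmon.items() if k != "SD"}  # ignore the side sensor
    zeroed = [k for k, v in face.items() if v < zero_thresh]
    railed = [k for k, v in face.items() if v > max_thresh]
    # exactly one face sensor dark, the rest near maximum -> likely stuck
    return len(zeroed) == 1 and len(railed) == len(face) - 1

itmy = {"UL": 0.000, "UR": 1.529, "LR": 1.675, "LL": 1.949, "SD": 0.137}
print(is_stuck(itmy))  # True for the ITMY readings above
```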


  5323   Tue Aug 30 11:28:56 2011 jamieUpdateCDSframebuilder back up

The fsck on the framebuilder (fb) raid array (/dev/sda1) completed overnight without issue.  I rebooted the framebuilder and it came up without problem.

I'm now working on getting all of the front-end computers and models restarted and talking to the framebuilder now.

  5324   Tue Aug 30 11:42:29 2011 jamieUpdateCDStestpoint.par file found to be completely empty

The testpoint.par file, located at /opt/rtcds/caltech/c1/target/gds/param/testpoint.par, which tells GDS processes where to find the various awgtpman processes, was completely empty.  The file was there but was just 0 bytes.  Apparently the awgtpman processes themselves also consult this file when starting, which means that none of the awgtpman processes would start.

This file is manipulated in the "install-daq-%" target in the RCG Makefile, ultimately being written with output from the src/epics/util/updateTestpointPar.pl script, which creates a stanza for each front-end model.  Rebuilding and installing all of the models properly regenerated this file.

I have no idea what would cause this file to get truncated, but apparently this is not the first time: elog #3999.  I'm submitting a bug report with CDS.


  5325   Tue Aug 30 14:33:52 2011 jamieUpdateCDSall front-ends back up and running

All the front-ends are now running.  Many of them came back on their own after the testpoint.par was fixed and the framebuilder was restarted.  Those that didn't just needed to be restarted manually.

The c1ioo model is currently in a broken state: it won't compile.  I assume that this was what Suresh was working on when the framebuilder crash happened.  This model needs to be fixed.

  5356   Wed Sep 7 09:21:57 2011 jamieUpdateSUSSUS spectra before close up

Here are all suspension diagonalization spectra before close up. Notes:

  • ITMX looks the worst, but I think we can live with it. The large glitch in the UL sensor at around 999423150 (#5355) is worrying. However, it seemed to recover. The spectra below were taken from data before the glitch.
  • ITMY has a lot of imaginary components. We previously found that this was due to a problem with one of its whitening filters (#5288). I assume we're seeing the same issue here.
  • SRM needs a little more data to be able to distinguish the POS and SIDE peaks, but otherwise it looks ok.
ITMX        pit     yaw     pos     side    butt
UL    0.355   0.539   0.976  -0.500   0.182 
UR    0.833  -1.406  -0.307  -0.118   0.537 
LR   -1.167   0.055   0.717  -0.445   0.286 
LL   -1.645   2.000   2.000  -0.828  -2.995 
SD   -0.747   0.828   2.483   1.000  -1.637 
ITMY        pit     yaw     pos     side    butt
UL    1.003   0.577   1.142  -0.038   0.954  
UR    0.582  -1.423   0.931  -0.013  -1.031  
LR   -1.418  -0.545   0.858   0.008   1.081  
LL   -0.997   1.455   1.069  -0.017  -0.934  
SD   -0.638   0.797   1.246   1.000   0.264
BS        pit     yaw     pos     side    butt
UL    1.612   0.656   0.406   0.277   1.031  
UR    0.176  -1.344   1.683  -0.058  -0.931  
LR   -1.824  -0.187   1.594  -0.086   0.951  
LL   -0.388   1.813   0.317   0.249  -1.087  
SD    0.740   0.301  -3.354   1.000   0.035 
PRM        pit     yaw     pos     side    butt
UL    0.546   1.436   1.862  -0.345   0.866  
UR    1.350  -0.564   0.551  -0.055  -0.878  
LR   -0.650  -0.977   0.138   0.023   0.858  
LL   -1.454   1.023   1.449  -0.268  -1.398  
SD    0.634  -0.620  -0.729   1.000   0.611
ETMX        pit     yaw     pos     side    butt
UL    0.863   1.559   1.572   0.004   1.029  
UR    0.127  -0.441   1.869   0.480  -1.162  
LR   -1.873  -0.440   0.428   0.493   0.939  
LL   -1.137   1.560   0.131   0.017  -0.871  
SD    1.838   3.447  -0.864   1.000  -0.135 
ETMY        pit     yaw     pos     side    butt
UL   -0.337   1.275   1.464  -0.024   0.929  
UR    1.014  -0.725   1.414  -0.055  -1.102  
LR   -0.649  -1.363   0.536  -0.039   0.750  
LL   -2.000   0.637   0.586  -0.007  -1.220  
SD    0.057  -0.016   1.202   1.000   0.142 
MC1        pit     yaw     pos     side    butt
UL    0.858   0.974   0.128   0.053  -0.000  
UR    0.184  -0.763   0.911   0.018   0.001  
LR   -1.816  -2.000   1.872   0.002   3.999  
LL   -1.142  -0.263   1.089   0.037   0.001  
SD    0.040   0.036  -0.216   1.000  -0.002 
MC2        pit     yaw     pos     side    butt
UL    1.047   0.764   1.028   0.124   0.948  
UR    0.644  -1.236   1.092  -0.088  -0.949  
LR   -1.356  -0.680   0.972  -0.096   1.007  
LL   -0.953   1.320   0.908   0.117  -1.095  
SD   -0.092  -0.145  -0.787   1.000  -0.065 
MC3        pit     yaw     pos     side    butt
UL    1.599   0.343   1.148   0.168   1.101  
UR    0.031  -1.647   1.139   0.202  -1.010  
LR   -1.969   0.010   0.852   0.111   0.893  
LL   -0.401   2.000   0.861   0.077  -0.995  
SD   -0.414   0.392  -1.677   1.000   0.018 


  5408   Wed Sep 14 20:04:05 2011 jamieUpdateCDSUpdate to frame builder wiper.pl script for GPS 1000000000

I have updated the wiper.pl script (/opt/rtcds/caltech/c1/target/fb/wiper.pl) that runs on the framebuilder (in crontab) to keep the file system from overloading by deleting the oldest frames.  As it was, the script was not sorting file names numerically, which would have caused it to delete post-GPS-1000000000 frames first.  This issue was identified at LHO, and below is the patch that I applied to the script.

--- wiper.pl.orig  2011-04-11 13:54:40.000000000 -0700
+++ wiper.pl       2011-09-14 19:48:36.000000000 -0700
@@ -1,5 +1,7 @@
+use File::Basename;
 print "\n" .  `date` . "\n";
 # Dry run, do not delete anything
 $dry_run = 1;
@@ -126,14 +128,23 @@
 if ($du{$minute_trend_frames_dir} > $minute_frames_keep) { $do_min = 1; };
+# sort files by GPS time split into prefixL-T-GPS-sec.gwf
+# numerically sort on 3rd field
+sub byGPSTime {
+    my $c = basename $a;
+    $c =~ s/\D+(\d+)\D+(\d+)\D+/$1/g;
+    my $d = basename $b;
+    $d =~ s/\D+(\d+)\D+(\d+)\D+/$1/g;
+    $c <=> $d;
+}
 # Delete frame files in $dir to free $ktofree Kbytes of space
 # This one reads file names in $dir/*/*.gwf sorts them by file names
 # and progressively deletes them up to $ktofree limit
 sub delete_frames {
        ($dir, $ktofree) = @_;
        # Read file names; Could this be inefficient?
-       @a= <$dir/*/*.gwf>;
-       sort @a;
+       @a = sort byGPSTime <$dir/*/*.gwf>;
        $dacc = 0; # How many kilobytes we deleted
        $fnum = @a;
        $dnum = 0;
@@ -145,6 +156,7 @@
          if ($dacc >= $ktofree)  { last; }
          $dnum ++;
          # Delete $file here
+         print "- " . $file . "\n";
          if (!$dry_run) {     
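The fix above replaces a lexical sort with a numeric sort on the GPS field of `prefix-GPS-sec.gwf` file names. The same idea as a Python sketch (the filenames below are illustrative, not real frames):

```python
import os
import re

def gps_time(path):
    """Extract the GPS start time from a frame name like C-R-999999984-16.gwf."""
    m = re.search(r'-(\d+)-\d+\.gwf$', os.path.basename(path))
    return int(m.group(1))

frames = ["C-R-999999984-16.gwf", "C-R-1000000000-16.gwf", "C-R-999999968-16.gwf"]

# A lexical sort puts the post-GPS-1000000000 frame first ("1" < "9"), so it
# would be deleted first; sorting numerically on the GPS field keeps the
# frames in chronological order so the oldest really go first.
print(sorted(frames))
print(sorted(frames, key=gps_time))
```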

  5424   Thu Sep 15 20:16:15 2011 jamieUpdateCDSNew c1oaf model installed and running

[Jamie, Jenne, Mirko]

New c1oaf model installed

We have installed the new c1oaf (online adaptive feed-forward) model.  This model is now running on c1lsc.  It's not really doing anything at the moment, but we wanted to get the model running, with all of its interconnections to the other models.

c1oaf has interconnections to both c1lsc and c1pem via the following routes:

c1lsc ->SHMEM-> c1oaf
c1oaf ->SHMEM-> c1lsc
c1pem ->SHMEM-> c1rfm ->PCIE-> c1oaf

Therefore c1lsc, c1pem, and c1rfm also had to be modified to receive/send the relevant signals.

As always, when adding PCIx senders and receivers, we had to compile all the models multiple times in succession so that the /opt/rtcds/caltech/c1/chans/ipc/C1.ipc would be properly populated with the channel IPC info.


There were a couple of issues that came up when we installed and re/started the models:

c1oaf not being registered by frame builder

When the c1oaf model was started, it had no C1:DAQ-FB0_C1OAF_STATUS channel, as it's supposed to.  In the daqd log (/opt/rtcds/caltech/c1/target/fb/logs/daqd.log.19901) I found the following:

Unable to find GDS node 22 system c1oaf in INI files

It turns out this channel is actually created by the frame builder, and it could not find the channel definition file for the new model, so it was failing to create the channels for it.  The frame builder "master" file (/opt/rtcds/caltech/c1/target/fb/master) needs to list the c1oaf daq ini files:


These were added and the framebuilder was restarted, after which the C1:DAQ-FB0_C1OAF_STATUS channel appeared correctly.

SHMEM errors on c1lsc and c1oaf

This turned out to be because of an oversight in how we wired up the skeleton c1oaf model.  For the moment the c1oaf model has only the PCIx sends and receives.  I had therefore grounded the inputs to the SHMEM parts that were meant to send signals to C1LSC.  However, this made the RCG think that these SHMEM parts were actually receivers, since it's the grounding of the inputs to these parts that actually tells the RCG that the part is a receiver.  I fixed this by adding a filter module to the input of all the senders.

Once this was all fixed, the models were recompiled, installed, and restarted, and everything came up fine.

All model changes were of course committed to the cds_user_apps svn as well.

  5651   Tue Oct 11 17:32:05 2011 jamieHowToEnvironment40m google maps link

Here's another useful link:


  5710   Thu Oct 20 09:54:53 2011 jamieUpdateComputer Scripts / Programspynds working on pianosa again



Doesn't work on pianosa either. Has someone changed the python environment?

pianosa:SUS_SUMMARY 0> ./setSensors.py 1000123215 600 0.1 0.25
Traceback (most recent call last):
  File "./setSensors.py", line 2, in <module>
    import nds
ImportError: No module named nds

 So I found that the NDS2 lib directory (/ligo/apps/nds2/lib) was completely empty.  I reinstalled NDS2 and pynds, and they are now available again by default on pianosa (it should "just work", assuming you don't break your environment).

Why the NDS2 lib directory was completely empty is definitely a concern to me.  The contents of directories don't just disappear.  I can't imagine how this would happen other than someone doing it, either on purpose or accidentally.  If someone actually deleted the contents of this directory on purpose they need to speak up, explain why they did this, and come see me for a beating.

  5736   Tue Oct 25 18:09:44 2011 jamieUpdateCDSNew DEMOD part

I forgot to elog (bad Jamie) that I broke out the demodulator from the LOCKIN module to make a new DEMOD part:


The LOCKIN part now references this part, and the demodulator can now be used independently.  The 'LO SIN in' and 'LO COS in' should receive their input from the SIN and COS outputs of the OSCILLATOR part.
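As a sketch of what a demodulator like this computes: mix the input against the LO sin/cos and low-pass (here, just averaging over an integer number of cycles) to recover the I and Q components. This is a minimal numerical stand-in, not the RCG implementation:

```python
import math

def demodulate(signal, lo_sin, lo_cos):
    """Mix a signal against LO sin/cos and average (a crude low-pass) -> (I, Q)."""
    n = len(signal)
    i = 2.0 / n * sum(s * c for s, c in zip(signal, lo_cos))
    q = 2.0 / n * sum(s * c for s, c in zip(signal, lo_sin))
    return i, q

fs, f0, n = 16384, 100.0, 16384          # 1 s of data: exactly 100 LO cycles
t = [k / fs for k in range(n)]
lo_sin = [math.sin(2 * math.pi * f0 * tk) for tk in t]
lo_cos = [math.cos(2 * math.pi * f0 * tk) for tk in t]
# test signal: amplitude 0.5 at 30 degrees relative to the cosine LO
sig = [0.5 * math.cos(2 * math.pi * f0 * tk - math.radians(30)) for tk in t]
i, q = demodulate(sig, lo_sin, lo_cos)
print(round(i, 3), round(q, 3))  # I = 0.5*cos(30deg) ~ 0.433, Q = 0.5*sin(30deg) = 0.25
```

Because the average runs over an integer number of LO cycles, the sin/cos mixing products cancel exactly and no extra filtering is needed in this toy example.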

  5755   Fri Oct 28 12:47:38 2011 jamieUpdateCDSCSS/BOY installed on pianosa

I've installed Control System Studio (CSS) on pianosa, from the version 3.0.2 Red Hat binary zip.  It should be available as "css" from the command line.

CSS is a new MEDM replacement. Its output is .opi files, instead of .adl files.  It's supposed to include some sort of converter, but I didn't play with it enough to figure it out.

Please play around with it and let me know if there are any issues.


  5833   Mon Nov 7 15:43:25 2011 jamieUpdateLSCLSC model recompiled


While trying to compile, there was something wrong with the lockins that were there...it complained about the Q OUTs being unconnected.  I even reverted to the before-I-touched-it-today version of c1lsc from the SVN, and it had the same problem.  So, that means that whoever put those in the LSC model didn't check to see if the model would compile.  Not so good.

Anyhow, I just terminated them, to make it happy.  If those are actually supposed to go somewhere, whoever is in charge of LSC lockins should take a look at it.

 This was totally my fault.  I'm very sorry.  I modified the lockin part to output the Q phase, and forgot to modify the models that use that part appropriately.  BAD JAMIE!  I'll check to make sure this won't bite us again.

  5834   Mon Nov 7 15:49:26 2011 jamieUpdateIOOWFS output matrix measured (open loop)


 The scripts used to make the WFS outmatrix measurement live in /cvs/cds/rtcds/caltech/c1/scripts/MC/WFS

 I assume you mean /opt/rtcds/caltech/c1/scripts/MC/WFS.

As I've tried to reiterate many times: we do not use /cvs/cds anymore.  Please put all new scripts into the proper location under /opt/rtcds.

  5836   Mon Nov 7 17:27:28 2011 jamieUpdateComputersISCEX IO chassis timing slave dead

It appears that the timing slave in the c1iscex IO chassis is dead.  Its front "link" lights are dark, although there appears to be power to the board (other on-board LEDs are lit).  These front lights should either be on and blinking steadily if the board is talking to the timing system, or blinking fast if there is no connection to the timing distribution box.  This likely indicates that the board has had some sort of internal failure.

Unfortunately Downs has no spare timing slave boards lying around at the moment; they're all stuffed in IO chassis awaiting shipping.  I'm going to email Rolf about stealing one, and if he agrees we'll work with Todd Etzel to pull one out for a transplant.

  5845   Wed Nov 9 10:35:30 2011 jamieUpdateSUSETMX oplev is down and sus damping restored


The ETMX oplev returning beam is well centered on the qpd. We lost this signal 2 days ago. I will check on the qpd.

ETMX sus damping restored

 As reported a couple days ago, the ETMX IO chassis has no timing signal.  This is why there is no QPD signal and why we turned off the ETMX watchdog.  In fact, I believe that there is probably nothing coming out of the ETMX suspension controller at all.

I'm working on getting the timing board replaced.  Hopefully today.

  5854   Wed Nov 9 18:02:42 2011 jamieUpdateCDSISCEX front-end working again (for the moment...)

The c1iscex IO chassis seems to be working again, and the iscex front-end is running again.

However, I can't say that I actually fixed the problem.

Originally I thought the timing slave board had died because the front LED indicators next to the fiber IO were out.  I didn't initially consider this a power supply problem since there were other LEDs on the board that were lit.  I finally managed to track down Rolf to give Downs the OK to pull the timing boards out of a spare IO chassis for us to use.  However, when I replaced the timing boards in the chassis with the new ones, they showed the exact same behavior.

I then checked the power to the timing boards, which comes off a 2-pin connector from the backplane board in the back of the IO chassis.  Apparently it's supposed to be 12V, but it was only showing ~2.75V.  Since it was showing the same behavior for both timing boards, I assumed that the issue was on the IO chassis backplane.

I (with the help of Todd Etzel) started pulling cards out of the IO chassis (while power cycling appropriately, of course) to see if that changed anything.  After pulling out both the ADC and DAC cards, the timing system then came up fine, with full power.  The weird part is that everything then stayed fine after we started plugging all the cards back in.  We eventually got back to the fully assembled configuration with everything working.  But, nothing was changed, other than just re-seating all the cards.

Clearly there's some sort of flaky connection on the IO chassis board.  Something is prone to shorting, or something, that overloads the power supply and causes the voltage supply to the timing card to drop.

All I can do at this point is keep an eye on it and go through another round of debugging if it happens again.

If it does happen again, I ask that everyone please not touch the IO chassis and let me look at it first.  I want to try to poke around before anyone jiggles any cables so I can track down where the issue might be.

  5873   Fri Nov 11 13:26:24 2011 jamieUpdateSUSMusings on SUS dewhitening, and MC ELP28's


For the whitening on the OSEM sensor input, FM1 is linked to the Contec binary I/O.  FM1 is the inverse whitening filter.  Turn it on, and the analog whitening is on (bit in the binary I/O screen turns red).  Turn it off, and the analog whitening is bypassed (bit in the binary I/O screen turns gray).  Good.  Makes sense.  Either way, the net transfer function is flat.

The dewhitening is not so simple.  In FM9 of the Coil Output filter bank, we have "SimDW", and in FM10, we have "InvDW".  Clicking SimDW on makes the bit in the binary I/O screen gray (off?), while clicking it off makes it red (on?).  Clicking InvDW does nothing to the I/O bits.  So.  I think that for dewhitening, the InvDW is always supposed to be on, and you either have Simulated DW, or analog DW enabled, so that either way your transfer function is flat.  Fine.  I don't know why we don't just tie the analog to the InvDW filter module, and delete the SimDW, but I'm sure there's a reason.

The input/whitening filters used to be in a similarly confused state as the output filters, but they have been corrected.  There might have been a reason for this setup in the past, but it's not what we should be doing now.  The output filters all need to be fixed.  We just haven't gotten to it yet.

As with the inputs, all output filters should be set up so that the full output transfer function is always flat, no matter what state it's in.  The digital anti-dewhitening ("InvDW") and analog dewhitening should always be engaged and disengaged simultaneously.   The "SimDW" should just be removed.

This is on my list of things to do.
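The "net transfer function stays flat" rule can be illustrated numerically: an analog whitening stage cascaded with its digital inverse has unity magnitude at every frequency, so the signal path is unchanged no matter which state you're in. The pole/zero frequencies below are made up for illustration, not the real SUS electronics:

```python
import math

# Illustrative whitening stage: zero at 1 Hz, pole at 10 Hz (made-up values).
f_zero, f_pole = 1.0, 10.0

def whitening(f):
    """Frequency response of the (hypothetical) analog whitening filter."""
    s = 2j * math.pi * f
    return (1 + s / (2 * math.pi * f_zero)) / (1 + s / (2 * math.pi * f_pole))

def inv_whitening(f):
    """The matching digital inverse ("InvDW"-style) filter."""
    return 1.0 / whitening(f)

for f in (0.1, 1.0, 10.0, 100.0):
    gain = abs(whitening(f))               # rises toward f_pole/f_zero = 10
    net = abs(whitening(f) * inv_whitening(f))
    print(f, round(gain, 3), net)          # net magnitude is 1.0 everywhere
```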

  5896   Tue Nov 15 15:56:23 2011 jamieUpdateCDSdataviewer doesn't run


Dataviewer is not able to access to fb somehow.

I restarted daqd on fb but it didn't help.

Also the status screen is showing a blank white form in all the realtime models. Something bad is happening.

 So something very strange was happening to the framebuilder (fb).  I logged on the fb and found this being spewed to the logs once a second:

[Tue Nov 15 15:28:51 2011] going down on signal 11
sh: /bin/gcore: No such file or directory
[Tue Nov 15 15:28:51 2011] going down on signal 11
sh: /bin/gcore: No such file or directory
[Tue Nov 15 15:28:51 2011] going down on signal 11
sh: /bin/gcore: No such file or directory
[Tue Nov 15 15:28:51 2011] going down on signal 11
sh: /bin/gcore: No such file or directory
[Tue Nov 15 15:28:51 2011] going down on signal 11
sh: /bin/gcore: No such file or directory
[Tue Nov 15 15:28:51 2011] going down on signal 11
sh: /bin/gcore: No such file or directory
[Tue Nov 15 15:28:51 2011] going down on signal 11
sh: /bin/gcore: No such file or directory
[Tue Nov 15 15:28:51 2011] going down on signal 11
sh: /bin/gcore: No such file or directory
[Tue Nov 15 15:28:51 2011] going down on signal 11
sh: /bin/gcore: No such file or directory

Apparently some daqd subprocess or thread was trying to call /bin/gcore, and failing since that file doesn't exist.  This apparently started at around 5:52 AM last night:

[Tue Nov 15 05:46:52 2011] main profiler warning: 1 empty blocks in the buffer
[Tue Nov 15 05:46:53 2011] main profiler warning: 0 empty blocks in the buffer
[Tue Nov 15 05:46:54 2011] main profiler warning: 0 empty blocks in the buffer
[Tue Nov 15 05:46:55 2011] main profiler warning: 0 empty blocks in the buffer
[Tue Nov 15 05:46:56 2011] main profiler warning: 0 empty blocks in the buffer
[Tue Nov 15 05:52:43 2011] main profiler warning: 0 empty blocks in the buffer
[Tue Nov 15 05:52:44 2011] main profiler warning: 0 empty blocks in the buffer
[Tue Nov 15 05:52:45 2011] main profiler warning: 0 empty blocks in the buffer
GPS time jumped from 1005400026 to 1005400379
[Tue Nov 15 05:52:46 2011] going down on signal 11
sh: /bin/gcore: No such file or directory
[Tue Nov 15 05:52:46 2011] going down on signal 11
sh: /bin/gcore: No such file or directory

The gcore I believe it's looking for is a debugging tool that is able to retrieve images of running processes.  I'm guessing that something caused something in the fb to eat crap, and it was stuck trying to debug itself.  I can't tell what exactly happened, though.  I'll ping the CDS guys about it.  The daqd process was continuing to run, but it was not responding to anything, which is why it could not be restarted via the normal means, and maybe why the various FB0_*_STATUS channels were seemingly dead.

I manually killed the daqd process, and monit seemed to bring up a new process with no problem.  I'll keep an eye on it.

  5908   Wed Nov 16 10:13:13 2011 jamieUpdateelogrestarted


Basically, elog hangs up by the visit of googlebot.

Googlebot repeatedly tries to obtain all (!) of entries by specifying  "page0" and "mode=full".
Elogd seems not to have the way to block an access from the specified IP.
We might be able to use http-proxy via apache. (c.f. http://midas.psi.ch/elog/adminguide.html#secure)

 There are much simpler ways to prevent page indexing by googlebot: http://www.google.com/support/webmasters/bin/answer.py?answer=156449

However, I really think that's a less-than-ideal solution to get around the actual problem, which is that the elog software is a total piece of crap.  If google does not index the log then it won't appear in google search results.

But if there is one URL that, when requested, causes the elog to crash, then maybe it's a better solution to cut off just that URL.

  5979   Tue Nov 22 18:15:39 2011 jamieUpdateCDSc1iscex ADC found dead. Replaced, c1iscex working again

c1iscex has not been running for a couple of days (since the power shutdown at least). I was assuming that the problem was a recurrence of the c1iscex IO chassis issue from a couple weeks ago (5854).  However, upon investigation I found that the timing signals were all fine.  Instead, the IOP was reporting that it found no ADC, even though there is one in the chassis.

Since I had a spare ADC that was going to be used for the CyMAC, I decided to try swapping it out to see if that helped.  Sure enough, the system came up fine with the new ADC.  The IOP (c1x01) and c1scx are now both running fine.

I assume the issue before might have been caused by a failing and flaky ADC, which has now failed.  We'll need to get a new ADC for me to give back to the CyMAC.

  6037   Tue Nov 29 15:30:01 2011 jamieUpdateCDSlocation of currently used filter function

So I tracked down where the currently-used filter function code is defined (the following is all relative to /opt/rtcds/caltech/c1/core/release):

Looking at one of the generated front-end C source codes (src/fe/c1lsc/c1lsc.c) it looks like the relevant filter function is:


which is defined in:


and an associated header file is:
