fb:controls>VMIC RFM 5565 (0) found, mapped at 0x2868c90
VMIC RFM 5579 (1) found, mapped at 0x2868c90
Could not open 5565 reflective memory in /dev/daqd-rfm1
16 kHz system
Spawn testpoint manager
Channel list length for node 0 is 4168
Test point manager (31001001 / 1): node 0
Matt logged in and rebuilt the TDS stuff for us on Mafalda in /cvs/cds/caltech/apps/linux/tds_090304.
He says that he can't build his stuff on 64-bit because there's not a sanctioned 64-bit build of GDS yet.
This should have all the latest fixes in it. I tried both the old and the new code from allegra, and they both work fine:
./tdsdata 16384 2 C1:IOO-MC_F > /users/rana/test.txt
I loaded the data I got with the above command and there were no data dropouts. Possibly the dropout problem is only
associated with testpoints and so we have to wait for the TP fix.
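For a quick offline check of future dumps, something like this awk one-liner would flag runs of repeated samples (a common dropout signature); a minimal sketch, assuming tdsdata writes one ASCII sample per line:
# look for runs of >= 3 identical consecutive samples in the dump
awk '{if ($1 == prev) run++; else run = 1; if (run == 3) n++; prev = $1} END {print n+0, "runs of 3+ repeated samples"}' /users/rana/test.txt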
Because of the network interference we've had from the CLIO system for the past 3-4 days, I asked the guys to remove
the test stand from the 40m lab area. It is now in the 40m control room. Since it needed an ethernet connection to get out for some reason, we've let them hook into GC. Also, instead of using a real timing signal slaved to the GPS, Jay suggested
just skipping it and having the Timing Slave talk to itself by looping back the fiber with the timing signal. Osamu will enter
more details, but this is just to give a status update.
The seisBLRMS has been running on megatron via an open terminal ssh'd in from allegra with matlab running; this is because I couldn't get the compiled matlab functionality to work.
Even so, this running script has been dying lately because of some bogus 'NDS' error. So for today I
have set the NDS server for mDV on megatron to be fb40m:8088 instead of nodus.ligo.caltech.edu. If this fixes the problem,
I will make it permanent by putting in a case statement (sketched below) that checks whether or not the mDV'ing machine is a 40m-martian.
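Roughly this logic, sketched in plain sh; the hostname list and the final home of the check (presumably the mDV config) are assumptions on my part:
# hypothetical sketch: pick the NDS server based on whether this is a 40m-martian machine
case `hostname` in
  megatron|allegra|mafalda) NDSSERVER=fb40m:8088 ;;              # 40m-martian machines
  *)                        NDSSERVER=nodus.ligo.caltech.edu ;;  # outside machines
esac
echo "using NDS server: $NDSSERVER"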
On Monday evening, I ran this command:
trianglewave C1:IOO-MC_REFL_OFFSET 0 4 120 600; ezcawrite C1:IOO-MC_REFL_OFFSET 1.76
which I thought (from the syntax help) would move that offset slider with a period of 120 seconds for 600 seconds. In actuality, the last argument is the
run time in number of periods. So the offset slider has been changing by 8 Vpp for most of the last day. Oops. The attached image shows what effect
this had on the MC transmitted power (not negligible). This would also make the locking pretty difficult.
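For the record, since the last argument is a period count, what I should have typed for a 600 second run (600 s / 120 s per period = 5 periods) was presumably:
trianglewave C1:IOO-MC_REFL_OFFSET 0 4 120 5; ezcawrite C1:IOO-MC_REFL_OFFSET 1.76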
In the second plot you can see the zoomed-in view for ~30 minutes. During the first part, the MCWFS are on and there are large fluctuations
in the transmitted power as the WFS offset changes. This implies that the large TEM00 carrier offset we induce with the slider couples into
the WFS signals because of imbalances in the quadrant gains - we need someone to balance the RF gains in the WFS quadrants by injecting
an AM laser signal and adjusting the digital gains.
Since there is still a modulation of the MC RFPD DC with the WFS on, we can use this to optimize the REFL OFFSET slider. The third plot
shows an 8 minute second-trend of this. Looks like a slider offset of zero would be pretty good.
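If that holds up, re-zeroing it is a one-liner with the same tool as above:
ezcawrite C1:IOO-MC_REFL_OFFSET 0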
The ndsproxy tcl task on nodus was eating up all the CPU and making the elog slow. I killed it and restarted it.
It looks like it hasn't been making a log file since January. Someone who has some skill in decoding the cryptic csh stdout redirection
syntax should look into this (it's in target/ndsproxy/).
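For whoever takes it on, the relevant csh idioms (the ndsproxy invocation below is schematic, not the actual startup line):
./ndsproxy >  ndsproxy.log                     # stdout only; stderr still hits the terminal
./ndsproxy >& ndsproxy.log                     # stdout AND stderr to the file (csh has no 2>)
./ndsproxy >>& ndsproxy.log                    # append both; probably what a restart script wants
(./ndsproxy > ndsproxy.log) >& ndsproxy.err    # split stdout and stderr into separate files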
The LSC CPU time had gone too high. I deleted ~20 filters and rebooted. The CPU time came down to 50 usec.
The filters all looked like old trash to me, but it's possible they were used.
I didn't delete anything from the DARM, CARM, etc. banks but did from the PD and TM filter banks. You can always go back in time by using the old filter files in the chans archive.
While waiting for the installation of the 32-bit Matlab 2009a to finish, I tried updating our seisBLRMS.m code.
Although DMF is in the SVN, we forgot to check it out, and so the directory where we have been doing our mods is not a working copy and our changes have not been captured. Shame.
We will probably have to wipe out the existing SVN trunk of DMF and re-import the directory after checking with Yoichi for SVN compliance.
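The sequence would presumably look like this, with $REPO and the paths as placeholders to be checked against the real layout:
svn delete -m "remove stale DMF trunk" $REPO/trunk/DMF
svn import -m "re-import DMF with our mods" /path/to/DMF $REPO/trunk/DMF
mv /path/to/DMF /path/to/DMF.bak           # keep the old directory around just in case
svn checkout $REPO/trunk/DMF /path/to/DMF  # now it is a real working copy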
Also wrote a script, LSC/x2mc, which will transition from regular ETM-based X arm locking to MC2-based locking. It ran once OK, but I got a segfault from 'trianglewave', which was trying to run the 'ezcastep' perl script, which in turn calls 'ezcastep.bin'.
I also restarted seisBLRMS.m in a terminal on Mafalda under the new Matlab 2009a to see if it loses its NDS connection like it did with 2007a. I also reduced the 'delay' parameter to 4 minutes and the 'interval' to 1 minute, so that the total delay between the seismic noise and the seismic trend is now 5 minutes.
I tried to play an .avi file on allegra. In a normal universe this would be easy, but because it's linux I was foiled.
The default video player (Totem) doesn't play the .avi or .wmv formats. The patches for this work in Suse but not Fedora, Kubuntu but not CentOS, etc. I also tried installing Kplayer, Kaffeine, mplayer, xine, Aktion, Realplay, Helix, etc. They all had compatibility issues with various things, usually libdvdread or some gstreamer plugin. So I pressed the BIG update button. This has now started and allegra may never recover. The auto update wouldn't work in default mode because of the libdvdread and gstreamer-ugly plugins, so I unchecked those boxes. I think we're going to have this problem as long as we use any kind of advanced gstreamer stuff for the GigE cameras (which is unavoidable).
I hereby award the previous rainbow transfer functions the plot innovation of the month award for their use of optical frequency to denote CARM offset.
The attached movie here shows the sensing matrix (minus MICH) as a function of CARM offset. There are 3 CARM signals plotted:
GREEN - tonight's starting CARM signal - REFL_DC
RED - my favorite CARM signal - REFL 166 I
CYAN - runner up CARM signal - POX 33 I
nodus was hanging because it was trying to mount the cit40m account from rigel and rigel was not responding.
Neither I nor Yoichi can recall what the cit40m account does for us on nodus anymore and so I commented it out of the nodus /etc/vfstab.
nodus may still need a boot to make it pay attention. I was unable to do a 'umount' to get rid of the rigel parasite. But mainly I don't want anything in
the 40m to depend on the LIGO GC system if at all possible.
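For next time: Solaris umount does take a -f flag for forcibly unmounting a hung NFS filesystem, so something like this might have cleared it (the mount point here is a guess):
umount -f /cvs/cit40m    # force-unmount the dead rigel NFS mount; mount point assumed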
compiled using the 'gcc' compiler instead of the 'ANSI C' compiler that is recommended in the README (which, I notice,
is now missing from Ben Johnson's web page!). Let's see how long this runs.
BLACK - raw ground motion measured by the Guralp
MAGENTA - motion after passive STACIS (20 Hz harmonic oscillator with a Q~2)
GREEN - difference between ground and top of STACIS
YELLOW - EUCLID noise in air
BLUE - STACIS top motion with loop on (60 Hz UGF, 1/f^2 below 30 Hz)
CYAN - same as BLUE, w/ 10x lower noise sensor
CYAN - cryo ON
BLACK - cryo OFF
BLUE - no crappy lens + mount
Yoichi's final words on what to do next with the interferometer (as of 5 PM on May 21, 2009):
My personal sub-comments to these bullets:
Sun Jun 21 00:06:43 PDT 2009
Mar 6 15:46:32 nodus sshd: [ID 800047 auth.crit] fatal: Timeout before authentication for 188.8.131.52
Mar 10 11:11:32 nodus sshd: [ID 800047 auth.crit] fatal: Timeout before authentication for 184.108.40.206
Mar 11 13:27:37 nodus sshd: [ID 800047 auth.error] error: connect_to 220.127.116.11 port 7000: Connection refused
Mar 11 13:27:37 nodus sshd: [ID 800047 auth.error] error: connect_to nodus port 7000: failed.
Mar 11 13:31:40 nodus sshd: [ID 800047 auth.error] error: connect_to 18.104.22.168 port 7000: Connection refused
Mar 11 13:31:40 nodus sshd: [ID 800047 auth.error] error: connect_to nodus port 7000: failed.
Mar 11 13:31:45 nodus sshd: [ID 800047 auth.error] error: connect_to 22.214.171.124 port 7000: Connection refused
Mar 11 13:31:45 nodus sshd: [ID 800047 auth.error] error: connect_to nodus port 7000: failed.
Mar 11 13:34:58 nodus sshd: [ID 800047 auth.error] error: connect_to 126.96.36.199 port 7000: Connection refused
Mar 11 13:34:58 nodus sshd: [ID 800047 auth.error] error: connect_to nodus port 7000: failed.
Mar 12 16:09:23 nodus sshd: [ID 800047 auth.crit] fatal: Timeout before authentication for 188.8.131.52
Mar 14 20:14:42 nodus sshd: [ID 800047 auth.crit] fatal: Timeout before authentication for 184.108.40.206
Mar 25 19:47:19 nodus sudo: [ID 702911 local2.alert] controls : 3 incorrect password attempts ; TTY=pts/2 ; PWD=/cvs/cds ; USER=root ; COMMAND=/usr/bin/rm -rf kamioka/
Mar 25 19:48:46 nodus su: [ID 810491 auth.crit] 'su root' failed for controls on /dev/pts/2
Mar 25 19:49:17 nodus last message repeated 2 times
Mar 25 19:51:14 nodus sudo: [ID 702911 local2.alert] controls : 1 incorrect password attempt ; TTY=pts/2 ; PWD=/cvs/cds ; USER=root ; COMMAND=/usr/bin/rm -rf kamioka/
Mar 25 19:51:22 nodus su: [ID 810491 auth.crit] 'su root' failed for controls on /dev/pts/2
Jun 8 16:12:17 nodus su: [ID 810491 auth.crit] 'su root' failed for controls on /dev/pts/4
12:06am up 150 day(s), 11:52, 1 user, load average: 0.05, 0.07, 0.07