SRM, ITMX, ETMX, ITMY and ETMY lost damping at 4:55am this morning from a 4.8 magnitude earthquake.
Their damping was restored.
The C1:SUS-ITMX_URSEN_OUTPUT switch was found in the off position. It was turned on.
The Mach-Zehnder and MC were locked.
The WFS QPD spot needs recentering.
The LSC CPU time had gone too high. I deleted ~20 filters and rebooted. The CPU time came down to 50 usec.
The filters all looked like old trash to me, but it's possible they were used.
I didn't delete anything from the DARM, CARM, etc. banks but did from the PD and TM filter banks. You can always go back in time by using the
The CES mezzanine is being rebuilt to accommodate our new neighbor: the 20 ft high water slide... & jacuzzi.
All our AC power transformers are up there. Yesterday we labelled the 480 VAC power switch on the mezzanine
that we need to keep on to run the 3 cranes in the lab.
The outside particle count for 0.5 micron is 3 million this morning at 9am: low clouds and foggy conditions with a low inversion layer.
This puts the 40m lab at 30-50K.
I just turned on the HEPA filter at the PSL enclosure.
Please leave it on high.
I borrowed the SR785 to measure AA and AI noise and transfer functions.
We found that c1lsc, c1iscex, c1iscey, c1susvme, c1asc and c1sosvme were dead.
We turned off all the watchdogs and disabled the damping of all the suspensions.
Then I tried to reboot these machines from a terminal, but I couldn't log in to any of them.
So we power-cycled them with their physical key switches, logged in, and ran the startup scripts.
Then we turned all the watchdogs back on and restored the full IFO.
Now they all look like they are working fine.
ITMX Pitch: 142 microrad/count
ITMX Yaw: 145 microrad/count
ITMY Pitch: 257 microrad/count
ITMY Yaw: 206 microrad/count
ETMX Pitch: 318 microrad/count
ETMX Yaw: 291 microrad/count
ETMY Pitch: 309 microrad/count
ETMY Yaw: 299 microrad/count
BS Pitch: 70.9 microrad/count
BS Yaw: 96.3 microrad/count
PRM Pitch: 78.5 microrad/count
PRM Yaw: 79.9 microrad/count
SRM Pitch: 191 microrad/count
SRM Yaw: 146 microrad/count
I compiled seisBLRMS.
The tricks were the following:
(1) Don't add paths in a deployed command.
It does not make sense to add paths in a compiled command because it may be moved anywhere. Moreover, it can cause some weird side effects. Therefore, I enclosed the addpath part of mdv_config.m in an "if ~isdeployed ... end" clause to avoid adding paths when deployed. Instead of adding paths in the code, we have to add the paths to the necessary files with -I options at compilation time. This way, mcc will add all the necessary files into the CTF archive.
(2) Add mex files to the CTF archive with -a options.
For some reason, mcc does not add the necessary mex files into the CTF archive even though those files are called in the m-file being compiled. We have to add them explicitly with -a options.
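As an illustration of tricks (1) and (2), the mcc invocation ends up looking roughly like the sketch below; the paths and the mex file name here are placeholders, not the real ones from the Makefile:

# hypothetical compile line: -I adds search paths, -a pulls extra files into the CTF archive
mcc -m seisBLRMS.m \
    -I /cvs/cds/caltech/apps/mDV \
    -a /cvs/cds/caltech/apps/mDV/extra/some_mex_file.mexglx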
(3) NDS_GetData() is slow with nodus when compiled.
NDS_GetData(), which is called by get_data(), stalls for a few minutes when using nodus as the NDS server.
This problem does not happen when not compiled; I don't know the reason. To work around it, I modified seisBLRMS.m so that when the environment variable $NDS is defined, it uses the NDS server named in that variable.
I wrote a Makefile to compile seisBLRMS. You can read the file to see the details of the tricks.
I also wrote a script start_seisBLRMS, which can be found in /cvs/cds/caltech/apps/DMF/compiled_matlab/seisblrms/. To start seisBLRMS, you can just call this script.
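Because of the workaround in trick (3), you can point the compiled code at a different NDS server just by setting $NDS before starting it. A sketch in bash syntax (use whatever server is appropriate at the time):

# point seisBLRMS at fb40m instead of nodus, then start it
export NDS=fb40m:8088
/cvs/cds/caltech/apps/DMF/compiled_matlab/seisblrms/start_seisBLRMS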
At this moment, seisBLRMS is running on megatron. Let's see if it continues to run without crashing.
seisBLRMS has been running on megatron inside a MATLAB session in an open terminal, ssh'd in from allegra, because I couldn't get the compiled MATLAB functionality to work.
Even so, the running script has been dying lately with some bogus 'NDS' error. So for today I have set the NDS server for mDV on megatron to fb40m:8088 instead of nodus.ligo.caltech.edu. If this seems to fix the problem,
I will make this permanent by putting in a case statement that checks whether or not the mDV'ing machine is a 40m martian machine.
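The check could look something like the sketch below (shell syntax just for illustration; the real switch would live in the mDV configuration, and the hostname list here is made up):

# hypothetical: pick the NDS server based on whether we're inside the martian network
case `hostname` in
    megatron|allegra)  NDSSERVER=fb40m:8088 ;;
    *)                 NDSSERVER=nodus.ligo.caltech.edu ;;
esac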
We found that the MC reflection was distorted, and the WFS beam had moved upward off the center of the QPD.
We recentered the WFS beam and these problems were fixed.
Kakeru and Kiwamu
We placed a QPD on the PSL bench as a PSL angle monitor.
I checked the broken QPD that had been used as the PSL angle monitor, and concluded that one segment of the quadrant diode is broken.
The broken segment has an offset voltage of -0.7 V after the first I-V amplifier, which means the diode segment has a current offset even with no light on it.
Tomorrow I will check a new QPD for the replacement.
As mentioned before, the old QPD that used to be there is broken.
We put the broken QPD into the "photodiodes" box under the soldering table.
The spare M126N-1064-700 NPRO, sn 5519 (rebuilt Dec 2006), had its power output
measured at 750 mW at 2.06 A DC with an Ophir meter.
Alberto's controller unit 125/126-OPN-PS, sn 516m, was disconnected from the length-measurement NPRO on the AP table.
The 5519 NPRO was clamped to the optical table without a heatsink and was on for 15 minutes.
This morning, the MC alignment was gone and the MC wasn't locking.
We checked the old values of the pitch, yaw, and position offsets of each MC mirror, and found that they had jumped.
We don't know the reason for this jump, but we restored each offset value and the MC came back to lock.
I modified the Video.db file used by c1aux located in /cvs/cds/caltech/target/c1aux.
I added the following channels to the db file, intended to be either read or written by the digital camera scripts.
A better naming scheme can probably be devised, but these will do for now.
The ndsproxy tcl task on nodus was eating up all the CPU and making the elog slow. I killed it and restarted it.
It looks like it hasn't been making a log file since January. Someone who has some skill in decoding the cryptic csh stdout redirection syntax should look into this (it's in target/ndsproxy/).
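For whoever picks this up: the csh idiom for capturing both stdout and stderr is the '>&' form. A sketch only; the command and log file names here are placeholders, not the actual ndsproxy startup line:

# csh: send both stdout and stderr of the proxy to a log file, run in background
./ndsproxy >& /cvs/cds/caltech/target/ndsproxy/ndsproxy.log &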
On Monday evening, I ran this command:
trianglewave C1:IOO-MC_REFL_OFFSET 0 4 120 600; ezcawrite C1:IOO-MC_REFL_OFFSET 1.76
which I thought (from the syntax help) would move that offset slider with a period of 120 seconds for 600 seconds. In actuality, the last argument is the
run time in number of periods. So the offset slider has been changing by 8 Vpp for most of the last day. Oops. The attached image shows the effect
this had on the MC transmitted power (not negligible). This would also make locking pretty difficult.
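For the record, assuming the argument order is (channel, center, amplitude, period, number of periods), the run I intended would have had 5 as the last argument, since 600 s / 120 s per period = 5 periods:

trianglewave C1:IOO-MC_REFL_OFFSET 0 4 120 5; ezcawrite C1:IOO-MC_REFL_OFFSET 1.76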
In the second plot you can see the zoomed-in view for ~30 minutes. During the first part, the MCWFS are on and there are large fluctuations
in the transmitted power as the WFS offset changes. This implies that the large TEM00 carrier offset we induce with the slider couples into
the WFS signals because of imbalances in the quadrant gains - we need someone to balance the RF gains in the WFS quadrants by injecting
an AM laser signal and adjusting the digital gains.
Since there is still a modulation of the MC RFPD DC with the WFS on, we can use this to optimize the REFL OFFSET slider. The third plot
shows an 8 minute second trend of this. Looks like a slider offset of zero would be pretty good.
This morning there was a conflict between the tpman processes running on fb40m and kami1. Alex fixed it temporarily, but Rana suggested it was better to move both PCs outside the martian network. We moved both PCs physically to the control room and connected them to the general network with a local router. I believe they won't conflict anymore, but if you suspect these PCs might be causing trouble, please feel free to shut them down.
Today's work summary:
* connected the expansion chassis to bscteststand
* obtained signals on dataviewer and dtt, for both realtime and past data, on bscteststand with a 64 kHz timing signal
Excitation channels are not shown; only "other" is shown.
qts.mdl should run at 16 kHz, but 16 kHz timing causes slow updates on dataviewer and failing data acquisition on dtt. We are using 64 kHz timing, but is that really correct?
We confirmed that the new tds (/cvs/cds/caltech/apps/linux/tds_090304/) works well on 64-bit Linux, and installed it as /cvs/cds/caltech/apps/linux/tds/.
The old /cvs/cds/caltech/apps/linux/tds has been put in /cvs/cds/caltech/apps/linux/tds.bak.
The tdscntr.pl in the new tds was probably the one from LLO, which is actually the version I sent to Tobin. It had paths and channel names defined for LLO, so I copied back my original 40m version.
Because of the network interference we've had from the CLIO system for the past 3-4 days, I asked the guys to remove
the test stand from the 40m lab area. It is now in the 40m control room. Since it needed an ethernet connection to get out,
for some reason we've let them hook into GC. Also, instead of using a real timing signal slaved to the GPS, Jay suggested
just skipping it and having the Timing Slave talk to itself by looping back the fiber with the timing signal. Osamu will enter
more details, but this is just to give a status update.
This afternoon we centered the optical levers for all the optics.
To do that we first ran the alignment scripts for all the cavities.
I tested the new tdsdata and found it working well.
I excited C1:SUS-ITMY_SUSPIT_EXC with tdssine, and took data from C1:LSC-TRY_OUT (testpoint) and C1:SUS-ITMY_OPLEV_PERROR (recorded point) with both the new and old tdsdata.
With the old tdsdata (/cvs/cds/caltech/apps/linux/tds/bin/tdsdata), I found some jumps in the data points, which is the same problem as before (Attachment 1).
With the new tdsdata (/cvs/cds/caltech/apps/linux/tds_090304/bin/tdsdata), there appear to be no jumps (Attachment 2; taken about 10 minutes after Attachment 1).
The problem with the old tdsdata seems to remain even for recorded points.
You should use /cvs/cds/caltech/apps/linux/tds_090304/bin/tdsdata.
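For anyone repeating the comparison, the readback step looks roughly like the line below. The argument meanings (sample rate, duration in seconds, channel) are assumed from the usual tdsdata calling pattern, and the rate, duration, and output path here are only examples:

# new tdsdata: grab the testpoint channel for a stretch and dump it to a text file
/cvs/cds/caltech/apps/linux/tds_090304/bin/tdsdata 16384 60 C1:LSC-TRY_OUT > /tmp/try_new.txt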
Matt logged in and rebuilt the TDS stuff for us on Mafalda in /cvs/cds/caltech/apps/linux/tds_090304.
He says that he can't build his stuff on 64-bit because there's not a sanctioned 64-bit build of GDS yet.
This should have all the latest fixes in it. I tried using both the old and new code from allegra and they both are fine:
./tdsdata 16384 2 C1:IOO-MC_F > /users/rana/test.txt
I loaded the data I got with the above command and there were no data dropouts. Possibly the dropout problem is only
associated with testpoints and so we have to wait for the TP fix.
fb:controls>VMIC RFM 5565 (0) found, mapped at 0x2868c90
VMIC RFM 5579 (1) found, mapped at 0x2868c90
Could not open 5565 reflective memory in /dev/daqd-rfm1
16 kHz system
Spawn testpoint manager
Channel list length for node 0 is 4168
Test point manager (31001001 / 1): node 0
After the boot-fest, the nightly backup to Powell-Booth failed, and an automatic email got sent to me. I restarted the ssh agent, following the instructions in /cvs/cds/caltech/scripts/backup/000README.txt.
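The README is the authoritative reference, but the restart is essentially the standard ssh-agent dance; roughly (the key path is a placeholder, not the real backup key):

# start a new agent and load the backup key so the nightly job can authenticate
eval `ssh-agent`
ssh-add /path/to/backup_key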