The control room video is showing us a false ETMY image. Who worked on the ETMY camera or video today?!
I checked the broken QPD that had been installed as the PSL angle monitor, and concluded that one segment of the quadrant diode is broken.
The broken segment has an offset voltage of -0.7 V after the first I-V amplifier, which means the diode segment has a current offset even with no light on it.
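For scale: if the transimpedance of that first stage is, say, 10 kOhm (the actual value isn't given here), the -0.7 V offset corresponds to a dark current of I = V/R = 0.7 V / 10 kOhm = 70 uA, orders of magnitude above the nA-level dark current of a healthy silicon segment.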
Tomorrow I will test a new QPD as a replacement.
While continuing our efforts to lock, we noticed the procedure failed at a point it had gotten past last night: turning on the bounce/roll filters in MICH, PRC, and SRC. We checked the MICH transfer function and noticed that the unity-gain point was ~10 Hz, well below the bounce modes. We tried increasing the gain but found saturation, and Rob suggested that there could be misalignment on the AP table, which Steve worked on today. We went out and found two of the PDs (ASDD133 and AS166) badly misaligned, probably due to a bumped optic upstream. We re-aligned them.
I found strange jumps in data taken with tdsdata.
I couldn't find the same jumps in a DataViewer playback, so I think this is a problem with tdsdata.
Be careful when you use tdsdata!
The attached file shows an example of the jumping data.
I tried taking data on both allegra and op440m, and both show the same kind of jump.
(The downsampling or interpolation may be wrong.)
Rana said there is a fixed version of tdsdata on some machine, but it may not exist for 64-bit Linux.
I will try it tomorrow.
Quick update on my wiener filtering status:
Joe has been helping me get on the GRID, so I now have a grid certificate, and accounts on most/all of the clusters.
Joe also helped me set up menkar to fetch S5 data so that I can apply wiener filtering to the back-data.
I've been running the wiener filtering algorithm, and right now, it doesn't do anything to improve the DARM_CTRL data. I am confident that this is because something is funky in the wiener filtering algorithm somewhere. The indicator of this is that the wiener filtering calculation takes the same amount of time (~95 seconds) to calculate a filter for 64 seconds of data as for 1 hour of data (both for N = 2000 taps).
For reference, attached are my plots for the wiener filtering result for (1) 64 seconds of S5 data, and for (2) 3600 seconds of S5 data.
These plots were made using H1:DARM_CTRL as the signal to minimize, with 4 seismometer channels as the witnesses (EX_SEISX, EY_SEISY, LVEA_SEISX, LVEA_SEISY).
I'm working on figuring out what's going on with the filtering algorithm, and why it does work for C1:MC_L minimization, but does not work for H1:DARM_CTRL minimization.
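For reference, the core of the calculation is the Wiener-Hopf normal equations R w = p. A minimal single-witness sketch in python (the actual code is the matlab/mDV pipeline; the function below and the seis/darm names are only illustrative, and the multi-witness case makes R block-Toeplitz):

import numpy as np
from scipy.linalg import solve_toeplitz
from scipy.signal import lfilter

def wiener_fir(witness, target, n_taps):
    # First column of the Toeplitz matrix R: witness autocorrelation at lags 0..n_taps-1
    mid = len(witness) - 1
    r = np.correlate(witness, witness, mode='full')[mid:mid + n_taps]
    # The vector p: cross-correlation of target against witness at the same lags
    p = np.correlate(target, witness, mode='full')[mid:mid + n_taps]
    # Levinson-type solve; the cost depends on n_taps, not on the record length
    return solve_toeplitz(r, p)

# seis, darm: equal-length 1-D arrays of witness and target samples
w = wiener_fir(seis, darm, n_taps=2000)
residual = darm - lfilter(w, [1.0], seis)   # target minus the witness prediction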
Could not get past arm power of ~11 or so. I was suspicious of the transmon high-gain/low-gain PD handover, so I ran the matchTransMon scripts, but that did not help. I also removed the line in the cm_step script that increased the CM gain by 18dB at an arm power of 4. The gain of the CM servo will increase naturally as the power in the IFO builds up, so it may not be good to crank it right away. I tried several other CM gains, and watched the DARM loop, but still could not get past an arm power of ~10-11. I'm not sure what's wrong, but it may be that mysterious CM-servo/McWFS conspiracy, so we can try turning down the McWFS gain next time.
The c1lsc has been unstable since last night. Its status on the DAQ screen was oscillating between green and red every minute.
Yesterday I power cycled it. That brought it back, but the MC got unlocked and the autolocker would not engage. I think that's because the power cycle also turned off c1iscaux2, which sits in the same rack crate.
Killing the autolocker on op340m and restarting it didn't work, so I also rebooted c1dcuepics and burt-restored almost all the snapshot files. To do that, as usual, I had to edit the c1dcuepics snapshot files to remove the quotes from the last line.
After that I restarted the autolocker, and this time it worked.
This morning c1lsc was in the same unstable state as yesterday. This time I just reset it (no power cycle) and restarted it. That worked, and now everything seems to be fine.
Both the Upgrade and the Old40m signals look anomalous, since the zero-crossing point does not change with the demodulation phase.
I suspect there is a problem with the Optickle model of the 40m.
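For context on why that is suspicious: the demodulated signal is S(x, phi) = I(x) cos(phi) + Q(x) sin(phi), so unless I and Q happen to be proportional, the zero crossing of S should move as the demodulation phase is rotated. A toy illustration (numbers made up, nothing here is from the actual model):

import numpy as np

x = np.linspace(-1.0, 1.0, 2000)   # sweep variable
I, Q = x, x - 0.2                  # toy quadratures with different zeros

for deg in (0, 30, 60):
    phi = np.radians(deg)
    s = I * np.cos(phi) + Q * np.sin(phi)
    zc = x[np.flatnonzero(np.diff(np.sign(s)))[0]]   # first sign change
    print('demod phase %2d deg: zero crossing at x = %+.3f' % (deg, zc))

If instead the zero crossing stays put for every phase, I and Q must be proportional, i.e. the signal effectively lives in a single quadrature.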
I've updated the digital camera python code as well as changed the network topology.
At the moment, both cameras are connected to a small gigabit switch which talks only to Ottavia. This means all camera servers must be run on Ottavia, although the camera output is still UDP multicast, so any machine capable of running gstreamer can pick up the images.
The server and client programs can now read a configuration file for the camera setup. They default to pcameraSettings.ini, but this default can be changed with the -c or --config option.
For example, "serverV3.py --config pcam1.ini" will run the server using the pcam1.ini settings file. Similarly, "client.py --config pcam1.ini" will also take the IP settings from the config file so that it knows at which port and IP to listen.
These programs and .ini files have been placed in /cvs/cds/caltech/apps/linux64/python/pcamera/
I've updated the cshrc.40m aliases to use the new configuration file options, so now pcam1 calls "client.py -c pcam1.ini" in the above directory.
So to start a client, use pcam1 or pcam2 (for the 32223 camera on the PSL table looking at MC trans, or the 44026 camera looking at an analog monitor in the control room, respectively). These can be run on Allegra, Rosalba, or Ottavia at the moment.
To start a server, use pserv1 or pserv2. These *must* be run on Ottavia.
I've also added a -n or --no-gui option at Yoichi's request, which just starts up and plays with no GUI.
Lastly, I've made some changes to the base pcamerasrc.py file which should make the display more robust. After a failed transmission of an image from the camera to Ottavia, it now re-attempts up to 10 times before giving up. I'm hoping this will make it more robust against packet loss. The change in network topology has also helped, allowing 640x480 images to run on both cameras for tens of minutes before a packet loss causes a stop.
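The retry logic amounts to something like this (a sketch of the idea; grab_frame stands in for the real capture call in pcamerasrc.py):

def get_image(camera, max_retries=10):
    # after a failed transfer, re-attempt up to 10 times before giving up
    for attempt in range(max_retries):
        try:
            return camera.grab_frame()   # hypothetical capture call
        except IOError:                  # e.g. a dropped or incomplete packet
            continue
    return None                          # gave up; the caller decides what to do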
We found that the MC REFL image was no longer round and that the MCWFS DC quadrant spots were mostly
in one quadrant. So we re-centered the MCWFS beams in the following way:
1) We unlocked the MZ and adjusted the PZT voltage to keep the beam on the WFS from saturating.
2) Re-aligned the black hole beam dump to center its beam in its aperture.
3) Centered the beam on the MCWFS optics and MCWFS QPD displays.
4) Relocked MC.
Below is an image of the IOO StripTool. You can see that the MC REFL DC is now flatter. The
MC pointing has also changed (see the MC TRANS HOR & VERT channels). The MC transmitted
light is also now higher and more stable.
We tried to center the QPD and found a few hundred mV of dark offset on each quadrant of the
QPD. We adjusted them with this script:
>> mcc -v -m -R -nojvm seisBLRMS.m
Warning: Duplicate directory name: /cvs/cds/caltech/apps/linux/matlab/toolbox/local.
Compiler version: 4.6 (R2007a)
Warning: an error occurred while parsing class FilterDesignDialog.AbstractEditor:
Undefined function or variable 'DAStudio.Object'.
> In /cvs/cds/caltech/apps/linux/matlab/toolbox/shared/filterdesignlib/@FilterDesignDialog/@CoeffEditor/schema.p>schema at 9
Warning: an error occurred while parsing class FilterDesignDialog.CoeffEditor:
Invalid superclass handle.
terminate called after throwing an instance of 'ApplicationRedefinedException*'
Abort (core dumped)
"/cvs/cds/caltech/apps/linux/matlab/bin/mcc" -E "/tmp/fileRnU5Qj_31324": Aborted
??? Error executing mcc, return status = 134.
fb:controls>VMIC RFM 5565 (0) found, mapped at 0x2868c90
VMIC RFM 5579 (1) found, mapped at 0x2868c90
Could not open 5565 reflective memory in /dev/daqd-rfm1
16 kHz system
Spawn testpoint manager
Channel list length for node 0 is 4168
Test point manager (31001001 / 1): node 0
Matt logged in and rebuilt the TDS stuff for us on Mafalda in /cvs/cds/caltech/apps/linux/tds_090304.
He says that he can't build his stuff on 64-bit because there's not a sanctioned 64-bit build of GDS yet.
This should have all the latest fixes in it. I tried using both the old and new code from allegra and they both are fine:
./tdsdata 16384 2 C1:IOO-MC_F > /users/rana/test.txt
I loaded the data I got with the above command and there were no data dropouts. Possibly the dropout problem is only
associated with testpoints and so we have to wait for the TP fix.
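A quick way to check a dump like that for dropouts (a sketch; single-column output and the 10x threshold are assumptions):

import numpy as np

data = np.loadtxt('/users/rana/test.txt')
steps = np.abs(np.diff(data))
# flag sample-to-sample steps far above the RMS step size;
# the tdsdata glitches show up as isolated huge jumps
jumps = np.flatnonzero(steps > 10 * steps.std())
print('%d suspicious jumps at samples %s' % (len(jumps), jumps))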
I tested the new tdsdata and found it works well.
I excited C1:SUS-ITMY_SUSPIT_EXC with tdssine, and took data from C1:LSC-TRY_OUT (a testpoint) and C1:SUS-ITMY_OPLEV_PERROR (a recorded channel) with both the new and the old tdsdata.
With the old tdsdata (/cvs/cds/caltech/apps/linux/tds/bin/tdsdata), I found some jumps in the data points, the same problem as before (Attachment 1).
With the new tdsdata (/cvs/cds/caltech/apps/linux/tds_090304/bin/tdsdata), there appear to be no jumps (Attachment 2; taken about 10 minutes after Attachment 1).
The problem with the old tdsdata appears to persist even for recorded channels.
You should use /cvs/cds/caltech/apps/linux/tds_090304/bin/tdsdata.
We confirmed that the new tds (/cvs/cds/caltech/apps/linux/tds_090304/) works well on 64-bit Linux, and installed it in place of /cvs/cds/caltech/apps/linux/tds/.
The old /cvs/cds/caltech/apps/linux/tds has been moved to /cvs/cds/caltech/apps/linux/tds.bak.
The seisBLRMS has been running on megatron via an open terminal ssh'd in from allegra with matlab running, because I couldn't get the compiled matlab functionality to work.
Even so, the running script has been dying lately because of some bogus 'NDS' error. So for today I
have set the NDS server for mDV on megatron to fb40m:8088 instead of nodus.ligo.caltech.edu. If this fixes the problem,
I will make it permanent by putting in a case statement that checks whether the mDV'ing machine is a 40m-martian machine or not.
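The planned case statement is just a host check; sketched here in python for clarity (mDV itself is matlab, and the martian-network test below is an assumption):

import socket

def nds_server():
    # on a 40m-martian machine, talk to the framebuilder directly;
    # from anywhere else, go through the public gateway
    host = socket.getfqdn()
    if 'martian' in host:   # hypothetical test for the martian network
        return 'fb40m:8088'
    return 'nodus.ligo.caltech.edu'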
The tdscntr.pl in the new tds was probably the one from LLO, which is actually the version I sent to Tobin; it had paths and channel names defined for LLO. So I copied back my original 40m version.
This morning, the MC alignment was gone and the MC wasn't locked.
We checked the old pitch, yaw, and position offset values for each MC mirror, and found they had jumped.
We don't know the reason for the jump, but we restored each offset value and the MC went back to lock.
The spare M126N-1064-700 NPRO (sn 5519, rebuilt Dec 2006) output 750 mW at a diode current of 2.06 A, measured with an Ophir meter.
Alberto's controller unit (125/126-OPN-PS, sn 516m) was disconnected from the length-measurement NPRO on the AP table.
The 5519 NPRO was clamped to the optical table without a heatsink, and it was on for 15 minutes.
Kakeru and Kiwamu
We placed a QPD on the PSL bench as the PSL angle monitor.
As we mentioned before, the old QPD that used to be there is broken.
We put the broken QPD into the "photodiodes" box under the soldering table.
We found that the MC reflection was distorted, and the WFS beam was hitting the upper part of the QPD.
We recentered the WFS beam, and these problems were fixed.