Both the Upgrade and the Old40m's signals look anomalous since the zero-crossing point does not change with the demodulation phases.
I suspect there is a problem with the Optickle model of the 40m.
I've updated the digital camera python code and changed the network topology.
At the moment, both cameras are connected to a small gigabit switch which only talks to Ottavia. This means all camera servers must be run on Ottavia, although the camera output is still UDP multicast, so any machine capable of running gstreamer can pick up the images.
The server and client programs can now read a configuration file describing the camera setup. They default to pcameraSettings.ini, but this default can be changed with the -c or --config option.
For example, "serverV3.py --config pcam1.ini" will run the server using the pcam1.ini settings file. Similarly, "client.py --config pcam1.ini" will take the IP settings from the config file so that it knows which IP and port to listen on.
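A minimal sketch of what this option and config handling could look like in python (the section and key names such as [camera], ip and port are assumptions for illustration, not necessarily what is in pcameraSettings.ini):

import optparse
import ConfigParser   # python 2 module; it is configparser on python 3

# Parse the -c/--config option, defaulting to pcameraSettings.ini
parser = optparse.OptionParser()
parser.add_option("-c", "--config", dest="config",
                  default="pcameraSettings.ini",
                  help="camera settings file")
(opts, args) = parser.parse_args()

# Read the settings file; the section/key names below are made up
config = ConfigParser.ConfigParser()
config.read(opts.config)
ip = config.get("camera", "ip")          # multicast address to listen on
port = config.getint("camera", "port")   # UDP port
print "Listening for %s on port %d" % (ip, port)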
These programs and .ini files have been placed in /cvs/cds/caltech/apps/linux64/python/pcamera/
I've updated the cshrc.40m aliases to use the new configuration file options; now pcam1 calls "client.py -c pcam1.ini" in the above directory.
So to start a client, use pcam1 or pcam2 (for the 32223 camera in the PSL looking at MC trans, or the 44026 camera looking at an analog monitor in the control room, respectively). These can be run on Allegra, Rosalba or Ottavia at the moment.
To start a server, use pserv1 or pserv2. These *must* be run on Ottavia.
I've also added a -n or --no-gui option at Yoichi's request, which just starts up and plays with no GUI.
Lastly, I've made some changes to the base pcamerasrc.py file, which should make the display more robust. After a failed transmission of an image from the camera to Ottavia, it should re-attempt up to 10 times before giving up. I'm hoping this will make it more robust against packet loss. The change in network topology has also helped, allowing 640x480 images to be transmitted from both cameras for tens of minutes before a packet loss causes a stop.
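The retry logic is essentially the following (a sketch only; get_image and grab_frame are stand-in names, not the actual functions in pcamerasrc.py):

MAX_RETRIES = 10

def get_image(camera):
    """Try to pull one image from the camera, retrying on failure."""
    for attempt in range(MAX_RETRIES):
        try:
            return camera.grab_frame()   # stand-in for the real capture call
        except IOError:
            # the transfer from the camera to Ottavia failed; try again
            continue
    # give up after MAX_RETRIES failed attempts
    return None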
We found that the MC REFL image was no longer round and that the MCWFS DC quadrant spots were mostly
in one quadrant. So we re-centered the MCWFS beams in the following way:
1) We unlocked the MZ and adjusted the PZT voltage to keep the beam on the WFS from saturating.
2) Re-aligned the black hole beam dump to center its beam in its aperture.
3) Centered the beam on the MCWFS optics and the MCWFS QPD displays.
4) Relocked MC.
Below is an image of the IOO StripTool. You can see that MC REFL DC is now flatter. The MC pointing has also changed (see the MC TRANS HOR & VERT channels). The MC transmitted light is now higher and more stable.
We tried to center the QPD, and found that there were a few hundred mV of dark offset on each quadrant of the QPD. We adjusted them with a script.
>> mcc -v -m -R -nojvm seisBLRMS.m
Warning: Duplicate directory name: /cvs/cds/caltech/apps/linux/matlab/toolbox/local.
Compiler version: 4.6 (R2007a)
Warning: an error occurred while parsing class FilterDesignDialog.AbstractEditor:
Undefined function or variable 'DAStudio.Object'.
> In /cvs/cds/caltech/apps/linux/matlab/toolbox/shared/filterdesignlib/@FilterDesignDialog/@CoeffEditor/schema.p>schema at 9
Warning: an error occurred while parsing class FilterDesignDialog.CoeffEditor:
Invalid superclass handle.
terminate called after throwing an instance of 'ApplicationRedefinedException*'
Abort (core dumped)
"/cvs/cds/caltech/apps/linux/matlab/bin/mcc" -E "/tmp/fileRnU5Qj_31324": Aborted
??? Error executing mcc, return status = 134.
fb:controls>VMIC RFM 5565 (0) found, mapped at 0x2868c90
VMIC RFM 5579 (1) found, mapped at 0x2868c90
Could not open 5565 reflective memory in /dev/daqd-rfm1
16 kHz system
Spawn testpoint manager
Channel list length for node 0 is 4168
Test point manager (31001001 / 1): node 0
Matt logged in and rebuilt the TDS stuff for us on Mafalda in /cvs/cds/caltech/apps/linux/tds_090304.
He says that he can't build his stuff on 64-bit because there's not a sanctioned 64-bit build of GDS yet.
This should have all the latest fixes in it. I tried using both the old and new code from allegra and they both are fine:
./tdsdata 16384 2 C1:IOO-MC_F > /users/rana/test.txt
I loaded the data I got with the above command and there were no data dropouts. Possibly the dropout problem is only
associated with testpoints and so we have to wait for the TP fix.
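For what it's worth, a quick way to check a dump like test.txt for dropouts (this assumes the file is a single column of MC_F samples, which may not match the actual tdsdata output format, and the threshold is arbitrary):

import numpy as np

# Load the tdsdata text dump (assumed: one sample per line)
data = np.loadtxt('/users/rana/test.txt')

# Dropouts tend to show up as sudden jumps in the data; flag any
# sample-to-sample step much larger than the typical step size.
steps = np.abs(np.diff(data))
typical = np.median(steps)
suspect = np.where(steps > 20 * typical)[0]
print "%d suspicious jumps out of %d samples" % (len(suspect), len(data))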
I tested the new tdsdata and found it to be working well.
I excited C1:SUS-ITMY_SUSPIT_EXC with tdssine, and got data from C1:LSC-TRY_OUT (a testpoint) and C1:SUS-ITMY_OPLEV_PERROR (a recorded channel) with both the new and the old tdsdata.
With the old tdsdata (/cvs/cds/caltech/apps/linux/tds/bin/tdsdata), I found some jumps in the data, which is the same problem as before (Attachment 1).
With the new tdsdata (/cvs/cds/caltech/apps/linux/tds_090304/bin/tdsdata), there appear to be no jumps (Attachment 2; taken about 10 minutes after Attachment 1).
The problem with the old tdsdata seems to remain even for recorded channels.
You should use /cvs/cds/caltech/apps/linux/tds_090304/bin/tdsdata.
We confirmed that the new tds (/cvs/cds/caltech/apps/linux/tds_090304/) works well on linux 64, and moved it to /cvs/cds/caltech/apps/linux/tds/.
The old /cvs/cds/caltech/apps/linux/tds was moved to /cvs/cds/caltech/apps/linux/tds.bak.
seisBLRMS has been running on megatron via an open terminal ssh'd in from allegra, with matlab running. This is because I couldn't get the compiled matlab functionality to work.
Even so, this running script has been dying lately because of some bogus 'NDS' error. So for today I have set the NDS server for mDV on megatron to be fb40m:8088 instead of nodus.ligo.caltech.edu. If this fixes the problem, I will make it permanent by putting in a case statement that checks whether the machine running mDV is on the 40m martian network (see the sketch below).
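A sketch of the kind of check I mean, in python for illustration only (mDV itself is matlab, so the real change would be a case statement in the mDV config; the hostname list here is a guess):

import socket

# Hosts assumed to be on the 40m martian network (guessed list)
MARTIAN_HOSTS = ('megatron', 'allegra', 'rosalba', 'ottavia')

def pick_nds_server():
    """Use the local frame builder when inside the martian network,
    otherwise fall back to the usual outside-facing server."""
    host = socket.gethostname().split('.')[0]
    if host in MARTIAN_HOSTS:
        return 'fb40m:8088'
    return 'nodus.ligo.caltech.edu'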
The tdscntr.pl in the new tds was probably the one from LLO, which is actually the version I sent to Tobin. It had paths and channel names defined for LLO, so I copied back my original 40m version.
This morning, the MC alignment was gone and the MC wasn't locking.
We checked the old pitch, yaw, and position offset values of each MC mirror, and found that they had jumped.
We don't know the reason for this jump, but we restored each offset value and the MC locked again.
The power output of the spare M126N-1064-700 NPRO (sn 5519, rebuilt Dec 2006) measured 750 mW at 2.06 A DC with an Ophir meter.
Alberto's controller unit 125/126-OPN-PS, sn 516m, was disconnected from the length-measurement NPRO on the AP table.
The 5519 NPRO was clamped to the optical table without a heatsink and was on for 15 minutes.
Kakeru and Kiwamu
We placed a QPD on the PSL bench for the PSL angle monitor.
I checked the broken QPD which had been used for the PSL angle monitor, and concluded that one segment of the quadrant diode is broken.
The broken segment has an offset voltage of -0.7 V after the first I-V amplifier, which means the diode segment has a current offset even without any light on it.
Tomorrow I will check a new QPD as a replacement.
As we mentioned before, the old QPD which used to be there is broken.
We put the broken QPD into the "photodiodes" box under the soldering table.
We found that the MC reflection was distorted, and that the WFS beam had moved upward on the QPD.
We recentered the WFS beam and these problems were fixed.
I compiled seisBLRMS.
The tricks were the following:
(1) Don't add path in a deployed command.
It does not make sense to add paths in a compiled command because it may be moved anywhere. Moreover, it can cause some weird side effects. Therefore, I enclosed the addpath part of mdv_config.m in an "if ~isdeployed ... end" clause to avoid adding paths when deployed. Instead of adding paths in the code, we have to point mcc at the necessary files with -I options at compile time. This way, mcc will add all the necessary files into the CTF archive.
(2) Add mex files to the CTF archive by -a options.
For some reason, mcc does not add the necessary mex files into the CTF archive even though those files are called from the m-file being compiled. We have to add those files with -a options.
(3) NDS_GetData() is slow for nodus when compiled.
NDS_GetData(), which is called by get_data(), stalls for a few minutes when using nodus as the NDS server.
This problem does not happen when the code is not compiled; I don't know the reason. To work around it, I modified seisBLRMS.m so that when the environment variable $NDS is defined, it uses the NDS server named in that variable.
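The pattern is just an environment-variable override with a fallback; in python (for illustration only, since seisBLRMS.m itself is matlab) it would be:

import os

# Use the server named in $NDS if it is set, otherwise some default
# (nodus here is only an example default)
nds_server = os.environ.get('NDS', 'nodus.ligo.caltech.edu')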
I wrote a Makefile to compile seisBLRMS. You can read the file to see the details of the tricks.
I also wrote a script start_seisBLRMS, which can be found in /cvs/cds/caltech/apps/DMF/compiled_matlab/seisblrms/. To start seisBLRMS, you can just call this script.
At this moment, seisBLRMS is running on megatron. Let's see if it continues to run without crashing.
ITMX Pitch: 142 microrad/count
ITMX Yaw: 145 microrad/count
ITMY Pitch: 257 microrad/count
ITMY Yaw: 206 microrad/count
ETMX Pitch: 318 microrad/count
ETMX Yaw: 291 microrad/count
ETMY Pitch: 309 microrad/count
ETMY Yaw: 299 microrad/count
BS Pitch: 70.9 microrad/count
BS Yaw: 96.3 microrad/count
PRM Pitch: 78.5 microrad/count
PRM Yaw: 79.9 microrad/count
SRM Pitch: 191 microrad/count
SRM Yaw: 146 microrad/count
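These could be kept in a small lookup table and applied like this (a sketch only; the dictionary layout and function name are made up):

# Oplev calibrations in microradians per count, from the list above
OPLEV_CAL = {
    ('ITMX', 'PIT'): 142,  ('ITMX', 'YAW'): 145,
    ('ITMY', 'PIT'): 257,  ('ITMY', 'YAW'): 206,
    ('ETMX', 'PIT'): 318,  ('ETMX', 'YAW'): 291,
    ('ETMY', 'PIT'): 309,  ('ETMY', 'YAW'): 299,
    ('BS',   'PIT'): 70.9, ('BS',   'YAW'): 96.3,
    ('PRM',  'PIT'): 78.5, ('PRM',  'YAW'): 79.9,
    ('SRM',  'PIT'): 191,  ('SRM',  'YAW'): 146,
}

def counts_to_microrad(optic, dof, counts):
    """Convert an oplev error signal from counts to microradians."""
    return OPLEV_CAL[(optic, dof)] * counts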
We found that c1lsc, c1iscex, c1iscey, c1susvme, c1asc and c1sosvme were dead.
We turned off all the watchdogs and turned off all of the suspension locks.
Then I tried to reboot these machines remotely, but I couldn't log in to any of them.
So we power-cycled them with their physical key switches, logged in, and ran the startup scripts.
Then we turned all the watchdogs back on and restored the IFO.
Now they look like they are working fine.
The outside particle counts for 0.5 micron are 3 million this morning at 9am: low clouds and foggy conditions with a low inversion layer.
This puts the 40m lab at 30-50K counts.
I just turned on the HEPA filter at the PSL enclosure.
Please leave it on high.
SRM, ITMX, ETMX, ITMY and ETMY lost damping at 4:55am this morning from the 4.8 magnitude earthquake.
Their damping was restored.
The C1:SUS-ITMX_URSEN_OUTPUT switch was found in the off position; it was turned on.
The Mach-Zehnder and MC were locked.
The WFS QPD spot needs recentering.
ITMX, ITMY, BS, SRM, PRM op levs were all recentered. The ETMs looked okay enough to leave as-is.
[Rana, Jamie, Jenne]
SPOB DC hasn't been so good lately, so we installed a new PO DC PD on the PO table. We used a 30% reflecting beam splitter (BS1-1064-30-1025-someotherstuff). We didn't check with a power meter that it's a 30% BS, but it seems like that's about right. The beamsplitter is as close as we could get to the shutter immediately in front of the regular POB/SPOB PDs, since that's where the beam gets narrow. The new picked-off-pickoff beam goes to a Thorlabs 100A PD. We haven't yet checked for reflected beams off the PD, but there is a spare razor blade beam dump on the table which can be used for this purpose. The output of this PD goes to the LSC rack via a BNC cable. (This BNC cable was appropriated from its previous "use" connecting a photodiode from the AP table to a bit of air just next to the LSC rack.) Our new cable is now connected where the old SPOB DC cable used to be, at the input of a crazy Pomona Box tee.
For reference, the new levels of POB DC and SPOB DC, as measured at their BNC DC out connections, are ~4 mV each. Since the beamsplitter transmits 70%, we must have been getting about 5.7 mV on each PD before.
The new photodiode puts out about 40mV, but it has an ND1.0 filter on, so if more gain is needed, we can take it off to get more volts.
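Just to spell out the arithmetic (the factor of 10 for the ND1.0 filter is its nominal attenuation, not a measurement):

# Old POB/SPOB DC level, inferred from the new 4 mV reading behind the 70% transmissive BS
old_level = 4e-3 / 0.7           # ~5.7 mV

# New Thorlabs PD output with the ND1.0 filter on, and (nominally) with it removed
with_nd = 40e-3                  # 40 mV as measured
without_nd = with_nd * 10        # ~400 mV if the ND1.0 is taken off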
SUS-MC1_SENSOR_SIDE and SUS-MC2_SENSOR_UL are glitching
Yesterday's 4.8mag earthquake at Salton Sea is shown on Channel 1
C1:IOO-MC_BOOST1 0 (You can turn it on if you want, but turn it off for locking)
C1:IOO-MC_POL 1 (Minus)
C1:IOO-MC_LIMITER 1 (Disable)
C1:PSL-FSS_SW1 0 (Test1 ON)
C1:PSL-FSS_FASTGAIN 14 (Do not increase it, at least while locking. Otherwise the phase lag from the PZT loop gets significant and the MC loop will be conditionally stable).
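If these ever need to be restored by hand, something like the following would do it (a sketch only; it assumes ezcawrite is in the PATH, and the values are just the ones listed above):

import subprocess

# MC/FSS settings from the list above
SETTINGS = [
    ('C1:IOO-MC_BOOST1',    0),
    ('C1:IOO-MC_POL',       1),
    ('C1:IOO-MC_LIMITER',   1),
    ('C1:PSL-FSS_SW1',      0),
    ('C1:PSL-FSS_FASTGAIN', 14),
]

for chan, val in SETTINGS:
    # ezcawrite <channel> <value>
    subprocess.call(['ezcawrite', chan, str(val)])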
When all things fail (netgpibdata.py is giving me weird data. When I plot the data it has saved from the 4395A, it's some weird other universe's version of my transfer function. I don't really know what's up. I'm pretty sure I'm getting the 'correct' data, since each TF looks vaguely like it should, but with some crazy humps. I'll talk to Yoichi in the morning about it maybe.) (also, we're low on emergency floppy discs), you can always take a picture of the Agilent 4395's screen, as shown below.
* The mode cleaner and PMC are both relocked after my shenanigans, and I'll try again in the morning (I assume locking is going on tonight) to get real TFs with real data, as opposed to the photo method.
Note to self: post the data of the TFs in the elog along with the plots, for posterity.
These TFs are of the Mode Cleaner servo board, exciting IN1 (or the 3.7MHz notch pomona box which is connected to IN1), and measuring at the SERVO out of the board.
One with the box, one without the box, and one of just the box for good measure.
netgpibdata.py is giving me weird data. When I plot the data it has saved from the 4395A, it's some weird other universe's version of my transfer function. I don't really know what's up.
Yoichi, in all his infinite wisdom, reminded me that the netgpibdata script saves the data as the REAL and IMAGINARY parts, not the Mag and Phase. Brilliant. Using that nugget of information, here are the TFs that I measured earlier:
The last attachment is the .dat and .par files which contain the data and measurement parameters for the 3 TFs in the plots.
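For the record, converting the saved real/imaginary data to magnitude and phase is just the following (assuming the .dat files contain three columns, frequency, real and imaginary, which may not be exactly how netgpibdata.py lays them out; the filename is made up):

import numpy as np

# Columns assumed: frequency [Hz], real part, imaginary part
freq, re, im = np.loadtxt('MCservo_TF.dat', unpack=True)

tf = re + 1j * im
mag_db = 20 * np.log10(np.abs(tf))        # magnitude in dB
phase_deg = np.degrees(np.angle(tf))      # phase in degrees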