CYAN - cryo ON
BLACK - cryo OFF
BLUE - no crappy lens + mount
BLACK - raw ground motion measured by the Guralp
MAGENTA - motion after passive STACIS (20 Hz harmonic oscillator with a Q~2)
GREEN - difference between ground and top of STACIS
YELLOW - EUCLID noise in air
BLUE - STACIS top motion with loop on (60 Hz UGF, 1/f^2 below 30 Hz)
CYAN - same as BLUE, w/ 10x lower noise sensor
This is the two arms locked for an hour. There is no integrator in either loop, but from this it looks like ETMY may have a bigger length2angle problem than ETMX. I'll put some true integrators in the loops and do this again.
There appear to be at least two independent problems: the coil balancing for ETMY is bad, and something about ITMX is broken (maybe a coil driver).
The Y-arm becomes significantly misaligned during long locks, causing the arm power to drop. This misalignment tracks directly with the DC drive on ETMY. Power returns to the maximum after breaking and re-establishing lock.
ITMX alignment wanders around sporadically, as indicated by the oplevs and the X-arm transmitted power. Power returns to previous value (not max) after breaking and re-establishing lock.
Both loops have integrators.
At Rob's request I've added the following features to the camera code.
The camera server, which can be started on Ottavia by just typing pserv1 (for camera 1) or pserv2 (for camera 2), now has the ability to save individual jpeg snapshots, as well as to take a jpeg image every X seconds, as defined by the user.
The first text box is for the file name (i.e. ./default.jpg will save the file to the local directory and call it default.jpg). If the camera is running (i.e. you've pressed start), pressing "Take Snapshot to" will take an image immediately and save it. If the camera is not running, it will take an image as soon as you do start it.
If you press "Start image capture every X seconds", it will do exactly that. The file name is the same as for the first button, but with a time stamp appended to the end.
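For concreteness, here is a minimal sketch of the timestamped-snapshot logic described above; the function names and the exact time stamp format are my own illustration, not necessarily what the pcamera code does:

    import time

    def snapshot_filename(base="./default.jpg"):
        # Hypothetical helper: append a timestamp before the extension,
        # e.g. ./default-20081017-153000.jpg
        stem, dot, ext = base.rpartition(".")
        stamp = time.strftime("%Y%m%d-%H%M%S")
        return "%s-%s.%s" % (stem, stamp, ext) if dot else "%s-%s" % (base, stamp)

    def periodic_capture(grab_frame, save_jpeg, base, interval):
        # Save a timestamped jpeg every `interval` seconds, using
        # caller-supplied grab_frame() and save_jpeg(frame, filename).
        while True:
            save_jpeg(grab_frame(), snapshot_filename(base))
            time.sleep(interval)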
There is also a video recording client now. It is accessed by typing "pcam1-mov" or "pcam2-mov". The text box is for setting the file name. It is currently using the open source Theora encoder and the Ogg format (.ogm). Totem is capable of reading this format (and I believe vlc is as well). This can be run on any of the Linux machines.
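For reference, a Theora/Ogg recording pipeline in gst-python looks roughly like the sketch below. The videotestsrc element is only a stand-in, since the real client receives its video from the camera server:

    import pygst
    pygst.require("0.10")
    import gst
    import gobject

    # Placeholder source; swap in whatever element feeds frames from the server.
    pipeline = gst.parse_launch(
        "videotestsrc ! theoraenc ! oggmux ! filesink location=test.ogm")
    pipeline.set_state(gst.STATE_PLAYING)
    gobject.MainLoop().run()  # records until interrupted; playable in Totem/vlc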
The viewing client is still accessed by "pcam1" or "pcam2".
I'll try rolling out these updates to the sites on Monday.
The configuration files for camera 1 and camera 2 can be found by typing in camera (which is aliased to cd /cvs/cds/caltech/apps/linux64/python/pcamera) and are called pcam1.ini, pcam2.ini, etc.
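Assuming the .ini files use the standard ConfigParser format, something like this would dump their contents (I haven't assumed any particular section or key names, since I don't know the schema):

    import ConfigParser

    cfg = ConfigParser.ConfigParser()
    cfg.read("/cvs/cds/caltech/apps/linux64/python/pcamera/pcam1.ini")
    for section in cfg.sections():
        for key, value in cfg.items(section):
            print "%s / %s = %s" % (section, key, value)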
The Elog started crashing last night. It turns out I was the culprit: whenever I tried to upload a certain ~500 kb .png picture, it would die. It happened both when choosing "upload" of a picture and when choosing "submit" after successfully uploading a picture; both culprits were ~500 kb .png files.
I checked the four rear coils on ETMX by exciting the XXCOIL_EXC channels in DTT with amplitude 1000 @ 500 Hz and observing the oplev PERROR and YERROR channels. Each coil showed a clear signal in PERROR, about 2e-6 cts. Anyway, the coils passed this test.
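As an aside, one offline way to quantify such a line, sketched below, is to demodulate the oplev time series at the drive frequency (this is only an illustration of the idea, not what DTT does):

    import numpy as np

    def line_amplitude(x, fs, f0=500.0):
        # Estimate the amplitude of a sinusoid at f0 (Hz) in the sampled
        # signal x (sample rate fs); most accurate over whole cycles.
        t = np.arange(len(x)) / float(fs)
        i = np.mean(x * np.cos(2 * np.pi * f0 * t))
        q = np.mean(x * np.sin(2 * np.pi * f0 * t))
        return 2.0 * np.hypot(i, q)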
It seems that the MC3 problem is intermittent (one-day trend attached). I tried to take advantage of a "clean MC3" night, but the watch script would usually fail at the transition to DC CARM and DARM. It got past this twice and then failed later, during powering up. I need to check the handoff.
We were stymied tonight by a problem which began late this afternoon. The MC would periodically go angularly unstable, breaking lock and tripping the MC2 watchdogs. Suspicion fell naturally upon McWFS.
Eventually I traced the problem to the MC3 SIDE damping, which appeared not to work: it wouldn't actually damp, and the Vmon values did not correspond to the SDSEN outputs. Suspicion fell on the coil driver.
Looking at the LEMO monitors on the MC3 coil driver, with the damping engaged, showed clear bit resolution at the 100mV level, indicating a digital/DAC problem. Rebooting c1sosvme, which acquires all the OSEM sensor signals and actually does the side damping, resolved the issue.
Lies! The problem was not resolved. The plot shows a 2-day trend, with the onset of the problem yesterday clearly visible as well as the ineffectiveness of the soft-reboot done yesterday. So we'll try a hard-reboot.
At the request of people down at LLO I've been trying to work on the reliability and speed of the GigE camera code. In my testing, after several hours the code would tend to lock up on the camera end. It was also reported at LLO that after several minutes the camera display would slow down, but I haven't been able to replicate that problem.
I've recently added some additional error checking and have updated to a more recent SDK, which seems to help. Attached are two plots of the frames per second of the code. In this case, the frames per second are measured as the time between calls to the C camera code for a new frame for gstreamer to encode and transmit. The data points in the first graph are actually averaged over sets of 1000 frames. The camera was sending 640x480 pixel frames, with an exposure time of 0.01 seconds. Since the FPS was mostly between 45 and 55, each frame takes about 0.02 seconds in total; with 0.01 seconds of that being the exposure, the code is taking roughly 0.01 seconds to process, encode, and transmit a frame.
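In pseudocode, the FPS bookkeeping amounts to timing successive frame requests and averaging over 1000-frame sets (the names here are illustrative, not from the actual code):

    import time

    def average_fps(get_frame, nframes=1000):
        # Time `nframes` successive calls into the C camera code and
        # return the average frames per second over that set.
        t0 = time.time()
        for _ in xrange(nframes):
            get_frame()
        return nframes / (time.time() - t0)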
During the test, the memory usage by the server code was roughly 1% (40 megabytes out of 4 gigabytes), and it used about 50% of one CPU.
After looking at some oplev noise spectra in DTT, we discovered that the ETMY quad (serial number 115) was noisy. In particular, in the XX_OUT and XX_IN1 channels, quadrant 2 was noisy by a bit more than an order of magnitude over the ETMX reference, and quadrant 4 by a bit less than an order of magnitude. We went out and looked at the signals coming out of the oplev interface board; again, channels 2 and 4 were noisy compared to 1 and 3 by about these same amounts. I popped in the ETMX quad and everything looked fine. I put the ETMX quad back at ETMX, and popped in Steve's scatterometer quad (serial number 121, or possibly 151; it's not terribly legible), and it looks fine. We zeroed via the offsets in the control room, and I went out and centered both the ETMX and ETMY quads.
Attached is a plot. The reference curves are with the faulty quad (115). The others are with the 121.
I adjusted the ETMY quad gains up by a factor of 10 so that the SUM is similar to what it was before.
The 40m frame builder is currently being patched to be able to utilize the full 14 TB of the new RAID array (as opposed to being limited to 2 TB). This process is expected to take several hours, during which the frame builder will be unavailable.
ETMY damping restored.
The cryo interlock closed VC1 ~2 days ago. P1 is 6.3 mTorr. Cryo temp is stable at 12 K. Reset the photoswitch and opened VC1.
I unplugged Guralp EW1b and Guralp Vert1b and plugged in temp sensors temporarily. Guralp NS1b is still plugged in.
To include the plots that I've been working on in some form other than on my computer, here they are:
First is the big surface plot of all the amplitude spectra, taken at 10 min intervals over one month of S5 data. The times when the IFO is unlocked are represented by vertical black stripes (white was way too distracting). For the paper, I need to recreate this plot with traces only at selected times (once or twice a week) so that it's not so overwhelmingly large. But it's pretty cool to look at as-is.
Second is the same information, encoded in a pseudo-BLRMS. (Pseudo on the RMS part: I don't ever actually take the RMS of the spectra, although perhaps I should.) I've split the data from the surface plot into bands (the same set of bands that we use for the DMF stuff, since those seem like reasonable seismic bands), and integrated under the spectra for each band, at each time. I.e., one power spectrum gives me 5 data points for the BLRMS, one in each band. This lets us see how well the filter is doing at different times.
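For reference, the band integration amounts to something like this numpy sketch. Only the 0.1-0.3 Hz band edge is taken from this entry; the other band edges are placeholders standing in for the DMF bands:

    import numpy as np

    # Band edges: 0.1-0.3 Hz is mentioned below; the rest are placeholders.
    bands = [(0.1, 0.3), (0.3, 1.0), (1.0, 3.0), (3.0, 10.0), (10.0, 30.0)]

    def band_powers(f, asd, bands):
        # Integrate the PSD (= ASD**2) over each band: one spectrum
        # yields one number per band. sqrt() would give a true RMS.
        out = []
        for lo, hi in bands:
            m = (f >= lo) & (f < hi)
            out.append(np.trapz(asd[m] ** 2, f[m]))
        return out

    # usage on a synthetic flat spectrum
    f = np.linspace(0.05, 40.0, 8000)
    print band_powers(f, np.ones_like(f), bands)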
At the lower frequencies, after ~25 days, the floor starts to pick up. So perhaps that's about the end of how long we can use a given Wiener filter for. Maybe we have to recalculate them about every 3 weeks. That wouldn't be tragic.
I don't really know what the crazy big peak in the 0.1-0.3 Hz plot is (it's the big yellow blob in the surface plot). It is there for ~2 days, and it seems awfully symmetric about its local peak. I have not yet correlated my peaks to high-seismic times in the H1 elog. Clearly that's on the immediate todo list.
Also perhaps on the todo list is to indicate in some way (analogous to the black stripes in the surface plot) times when the data in the band-limited plot is just interpolated, connecting the dots between two valid data points.
A few other thoughts: The time chosen for the training of the filter for these plots is 6:40pm-7:40pm PDT on Sept 9, 2007 (which was a Sunday night). I need to try training the filter on a more seismically-active time, to see if that helps reduce the diurnal oscillations at high frequency. If that doesn't do it, then perhaps having a "weekday filter" and an "offpeak" filter would be a good idea. I'll have to investigate.
The align script was run after the third lock here. It would have been interesting to see the arm powers in a 4th lock.
Restarted backup since fb40m was rebooted.
Attached plot shows MC_IN1/MC_IN2. Needs work.
This is supposed to be a measurement of the relative gain of the MCL and AO paths in the CM servo. We expect a steeper slope (ideally 1/f). Somehow the magnitude is very shallow, and so the crossover is not stable. Possible causes? Saturations in the measurement, broken whitening filters, an extremely bad delay in the digital system? Needs work.
Locks last for about an hour. This was true last night as well (see "arm power curve" entries). The second lock shown here evolves differently for unknown reasons. The jumps in the arm powers of the first lock are due to turning on DC readout. Length-to-angle needs tuning.
Can't find hostname 'fb40m'
It only lasted a few hours.
I've plotted TRX, TRY, PD12I and PD11Q. Arm powers after locking increase for a few tens of minutes, peak out, and then decrease before lock is lost.
I should have mentioned that the AS port camera image seems to get progressively uglier over the course of these locks. Maybe we can use the JoeCam to make a movie of it.
Having determined that Rana (the computer) was having too many issues with testing the new RAID array due to the age of the system, we proceeded to test on fb40m.
We brought it down and up several times between 11 and noon. We eventually were able to daisy-chain the old RAID and the new RAID so that fb40m sees both. At this time, the RAID arrays are still daisy-chained, but the computer is set up to run on just the original RAID while the full 14 TB array is initialized (16 drives with 1 hot spare leaves 15 active; RAID level 5 uses one drive's worth for parity, so 14 TB out of the 16 TB are actually available). We expect this to take a few hours, at which point we will copy the data from the old RAID to the new RAID (which I also expect to take several hours). In the meantime, operations should not be affected. If they are, contact one of us.
This afternoon the alignment script crashed after returning syntax errors. We found that the tpman wasn't running on the framebuilder because it had probably failed to get restarted in one of the several reboots executed in the morning by Alex and Jo.
Restarting the tpman was then sufficient for the alignment scripts to get back to work.