I just restarted the elog. It had crashed for unknown reasons. The restart instructions are in the wiki.
We did a hard reboot of c1susvme1, c1susvme2, c1sosvme, and c1susaux. We are hoping this will fix some of the weird suspension issues we've been having (MC3 side coil, ITMX alignment).
The MOPA is taking the long weekend off.
Steve went out to wipe off the condensation inside the MOPA and found beads of water inside the NPRO box, perilously close to the PCB. He then measured the water temperature at the chiller head, which was 6C. We decided to "reboot" the MOPA/chiller combo, on the off chance that would get things synced up. Upon turning off the MOPA, the Neslab chiller display immediately started showing the correct temperature--about 6C. The 22C number must come from the MOPA controller. We thus tentatively narrowed the possible space of problems down to a broken MOPA controller and/or a clog in the cooling line going to the power amplifier. We decided to leave the MOPA off for the weekend and start plumbing on Tuesday. It is of course possible that the controller is the problem, but we think leaving the laser off over the weekend is the best course of action.
Looks like something went nuts in late April. We have yet to try a hard reboot.
I edited the configure scripts (those called from the C1IFO_CONFIGURE screen) for restore XARM and YARM. These used to misalign the ITM of the unused arm, which is totally unnecessary here, as we have both POX and POY. They also used to turn off the drive to the unused ETM. I've commented out these lines, so now running the two restores in series will leave a state where both arms can be locked. This also means that the ITMs will never be deliberately mis-aligned by the restore scripts.
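For flavor, here is a minimal sketch of what the edited restore logic amounts to (the real scripts are shell scripts called from the MEDM screen; the channel names and values below are hypothetical placeholders, and pyepics is used here only for illustration):

```python
# Sketch of the edited "restore XARM" logic; every channel name and
# value in this snippet is a made-up placeholder.
from epics import caput

saved = {"ITMX_PIT": 0.12, "ETMX_PIT": -0.34}  # hypothetical saved biases

def restore_xarm():
    # Restore the X-arm optics to their saved alignment biases.
    caput("C1:SUS-ITMX_PIT_COMM", saved["ITMX_PIT"])
    caput("C1:SUS-ETMX_PIT_COMM", saved["ETMX_PIT"])
    # These lines used to misalign the unused arm's ITM and turn off
    # the drive to the unused ETM; with both POX and POY available
    # they're unnecessary, so they stay commented out:
    # caput("C1:SUS-ITMY_PIT_COMM", 1.0)  # deliberate misalignment
    # caput("C1:SUS-ETMY_DRIVE_SW", 0)    # drive off
```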
Yoichi's final words on what to do next with the interferometer (as of 5 PM on May 21, 2009):
My personal sub-comments to these bullets:
I checked the four rear coils on ETMX by exciting the XXCOIL_EXC channel in DTT with amplitude 1000 @ 500 Hz and observing the oplev PERROR and YERROR channels. Each coil showed a clear signal in PERROR, about 2e-6 cts. Anyway, the coils passed this test.
I also made transfer functions of the 4 piston coils on ETMY and ETMX with OL_PIT. (I looked at all 4 even though the attached plot only shows three.) So it looks like the coils are OK.
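For reference, a line amplitude like the 2e-6 cts quoted above can be pulled out of a time series by demodulating at the excitation frequency. A minimal sketch of the idea (this is not DTT, just an illustration):

```python
import numpy as np

def line_amplitude(x, fs, f0=500.0):
    """Estimate the peak amplitude of a sinusoidal line at f0 (Hz)
    in the time series x (sample rate fs) by demodulation."""
    t = np.arange(len(x)) / fs
    i = np.mean(x * np.cos(2 * np.pi * f0 * t))
    q = np.mean(x * np.sin(2 * np.pi * f0 * t))
    return 2.0 * np.hypot(i, q)
```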
I've disabled the alarm for PEM_count_half, using the mask in the 40m.alhConfig file. We can't do anything about it, and it's just annoying.
Rana suggested using the OSEM sensing voltages as guidelines for looking at seismic activity.
As you can see, today's drilling and thumping activity was nothing compared to the magnitude 5 and 4 earthquakes.
Optical lever servos are turned back on.
What Steve means is that there is some drilling going on in the CES shop to accommodate the new water flume group. We want to make sure that the mirrors don't move enough to break the magnets. On the dataviewer we should look to make sure that the sensor channels stay between 0-2 V. -Rana
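A minimal sketch of the kind of check Rana describes (just an illustration; the channel names are hypothetical placeholders, and this polls over EPICS with pyepics rather than using dataviewer):

```python
import time
from epics import caget

# Hypothetical OSEM sensor channels -- substitute the real ones.
OSEM_CHANNELS = [
    "C1:SUS-ITMX_ULSEN_OUTPUT",
    "C1:SUS-ITMX_LLSEN_OUTPUT",
]

while True:
    for chan in OSEM_CHANNELS:
        v = caget(chan)
        if v is not None and not (0.0 < v < 2.0):
            print(f"WARNING: {chan} = {v:.2f} V is outside the 0-2 V band")
    time.sleep(10)
```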
Wilcoxon 731A seismic accelerometers and the old Guralp CMG-40T seismometer at the magnitude 5 and 4 earthquakes
All oplev servos turned off to protect our suspensions from vibration due to drilling and pounding in the CES high bay area.
This activity will be done from 10 am till 3 pm today.
Meanwhile, our IFO air conditioning is turned off for maintenance.
Their performance over 6 months is shown in the attached plot.
Recently the watch script was having difficulty grabbing a lock for more than a few seconds. Rob discovered that the violin notch filters which were activated in the script were causing the instability. We're not sure why yet. The script seems significantly more stable with that step commented out.
I found some neat signal analysis software for my mac (http://www.faberacoustical.com/products/), and took a spectrum of the ambient noise coming from the cryopump. The two main noise peaks from that bad boy were nowhere near 3.7 kHz.
Earthquake of magnitude 4.0 at Lennox, CA trips MC2 watchdogs: http://quake.usgs.gov/recenteqs/Quakes/ci10411545.html
See the attached 40m accelerometer traces for how they saw it.
Even more plots for the Wiener filtering!
We have a set of spectrograms, which show (in color) the amplitude spectrum at various times during a one-month stretch of time during S5. Each vertical data 'stripe' is 10 min long.
We also have a set of band-limited plots, which take the spectrum at each time and integrate under it for different frequency bands.
Each set of plots has the following 3 plots: the raw DARM spectrum, the ratio of residual/raw, and the residuals, normalized to the first one (on which the Wiener filter was trained).
The residuals are the DARM spectrum, after subtracting the Wiener-filtered seismometer witness data.
From the ratio plots, it looks like the Wiener filter is pretty much equally effective at the time on which the filter was trained as one month later. Static filters may be okey-dokey for a long period of time for the seismic stuff.
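For reference, here is a minimal sketch of the subtraction itself (this is not the actual analysis code; it assumes a single seismometer witness channel and a time-domain FIR Wiener filter obtained from the Wiener-Hopf equations):

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def wiener_fir(witness, target, ntaps):
    """FIR Wiener filter predicting `target` (e.g. DARM) from
    `witness` (e.g. a seismometer channel), via Wiener-Hopf."""
    n = len(witness)
    # Autocorrelation of the witness at lags 0 .. ntaps-1.
    r = np.correlate(witness, witness, mode="full")[n - 1:n - 1 + ntaps] / n
    # Cross-correlation between target and witness at the same lags.
    p = np.correlate(target, witness, mode="full")[n - 1:n - 1 + ntaps] / n
    return solve_toeplitz(r, p)

def residual(witness, target, taps):
    """The 'residual' in the plots: target minus the Wiener-filtered
    witness prediction."""
    prediction = np.convolve(witness, taps, mode="full")[:len(target)]
    return target - prediction
```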
CYAN - cryo ON
BLACK - cryo OFF
BLUE - no crappy lens + mount
BLACK - raw ground motion measured by the Guralp
MAGENTA - motion after passive STACIS (20 Hz harmonic oscillator with a Q~2)
GREEN - difference between ground and top of STACIS
YELLOW - EUCLID noise in air
BLUE - STACIS top motion with loop on (60 Hz UGF, 1/f^2 below 30 Hz)
CYAN - same as BLUE, w/ 10x lower noise sensor
This is the two arms locked, for an hour. No integrator in either loop, but from this it looks like ETMY may have a bigger length2angle problem than ETMX. I'll put some true integrators in the loops and do this again.
There appear to be at least two independent problems: the coil balancing for ETMY is bad, and something about ITMX is broken (maybe a coil driver).
The Y-arm becomes significantly misaligned during long locks, causing the arm power to drop. This misalignment tracks directly with the DC drive on ETMY. Power returns to the maximum after breaking and re-establishing lock.
ITMX alignment wanders around sporadically, as indicated by the oplevs and the X-arm transmitted power. Power returns to previous value (not max) after breaking and re-establishing lock.
Both loops have integrators.
At Rob's request I've added the following features to the camera code.
The camera server, which can be started on Ottavia by just typing pserv1 (for camera 1) or pserv2 (for camera 2), now has the ability to save individual jpeg snapshots, as well as take a jpeg image every X seconds, as defined by the user.
The first text box is for the file name (i.e., ./default.jpg will save the file to the local directory and call it default.jpg). If the camera is running (i.e., you've pressed start), pressing "Take Snapshot to" will take an image immediately and save it. If the camera is not running, it will take an image as soon as you do start it.
If you press "Start image capture every X seconds", it will do exactly that. The file name is the same as for the first button, but it appends a time stamp to the end of the file.
There is also a video recording client now. This is accessed by typing "pcam1-mov" or "pcam2-mov". The text box is for setting the file name. It currently uses the open-source Theora encoder and the Ogg format (.ogm). Totem is capable of reading this format (and I believe vlc can as well). This can be run on any of the Linux machines.
The viewing client is still accessed by "pcam1" or "pcam2".
I'll try rolling out these updates to the sites on Monday.
The configuration files for camera 1 and camera 2 can be found by typing in camera (which is aliased to cd /cvs/cds/caltech/apps/linux64/python/pcamera) and are called pcam1.ini, pcam2.ini, etc.
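Assuming they are standard INI-style files, a quick way to poke at them is with Python's configparser (nothing here is assumed beyond the path given above; the sections and options printed come from whatever is actually in the file):

```python
# Dump the contents of a camera config file. The path is the one
# given above; no particular section or option names are assumed.
import configparser

cfg = configparser.ConfigParser()
cfg.read("/cvs/cds/caltech/apps/linux64/python/pcamera/pcam1.ini")
for section in cfg.sections():
    for key, value in cfg[section].items():
        print(f"{section}.{key} = {value}")
```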
The elog started crashing last night. It turns out I was the culprit: the elog would die whenever I tried to upload a certain ~500 kB .png picture. It happened both when choosing "upload" of a picture and when choosing "submit" after successfully uploading one. Both culprits were ~500 kB .png files.
It seems that the MC3 problem is intermittent (one-day trend attached). I tried to take advantage of a "clean MC3" night, but the watch script would usually fail at the transition to DC CARM and DARM. It got past this twice and then failed later, during powering up. I need to check the handoff.
We were stymied tonight by a problem which began late this afternoon. The MC would periodically go angularly unstable, breaking lock and tripping the MC2 watchdogs. Suspicion fell naturally upon McWFS.
Eventually I traced the problem to the MC3 SIDE damping, which appeared to not work--it wouldn't actually damp, and the Vmon values did not correspond to the SDSEN outputs. Suspicion fell on the coil driver.
Looking at the LEMO monitors on the MC3 coil driver, with the damping engaged, showed clear bit resolution at the 100mV level, indicating a digital/DAC problem. Rebooting c1sosvme, which acquires all the OSEM sensor signals and actually does the side damping, resolved the issue.
Lies! The problem was not resolved. The plot shows a 2-day trend; the onset of the problem yesterday is clearly visible, as is the ineffectiveness of yesterday's soft reboot. So we'll try a hard reboot.
At the request of people down at LLO, I've been trying to work on the reliability and speed of the GigE camera code. In my testing, after several hours the code would tend to lock up on the camera end. It was also reported at LLO that after several minutes the camera display would slow down, but I haven't been able to replicate that problem.
I've recently added some additional error checking and have updated to a more recent SDK, which seems to help. Attached are two plots of the frames per second of the code. In this case, the frames per second are measured as the time between calls to the C camera code for a new frame for gstreamer to encode and transmit. The data points in the first graph are actually the averaged time for sets of 1000 frames. The camera was sending 640x480 pixel frames, with an exposure time of 0.01 seconds. Since the FPS was mostly between 45 and 55 (i.e., roughly 0.02 seconds per frame) and the exposure accounts for 0.01 seconds of that, it is taking the code roughly 0.01 seconds to process, encode, and transmit each frame.
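A minimal sketch of that measurement (not the actual server code; just an illustration of averaging the inter-frame time over sets of 1000 frames):

```python
import time

class FpsMeter:
    """Report the average frames per second over blocks of `block`
    frames, measured as the time between successive frame callbacks."""
    def __init__(self, block=1000):
        self.block = block
        self.count = 0
        self.t0 = time.monotonic()

    def tick(self):
        # Call this once per frame handed to the encoder.
        self.count += 1
        if self.count == self.block:
            now = time.monotonic()
            print(f"{self.block / (now - self.t0):.1f} fps "
                  f"(averaged over {self.block} frames)")
            self.count = 0
            self.t0 = now
```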
During the test, the memory usage of the server code was roughly 1% (or 40 megabytes out of 4 gigabytes), along with 50% of one CPU.
After looking at some oplev noise spectra in DTT, we discovered that the ETMY quad (serial number 115) was noisy. In particular, in the XX_OUT and XX_IN1 channels, quadrants 2 and 4 were noisy (quadrant 2 by a bit more than an order of magnitude over the ETMX reference, quadrant 4 by a bit less than an order of magnitude). We went out and looked at the signals coming out of the oplev interface board; again, channels 2 and 4 were noisy compared to 1 and 3 by about these same amounts. I popped in the ETMX quad and everything looked fine. I put the ETMX quad back at ETMX, and popped in Steve's scatterometer quad (serial number 121 or possibly 151, it's not terribly legible), and it looks fine. We zeroed via the offsets in the control room, and I went out and centered both the ETMX and ETMY quads.
Attached is a plot. The reference curves are with the faulty quad (115). The others are with the 121.
I adjusted the ETMY quad gains up by a factor of 10 so that the SUM is similar to what it was before.
The 40m frame builder is currently being patched to be able to utilize the full 14 TB of the new RAID array (as opposed to being limited to 2 TB). This process is expected to take several hours, during which the frame builder will be unavailable.