Quote: |
To see if Caryn's data dropouts were happening, I looked at a trend of all of our temperature channels. Looks OK now.
Although you can't see it because I zoomed in, there's a ~24 hour relaxation happening before Caryn's sensors equilibrate.
I guess that's the insulating action of the cooler? We need a picture of the cooler in the elog for posterity. |
Dropouts can't be seen with a minute trend, only with a second trend. No big deal, but they are still occurring. See plot below.
The 24hr relaxation period is due to the cooler and some metal blocks that were cooled in the freezer and then put in the cooler, to see if the relationship between the temp sensors changed with temperature. The relationship is not linear, which probably means each temperature sensor has some non-linearity in its response to temperature. So, when calibrating them against Bob's temp sensor, more than 2 data points need to be collected.
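A minimal sketch of that multi-point calibration idea (the numbers below are hypothetical placeholders, not the real logged data): fit each sensor against the reference with a low-order polynomial rather than a 2-point line, and the residuals expose the non-linearity.

import numpy as np

# Hypothetical placeholder data, NOT the real calibration points:
ref_temp   = np.array([5.0, 10.0, 15.0, 20.0, 25.0, 30.0])           # Bob's sensor [C]
raw_counts = np.array([512.0, 1080.0, 1660.0, 2255.0, 2870.0, 3500.0])  # DUT output

# A 2-point (linear) calibration hides the non-linearity; a cubic shows it.
lin   = np.polyfit(raw_counts, ref_temp, 1)
cubic = np.polyfit(raw_counts, ref_temp, 3)

print("linear residuals [C]:", ref_temp - np.polyval(lin, raw_counts))
print("cubic residuals  [C]:", ref_temp - np.polyval(cubic, raw_counts))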
Picture of cooler for posterity is attached |
Attachment 1: datadropout.png
Attachment 2: coolerpic1.jpg
Attachment 3: coolerpic2.jpg
1598 | Mon May 18 02:18:17 2009 | rana | Summary | SEI | Using STACIS w/ a good position sensor
We turned off STACIS a few years ago because we noticed that it was causing noise below a few Hz and making the overall velocity between the ends higher than with them off. I'm pretty sure they were causing noise because they use little geophones, which are noisy. Below ~0.2 Hz the horizontal geophones are also probably limited by tilt-horizontal coupling.
Another concept (based on discussion with Brian Lantz and Matt Evans) is to instead put a good position sensor between the ground and the blue support beam. Since the STACIS rubber acts like a Q~2 passive resonance at 20 Hz, the whole seismic system (including the blue beams, in-vac tubes, and internal stack) acts like the proof mass of a seismometer.
So, in principle, if we use a very good position sensor and feed back to the STACIS piezo actuators, we can cancel the ground motion before it enters the stacks. The initial LIGO OSEMs have a noise of 10^-10 m/rHz above 10 Hz, rising like 1/f below 10 Hz. The AdvLIGO BOSEMs have ~2x lower noise. Even better, however, are the UK's EUCLID interferometric OSEMs (developed by Stuart Aston and Clive Speake).
In the attached plot, I show what we can get if we use these EUCLIDs to make a ~60 Hz BW feedback loop w/ STACIS:
BLACK - raw ground motion measured by the Guralp
MAGENTA - motion after passive STACIS (20 Hz harmonic oscillator with a Q~2)
GREEN - difference between ground and top of STACIS
YELLOW - EUCLID noise in air
BLUE - STACIS top motion with loop on (60 Hz UGF, 1/f^2 below 30 Hz)
CYAN - same as BLUE, w/ 10x lower noise sensor
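A toy version of the loop algebra behind these traces (a sketch, not the code that made the plot): the passive stack is the 20 Hz, Q~2 oscillator, and the servo suppresses what gets through by 1/|1+G| below the UGF. The open-loop gain shape here is a magnitude-only approximation.

import numpy as np

f = np.logspace(-1, 3, 500)                      # Hz
s = 2j * np.pi * f

w0, Q = 2 * np.pi * 20.0, 2.0                    # 20 Hz, Q~2 rubber resonance
P = w0**2 / (s**2 + w0 * s / Q + w0**2)          # ground-to-top transmission

# Toy open-loop gain: ~unity near 60 Hz, 1/f above, steepening to 1/f^2 below 30 Hz
G = (60.0 / f) * np.sqrt(1.0 + (30.0 / f) ** 2)

passive = np.abs(P)                              # cf. the MAGENTA trace
closed  = np.abs(P) / (1.0 + G)                  # cf. the BLUE trace (sensor noise ignored)

for ftest in (1.0, 10.0):
    i = np.argmin(np.abs(f - ftest))
    print("f = %4.1f Hz: passive %.2f, closed loop %.1e" % (ftest, passive[i], closed[i]))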
One of the SURF projects this summer is to put together a couple of different sensors like EUCLID to understand the noise. |
Attachment 1: stacis40.png
1597 | Mon May 18 01:54:35 2009 | rana | Update | PEM | Unplugged Guralp channels
To see if Caryn's data dropouts were happening, I looked at a trend of all of our temperature channels. Looks OK now.
Although you can't see it because I zoomed in, there's a ~24 hour relaxation happening before Caryn's sensors equilibrate.
I guess that's the insulating action of the cooler? We need a picture of the cooler in the elog for posterity. |
Attachment 1: Untitled.png
1596 | Sun May 17 23:22:19 2009 | rana | Update | Environment | seisBLRMS for the past 3 weeks
Looks like Chris Wipf's fix of using fclose worked for the NDS client.
The attached plot shows the minute trend RMS - we should put the calibration for these into the .m file
so that the EPICS values are in something useful like microns or microns/sec.
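A sketch of what that calibration step looks like (Python here just for illustration; the real numbers belong in the .m file, and the constant below is a placeholder, not the actual Guralp calibration):

GURALP_CAL_UMPS_PER_COUNT = 4.0e-7   # placeholder, NOT the real calibration

def blrms_counts_to_um_per_s(counts):
    """Scale a band-limited RMS from raw ADC counts to microns/sec."""
    return counts * GURALP_CAL_UMPS_PER_COUNT

print(blrms_counts_to_um_per_s(1.0e4), "um/s")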
I also now see why Nodus seems really slow with the elog sometimes. When we load a page with an attached PDF, it runs 'gs' to try to generate the PNG preview. Because it's on Solaris, it often fails because it can't find some font. We should probably disable the preview or fix the font issue. |
Attachment 1: Untitled.png
1595 | Sun May 17 21:45:40 2009 | rob | Update | ASC | ITMX oplev centered
1594 | Sun May 17 20:50:38 2009 | rob | Omnistructure | Environment | mag 5.0 earthquake in inglewood
2009 May 18 03:39:36 UTC
Earthquake Details
Magnitude: 5.0
Date-Time: Monday, May 18, 2009 at 03:39:36 UTC (Sunday, May 17, 2009 at 08:39:36 PM at epicenter)
Location: 33.940°N, 118.338°W
Depth: 13.5 km (8.4 miles)
Region: GREATER LOS ANGELES AREA, CALIFORNIA
Distances:
- 2 km (1 miles) E (91°) from Lennox, CA
- 2 km (1 miles) SSE (159°) from Inglewood, CA
- 3 km (2 miles) NNE (22°) from Hawthorne, CA
- 7 km (4 miles) ENE (72°) from El Segundo, CA
- 15 km (10 miles) SSW (213°) from Los Angeles Civic Center, CA
Location Uncertainty: horizontal +/- 0.4 km (0.2 miles); depth +/- 0.9 km (0.6 miles)
Parameters: Nph=139, Dmin=7 km, Rmss=0.42 sec, Gp= 40°, M-type=local magnitude (ML), Version=C
Source:
Event ID: ci10410337 |
1593 | Sun May 17 14:35:52 2009 | Yoichi | Update | VAC | VC1 opened
I found that VC1 was closed and the pressure was 4.5e-3 torr.
I tweaked the optical sensor (cryopump temperature), and opened VC1. |
1592 | Sat May 16 16:20:33 2009 | rob | Update | LSC | arms, coils, locks, #2
Quote: |
This is the two arms locked, for an hour. No integrator in either loop, but from this it looks like ETMY may have a bigger length2angle problem than ETMX. I'll put some true integrators in the loops and do this again.
|
There appear to be at least two independent problems: the coil balancing for ETMY is bad, and something about ITMX is broken (maybe a coil driver).
The Y-arm becomes significantly misaligned during long locks, causing the arm power to drop. This misalignment tracks directly with the DC drive on ETMY. Power returns to the maximum after breaking and re-establishing lock.
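As a toy illustration of the coil-balancing problem (a sketch with a made-up imbalance, not a measurement): a few percent gain error on one coil turns a pure length drive into pitch and yaw, which would track the DC drive exactly as described above.

import numpy as np

gains = np.array([1.00, 1.00, 1.00, 0.95])   # UL, UR, LL, LR; hypothetical 5% low LR
drive = np.ones(4)                           # pure length (POS) drive request

force  = gains * drive
length = force.sum() / 4.0
pitch  = (force[0] + force[1] - force[2] - force[3]) / 4.0
yaw    = (force[0] - force[1] + force[2] - force[3]) / 4.0
print("length %.3f, pitch %+.4f, yaw %+.4f" % (length, pitch, yaw))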
ITMX alignment wanders around sporadically, as indicated by the oplevs and the X-arm transmitted power. Power returns to previous value (not max) after breaking and re-establishing lock.
Both loops have integrators. |
Attachment 1: twoproblems.png
Attachment 2: coil_imbalanceETMY.png
Attachment 3: ITMXalignment.png
1591 | Fri May 15 17:30:00 2009 | rob | Update | LSC | arms, coils, locks
This is the two arms locked, for an hour. No integrator in either loop, but from this it looks like ETMY may have a bigger length2angle problem than ETMX. I'll put some true integrators in the loops and do this again.
|
Attachment 1: armslock_no_int.png
1590 | Fri May 15 16:47:44 2009 | josephb | Update | Cameras | Improved camera code
At Rob's request I've added the following features to the camera code.
The camera server, which can be started on Ottavia by just typing pserv1 (for camera 1) or pserv2 (for camera 2), now has the ability to save individual jpeg snapshots, as well as to take a jpeg image every X seconds, as defined by the user.
The first text box is for the file name (i.e. ./default.jpg will save the file to the local directory and call it default.jpg). If the camera is running (i.e. you've pressed start), pressing "Take Snapshot to" will take an image immediately and save it. If the camera is not running, it will take an image as soon as you do start it.
If you press "Start image capture every X seconds", it will do exactly that. The file name is the same as for the first button, but it appends a time stamp to the end of the file.
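Schematically, the periodic snapshot behaves like this (a sketch, not the pcamera source; grab_jpeg() is a hypothetical stand-in for the real camera call):

import time

def grab_jpeg(path):
    # placeholder: the real server pulls a frame from the GigE camera here
    open(path, "wb").close()

def snapshot_every(base_name, period_s, n_shots):
    """Save base_name with a timestamp appended, once every period_s seconds."""
    for _ in range(n_shots):
        stamp = time.strftime("%Y%m%d-%H%M%S")
        grab_jpeg("%s.%s.jpg" % (base_name, stamp))
        time.sleep(period_s)

snapshot_every("./default", 5.0, 3)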
There is also a video recording client now. It is accessed by typing "pcam1-mov" or "pcam2-mov". The text box is for setting the file name. It currently uses the open source Theora encoder and the Ogg container format (.ogm). Totem is capable of reading this format (and I believe vlc is as well). This can be run on any of the Linux machines.
The viewing client is still accessed by "pcam1" or "pcam2".
I'll try rolling out these updates to the sites on Monday.
The configuration files for camera 1 and camera 2 can be found by typing in camera (which is aliased to cd /cvs/cds/caltech/apps/linux64/python/pcamera) and are called pcam1.ini, pcam2.ini, etc.
|
1589 | Fri May 15 14:05:14 2009 | Dmass | HowTo | Computers | How To: Crash the Elog
The Elog started crashing last night. It turns out I was the culprit: whenever I tried to upload a certain ~500kb .png picture, it would die. It happened both when choosing "upload" of a picture and when choosing "submit" after successfully uploading a picture. Both culprits were ~500kb .png files. |
1588 | Fri May 15 00:02:34 2009 | pete | Update | SUS | ETMX coils look OK
I checked the four rear coils on ETMX by exciting the XXCOIL_EXC channel in DTT with amplitude 1000 @ 500 Hz and observing the oplev PERROR and YERROR channels. Each coil showed a clear signal in PERROR, about 2e-6 cts. Anyway, the coils passed this test.
|
1587 | Thu May 14 16:07:20 2009 | pete | Summary | SUS | Channel Hopping: That ancient enemy (MC problems)
Quote: |
Quote: | The MC side problem could also be the side tramp unit problem. Set the tramp to 0 and see if that helps. |
This started around April 23, around the time that TP1 failed and we switched to the cryopump, and also when there was a mag 4 earthquake in LA. My money's on the EQ. But I don't know how. |
I wonder if this is still a problem. It has been quiet for a day now. I've attached a day-long trend. Let's see what happens. |
Attachment 1: mc3_5days.jpg
1586 | Thu May 14 15:28:28 2009 | steve | Summary | SUS | April 24 earthquake effect on MC2
Quote: |
Quote: | The MC side problem could also be the side tramp unit problem. Set the tramp to 0 and see if that helps. |
This started around April 23, around the time that TP1 failed and we switched to the cryopump, and also when there was a mag 4 earthquake in LA. My money's on the EQ. But I don't know how. |
Only MC2 moved in this earthquake. Was the MC alignment touched up since then?
Have you guys swapped the satellite amp of MC3 yet? |
Attachment 1: eq042409.jpg
1585 | Thu May 14 02:36:05 2009 | pete | Update | Locking | unstable IFO
It seems that the MC3 problem is intermittent (one-day trend attached). I tried to take advantage of a "clean MC3" night, but the watch script would usually fail at the transition to DC CARM and DARM. It got past this twice and then failed later, during powering up. I need to check the handoff.
|
Attachment 1: mc3.jpg
1584 | Thu May 14 00:15:39 2009 | rob | Summary | SUS | Channel Hopping: That ancient enemy (MC problems)
Quote: | The MC side problem could also be the side tramp unit problem. Set the tramp to 0 and see if that helps. |
This started around April 23, around the time that TP1 failed and we switched to the cryopump, and also when there was a mag 4 earthquake in LA. My money's on the EQ. But I don't know how. |
Attachment 1: sidemon.png
1583 | Wed May 13 21:15:04 2009 | rana | Summary | SUS | Channel Hopping: That ancient enemy (MC problems)
The MC side problem could also be the side tramp unit problem. Set the tramp to 0 and see if that helps. |
1582 | Wed May 13 14:43:29 2009 | rob | Summary | lore | Channel Hopping: That ancient enemy (MC problems)
Quote: |
We were stymied tonight by a problem which began late this afternoon. The MC would periodically go angularly unstable, breaking lock and tripping the MC2 watchdogs. Suspicion fell naturally upon McWFS.
Eventually I traced the problem to the MC3 SIDE damping, which appeared to not work--it wouldn't actually damp, and the Vmon values did not correspond to the SDSEN outputs. Suspicion fell on the coil driver.
Looking at the LEMO monitors on the MC3 coil driver, with the damping engaged, showed clear bit resolution at the 100mV level, indicating a digital/DAC problem. Rebooting c1sosvme, which acquires all the OSEM sensor signals and actually does the side damping, resolved the issue.
|
Lies! The problem was not resolved. The plot shows a 2-day trend, with the onset of the problem yesterday clearly visible as well as the ineffectiveness of the soft-reboot done yesterday. So we'll try a hard-reboot. |
Attachment 1: MC3sidemon.png
1581 | Wed May 13 12:41:14 2009 | josephb | Update | Cameras | Timing and stability tests of GigE Camera code
At the request of people down at LLO, I've been trying to work on the reliability and speed of the GigE camera code. In my testing, after several hours the code would tend to lock up on the camera end. It was also reported at LLO that after several minutes the camera display would slow down, but I haven't been able to replicate that problem.
I've recently added some additional error checking and have updated to a more recent SDK, which seems to help. Attached are two plots of the frames per second of the code. In this case, the frames per second are measured from the time between calls to the C camera code for a new frame for gstreamer to encode and transmit. The data points in the first graph are actually averages over sets of 1000 frames. The camera was sending 640x480 pixel frames with an exposure time of 0.01 seconds. Since the FPS was mostly between 45 and 55 (i.e. ~0.02 s per frame, 0.01 s of which is the exposure), the code takes roughly 0.01 s to process, encode, and transmit each frame.
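The timing statistic is, roughly, the following (a sketch, not the actual measurement code):

def fps_per_block(frame_times, block=1000):
    """Average frames-per-second over consecutive blocks of frames.
    frame_times: one wall-clock stamp per call into the camera code."""
    fps = []
    for i in range(0, len(frame_times) - block, block):
        fps.append(block / (frame_times[i + block] - frame_times[i]))
    return fps

# stamps spaced ~0.02 s apart correspond to ~50 fps per block
stamps = [0.02 * i for i in range(5001)]
print(fps_per_block(stamps))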
During the test, the memory usage of the server code was roughly 1% (or 40 megabytes out of 4 gigabytes), plus about 50% of one of the CPUs. |
Attachment 1: newCodeFPS.png
Attachment 2: newCodeFPS_hist.png
1580 | Wed May 13 03:05:13 2009 | pete | Update | oplevs | etmy oplev quad was bad
Quote: |
Pete, Rob
After looking at some oplev noise spectra in DTT, we discovered that the ETMY quad (serial number 115) was noisy. In particular, in the XX_OUT and XX_IN1 channels, quadrant 2 was noisy by a bit more than an order of magnitude over the ETMX ref, and quadrant 4 by a bit less than an order of mag. We went out and looked at the signals coming out of the oplev interface board; again, channels 2 and 4 were noisy compared to 1 and 3, by about these same amounts. I popped in the ETMX quad and everything looked fine. I put the ETMX quad back at ETMX and popped in Steve's scatterometer quad (serial number 121, or possibly 151, it's not terribly legible), and it looks fine. We zeroed via the offsets in the control room, and I went out and centered both the ETMX and ETMY quads.
Attached is a plot. The reference curves are with the faulty quad (115). The others are with the 121.
|
I adjusted the ETMY quad gains up by a factor of 10 so that the SUM is similar to what it was before. |
1579 | Wed May 13 02:53:12 2009 | rob | Summary | lore | Channel Hopping: That ancient enemy (MC problems)
We were stymied tonight by a problem which began late this afternoon. The MC would periodically go angularly unstable, breaking lock and tripping the MC2 watchdogs. Suspicion fell naturally upon McWFS.
Eventually I traced the problem to the MC3 SIDE damping, which appeared to not work--it wouldn't actually damp, and the Vmon values did not correspond to the SDSEN outputs. Suspicion fell on the coil driver.
Looking at the LEMO monitors on the MC3 coil driver, with the damping engaged, showed clear bit resolution at the 100mV level, indicating a digital/DAC problem. Rebooting c1sosvme, which acquires all the OSEM sensor signals and actually does the side damping, resolved the issue. |
1578 | Tue May 12 17:26:56 2009 | pete | Update | oplevs | etmy oplev quad was bad
Pete, Rob
After looking at some oplev noise spectra in DTT, we discovered that the ETMY quad (serial number 115) was noisy. In particular, in the XX_OUT and XX_IN1 channels, quadrant 2 was noisy by a bit more than an order of magnitude over the ETMX ref, and quadrant 4 by a bit less than an order of mag. We went out and looked at the signals coming out of the oplev interface board; again, channels 2 and 4 were noisy compared to 1 and 3, by about these same amounts. I popped in the ETMX quad and everything looked fine. I put the ETMX quad back at ETMX and popped in Steve's scatterometer quad (serial number 121, or possibly 151, it's not terribly legible), and it looks fine. We zeroed via the offsets in the control room, and I went out and centered both the ETMX and ETMY quads.
Attached is a plot. The reference curves are with the faulty quad (115). The others are with the 121.
|
Attachment 1: bad_oplev_quad.pdf
1577 | Tue May 12 15:22:09 2009 | Yoichi | Update | LSC | Arm Finesse
Quote: |
It looks as if the measured DARM response is skewed by an extra low pass filter at high frequencies. I don't know why this is so. |
One large uncertainty in the above estimate is the cavity pole of the X-arm, because I simply assumed the ITMX reflectivity to be the designed value.
I think we can directly measure the X-arm finesse from Alberto's absolute length measurements (i.e. from the width of the resonant peaks in his scans).
By looking at Alberto and Koji's posts (elog:1244, elog:838), it looks like the FWHM of the peaks is around 3kHz. With the FSR ~ 3.8MHz, this gives a finesse of about 1300, which is reasonable.
Alberto, can you check your data and measure the FWHM more precisely?
Note that we want to measure the FWHM of the peak in the *power* of the beat signal. The beat amplitude is proportional to the electric field *amplitude* of the transmitted auxiliary laser, but what we need to get a finesse is the FWHM of the transmitted laser *power*. Thus we need to square the beat signal to get its power.
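The arithmetic, with the amplitude/power caveat in numbers (for a Lorentzian line, the amplitude profile is sqrt(3) times wider at half maximum than the power profile):

import math

FSR  = 3.8e6    # Hz
fwhm = 3.0e3    # Hz, FWHM of the *power* of the beat signal
print("finesse ~ FSR/FWHM =", FSR / fwhm)        # ~1300, as above

# If the amplitude FWHM were used by mistake, the finesse would come out
# low by sqrt(3) for a Lorentzian line:
print("amplitude-FWHM / power-FWHM =", math.sqrt(3))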
|
1576 | Tue May 12 01:22:51 2009 | Yoichi | Update | LSC | Arm loss
Using the armLoss script (/cvs/cds/caltech/scripts/LSC/armLoss), I measured the round trip loss (RTL) of the arms.
The results are:
XARM: RTL = 171 (+/-2) ppm
YARM: RTL = 181 (+/-2) ppm
To get the results above, I assumed that the transmissivity of the ITMs is the same as the designed value (0.005).
This may not be true though. |
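For reference, a sketch of the kind of model this implies (my assumption about the method, not the armLoss script itself): compare the on-resonance reflection to the unlocked single-bounce reflection off the ITM for a two-mirror cavity with T_ITM = 0.005, and solve for the loss.

import numpy as np
from scipy.optimize import brentq

T1, T2 = 0.005, 15e-6          # ITM design transmission; the ETM value is a guess

def refl_ratio(L_rt):
    """P_refl(locked) / P_refl(unlocked) for round-trip loss L_rt."""
    r1, r2 = np.sqrt(1 - T1), np.sqrt(1 - T2)
    a = 1 - L_rt / 2                              # round-trip amplitude loss
    r_cav = (r1 - r2 * a) / (1 - r1 * r2 * a)     # on-resonance reflectivity
    return (r_cav / r1) ** 2

ratio = refl_ratio(171e-6)     # forward-check the quoted XARM number
L_fit = brentq(lambda L: refl_ratio(L) - ratio, 1e-6, 1e-3)
print("ratio = %.4f -> RTL = %.0f ppm" % (ratio, L_fit * 1e6))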
1575 | Tue May 12 01:11:55 2009 | Yoichi | Update | LSC | DARM response (DC Readout)
I measured the DARM response with DC readout.
This time, I first measured the open loop transfer function of the X single arm lock.
The open loop gain (Gx) can be represented as a product of the optical gain (Cx), the filter (Fx), and the suspension response (S), i.e. Gx = Cx*Fx*S.
We know Fx because this is the transfer function of the digital filters. Cx can be modeled as a simple cavity pole, but we need to know the finesse to calculate it.
In order to estimate the current finesse of the XARM cavity, I ran the armLoss script, which measures the ratio of the reflected light power between the locked and the unlocked state. Using this ratio and the designed transmissivity of the ITMX (0.005), I estimated the round trip loss in the XARM, which was 170 ppm. From this number, the cavity pole was estimated to be 1608Hz.
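As a cross-check of that number, a sketch under the stated assumptions (~38.55 m arm length, T_ITMX = 0.005, 170 ppm round-trip loss, T_ETM neglected):

import math

c, L_arm = 299792458.0, 38.55          # m/s; approximate 40m arm length
T1, loss = 0.005, 170e-6

FSR     = c / (2 * L_arm)              # ~3.9 MHz
finesse = 2 * math.pi / (T1 + loss)    # ~1215, finesse ~ 2*pi / (round-trip loss)
f_pole  = FSR / (2 * finesse)          # half-width cavity pole, ~1600 Hz
print("FSR = %.3e Hz, finesse = %.0f, pole = %.0f Hz" % (FSR, finesse, f_pole))

This lands within about 1% of the 1608 Hz figure above.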
Using the measured Gx, the knowledge of Fx and the estimated Cx, I estimated the ETMX suspension response S, which is shown in the first attachment.
Note that this is not a pure suspension response. It includes the effects of the digital system time delay, the anti-imaging and anti-aliasing filters and so on.
Now the DARM open loop gain (Gd) can also be represented as a product of the optical gain (Cd), the filter (Fd) and the suspension response (S).
Since the actuations are applied again to the ETMs and we think ETMX and ETMY are quite similar, we should be able to use the same suspension response as XARM for DARM. Therefore, using the knowledge of the digital filter shape and the measured open loop gain, we can compute the DARM optical gain Cd.
The second attachment shows the estimated DARM response along with an Optickle prediction.
The DARM loop gain was measured with darm_offset_dc = 350. Since we haven't calibrated the DARM signal, I don't know how many meters of offset this number corresponds to. The Optickle prediction was calculated using a 20pm DARM offset. I chose this to make the prediction look similar to the measured one, though they look quite different around the RSE peak. The input power was set to 1.7W in the Optickle model (again, this is just my guess).
It looks as if the measured DARM response is skewed by an extra low pass filter at high frequencies. I don't know why this is so. |
Attachment 1: SUS_Resp.png
Attachment 2: DARM_Resp.png
1574 | Mon May 11 12:25:03 2009 | josephb,Alex | Update | Computers | fb40m down for patching
The 40m frame builder is currently being patched to be able to utilize the full 14 TB of the new RAID array (as opposed to being limited to 2 TB). This process is expected to take several hours, during which the frame builder will be unavailable. |
1573 | Mon May 11 11:49:20 2009 | steve | Update | PSL | MOPA cooling water lines are backwards
Quote: | This is 8 days of 10-minute trend.
DTEC is just the feedback control signal required to keep the NPRO's pump diode at a constant temperature. It's not the amplifier or the actual NPRO crystal's temperature readout.
There is no TEC for the amplifier. It looks to me like, by opening up the flow to the NPRO some more, we have reduced the flow to the amplifier (which is the one that needs it) and created these temperature fluctuations.
What we need to do is choke down the needle valve and ream out the NPRO block. |
I have measured the "input" line temp at the MOPA box to be 10 C and the "out" line to be 8 C.
This must be corrected.
However, look at the 80-day plot of operation, where the head temp variation is nothing new. |
Attachment 1: htempvar80d.jpg
1572 | Sun May 10 13:41:17 2009 | steve | Update | VAC | ETMY damping restored, VC1 opened
ETMY damping restored.
The cryo interlock closed VC1 ~2 days ago. P1 is 6.3 mTorr. Cryo temp is stable at 12K; I reset the photoswitch and opened VC1. |
1571 | Sun May 10 13:34:32 2009 | caryn | Update | PEM | Unplugged Guralp channels
I unplugged Guralp EW1b and Guralp Vert1b and plugged in temp sensors temporarily. Guralp NS1b is still plugged in. |
1570 | Sat May 9 15:19:10 2009 | rana | Update | PSL | Laser head temperature oscillation
This is 8 days of 10-minute trend.
DTEC is just the feedback control signal required to keep the NPRO's pump diode at a constant temperature. It's not the amplifier or the actual NPRO crystal's temperature readout.
There is no TEC for the amplifier. It looks to me like, by opening up the flow to the NPRO some more, we have reduced the flow to the amplifier (which is the one that needs it) and created these temperature fluctuations.
What we need to do is choke down the needle valve and ream out the NPRO block. |
Attachment 1: Picture_2.png
1569 | Sat May 9 02:20:11 2009 | Jenne | Update | PSL | Laser head temperature oscillation
Quote: | After the laser cooling pipe was unclogged, the laser head temperature has been oscillating with a 24h period.
The laser power shows the same oscillation.
Moreover, there is a trend that the temperature is slowly creeping up.
We have to do something to stop this.
Or Rob has to finish his measurements before the laser dies. |
How's DTEC doing? I thought DTEC was kind of in charge of dealing with these kinds of things, but after our laser-cooling-"fixing", DTEC has been railed at 0, aka no range.
After glancing at DTEC with Dataviewer along with HTEMP and AMPMON (my internet is too slow to want to post the pic while ssh-ed into nodus), it looks like DTEC is oscillating along with HTEMP in terms of frequency, but perhaps DTEC is running out of range because it is so close to zero? Maybe? |
1568 | Sat May 9 00:15:21 2009 | Yoichi | Update | PSL | Laser head temperature oscillation
After the laser cooling pipe was unclogged, the laser head temperature has been oscillating with a 24h period.
The laser power shows the same oscillation.
Moreover, there is a trend that the temperature is slowly creeping up.
We have to do something to stop this.
Or Rob has to finish his measurements before the laser dies. |
Attachment 1: laser.png
1567 | Fri May 8 16:29:53 2009 | rana | Update | Computer Scripts / Programs | elog and NDS
Looks like the new NDS client worked. Attached is 12 hours of BLRMS. |
Attachment 1: Untitled.png
1566 | Fri May 8 16:03:31 2009 | Jenne | Update | PEM | Update on Jenne's Filtering Stuff
To include the plots that I've been working on in some form other than on my computer, here they are:
First is the big surface plot of all the amplitude spectra, taken in 10min intervals on one month of S5 data. The times when the IFO is unlocked are represented by vertical black stripes (white was way too distracting). For the paper, I need to recreate this plot, with traces only at selected times (once or twice a week) so that it's not so overwhelmingly large. But it's pretty cool to look at as-is.
Second is the same information, encoded in a pseudo-BLRMS. (Pseudo on the RMS part - I don't ever actually take the RMS of the spectra, although perhaps I should.) I've split the data from the surface plot into bands (the same set of bands that we use for the DMF stuff, since those seem like reasonable seismic bands) and integrated under the spectra for each band, at each time; i.e. one power spectrum gives me 5 data points for the BLRMS, one in each band. This lets us see how good the filter is doing at different times.
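The band-integration step, schematically (a sketch; the band edges here are my guess at the DMF-style seismic bands, not necessarily the ones used):

import numpy as np

BANDS = [(0.1, 0.3), (0.3, 1.0), (1.0, 3.0), (3.0, 10.0), (10.0, 30.0)]  # Hz, assumed

def band_integrals(freqs, spectrum):
    """Integrate one spectrum over each band: 5 points per spectrum."""
    out = []
    for lo, hi in BANDS:
        m = (freqs >= lo) & (freqs < hi)
        out.append(np.trapz(spectrum[m], freqs[m]))
    return out

freqs = np.linspace(0.05, 50.0, 4096)
asd   = 1e-6 / (1.0 + freqs**2)        # toy stand-in spectrum
print(band_integrals(freqs, asd))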
At the lower frequencies, after ~25 days, the floor starts to pick up. So perhaps that's about the end of how long we can use a given Wiener filter for. Maybe we have to recalculate them about every 3 weeks. That wouldn't be tragic.
I don't really know what the crazy big peak in the 0.1-0.3Hz plot is (it's the big yellow blob in the surface plot). It is there for ~2 days, and it seems awfully symmetric about its local peak. I have not yet correlated my peaks with high-seismic times in the H1 elog. Clearly that's on the immediate todo list.
Also perhaps on the todo list is to indicate in some way (analogous to the black stripes in the surface plot) the times when the data in the band-limited plot is just extrapolated, connecting the dots between 2 valid data points.
A few other thoughts: The time chosen for the training of the filter for these plots is 6:40pm-7:40pm PDT on Sept 9, 2007 (which was a Sunday night). I need to try training the filter on a more seismically-active time, to see if that helps reduce the diurnal oscillations at high frequency. If that doesn't do it, then perhaps having a "weekday filter" and an "offpeak" filter would be a good idea. I'll have to investigate. |
Attachment 1: H1S5OneMonthWienerCompBLACK.png
Attachment 2: H1S5BandLimitedTimePlot.png
1565 | Fri May 8 15:40:44 2009 | pete | Update | Locking | progressively weaker locks
The align script was run after the third lock here. It would have been interesting to see the arm powers in a 4th lock.
Attachment 1: powers_3lock.pdf
1564 | Fri May 8 10:05:40 2009 | Alan | Omnistructure | Computers | Restarted backup since fb40m was rebooted
Restarted backup since fb40m was rebooted. |