ID | Date | Author | Type | Category | Subject |
5161 | Wed Aug 10 00:11:39 2011 | jamie | Update | SUS | check of input diagonalization of ETMX after OSEM tweaking |
Suresh and I tweaked the OSEM angles in ETMX yesterday. Last night the ETMs were left free swinging, and today I ran Rana's peakFit scripts on ETMX to check the input diagonalization:

The inversion works, but the matrix elements are not great:
pit yaw pos side butt
UL 0.3466 0.4685 1.6092 0.3107 1.0428
UR 0.2630 -1.5315 1.7894 -0.0706 -1.1859
LR -1.7370 -1.5681 0.3908 -0.0964 0.9392
LL -1.6534 0.4319 0.2106 0.2849 -0.8320
SD 1.0834 -2.6676 -0.9920 1.0000 -0.1101
The magnets are all pretty well centered in the OSEMs, and we worked at rotating the OSEMs to minimize the bounce mode.
Rana and Koji are working on ETMY now. Maybe they'll come up with a better procedure. |
5162 | Wed Aug 10 00:21:10 2011 | jamie | Update | CDS | updates to peakFit scripts |
I updated the peakFit routines to make them a bit more user friendly:
- modified so that any subset of optics can be processed at a time, instead of just all
- broke out tweakable fit parameters into a separate parameters.m file
- added a README that describes use
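For context, each peak fit in these routines is conceptually something like the following rough scipy-based sketch (not the actual MATLAB code; variable names are made up):

import numpy as np
from scipy.optimize import curve_fit

def lorentzian(f, f0, gamma, a, c):
    # Lorentzian line shape on a flat background
    return a * gamma**2 / ((f - f0)**2 + gamma**2) + c

def fit_peak(freqs, psd, f_guess, half_width=0.05):
    # fit a narrow band around a guessed resonance and return the fitted peak frequency
    band = (freqs > f_guess - half_width) & (freqs < f_guess + half_width)
    p0 = [f_guess, 0.005, psd[band].max(), psd[band].min()]
    popt, _ = curve_fit(lorentzian, freqs[band], psd[band], p0=p0)
    return popt[0]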
These changes were committed to the 40m svn. |
5176 | Wed Aug 10 15:39:33 2011 | jamie | Update | SUS | current SUS input diagonalization overview |
Below is an overview of all the core IFO suspension input diagonalizations.
Summary: ITMY, PRM, BS are really bad (in that order) and are our top priorities.
UPDATE:
I had originally put the condition number of the calculated input matrix (M) in the last column. However, after some discussion we decided that this is not in fact what we want to look at. The condition number of a matrix is unity only if the matrix is (a scaled version of) the identity or an orthogonal matrix, and even our ideal input matrix is not of that form, so the "best" condition number for the input matrix is unclear.
What we do know is that the matrix B relating the calculated input matrix M to the ideal input matrix M0,
M = M0 * B,
should be diagonal (the identity, in fact), so its condition number should ideally be 1. Since it can be computed directly from the pre-inverted input matrix, we actually calculate
B^-1 = M^-1 * M0
and use the fact that cond(B) == cond(B^-1).
cond(B) is our new measure of the "badness" of the OSEMS.
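For reference, the calculation is simple enough to sketch in a few lines of numpy (assuming the measured matrix M is ordered as in the matrices in these entries, rows UL/UR/LR/LL/SD and columns pit/yaw/pos/side/butt, and taking the usual ideal sensing pattern for M0):

import numpy as np

# ideal input matrix M0 (rows UL, UR, LR, LL, SD; cols pit, yaw, pos, side, butt)
M0 = np.array([[ 1.,  1.,  1., 0.,  1.],
               [ 1., -1.,  1., 0., -1.],
               [-1., -1.,  1., 0.,  1.],
               [-1.,  1.,  1., 0., -1.],
               [ 0.,  0.,  0., 1.,  0.]])

def badness(M):
    # M = M0 * B, so B^-1 = M^-1 * M0; cond(B) == cond(B^-1)
    B_inv = np.linalg.inv(M) @ M0
    return np.linalg.cond(B_inv)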
new summary: ITMY, PRM, BS are really bad (in that order) and are our top priorities.
|
5207 | Fri Aug 12 15:16:56 2011 | jamie | Update | SUS | today's SUS overview |
Here's an update on the suspensions, after yesterday's in-vacuum work and OSEM tweaking:
- PRM and ETMY are completely messed up. The spectra are so bad that I'm not going to bother posting anything. ETMY has extreme sensor voltages that indicate it may be stuck to one of the OSEMs. PRM voltages look nominal, so I have no idea what's going on there.
- ITMY is much improved, but it could still use some work.
- SRM is a little worse than it was yesterday, but we've done a lot of work on the ITMY table, such as moving the ITMY suspension and rebalancing the table.
- For some reason BS looks slightly better than it did yesterday.
|
5240 | Mon Aug 15 17:23:55 2011 | jamie | Update | SUS | freeswing script updated |
I have updated the freeswing scripts, combining all of them into a single script that takes arguments to specify the optic to kick:
pianosa:SUS 0> ./freeswing
usage: freeswing SET
usage: freeswing OPTIC [OPTIC ...]
Kick and free-swing suspended optics.
Specify optics (i.e. 'MC1', 'ITMY') or a set:
'all' = (MC1 MC2 MC3 ETMX ETMY ITMX ITMY PRM SRM BS)
'ifo' = (ETMX ETMY ITMX ITMY PRM SRM BS)
'mc' = (MC1 MC2 MC3)
pianosa:SUS 0>
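Roughly, the kick sequence the script performs for each optic looks like this (a minimal pyepics sketch only; the channel names here are illustrative guesses, not the ones the real script uses):

import time
from epics import caput

def kick(optic, offset=30000, hold=1.0):
    # turn off the damping, whack a coil, then release and leave the optic swinging
    caput('C1:SUS-%s_SUSPOS_GAIN' % optic, 0)         # hypothetical damping gain channel
    caput('C1:SUS-%s_ULCOIL_OFFSET' % optic, offset)  # hypothetical coil offset channel
    time.sleep(hold)
    caput('C1:SUS-%s_ULCOIL_OFFSET' % optic, 0)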
I have removed all of the old scripts, and committed the new one to the SVN. |
5241 | Mon Aug 15 17:36:10 2011 | jamie | Update | SUS | Strangeness with ETMY (was: Monday SUS update) |
For some reason ETMY has changed a lot. Not only does it now have the worst "badness" (B matrix condition number) at ~10, but the frequencies of all the modes have shifted, some considerably. I did accidentally bump the optic when Jenne and I were adjusting the OSEMs last week, but I didn't think it was that much. The only thing I can think of that would cause the modes to move so much is that the optic has somehow been reseated in its suspension. I really don't know how that would have happened, though.
Jenne and I went in to investigate ETMY, to see if we could see anything obviously wrong. Everything looks to be ok. The magnets are all well centered in the OSEMs, and the PDMon levels look ok.
We rechecked the balance of the table, and tweaked it a bit to make it more level. We then tweaked the OSEMs again to put them back in the center of their range. We also checked the response by using the lockin method to check the response to POS and SIDE drive in each of the OSEMs (we want large POS response and minimal SIDE response). Everything looked ok.
We're going to take another freeswing measurement and see how things look now. If there are any suggestions what should be done (if anything), about the shifted modes, please let us know. |
5242 | Mon Aug 15 17:38:07 2011 | jamie | Update | General | Foil aperture placed in front of ETMY |
We have placed a foil aperture in front of ETMY, to aid in aligning the Y-arm, and then the PRC. It obviously needs to be removed before we close up. |
5247 | Tue Aug 16 10:59:06 2011 | jamie | Update | SUS | SUS update |
Data taken from: 997530498+120
Things are actually looking ok at the moment. "Badness" (cond(B)) is below 6 for all optics.
- We don't have results from PRM since its spectra looked bad, as if it's being clamped by the earthquake stops.
- The SRM matrix definitely looks the nicest, followed by ITMX. All the other matrices have some abnormally high or low elements.
- cond(B) for ETMY is better than that for SRM, even though the ETMY matrix doesn't look as nice. Does this mean that cond(B) is not necessarily the best figure of merit, or is there something else that our naive expectation for the matrix doesn't catch?
We still need to go through and adjust all the OSEM ranges once the IFO is aligned and we know what our DC biases are. We'll repeat this one last time after that.
For each optic: the input matrix M, followed by cond(B).

BS:
pit yaw pos side butt
UL 1.456 0.770 0.296 0.303 1.035
UR 0.285 -1.230 1.773 -0.077 -0.945
LR -1.715 -0.340 1.704 -0.115 0.951
LL -0.544 1.660 0.227 0.265 -1.070
SD 0.612 0.275 -3.459 1.000 0.046 |
cond(B) = 5.61948

SRM:
pit yaw pos side butt
UL 0.891 1.125 0.950 -0.077 0.984
UR 0.934 -0.875 0.987 -0.011 -0.933
LR -1.066 -1.020 1.050 0.010 1.084
LL -1.109 0.980 1.013 -0.056 -0.999
SD 0.257 -0.021 0.304 1.000 0.006 |
cond(B) = 4.0291

ITMX:
pit yaw pos side butt
UL 0.436 1.035 1.042 -0.068 0.728
UR 0.855 -0.965 1.137 -0.211 -0.969
LR -1.145 -1.228 0.958 -0.263 1.224
LL -1.564 0.772 0.863 -0.120 -1.079
SD -0.522 -0.763 2.495 1.000 -0.156 |
cond(B) = 4.55925

ITMY:
pit yaw pos side butt
UL 1.375 0.095 1.245 -0.058 0.989
UR -0.411 1.778 0.975 -0.022 -1.065
LR -2.000 -0.222 0.755 0.006 1.001
LL -0.214 -1.905 1.025 -0.030 -0.945
SD 0.011 -0.686 0.804 1.000 0.240 |
cond(B) = 4.14139

ETMX:
pit yaw pos side butt
UL 0.714 0.191 1.640 0.404 1.052
UR 0.197 -1.809 1.758 -0.120 -1.133
LR -1.803 -1.889 0.360 -0.109 0.913
LL -1.286 0.111 0.242 0.415 -0.902
SD 1.823 -3.738 -0.714 1.000 -0.130 |
cond(B) = 5.19482

ETMY:
pit yaw pos side butt
UL 1.104 0.384 1.417 0.351 1.013
UR -0.287 -1.501 1.310 -0.074 -1.032
LR -2.000 0.115 0.583 -0.045 0.777
LL -0.609 2.000 0.690 0.380 -1.179
SD 0.043 -0.742 -0.941 1.000 0.338 |
cond(B) = 3.57032
|
5260 | Thu Aug 18 00:58:40 2011 | jamie | Update | SUS | optics kicked and left free swinging |
ALL optics (including MC) were kicked and left free swinging at:
997689421
The "opticshutdown" script was also run, which should turn the watchdogs back on in 5 hours (at 6am).
|
5263 | Thu Aug 18 12:22:37 2011 | jamie | Update | SUS | suspension update |
Most of the suspensions look ok, with "badness" levels between 4 and 5. I'm just posting the ones that look slightly less ideal below.
- PRM, SRM, and BS in particular show a lot of little peaks that look like some sort of intermodulations.
- ITMY has a lot of elements with imaginary components
- The ETMY POS and SIDE modes are *very* close together, which severely degrades the diagonalization
|
Attachment 2: SRM.png
|
|
5265 | Thu Aug 18 22:24:08 2011 | jamie | Omnistructure | VIDEO | Updated 'videoswitch' script |
I have updated the 'videoswitch' program that controls the video MUX. It now includes the ability to query the video mux for the channel mapping:
controls@pianosa:~ 0$ /opt/rtcds/caltech/c1/scripts/general/videoswitch -h
Usage:
videoswitch [options] [OUT] List current output/input mapping [for OUT]
videoswitch [options] OUT IN Set output OUT to be input IN
Options:
-h, --help show this help message and exit
-i, --inputs List input channels and exit
-o, --outputs List output channels and exit
-l, --list List all input and output channels and exit
-H HOST, --host=HOST IP address/Host name
controls@pianosa:~ 0$
|
5286 | Tue Aug 23 10:38:27 2011 | jamie | Update | SUS | SUS update |
SUS update before closing up:
- MC1, MC2, ITMX look good
- MC3, PRM look ok
- SRM pos and side peaks are too close together to distinguish, so the matrix is not diagonalizable. I think with more data it should be ok, though.
- all ITMY elements have imaginary components
- ITMY, ETMX, ETMY appear to have modes that have swapped position:
- ITMY: pit/yaw
- ETMX: yaw/side
- ETMY: pos/side
- MC3, ETMX, ETMY have some very large/small elements
Not particularly good. We're going to work on ETMY at least, since that one is clearly bad.
For each optic: the input matrix M, followed by cond(B).

MC1:
pit yaw pos side butt
UL 0.733 1.198 1.168 0.050 1.057
UR 1.165 -0.802 0.896 0.015 -0.925
LR -0.835 -1.278 0.832 -0.002 0.954
LL -1.267 0.722 1.104 0.032 -1.064
SD 0.115 0.153 -0.436 1.000 -0.044 |
cond(B) = 4.02107

MC2:
pit yaw pos side butt
UL 1.051 0.765 1.027 0.128 0.952
UR 0.641 -1.235 1.089 -0.089 -0.942
LR -1.359 -0.677 0.973 -0.097 1.011
LL -0.949 1.323 0.911 0.121 -1.096
SD -0.091 -0.147 -0.792 1.000 -0.066 |
cond(B) = 4.02254

MC3:
pit yaw pos side butt
UL 1.589 0.353 1.148 0.170 1.099
UR 0.039 -1.647 1.145 0.207 -1.010
LR -1.961 -0.000 0.852 0.113 0.896
LL -0.411 2.000 0.855 0.076 -0.994
SD -0.418 0.396 -1.624 1.000 0.019 |
cond(B) = 3.60876

PRM:
pit yaw pos side butt
UL 0.532 1.424 1.808 -0.334 0.839
UR 1.355 -0.576 0.546 -0.052 -0.890
LR -0.645 -0.979 0.192 0.015 0.881
LL -1.468 1.021 1.454 -0.267 -1.391
SD 0.679 -0.546 -0.674 1.000 0.590 |
cond(B) = 5.54281

BS:
pit yaw pos side butt
UL 1.596 0.666 0.416 0.277 1.037
UR 0.201 -1.334 1.679 -0.047 -0.934
LR -1.799 -0.203 1.584 -0.077 0.952
LL -0.404 1.797 0.321 0.247 -1.077
SD 0.711 0.301 -3.397 1.000 0.034 |
cond(B) = 5.46234

SRM: NA

ITMX:
pit yaw pos side butt
UL 0.458 1.025 1.060 -0.065 0.753
UR 0.849 -0.975 1.152 -0.199 -0.978
LR -1.151 -1.245 0.940 -0.243 1.217
LL -1.542 0.755 0.848 -0.109 -1.052
SD -0.501 -0.719 2.278 1.000 -0.153 |
cond(B) = 4.4212

ITMY:
pit yaw pos side butt
UL 0.164 1.320 1.218 -0.086 0.963
UR 1.748 -0.497 0.889 -0.034 -1.043
LR -0.252 -2.000 0.782 -0.005 1.066
LL -1.836 -0.183 1.111 -0.058 -0.929
SD -0.961 -0.194 1.385 1.000 0.239 |
cond(B) = 4.33051

ETMX:
pit yaw pos side butt
UL 0.623 1.552 1.596 -0.033 1.027
UR 0.194 -0.448 1.841 0.491 -1.170
LR -1.806 -0.478 0.404 0.520 0.943
LL -1.377 1.522 0.159 -0.005 -0.860
SD 1.425 3.638 -0.762 1.000 -0.132 |
cond(B) = 4.89418

ETMY:
pit yaw pos side butt
UL 0.856 0.007 1.799 0.241 1.005
UR -0.082 -1.914 -0.201 -0.352 -1.128
LR -2.000 0.079 -0.104 -0.162 0.748
LL -1.063 2.000 1.896 0.432 -1.119
SD -0.491 -1.546 2.926 1.000 0.169 |
cond(B) = 9.11516
|
5291 | Tue Aug 23 17:45:22 2011 | jamie | Update | SUS | ITMX, ITMY, ETMX clamped and moved to edge of tables |
In preparation for tomorrow's drag wiping and door closing, I have clamped ITMX, ITMY, and ETMX with their earthquake stops and moved the suspension cages to the door-edge of their respective tables. They will remain clamped through drag wiping.
ETMY was left free-swinging, so we will clamp and move it directly prior to drag wiping tomorrow morning. |
5293 | Tue Aug 23 18:25:56 2011 | jamie | Update | SUS | SRM diagonalization OK |
By looking at a longer data stretch for the SRM (6 hours instead of just one), we were able to get enough extra resolution to make fits to the very close POS and SIDE peaks. This allowed us to do the matrix inversion. The result is that SRM looks pretty good, and agrees with what was measured previously:
SRM:
pit yaw pos side butt
UL 0.869 0.975 1.140 -0.253 1.085
UR 1.028 -1.025 1.083 -0.128 -1.063
LR -0.972 -0.993 0.860 -0.080 0.834
LL -1.131 1.007 0.917 -0.205 -1.018
SD 0.106 0.064 3.188 1.000 -0.011 |
cond(B) = 4.24889
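(The extra stretch helps because the frequency resolution of the spectrum goes roughly as 1/T: a single one-hour record gives about 1/3600 s ≈ 0.3 mHz of resolution, while six hours gives about 1/21600 s ≈ 46 µHz, which is enough to separate the nearly degenerate POS and SIDE peaks, windowing and averaging losses aside.)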
|
5294 | Wed Aug 24 09:11:19 2011 | jamie | Update | SUS | ETMY SUS update: looks good. WE'RE READY TO CLOSE |
We ran one more free swing test on ETMY last night, after the last bit of tweaking on the SIDE OSEM. It now looks pretty good:
ETMY:
pit yaw pos side butt
UL -0.323 1.274 1.459 -0.019 0.932
UR 1.013 -0.726 1.410 -0.050 -1.099
LR -0.664 -1.353 0.541 -0.036 0.750
LL -2.000 0.647 0.590 -0.004 -1.219
SD 0.021 -0.035 1.174 1.000 0.137 |
cond(B) = 4.23371
So I declare: WE'RE NOW READY TO CLOSE UP. |
5297 | Wed Aug 24 12:08:56 2011 | jamie | Update | SUS | ITMX, ETMX, ETMY free swinging |
ITMX: 998245556
ETMX, ETMY: 998248032 |
5317 | Mon Aug 29 12:05:32 2011 | jamie | Update | CDS | Re: fb down |
fb was requiring a manual fsck on its disks because it was sensing filesystem errors. The errors had to do with the filesystem timestamps being in the future. It turned out that fb's system date was set to something in 2005. I'm not sure what caused the date to be so off (motherboard battery problem?), but I did determine, after I got the system booting, that the NTP client on fb was misconfigured and was therefore incapable of setting the system date. It seems that it was configured to query a non-existent ntp server. Why the hell it would have been set like this I have no idea.
In any event, I did a manual check on /dev/sdb1, which is the root disk, and postponed a check on /dev/sda1 (the RAID mounted at /frames) until I had the system booting. /dev/sda1 is being checked now, since there are filesystems errors that need to be corrected, but it will probably take a couple of hours to complete. Once the filesystems are clean I'll reboot fb and try to get everything up and running again. |
5319 | Mon Aug 29 18:16:10 2011 | jamie | Update | CDS | Re: fb down |
fb is now up and running, although the /frames raid is still undergoing an fsck, which is likely to take another day. Consequently there is no daqd and no frames are being written to disk. fb is running and providing the diskless root to the rest of the front-end systems, so the rest of the IFO should be operational.
I burt restored the following (which I believe is everything that was rebooted), from Saturday night:
/opt/rtcds/caltech/c1/burt/autoburt/snapshots/2011/Aug/27/23:07/c1lscepics.snap
/opt/rtcds/caltech/c1/burt/autoburt/snapshots/2011/Aug/27/23:07/c1susepics.snap
/opt/rtcds/caltech/c1/burt/autoburt/snapshots/2011/Aug/27/23:07/c1iooepics.snap
/opt/rtcds/caltech/c1/burt/autoburt/snapshots/2011/Aug/27/23:07/c1assepics.snap
/opt/rtcds/caltech/c1/burt/autoburt/snapshots/2011/Aug/27/23:07/c1mcsepics.snap
/opt/rtcds/caltech/c1/burt/autoburt/snapshots/2011/Aug/27/23:07/c1gcvepics.snap
/opt/rtcds/caltech/c1/burt/autoburt/snapshots/2011/Aug/27/23:07/c1gfdepics.snap
/opt/rtcds/caltech/c1/burt/autoburt/snapshots/2011/Aug/27/23:07/c1rfmepics.snap
/opt/rtcds/caltech/c1/burt/autoburt/snapshots/2011/Aug/27/23:07/c1pemepics.snap
|
5320 | Mon Aug 29 18:24:11 2011 | jamie | Update | SUS | ITMY stuck to OSEMs? |
ITMY, which is supposed to be fully free-swinging at the moment, is displaying the tell-tale signs of being stuck to one of its OSEMs. This is indicated by the PDMon values, one of which is zero while the others are at max:
UL: 0.000
UR: 1.529
LR: 1.675
LL: 1.949
SD: 0.137
Do we have a procedure for remotely getting it unstuck? If not, we need to open up ITMYC and unstick it before we pump.
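A trivial automated check along these lines could look like the following (a sketch only; pyepics assumed, and the PDMon channel names are guesses):

from epics import caget

def looks_stuck(optic):
    # one face sensor pinned near zero while the others read large is the tell-tale pattern
    face = [caget('C1:SUS-%s_%sPDMon' % (optic, s)) for s in ('UL', 'UR', 'LR', 'LL')]
    return min(face) < 0.05 and max(face) > 1.0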
|
5323 | Tue Aug 30 11:28:56 2011 | jamie | Update | CDS | framebuilder back up |
The fsck on the framebuilder (fb) raid array (/dev/sda1) completed overnight without issue. I rebooted the framebuilder and it came up without problem.
I'm now working on getting all of the front-end computers and models restarted and talking to the framebuilder. |
5324 | Tue Aug 30 11:42:29 2011 | jamie | Update | CDS | testpoint.par file found to be completely empty |
The testpoint.par file, located at /opt/rtcds/caltech/c1/target/gds/param/testpoint.par, which tells GDS processes where to find the various awgtpman processes, was completely empty. The file was there but was just 0 bytes. Apparently the awgtpman processes themselves also consult this file when starting, which means that none of the awgtpman processes would start.
This file is manipulated in the "install-daq-%" target in the RCG Makefile, ultimately being written with output from the src/epics/util/updateTestpointPar.pl script, which creates a stanza for each front-end model. Rebuilding and installing all of the models properly regenerated this file.
I have no idea what would cause this file to get truncated, but apparently this is not the first time: elog #3999. I'm submitting a bug report with CDS.
|
5325 | Tue Aug 30 14:33:52 2011 | jamie | Update | CDS | all front-ends back up and running |
All the front-ends are now running. Many of them came back on their own after the testpoint.par was fixed and the framebuilder was restarted. Those that didn't just needed to be restarted manually.
The c1ioo model is currently in a broken state: it won't compile. I assume that this was what Suresh was working on when the framebuilder crash happened. This model needs to be fixed. |
5356 | Wed Sep 7 09:21:57 2011 | jamie | Update | SUS | SUS spectra before close up |
Here are all suspension diagonalization spectra before close up. Notes:
- ITMX looks the worst, but I think we can live with it. The large glitch in the UL sensor at around 999423150 (#5355) is worrying. However, it seemed to recover. The spectra below were taken from data before the glitch.
- ITMY has a lot of imaginary components. We previously found that this was due to a problem with one of its whitening filters (#5288). I assume we're seeing the same issue here.
- SRM needs a little more data to be able to distinguish the POS and SIDE peaks, but otherwise it looks ok.
ITMX:
pit yaw pos side butt
UL 0.355 0.539 0.976 -0.500 0.182
UR 0.833 -1.406 -0.307 -0.118 0.537
LR -1.167 0.055 0.717 -0.445 0.286
LL -1.645 2.000 2.000 -0.828 -2.995
SD -0.747 0.828 2.483 1.000 -1.637 |
cond(B) = 8.01148

ITMY:
pit yaw pos side butt
UL 1.003 0.577 1.142 -0.038 0.954
UR 0.582 -1.423 0.931 -0.013 -1.031
LR -1.418 -0.545 0.858 0.008 1.081
LL -0.997 1.455 1.069 -0.017 -0.934
SD -0.638 0.797 1.246 1.000 0.264 |
cond(B) = 4.46659

BS:
pit yaw pos side butt
UL 1.612 0.656 0.406 0.277 1.031
UR 0.176 -1.344 1.683 -0.058 -0.931
LR -1.824 -0.187 1.594 -0.086 0.951
LL -0.388 1.813 0.317 0.249 -1.087
SD 0.740 0.301 -3.354 1.000 0.035 |
cond(B) = 5.49597

PRM:
pit yaw pos side butt
UL 0.546 1.436 1.862 -0.345 0.866
UR 1.350 -0.564 0.551 -0.055 -0.878
LR -0.650 -0.977 0.138 0.023 0.858
LL -1.454 1.023 1.449 -0.268 -1.398
SD 0.634 -0.620 -0.729 1.000 0.611 |
cond(B) = 5.78216

SRM: (no result yet; needs more data to separate the POS and SIDE peaks, see above)

ETMX:
pit yaw pos side butt
UL 0.863 1.559 1.572 0.004 1.029
UR 0.127 -0.441 1.869 0.480 -1.162
LR -1.873 -0.440 0.428 0.493 0.939
LL -1.137 1.560 0.131 0.017 -0.871
SD 1.838 3.447 -0.864 1.000 -0.135 |
cond(B) = 5.5259

ETMY:
pit yaw pos side butt
UL -0.337 1.275 1.464 -0.024 0.929
UR 1.014 -0.725 1.414 -0.055 -1.102
LR -0.649 -1.363 0.536 -0.039 0.750
LL -2.000 0.637 0.586 -0.007 -1.220
SD 0.057 -0.016 1.202 1.000 0.142 |
cond(B) = 4.22572

MC1:
pit yaw pos side butt
UL 0.858 0.974 0.128 0.053 -0.000
UR 0.184 -0.763 0.911 0.018 0.001
LR -1.816 -2.000 1.872 0.002 3.999
LL -1.142 -0.263 1.089 0.037 0.001
SD 0.040 0.036 -0.216 1.000 -0.002 |
cond(B) = 5.36332

MC2:
pit yaw pos side butt
UL 1.047 0.764 1.028 0.124 0.948
UR 0.644 -1.236 1.092 -0.088 -0.949
LR -1.356 -0.680 0.972 -0.096 1.007
LL -0.953 1.320 0.908 0.117 -1.095
SD -0.092 -0.145 -0.787 1.000 -0.065 |
cond(B) = 4.029

MC3:
pit yaw pos side butt
UL 1.599 0.343 1.148 0.168 1.101
UR 0.031 -1.647 1.139 0.202 -1.010
LR -1.969 0.010 0.852 0.111 0.893
LL -0.401 2.000 0.861 0.077 -0.995
SD -0.414 0.392 -1.677 1.000 0.018 |
cond(B) = 3.61734
|
5408 | Wed Sep 14 20:04:05 2011 | jamie | Update | CDS | Update to frame builder wiper.pl script for GPS 1000000000 |
I have updated the wiper.pl script (/opt/rtcds/caltech/c1/target/fb/wiper.pl) that runs on the framebuilder (in crontab) to delete old frames in case of file system overloading. The point of this script is to keep the file system from overloading by deleting the oldest frames. As it was, it was not sorting the frame file names numerically, which would have caused it to delete post-GPS-1000000000 frames first. This issue was identified at LHO, and below is the patch that I applied to the script.
--- wiper.pl.orig 2011-04-11 13:54:40.000000000 -0700
+++ wiper.pl 2011-09-14 19:48:36.000000000 -0700
@@ -1,5 +1,7 @@
#!/usr/bin/perl
+use File::Basename;
+
print "\n" . `date` . "\n";
# Dry run, do not delete anything
$dry_run = 1;
@@ -126,14 +128,23 @@
if ($du{$minute_trend_frames_dir} > $minute_frames_keep) { $do_min = 1; };
+# sort files by GPS time split into prefixL-T-GPS-sec.gwf
+# numerically sort on 3rd field
+sub byGPSTime {
+ my $c = basename $a;
+ $c =~ s/\D+(\d+)\D+(\d+)\D+/$1/g;
+ my $d = basename $b;
+ $d =~ s/\D+(\d+)\D+(\d+)\D+/$1/g;
+ $c <=> $d;
+}
+
# Delete frame files in $dir to free $ktofree Kbytes of space
# This one reads file names in $dir/*/*.gwf sorts them by file names
# and progressively deletes them up to $ktofree limit
sub delete_frames {
($dir, $ktofree) = @_;
# Read file names; Could this be inefficient?
- @a= <$dir/*/*.gwf>;
- sort @a;
+ @a = sort byGPSTime <$dir/*/*.gwf>;
$dacc = 0; # How many kilobytes we deleted
$fnum = @a;
$dnum = 0;
@@ -145,6 +156,7 @@
if ($dacc >= $ktofree) { last; }
$dnum ++;
# Delete $file here
+ print "- " . $file . "\n";
if (!$dry_run) {
unlink($file);
}
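The failure mode is easy to demonstrate: a plain string sort puts 10-digit GPS times before 9-digit ones (the frame file names below are just illustrative):

names = ['C-R-999999984-16.gwf', 'C-R-1000000000-16.gwf']
print(sorted(names))
# ['C-R-1000000000-16.gwf', 'C-R-999999984-16.gwf']  -- the newest frame sorts "oldest"
print(sorted(names, key=lambda n: int(n.split('-')[2])))
# ['C-R-999999984-16.gwf', 'C-R-1000000000-16.gwf']  -- numeric sort on the GPS field, as byGPSTime now does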
|
5424 | Thu Sep 15 20:16:15 2011 | jamie | Update | CDS | New c1oaf model installed and running |
[Jamie, Jenne, Mirko]
New c1oaf model installed
We have installed the new c1oaf (online adaptive feed-forward) model. This model is now running on c1lsc. It's not really doing anything at the moment, but we wanted to get the model running, with all of its interconnections to the other models.
c1oaf has interconnections to both c1lsc and c1pem via the following routes:
c1lsc ->SHMEM-> c1oaf
c1oaf ->SHMEM-> c1lsc
c1pem ->SHMEM-> c1rfm ->PCIE-> c1oaf
Therefore c1lsc, c1pem, and c1rfm also had to be modified to receive/send the relevant signals.
As always, when adding PCIx senders and receivers, we had to compile all the models multiple times in succession so that the /opt/rtcds/caltech/c1/chans/ipc/C1.ipc would be properly populated with the channel IPC info.
Issues:
There were a couple of issues that came up when we installed and re/started the models:
c1oaf not being registered by frame builder
When the c1oaf model was started, it had no C1:DAQ-FB0_C1OAF_STATUS channel, as it's supposed to. In the daqd log (/opt/rtcds/caltech/c1/target/fb/logs/daqd.log.19901) I found the following:
Unable to find GDS node 22 system c1oaf in INI files
It turns out this channel is actually created by the frame builder, and it could not find the channel definition file for the new model, so it was failing to create the channels for it. The frame builder "master" file (/opt/rtcds/caltech/c1/target/fb/master) needs to list the c1oaf daq ini files:
/opt/rtcds/caltech/c1/chans/daq/C1OAF.ini
/opt/rtcds/caltech/c1/target/gds/param/tpchn_c1oaf.par
These were added, and the framebuilder was restarted. After which the C1:DAQ-FB0_C1OAF_STATUS appeared correctly.
SHMEM errors on c1lsc and c1oaf
This turned out to be because of an oversight in how we wired up the skeleton c1oaf model. For the moment the c1oaf model has only the PCIx sends and receives. I had therefore grounded the inputs to the SHMEM parts that were meant to send signals to C1LSC. However, this made the RCG think that these SHMEM parts were actually receivers, since it's the grounding of the inputs to these parts that actually tells the RCG that the part is a receiver. I fixed this by adding a filter module to the input of all the senders.
Once this was all fixed, the models were recompiled, installed, and restarted, and everything came up fine.
All model changes were of course committed to the cds_user_apps svn as well. |
5651 | Tue Oct 11 17:32:05 2011 | jamie | HowTo | Environment | 40m google maps link |
Here's another useful link:
http://maps.google.com/maps?q=34.13928,-118.123756 |
5710 | Thu Oct 20 09:54:53 2011 | jamie | Update | Computer Scripts / Programs | pynds working on pianosa again |
Quote: |
Doesn't work on pianosa either. Has someone changed the python environment?
pianosa:SUS_SUMMARY 0> ./setSensors.py 1000123215 600 0.1 0.25
Traceback (most recent call last):
File "./setSensors.py", line 2, in <module>
import nds
ImportError: No module named nds
|
So I found that the NDS2 lib directory (/ligo/apps/nds2/lib) was completely empty. I reinstalled NDS2 and pynds, and they are now available again by default on pianosa (it should "just work", assuming you don't break your environment).
Why the NDS2 lib directory was completely empty is definitely a concern to me. The contents of directories don't just disappear. I can't imagine how this would happen other than someone doing it, either on purpose or accidentally. If someone actually deleted the contents of this directory on purpose they need to speak up, explain why they did this, and come see me for a beating. |
5736 | Tue Oct 25 18:09:44 2011 | jamie | Update | CDS | New DEMOD part |
I forgot to elog (bad Jamie) that I broke out the demodulator from the LOCKIN module to make a new DEMOD part:
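Functionally the part just does a standard digital demodulation; a rough Python equivalent (conceptual only, since the real thing is an RCG block with its own filter modules, shown in the attached diagram):

import numpy as np
from scipy.signal import lfilter

def demod(sig, lo_sin, lo_cos, fs, f_lp=1.0):
    # multiply by the LO sin/cos and low-pass to get the I and Q outputs
    k = 2 * np.pi * f_lp / fs          # crude single-pole low-pass stands in for the filter modules
    b, a = [k], [1.0, k - 1.0]
    i = lfilter(b, a, sig * lo_sin)
    q = lfilter(b, a, sig * lo_cos)
    return i, q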

The LOCKIN part now references this part, and the demodulator can now be used independently. The 'LO SIN in' and 'LO COS in' should receive their input from the SIN and COS outputs of the OSCILLATOR part. |
5755 | Fri Oct 28 12:47:38 2011 | jamie | Update | CDS | CSS/BOY installed on pianosa |
I've installed Control System Studio (CSS) on pianosa, from the version 3.0.2 Red Hat binary zip. It should be available as "css" from the command line.
CSS is a new MEDM replacement. Its output is .opi files, instead of .adl files. It's supposed to include some sort of converter, but I didn't play with it enough to figure it out.
Please play around with it and let me know if there are any issues.
links:
|
5833 | Mon Nov 7 15:43:25 2011 | jamie | Update | LSC | LSC model recompiled |
Quote: |
While trying to compile, there was something wrong with the lockins that were there...it complained about the Q OUTs being unconnected. I even reverted to the before-I-touched-it-today version of c1lsc from the SVN, and it had the same problem. So, that means that whomever put those in the LSC model did so, and then didn't check to see if the model would compile. Not so good.
Anyhow, I just terminated them, to make it happy. If those are actually supposed to go somewhere, whoever is in charge of LSC lockins should take a look at it.
|
This was totally my fault. I'm very sorry. I modified the lockin part to output the Q phase, and forgot to modify the models that use that part appropriately. BAD JAMIE! I'll check to make sure this won't bite us again. |
5834 | Mon Nov 7 15:49:26 2011 | jamie | Update | IOO | WFS output matrix measured (open loop) |
Quote: |
The scripts used to make the WFS outmatrix measurement live in /cvs/cds/rtcds/caltech/c1/scripts/MC/WFS
|
I assume you mean /opt/rtcds/caltech/c1/scripts/MC/WFS.
As I've tried to reiterate many times: we do not use /cvs/cds anymore. Please put all new scripts into the proper location under /opt/rtcds. |
5836 | Mon Nov 7 17:27:28 2011 | jamie | Update | Computers | ISCEX IO chassis timing slave dead |
It appears that the timing slave in the c1iscex IO chassis is dead. Its front "link" lights are dark, although there appears to be power to the board (other on-board LEDs are lit). These front lights should either be on and blinking steadily if the board is talking to the timing system, or blinking fast if there is no connection to the timing distribution box. This likely indicates that the board has had some sort of internal failure.
Unfortunately Downs has no spare timing slave boards lying around at the moment; they're all stuffed in IO chassis awaiting shipping. I'm going to email Rolf about stealing one, and if he agrees we'll work with Todd Etzel to pull one out for a transplant. |
5845 | Wed Nov 9 10:35:30 2011 | jamie | Update | SUS | ETMX oplev is down and sus damping restored |
Quote: |
The ETMX oplev returning beam is well centered on the qpd. We lost this signal 2 days ago. I will check on the qpd.
ETMX sus damping restored
|
As reported a couple days ago, the ETMX IO chassis has no timing signal. This is why there is no QPD signal and why we turned off the ETMX watchdog. In fact, I believe that there is probably nothing coming out of the ETMX suspension controller at all.
I'm working on getting the timing board replaced. Hopefully today. |
5854 | Wed Nov 9 18:02:42 2011 | jamie | Update | CDS | ISCEX front-end working again (for the moment...) |
The c1iscex IO chassis seems to be working again, and the iscex front-end is running again.
However, I can't say that I actually fixed the problem.
Originally I thought the timing slave board had died by the fact that the front LED indicators next to the fiber IO were out. I didn't initially consider this a power supply problem since there were other leds on the board that were lit. I finally managed to track down Rolf to give downs the OK to pull the timing boards out of a spare IO chassis for us to use. However, when I replaced the timing boards in the chassis with the new ones, they showed the exact same behavior.
I then checked the power to the timing boards, which comes off a 2-pin connector from the backplane board in the back of the IO chassis. Apparently it's supposed to be 12V, but it was only showing ~2.75V. Since it was showing the same behavior for both timing boards, I assumed that the issue was on the IO chassis backplane.
I (with the help of Todd Etzel) started pulling cards out of the IO chassis (while power cycling appropriately, of course) to see if that changed anything. After pulling out both the ADC and DAC cards, the timing system then came up fine, with full power. The weird part is that everything then stayed fine after we started plugging all the cards back in. We eventually got back to the fully assembled configuration with everything working. But, nothing was changed, other than just re-seating all the cards.
Clearly there's some sort of flaky connection on the IO chassis board. Something is prone to shorting, or something, that overloads the power supply and causes the voltage supply to the timing card to drop.
All I can do at this point is keep an eye on it and go through another round of debugging if it happens again.
If it does happen again, I ask that everyone please not touch the IO chassis and let me look at it first. I want to try to poke around before anyone jiggles any cables so I can track down where the issue might be. |
5873 | Fri Nov 11 13:26:24 2011 | jamie | Update | SUS | Musings on SUS dewhitening, and MC ELP28's |
Quote: |
For the whitening on the OSEM sensor input, FM1 is linked to the Contec binary I/O. FM1 is the inverse whitening filter. Turn it on, and the analog whitening is on (bit in the binary I/O screen turns red). Turn it off, and the analog whitening is bypassed (bit in the binary I/O screen turns gray). Good. Makes sense. Either way, the net transfer function is flat.
The dewhitening is not so simple. In FM9 of the Coil Output filter bank, we have "SimDW", and in FM10, we have "InvDW". Clicking SimDW on makes the bit in the binary I/O screen gray (off?), while clicking it off makes it red (on?). Clicking InvDW does nothing to the I/O bits. So. I think that for dewhitening, the InvDW is always supposed to be on, and you either have Simulated DW, or analog DW enabled, so that either way your transfer function is flat. Fine. I don't know why we don't just tie the analog to the InvDW filter module, and delete the SimDW, but I'm sure there's a reason.
|
The input/whitening filters used to be in a similarly confused state as the output filters, but they have been corrected. There might have been a reason for this setup in the past, but it's not what we should be doing now. The output filters all need to be fixed. We just haven't gotten to it yet.
As with the inputs, all output filters should be set up so that the full output transfer function is always flat, no matter what state it's in. The digital anti-dewhitening ("InvDW") and analog dewhitening should always be engaged and disengaged simultaneously. The "SimDW" should just be removed.
This is on my list of things to do. |
5896 | Tue Nov 15 15:56:23 2011 | jamie | Update | CDS | dataviewer doesn't run |
Quote: |
Dataviewer is not able to access to fb somehow.
I restarted daqd on fb but it didn't help.
Also the status screen is showing a blank while form in all the realtime model. Something bad is happening.
|
So something very strange was happening to the framebuilder (fb). I logged on the fb and found this being spewed to the logs once a second:
[Tue Nov 15 15:28:51 2011] going down on signal 11
sh: /bin/gcore: No such file or directory
[Tue Nov 15 15:28:51 2011] going down on signal 11
sh: /bin/gcore: No such file or directory
[Tue Nov 15 15:28:51 2011] going down on signal 11
sh: /bin/gcore: No such file or directory
...
Apparently /bin/gcore was trying to be called by some daqd subprocess or thread, and was failing since that file doesn't exist. This apparently started at around 5:52 AM last night:
[Tue Nov 15 05:46:52 2011] main profiler warning: 1 empty blocks in the buffer
[Tue Nov 15 05:46:53 2011] main profiler warning: 0 empty blocks in the buffer
[Tue Nov 15 05:46:54 2011] main profiler warning: 0 empty blocks in the buffer
[Tue Nov 15 05:46:55 2011] main profiler warning: 0 empty blocks in the buffer
[Tue Nov 15 05:46:56 2011] main profiler warning: 0 empty blocks in the buffer
...
[Tue Nov 15 05:52:43 2011] main profiler warning: 0 empty blocks in the buffer
[Tue Nov 15 05:52:44 2011] main profiler warning: 0 empty blocks in the buffer
[Tue Nov 15 05:52:45 2011] main profiler warning: 0 empty blocks in the buffer
GPS time jumped from 1005400026 to 1005400379
[Tue Nov 15 05:52:46 2011] going down on signal 11
sh: /bin/gcore: No such file or directory
[Tue Nov 15 05:52:46 2011] going down on signal 11
sh: /bin/gcore: No such file or directory
The gcore I believe it's looking for is a debugging tool that is able to retrieve images of running processes. I'm guessing that something caused something in the fb to eat crap, and it was stuck trying to debug itself. I can't tell what exactly happened, though. I'll ping the CDS guys about it. The daqd process was continuing to run, but it was not responding to anything, which is why it could not be restarted via the normal means, and maybe why the various FB0_*_STATUS channels were seemingly dead.
I manually killed the daqd process, and monit seemed to bring up a new process with no problem. I'll keep an eye on it. |
5908 | Wed Nov 16 10:13:13 2011 | jamie | Update | elog | restarted |
Quote: |
Basically, elog hangs up by the visit of googlebot.
Googlebot repeatedly tries to obtain all (!) of entries by specifying "page0" and "mode=full".
Elogd seems not to have the way to block an access from the specified IP.
We might be able to use http-proxy via apache. (c.f. http://midas.psi.ch/elog/adminguide.html#secure )
|
There are much simpler ways to prevent page indexing by googlebot: http://www.google.com/support/webmasters/bin/answer.py?answer=156449
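(For the record, shutting Googlebot out entirely would only take a two-line robots.txt at the web root, i.e. "User-agent: Googlebot" followed by "Disallow: /", though the per-page meta tags described at that link would work just as well.)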
However, I really think that's a less-than-ideal way to get around the actual problem, which is that the elog software is a total piece of crap. If Google does not index the log then it won't appear in Google search results.
But if there is one URL that, when requested, causes the elog to crash, then maybe a better solution is to cut off just that URL. |
5979 | Tue Nov 22 18:15:39 2011 | jamie | Update | CDS | c1iscex ADC found dead. Replaced, c1iscex working again |
c1iscex has not been running for a couple of days (since the power shutdown at least). I was assuming that the problem was a recurrence of the c1iscex IO chassis issue from a couple weeks ago (5854). However, upon investigation I found that the timing signals were all fine. Instead, the IOP was reporting that it was finding no ADC, even though there is one in the chassis.
Since I had a spare ADC that was going to be used for the CyMAC, I decided to try swapping it out to see if that helped. Sure enough, the system came up fine with the new ADC. The IOP (c1x01) and c1scx are now both running fine.
I assume the issue before might have been caused by a failing and flaky ADC, which has now failed. We'll need to get a new ADC for me to give back to the CyMAC. |
6037 | Tue Nov 29 15:30:01 2011 | jamie | Update | CDS | location of currently used filter function |
So I tracked down where the currently-used filter function code is defined (the following is all relative to /opt/rtcds/caltech/c1/core/release):
Looking at one of the generated front-end C source codes (src/fe/c1lsc/c1lsc.c) it looks like the relevant filter function is:
filterModuleD()
which is defined in:
src/include/drv/fm10Gen.c
and an associated header file is:
src/include/fm10Gen.h
|
6124 | Thu Dec 15 11:47:43 2011 | jamie | Update | CDS | RTS UPGRADE IN PROGRESS |
I'm now in the middle of upgrading the RTS to version 2.4.
All RTS systems will be down until further notice... |
6125 | Thu Dec 15 22:22:18 2011 | jamie | Update | CDS | RTS upgrade aborted; restored to previous settings |
Unfortunately, after working on it all day, I had to abort the upgrade and revert the system back to yesterday's state.
I think I got most of the upgrade working, but for some reason I could never get the new models to talk to the framebuilder. Unfortunately, since the upgrade procedure isn't documented anywhere, it was really a fly-by-the-seat-of-my-pants thing. I got some help from Joe, which got me through one road block, but I ultimately got stumped.
I'll try to post a longer log later about what exactly I went through.
In any event, the system is back to the state it was in yesterday, and everything seems to be working. |
6302 | Tue Feb 21 22:06:18 2012 | jamie | Update | LSC | beatbox DFD installed in 1X2 rack |
I have installed a proto version of the ALS beatbox delay-line frequency discriminator (DFD, formerly known as MFD), in the 1X2 rack in the empty space above the RF generation box.
That empty space above the RF generation box had been intentionally left empty to provide needed ventilation airflow for the RF box, since it tends to get pretty hot. I left 1U of space between the RF box and the beatbox, and so far the situation seems ok, ie. the RF box is not cooking the beatbox. This is only a temporary arrangement, though, and we should be able to clean up the rack considerably once the beatbox is fully working.
For power I connected the beatbox to the two unused +/- 18 V Sorensen supplies in the OMC power rack next to the SP table. I disconnected the OMC cable that was connected to those supplies originally. Again, this is probably just temporary.
Right now the beatbox isn't fully functioning, but it should be enough to use for lock acquisition studies. The beatbox is intended to have two multi-channel DFDs, one for each arm, each with coarse and fine outputs. What's installed only has one DFD, but with both coarse and fine outputs. It is also intended to have differential DAQ outputs for the mixer IF outputs, which are not installed in this version.
The intended design was also supposed to use a comparator in the initial amplification stages before the delay outputs. The comparator was removed, though, since it was too slow and was limiting the bandwidth in the coarse channel. I'll post an updated schematic tomorrow.
I made some initial noise measurements: with a 21 MHz input, which corresponds to a zero crossing for a minimal delay, the I output is at ~200 nVrms/\sqrt{Hz} at 5 Hz, falling to ~30 nVrms/\sqrt{Hz} by about 100 Hz, after which it's mostly flat. I'll make calibrated plots for all channels tomorrow.
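(For converting those numbers to frequency noise: near a zero crossing a delay-line discriminator's output goes roughly as V ≈ A*sin(2*pi*tau*df) ≈ 2*pi*A*tau*df for delay tau and mixer amplitude A, so the measured voltage noise divides by the slope 2*pi*A*tau to give Hz/\sqrt{Hz}. The actual calibration will come with the plots.)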
The actual delay lines needed aren't installed/hooked up yet, either. Either Kiwamu will hook something up tonight, or I'll do it tomorrow. |
6318 | Fri Feb 24 19:25:43 2012 | jamie | Update | LSC | ALS X-arm beatbox added, DAQ channels wiring normalized |
I have hooked the ALS beatbox into the c1ioo DAQ. In the process, I did some rewiring so that the channel mapping corresponds to what is in the c1gcv model.
The Y-arm beat PD is going through the old proto-DFD setup. The non-existent X-arm beat PD will use the beatbox alpha.
Y coarse I (proto-DFD) --> c1ioo ADC1 14 --> C1:ALS_BEATY_COARSE_I
Y fine I (proto-DFD) --> c1ioo ADC1 15 --> C1:ALS_BEATY_FINE_I
X coarse I (bbox alpha)--> c1ioo ADC1 02 --> C1:ALS_BEATX_COARSE_I
X fine I (bbox alpha)--> c1ioo ADC1 03 --> C1:ALS_BEATX_FINE_I
This remapping required copying some filters into the BEATY_{COARSE,FINE} filter banks. I think I got it all copied over correctly, but I might have messed something up. BE AWARE.
We still need to run a proper cable from the X-arm beat PD to the beatbox.
I still need to do a full noise/response characterization of the beatbox (hopefully this weekend). |
6325 | Mon Feb 27 18:33:11 2012 | jamie | Update | PSL | what to do with old PSL fast channels |
It appears that the old PSL fast channels never made it into the new DAQ system. We need to figure out what to do with them.
A D990155 DAQ Interface card at the far right of the 1X1 PSL EuroCard ("VME") crate is supposed to output various PMC/FSS/ISS fast channels, which would then connect to the 1U "lemo breakout" ADC interface chassis. Some connections are made from the DAQ interface card to the lemo breakout, but they are not used in any RTS model, so they're not being recorded anywhere.
An old elog entry from Rana listing the various PSL DAQ channels should be used as reference, to figure out which channels are coming out, and which we should be recording.
The new ALS channels will need some of these DAQ channels, so we need to figure out which ones we're going to use, and clear out the rest.
|
6327 | Mon Feb 27 19:04:13 2012 | jamie | Update | CDS | spontaneous timing glitch in c1lsc IO chassis? |
For some reason there appears to have been a spontaneous timing glitch in the c1lsc IO chassis that caused all models running on c1lsc to lose timing sync with the framebuilder. All the models were reporting "0x4000" ("Timing mismatch between DAQ and FE application") in the DAQ status indicator. Looking in the front-end logs and dmesg on the c1lsc front-end machine I could see no obvious indication why this would have happened. The timing seemed to be hooked up fine, and the indicator lights on the various timing cards were nominal.
I restarted all the models on c1lsc, including and most importantly the c1x04 IOP, and things came back fine. Below is the restart procedure I used. Note I killed all the control models first, since the IOP can't be restarted if they're still running. I then restarted the IOP, followed by all the other control models.
controls@c1lsc ~ 0$ for m in lsc ass oaf; do /opt/rtcds/caltech/c1/scripts/killc1${m}; done
controls@c1lsc ~ 0$ /opt/rtcds/caltech/c1/scripts/startc1x04
c1x04epics C1 IOC Server started
* Stopping IOP awgtpman ... [ ok ]
controls@c1lsc ~ 0$ for m in lsc ass oaf; do /opt/rtcds/caltech/c1/scripts/startc1${m}; done
c1lscepics: no process found
ERROR: Module c1lscfe does not exist in /proc/modules
c1lscepics C1 IOC Server started
* WARNING: awgtpman_c1lsc has not yet been started.
c1assepics: no process found
ERROR: Module c1assfe does not exist in /proc/modules
c1assepics C1 IOC Server started
* WARNING: awgtpman_c1ass has not yet been started.
c1oafepics: no process found
ERROR: Module c1oaffe does not exist in /proc/modules
c1oafepics C1 IOC Server started
* WARNING: awgtpman_c1oaf has not yet been started.
controls@c1lsc ~ 0$
|
6348 | Fri Mar 2 18:11:50 2012 | jamie | Summary | SUS | evaluation of eLIGO tip-tilts from LLO |
[Suresh, Jamie]
Suresh and I opened up and checked out the eLIGO tip-tilt assemblies we received from LLO. There are two, TT1 and TT2, which were used for aligning the AS beam into the OMC on HAM6. The mirror assemblies hang from steel wires suspended from little, damped, vertical blade springs. The magnets are press fit into the edge of the mirror assemblies. The pointy flags magnetically attach to the magnets. BOSEMs are attached to the frame. The DCC numbers on the parts seem to all be entirely lies, but this document seems to be close to what we have, sans the vertical blade springs: T0900566
We noticed a couple of issues related to the magnets and flags. One of the magnets on each mirror assembly is chipped (see attached photos). Some of the magnets are also a bit loose in their press fits in the mirror assemblies. Some of the flags don't seat very well on the magnets. Some of the flag bases are made of some sort of crappy steel that has rusted (also see pictures). Overall some flags/magnets are too wobbly and mechanically unsound. I wouldn't want to use them without overhauling the magnets and flags on the mirror assemblies.
There are what appear to be DCC/SN numbers etched on some of the parts. They seem to correspond to what's in the document above, but they appear to be lies since I can't find any DCC documents that correspond to these numbers:
TT1: D070176-00 SN001
mirror assembly: D070183-00 SN003
TT2: D070176-00 SN002
mirror assembly: D070183-00 SN006
|
6468 | Thu Mar 29 20:13:21 2012 | jamie | Configuration | PEM | PEM_SLOW (i.e. seismic RMS) channels added to fb master |
I've added the PEM_SLOW.ini file to the fb master file, which should give us the slow seismic RMS channels when the framebuilder is restarted. Example channels:
[C1:PEM-RMS_ACC6_1_3]
[C1:PEM-RMS_GUR2Y_0p3_1]
[C1:PEM-RMS_STS1X_3_10]
etc.
I also updated the path to the other _SLOW.ini files.
I DID NOT RESTART FB.
I will do it first thing in the am tomorrow, when Kiwamu is not busy getting real work done.
Here is the diff for /opt/rtcds/caltech/c1/target/fb/master:
controls@pianosa:/opt/rtcds/caltech/c1/target/fb 1$ diff -u master~ master
--- master~ 2011-09-15 17:32:24.000000000 -0700
+++ master 2012-03-29 19:51:52.000000000 -0700
@@ -7,11 +7,12 @@
/opt/rtcds/caltech/c1/chans/daq/C0EDCU.ini
/opt/rtcds/caltech/c1/chans/daq/C1MCS.ini
/opt/rtcds/caltech/c1/target/gds/param/tpchn_c1mcs.par
-/cvs/cds/rtcds/caltech/c1/chans/daq/SUS_SLOW.ini
-/cvs/cds/rtcds/caltech/c1/chans/daq/MCS_SLOW.ini
-/cvs/cds/rtcds/caltech/c1/chans/daq/RMS_SLOW.ini
-/cvs/cds/rtcds/caltech/c1/chans/daq/IOP_SLOW.ini
-/cvs/cds/rtcds/caltech/c1/chans/daq/IOO_SLOW.ini
+/opt/rtcds/caltech/c1/chans/daq/SUS_SLOW.ini
+/opt/rtcds/caltech/c1/chans/daq/MCS_SLOW.ini
+/opt/rtcds/caltech/c1/chans/daq/RMS_SLOW.ini
+/opt/rtcds/caltech/c1/chans/daq/IOP_SLOW.ini
+/opt/rtcds/caltech/c1/chans/daq/IOO_SLOW.ini
+/opt/rtcds/caltech/c1/chans/daq/PEM_SLOW.ini
/opt/rtcds/caltech/c1/target/gds/param/tpchn_c1rfm.par
/opt/rtcds/caltech/c1/chans/daq/C1RFM.ini
/opt/rtcds/caltech/c1/chans/daq/C1IOO.ini
controls@pianosa:/opt/rtcds/caltech/c1/target/fb 1$
|
6484 | Wed Apr 4 13:25:29 2012 | jamie | Configuration | PEM | PEM_SLOW (i.e. seismic RMS) channels acquiring |
Quote: |
I've added the PEM_SLOW.ini file to the fb master file, which should give us the slow seismic RMS channels when the framebuilder is restarted. Example channels:
[C1:PEM-RMS_ACC6_1_3]
[C1:PEM-RMS_GUR2Y_0p3_1]
[C1:PEM-RMS_STS1X_3_10]
etc.
|
The framebuilder seems to have been restarted, or restarted on its own, so these channels are now being acquired.
Below is a minute trend of a smattering of the available RMS channels over the last five days.

|
7114 | Wed Aug 8 10:15:13 2012 | jamie | Update | Environment | Another earthquake, optics damped |
There were another couple of earthquakes at about 9:30am and 9:50am local.

All of the optics except MC2 had been shut down by their watchdogs. I damped and realigned everything, and everything looks ok now.

|
7142 | Fri Aug 10 11:05:33 2012 | jamie | Configuration | IOO | MC trans optics configured |
Quote: |
Quote: |
Quote: |
The PDA255 is a good ringdown detector - Steve can find one in the 40m if you ask him nicely.
|
We found a PDA255 but it doesn't seem to work. I am not sure if that is one you are mentioning...but I'll ask Steve tomorrow!
|
I double checked the PDA255 found at the 40m and it is broken/bad. Also there was no success hunting PDs at Bridge. So the MC trans is still in the same configuration. Nothing has changed. I'll try doing ringdown measurements with PDA400 today.
|
Can you explain more what "broken/bad" means? Is there no signal? Is it noisy? Glitch? etc. |