Remember that the Oplevs are not good references because of their temperature sensitivity. The week-long trend shows lots of 24-hour fluctuations.
A plot showing that the daily variation in the OLs is sometimes almost as large as the full-scale readout (-1 to +1).
ITMX, PRM, and BS watchdogs were tripped. They have been restored.
Stable MC was disabled so that I can use the 1 W MC_REFL beam to measure the green glass.
This morning, at about 12, Koji found all the front-ends down.
At 1:45 pm I rebooted ISCEX, ISCEY, SOSVME, SUSVME1, SUSVME2, LSC, ASC, and ISCAUX.
Then I burt-restored ISCEX, ISCEY, and ISCAUX to April 2nd, 23:07.
The front-ends are now up and running again.
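(For reference: restoring from an autoburt snapshot is done with the standard BURT write-back tool. A minimal sketch; the snapshot path below is illustrative only, not the actual April 2nd file:)
  burtwb -f /path/to/autoburt/snapshot/c1iscex.snap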
I restored damping to all SUSes except ITMX; its OSEMs are being used in the clean assembly room.
Very, very cool!
Kiwamu (or whoever is here last tonight): please run the free-swing/kick script (/opt/rtcds/caltech/c1/scripts/SUS/freeswing) before you leave, and I'll check the matrices and update the suspensions tomorrow morning.
All suspensions were restored and the MC locked. The PRM side OSEM RMS motion was high.
Atm2: why is the PRM 2x as noisy as the SRM?
OSEM voltages to be corrected at the upcoming vent: threshold ~0.7-1.2 V (22 out of 50):
ITMX_UL, UR, LL, LR, SD
ETMX_UL, UR, LL, LR, SD
SRM_UL, UR, LL
MC3_UL, LR, LL
Why are all the suspension watchdogs tripped? None of the suspension models are running on c1ioo, so they should be completely unaffected. Steve, did you find them tripped, or did you shut them off?
In either event they should be safely turned back on.
I've turned off the coils. Though none of them are on c1ioo, who knows what can happen when we try to run the models again.
Now that the replacement susaux machine is installed and fully tested, I renamed it from c1susaux2 to c1susaux and updated the DNS lookup tables on chiara accordingly.
Then we restarted daqd.
[Suresh / Kiwamu]
The c1lsc and c1sus machines were rebooted.
- - (CDS troubles)
After we restarted daqd and pressed some DAQ RELOAD buttons the c1lsc machine crashed.
The machine didn't respond to ssh, so the machine was physically rebooted by pressing the reset button.
Then we found that all the realtime processes on the c1sus machine had frozen, so we restarted them by sshing in and running the start scripts.
However, after that the vertex suspensions would not damp, even though we did the burt restore correctly.
This symptom was exactly the same as Jenne reported (#5571).
We tried the same technique as Jenne did: a hard reboot of the c1sus machine. Then everything became okay.
The burt restore was done for c1lsc, c1asc, c1sus and c1mcs.
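(For the record, the per-machine restart went roughly like this. A sketch only; the IOP and model names below are assumed rather than copied from the terminal:)
  ssh c1sus
  rtcds restart c1x02        # the IOP first (name assumed)
  rtcds start c1sus c1mcs    # then the user models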
- - (ITMX trouble)
During the damping recovery, the ITMX mirror seemed to be stuck to an OSEM: the UL readout went to zero and the rest went to full range.
Eventually, introducing a small offset in C1:SUS-ITMX_YAW_COMM released the mirror. The offset we introduced was about +1.
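(From the command line this is just a read-modify-write on the slider channel; the EPICS tools below are standard, but the exact workflow we used is from memory:)
  caget C1:SUS-ITMX_YAW_COMM              # read the current value
  caput C1:SUS-ITMX_YAW_COMM <value+1>    # write it back with ~+1 added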
[Jamie, Brett, Jenne]
We made some small modifications to the sus_single_control suspension controller library part to bring in/out the signals that Brett needs for his "global damping" work. We brought out the POS signal before the SUSPOS DOF filter, and we added a new GLOBPOS input to accommodate the global damping control signals. We added a new EPICS input to control a switch between local and global damping. It's all best seen from this detail from the model:
The POSOUT Goto connects to an additional output. As you can see, I also did a bunch of cleanup of the spaghetti in this part of the model.
As the part now has a new input and output, we had to modify the c1sus, c1scx, c1scy, and c1mcs models as well. I did a bunch of cleanup in those models too. The models have all been compiled and installed, but a restart is still needed. I'll do this first thing tomorrow morning.
All changes were committed to the userapps SVN, like they should always be.
We still need to update the SUS MEDM screens to display these new signals, and add controls for the local/global switch. I'll do this tomorrow.
During the cleanup I found multiple broken links to the sus_single_control library part. This is not good. I assume that most of them were accidental, but we need to be careful when modifying things. If we break those links we could think we're updating controller models when in fact we're not.
The one exception I found was that the MC2 controller link was clearly broken on purpose, as the MC2 controller has additional stuff added to it ("STATE_ESTIMATE"):
I can find no elog that mentions the words "STATE" and "ESTIMATE". This is obviously very problematic. I'm assuming Den made these modifications, and I found this report: 7497, which mentions something about "state estimation" and MC2. I can't find any other record of these changes, or that the MC2 controller was broken from the library. This is complete mickey mouse bullshit. Shame shame shame. Don't ever make changes like this and not log it.
I'm going to let this sit for a day, but tomorrow I'm going to replace the MC2 controller with a proper link to the sus_single_control library part. This work was never logged, so as far as I'm concerned it didn't happen.
Most of the suspensions look OK, with "badness" levels between 4 and 5. I'm just posting the ones that look slightly less ideal below.
pit yaw pos side butt
UL 0.466 1.420 1.795 -0.322 0.866
UR 1.383 -0.580 0.516 -0.046 -0.861
LR -0.617 -0.978 0.205 0.011 0.867
LL -1.534 1.022 1.484 -0.265 -1.407
SD 0.846 -0.632 -0.651 1.000 0.555
pit yaw pos side butt
UL 0.783 1.046 1.115 -0.149 1.029
UR 1.042 -0.954 1.109 -0.060 -1.051
LR -0.958 -0.926 0.885 -0.035 0.856
LL -1.217 1.074 0.891 -0.125 -1.063
SD 0.242 0.052 1.544 1.000 0.029
pit yaw pos side butt
UL 1.536 0.714 0.371 0.283 1.042
UR 0.225 -1.286 1.715 -0.084 -0.927
LR -1.775 -0.286 1.629 -0.117 0.960
LL -0.464 1.714 0.285 0.250 -1.070
SD 0.705 0.299 -3.239 1.000 0.023
pit yaw pos side butt
UL 1.335 0.209 1.232 -0.071 0.976
UR -0.537 1.732 0.940 -0.025 -1.068
LR -2.000 -0.268 0.768 0.004 1.046
LL -0.129 -1.791 1.060 -0.043 -0.911
SD -0.069 -0.885 1.196 1.000 0.239
pit yaw pos side butt
UL 1.103 0.286 1.194 -0.039 0.994
UR -0.196 -1.643 -0.806 -0.466 -1.113
LR -2.000 0.071 -0.373 -0.209 0.744
LL -0.701 2.000 1.627 0.217 -1.149
SD 0.105 -1.007 3.893 1.000 0.290
All suspension damping has been restored.
Earthquake of magnitude 5.0 shakes ETMY loose.
MC2 lost its damping later.
I just spent the last hour checking in a bunch of uncommitted changes to stuff in the SVN. We need to be MUCH BETTER about this. We must commit changes after we make them. When multiple changes get mixed together there's no way to recover from one bad one.
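(The routine is cheap. A minimal sketch, run from the top of the working copy:)
  svn status    # list locally modified / unversioned files
  svn diff      # review what actually changed
  svn commit -m "short description of the change"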
Last login: Fri Sep 19 00:11:44 2008 from gwave-69.ligo.c
Sun Microsystems Inc. SunOS 5.9 Generic May 2002
svn: This client is too old to work with working copy '.'; please get a newer Subversion client
SunOS nodus 5.9 Generic_118558-39 sun4u sparc SUNW,A70 Solaris
I installed svn on op440m. This involved installing the following packages from sunfreeware:
apache-2.2.6-sol9-sparc-local libiconv-1.11-sol9-sparc-local subversion-1.4.5-sol9-sparc-local
db-4.2.52.NC-sol9-sparc-local libxml2-2.6.31-sol9-sparc-local swig-1.3.29-sol9-sparc-local
expat-2.0.1-sol9-sparc-local neon-0.25.5-sol9-sparc-local zlib-1.2.3-sol9-sparc-local
The packages are located in /cvs/cds/caltech/apps/solaris/packages. The command line to install a package is "pkgadd -d" followed by the package name. This can be repeated on nodus to get svn over there. (Kind of egregious to require an apache installation for the svn _client_.)
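For example, to install the svn package itself (as root, from the packages directory):
  cd /cvs/cds/caltech/apps/solaris/packages
  pkgadd -d subversion-1.4.5-sol9-sparc-local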
I tried to commit something this afternoon and got the following error message:
Error: Commit failed (details follow):
Error: Server sent unexpected return value (405 Method Not Allowed) in response to
Error: MKCOL request for '/svn/!svn/wrk/d2523f8e-eda2-d847-b8e5-59c020170cec/trunk/frank'
Has anyone had this before? What's wrong?
Yesterday and this morning's slow NFS disk access was caused by 'svndumpfilter' being run on linux1 to carve out the Noise Budget directory. It is being moved to another server; I think the disk access is back to normal speed now.
This morning I opened the chambers and started some in-vac work.
As explained in this entry, I successfully swapped PZT mirrors (A) and (C).
The chambers are still open, so don't be surprised.
(today's missions for IOO)
- cabling for the PZT mirrors
- energizing the PZT mirrors and sliding them to their midpoints
- locking and alignment of the MC
- realignment of the PZT mirrors and other optics
- letting the beam go down to the arm cavity
As a result of the vacuum work, the IR beam is now hitting ETMX.
The spot of the transmitted beam from the cavity can be found at the end table by using an IR viewer.
BUT, what we really need (instead of just the DC sweeps) is the DC sweep with the uncertainty/noise displayed as a shaded area on the plot, as Nic did for us in the pre-CESAR modelling.
I've taken a first stab at this. Through various means, I've made an estimation of the total noise RMS of each error signal, and plotted a shaded region that shows the range of values the error signal is likely to take, when the IFO is statically sitting at one CARM offset.
I have not included any effects that would change the RMS of these signals in a CARM-offset dependent way. Since this is just a rough first pass, I didn't want to get carried away just yet.
For the transmission PDs, I measured the RMS in single arm lock. I also measured the incident power on the QPDs and Thorlabs PDs for an estimate of shot noise, but this was ridiculously smaller than the in-loop RIN. I had originally thought of just plotting sensing noise for the traces (i.e. dark+shot), because the amount of seismic and frequency noise in the in-loop signal obviously depends on the loop, but this gives a very misleading, tiny value. In reality we have RIN from the PRC due to seismic noise, angular motion of the optics, etc., which I have not quantified at this time.
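(For scale: the shot-noise-limited RIN ASD is sqrt(2*h*nu/P). For, say, ~1 mW at 1064 nm that's ~2e-8 /rtHz, indeed tiny next to typical in-loop RIN. The 1 mW here is an illustrative number, not the measured power.)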
So: for this first, rough pass, I am simply multiplying the single-arm transmission noise RMSs by a factor of 10 for the coupled RMS. If nothing else, this makes the SqrtInv signal look plausible where we actually find it to be plausible in practice.
For the REFL PDs, I misaligned the ITMs for a prompt PRM reflection as a worst-case shot noise situation, and took the RMS of the spectra. (I also wrote down the dark RMSs, which are about a factor of 2 lower.) I then multiplied these by ten as well, to be consistent with the transmission PDs. In reality, the shot noise component will go down as we approach zero CARM offset, but if other effects dominate, that won't matter.
Enough blathering, here's the plot:
Now, in addition to the region of linearity/validity of the different signals, we can hopefully see the amount of error relative to the desired CARM offset. (Or, at least, how that error qualitatively changes over the range of offsets)
This suggests that we MAY be able to hop over to a normalized RF signal; but this is a pretty big maybe. This signal has the response of the quotient of two nontrivial optical plants, which I have not yet given much thought to; it is probably the right time to do so...
This is looking very useful. It would be great if you can upload some python code somewhere so that I can muck with it.
I would guess that the right way to determine the trans RMS is just to use the single arm lock RIN and then apply that as RIN (not pure TR RMS) to the TR signals before doing the sqrt operation.
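(Concretely, to first order: if TR = T0*(1 + r) with RIN r << 1, then sqrt(TR) ~ sqrt(T0)*(1 + r/2), so the fractional RMS of the square-rooted signal is just half the measured RIN, independent of the TR level.)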
At 09:34 PST I noted a glitch in the control room as the machines went down, except for c1ioo. Briefly, the video feeds disappeared from the screens, though the screens themselves didn't lose power. At first I thought this was some kind of power glitch, but upon checking with Jordan, it was most likely related to some system crash. Coming back to the control room, I could see the MC reflection beam swinging, but unfortunately all the FE models had come down. I noticed that the DAQ status channels were blank.
I sshed into c1ioo with no problem and ran "rtcds stop c1ioo c1als c1omc", then "rtcds restart c1x03" to do a soft restart. This worked, but the DAQ status was still blank. I then tried to ssh into c1sus and c1lsc without success; similarly c1iscex and c1iscey were unreachable. I did a hard restart on c1iscex by switching it off, then its expansion chassis, then unplugging the power cords, then inverting these steps, and could then ssh into it from rossa. I ran "rtcds start c1x01" and saw the same blank DAQ status. I noticed the elog was also down... so nodus was also affected?
Anchal got on Zoom to offer some assistance. We discovered that fb1 and nodus were subject to some kind of system reboot at precisely 09:34. The "systemctl --failed" command on fb1 displayed both daqd_dc.service and rc-local.service as loaded but failed (inactive). Is it a good idea to try to reboot the fb1 machine? ... Anchal was able to bring the elog back up from nodus (ergo, this post).
Although it probably needs the DAQ service on the fb1 machine to be up and running, I tried running the scripts/cds/rebootC1LSC.sh script. This didn't work. I tried running sudo systemctl restart daqd_dc on the fb1 machine without success. Running systemctl reset-failed "worked" for the daqd_dc and rc-local services on fb1, in the sense that they no longer appeared in systemctl --failed, but they remained inactive (dead) when running systemctl status on them. Following 15303, I succeeded in restarting the daqd services. It turned out I needed to manually start the open-mx and mx services on fb1. I re-ran the rebootC1LSC.sh script without success; it fails because some machines need to be rebooted by hand.
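(For posterity, the sequence that finally worked on fb1 was roughly the following; the daqd_* service names other than daqd_dc are assumptions based on elog 15303 and should be double-checked:)
  sudo systemctl start open-mx mx
  sudo systemctl start daqd_dc daqd_fw daqd_rcv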
The two acrylic optical table enclosures were moved from the carpenter shop to CES. I need to order windows. The latest quotes from Laseroptik are posted at the wiki / aux_optics page.
Things to do: order windows, draw and order window flange, install surgical tubing seals, buy and line enclosures with IR shield films.
In the last two days Steve and I took some optics away from both ETM end tables.
This is because we need enough space to set up the green locking stuff on the end tables, and we also need to know how much space is available.
The optics we took away are: Alberto's RF stuff, fiber stuff, and some optics obviously not in use.
Pictures taken after the removal are attached. Attachment 1: ETMX, Attachment 2: ETMY.
The pictures taken before the removal are on the wiki, so you can check how things changed.
The PD Kiwamu removed from the Y table was TRY, which we still need.
My bad if he took that. By mistake I told him it was the one I had installed on the table for the length measurement and that we didn't need it anymore.
I'm going to ask Kiwamu if he can kindly put it back.
I made some efforts to fix the situation of the SRM, but it is still bad.
The POS motion wasn't well damped. Something is wrong in the sensing part, the actuation part, or both.
I am going to check the sensing matrix with the new free swinging spectra (#5690)
When I was trying to lock the SRMI I found that the fringes observed on the AS camera didn't show higher-order spatial modes, which is good.
So I thought the SRM suspension had become quiet, but it actually hadn't: the RMS monitor of the SRM OSEMs had already gone to about 30 counts.
At the same time the oplev error signals were well suppressed, meaning that the DOFs insensitive to the oplev, namely POS and SIDE, were ringing up.
According to the LSC error signal and the ASDC signal, I believe that the POS was going wild (although I didn't check the OSEM spectra).
+ Readjusted the f2a filters (see the attachment).
+ Tried to eliminate a coupling between the POS and SIDE drives by tweaking the output matrix.
=> In order to eliminate the coupling from the POS drive to the SIDE sensor, I had to put a comparably large factor into the corresponding element of the output matrix.
So it might be that the POS sensor was actually showing the SIDE signal and vice versa.
In order to check it I left SRM free swinging (#5690).
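(Schematically: if a unit POS drive shows up in the SIDE sensor with coupling g, putting -g in the SIDE-row/POS-column element of the output matrix nulls it. Needing |g| of order 1 there is what makes a swapped sensor/actuator assignment the natural suspicion.)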
The main reason I couldn't lock the DRMI was that the suspensions were touchy, and the SRM suspension especially wasn't good.
The SRM input matrix has been readjusted.
However, there is still unwanted coupling from the POS drive to the SIDE signal and from the SIDE drive to the POS signal.
We installed beam targets on PRM and BS suspension cages.
On both suspensions, one of the screw holes for the target actually houses the set screw for the side OSEM. This means that the screw on one side of the target only goes in partway.
The target installed on the BS is wrong! It has a center hole instead of two 45 deg holes. I forgot to remove it, but it will be obvious that it's wrong to the next person who tries to use it. I believe we're supposed to have a correct target for the BS, Steve?
The earthquake stop screws on PRM were too short and were preventing installation of the PRM target. Therefore, in order to install the target on PRM I had to replace the earthquake stops with ones Jenne and I found in the bake lab clean room that are longer, but have little springs instead of viton inserts at the ends. This is OK for now, but the proper viton-tipped stops should eventually go back in.
We checked the beam through PRM and it's a little high to the right (as viewed from behind). Tomorrow we're going to open ITMX chamber so that we can get a closer look at the spot on PR2.
The two-eye target for the BS is in the clean toolbox. It actually has irises.
A nicer, better maintained version of tconvert is now supplied by the lalapps package. It's called lalapps_tconvert. I installed lalapps on all the workstations and aliased tconvert to point to lalapps_tconvert.
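For example (standard tconvert usage):
  tconvert now           # current time as GPS seconds
  tconvert 1234567890    # GPS seconds back to a UTC date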
tdsavg 5 C1:LSC-PD4_DC_IN1
was causing grievous woe in the cm_step script. It turned out to fail intermittently at the command line, as did other LSC channels. (But non-LSC channels seem to be OK.) So we power cycled c1lsc (we couldn't ssh).
Then we noticed that computers were out of sync again (several timing fields said 16383 in the C0DAQ_RFMNETWORK screen). We restarted c1iscey, c1iscex, c1lsc, c1susvme1, and c1susvme2. The timing fields went back to 0. But the tdsavg command still intermittently said "ERROR: LDAQ - SendRequest - bad NDS status: 13".
The channel C1:LSC-SRM_OUT16 seems to work with tdsavg every time.
Let us know if you know how to fix this.
Did you try restarting the framebuilder?
What you type follows the prompt:
op440m> telnet fb40m 8087
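(If memory serves, the session then continues at the daqd command prompt, where the following makes daqd exit and get respawned; treat this as an assumption and check before typing:)
daqd> shutdown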
Restarting the framebuilder didn't work, but the problem now appears to be fixed.
Upon reflection, we also decided to try killing all open DTT and Dataviewer windows. This also involved liberal use of ps -ef to seek out and destroy all diag's, dc3's, framer4's, etc.
That may have worked, but it happened simultaneously to killing the tpman process on fb40m, so we can't be sure which is the actual solution.
To restart the testpoint manager:
what you type follows the prompt:
rosalba> ssh fb40m
fb40m~> pkill tpman
The tpman is actually immortal, like Voldemort or the Kurgan or the Cylons in the new BSG. Truly slaying it requires special magic, so the pkill tpman command has the effect of restarting it.
In the future, we should make it a matter of policy to close DTTs and Dataviewers when we're done using them, and to kill any unattended ones that we encounter.
tdsavg isn't working:
controls@rossa:/opt/rtcds/caltech/c1/scripts/LSC 6$ tdsavg 10 C1:LSC-ASDC_IN1
ERROR: LDAQ - Unable to find NDS host "fb0"
ERROR: LDAQ - Unable to find NDS host "fb1"
ERROR: LDAQ - Unable to open socket to NDS.
When this command is executed inside a script, it doesn't return anything. eg:
set offset = `tdsavg 10 C1:LSC-ASDC_IN1`
returns a blank line.
Past elog research said lots of things about test points. I didn't suspect that, since there aren't many test points occupied (according to the CDS status screens), but I cleared the test points anyway (elog 6319). Didn't change anything, still broken.
The LSCoffsets script, and any others depending on tdsavg, will not work until this is fixed.
LSCoffsets is working again.
tdsavg now needs "LIGONDSIP=fb" to be specified (it didn't used to). Jamie just put this in the global environment, so tdsavg should work like normal again.
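i.e., in a shell where it isn't already set:
  export LIGONDSIP=fb
  tdsavg 10 C1:LSC-ASDC_IN1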
Also, the rest of the LSCoffsets script (really the subcommand offset2) was tcsh syntax, so I created offset3, which is bash syntax.
Now we can use LSCoffsets again.
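For reference, the syntax difference in question (same command, two shells):
  set offset = `tdsavg 10 C1:LSC-ASDC_IN1`    # tcsh (offset2)
  offset=$(tdsavg 10 C1:LSC-ASDC_IN1)         # bash (offset3)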
I found that the LSCoffsets script didn't work today. The script is supposed to null the electrical offsets in all the LSC channels.
I went through the commands in the script and eventually found that the tdsavg command returned 0 every time.
I thought this was related to the test points, so I ran the following to flush all the open test points, and the issue was solved.
[diag]> tp clear *
(EDIT, JCD 11June2012: inside the diag shell the command is just "tp clear *", not "diag tp clear *"; corrected above.)
I found that tdsdata doesn't work.
When I start tdsdata, it takes a few to ~10 seconds of data and then dies with a "Segmentation fault" message.
I tried getting data at several times and for several channels, and the problem occurred every time.
I also tried tdsdata on allegra, op440m, and mafalda, and it didn't work on any of them.
Yesterday, I got a new version of tdsdata (which fixes the problem of Message ID: 1328) and tried to build it in my directory (/cvs/cds/caltech/users/kakeru.....).
This may have some relation to this problem.
Matt logged in and rebuilt the TDS stuff for us on Mafalda in /cvs/cds/caltech/apps/linux/tds_090304.
He says that he can't build his stuff for 64-bit because there's no sanctioned 64-bit build of GDS yet.
This should have all the latest fixes in it. I tried using both the old and new code from allegra and they both are fine:
./tdsdata 16384 2 C1:IOO-MC_F > /users/rana/test.txt
I loaded the data I got with the above command and there were no data dropouts. Possibly the dropout problem is only associated with testpoints, and so we have to wait for the TP fix.
I tested the new tdsdata and found it works well.
I excited C1:SUS-ITMY_SUSPIT_EXC with tdssine, and took data from C1:LSC-TRY_OUT (a testpoint) and C1:SUS-ITMY_OPLEV_PERROR (a recorded channel) with both the new and old tdsdata.
With the old tdsdata (/cvs/cds/caltech/apps/linux/tds/bin/tdsdata), I found some jumps in the data points, the same problem as before (Attachment 1).
With the new tdsdata (/cvs/cds/caltech/apps/linux/tds_090304/bin/tdsdata), there appear to be no jumps (Attachment 2; taken about 10 minutes after Attachment 1).
The problem with the old tdsdata seems to persist even for recorded channels.
You should use /cvs/cds/caltech/apps/linux/tds_090304/bin/tdsdata.
We confirmed that the new tds (/cvs/cds/caltech/apps/linux/tds_090304/) works well on 64-bit Linux, and installed it as /cvs/cds/caltech/apps/linux/tds/.
The old /cvs/cds/caltech/apps/linux/tds was moved to /cvs/cds/caltech/apps/linux/tds.bak.
The tdscntr.pl in the new tds was probably the one from LLO, which is actually the version I sent to Tobin; it had paths and channel names defined for LLO. So I copied back my original 40m version.
I found a strange jump in the values of data taken with tdsdata.
I couldn't find the same jump in a DataViewer playback, so I think this is a problem with tdsdata.
Be careful when you use tdsdata!
The attached file is an example of the jumped data.
I tried getting data on both allegra and op440m, and both show the same kind of jump.
(A downsampling or interpolation step may be wrong.)
Rana said there is a fixed version of tdsdata on some PC, but 64-bit Linux may not have it.
I'll try it tomorrow.