ID | Date | Author | Type | Category | Subject
11329 | Thu May 28 10:42:19 2015 | Steve | Update | PEM | bad Guralp X-cable
Quote: |
Tried swapping cables at the Guralp interface box side. It seems that all of our seismic signal problems have to do with the GUR2 cable being flaky (not surprising since it looks like it was patched with Orange Electrical tape!! rather than proper mechanical strain relief).
After swapping the cables today, the GUR2 DAQ channels all look fine: i.e. GUR1 (the one at the Y end) is fine, as is its cable and the GUR2 analog channels inside the interface box.
OTOH, the GUR1 DAQ channels (which have GUR2 (EX) connected into it) are too small by a factor of ~1000. Seems like that end of the cable will need to be remade. Luckily Jenne is still around this week and can point us to the pinout / instructions. Looks like there could be some shorting inside the backshell, so I've left it disconnected rather than risk damaging the seismometer. We should get a GUR1 style backshell to remake this cable. It might also be possible that the end at the seismometer is bad - Steve was supposed to swap the screws on the granite-aluminum plate on Thursday; I'll double check.
|
The Guralps were swapped.
What I did: turned DC power off at 1X1, hand-carried them, centered each leveling bubble, gently locked the jack screws, and turned the power back on.
ETMY at the east end now has CMG-T40-0008, sn T4157 as channel C1:PEM-SEIS_STS_1_Y_OUT_DQ.........
ETMX at south end now has CMG-T40-0053, sn T4Q17 as channel C1:PEM-SEIS_STS_2_Y_OUT_DQ.........
Conclusion: Guralps are fine. X cable is bad. It was bad 6 months ago when it was made.
We can still swap the 3ft short cables at the granite base if Rana has not done it.
|
Attachment 1: Gurs180dXbad.png
Attachment 2: swappedGurs.png
11328 | Wed May 27 17:14:08 2015 | ericq | Update | LSC | X Aux Laser crystal temperature changed
Rana suspects that the lack of X beatnote is related to the PSL laser temperature change (ELOG 11294).
I used the information on the wiki and old elogs (wiki-40m, ELOG 6732), to deduce that the new end laser temperatures should be:
- X end-> 38.98 C
- Y end-> 35.80 C
I went out to the X end and found the laser crystal temperature set to 40.87 C. That is not what the measurements linked above suggest would be ideal for the previous PSL NPRO temperature of 30.89 C (which would call for 37.02 C), and I could not find any elog describing the choice of this setpoint.
I've changed the X end laser crystal temperature to the value above. I've hooked up the X IR and green beatnotes to go to the control room analyzer, and have been looking for the beatnote as I adjust the digital temperature offset, but haven't found it yet...
If this proves totally fruitless, we can just put the lasers back to their original temperatures, since it's unclear if it helped the PC drive noise levels. |
11327 | Wed May 27 15:20:54 2015 | ericq | Update | Computer Scripts / Programs | Chiara Backup Hiccup
The local chiara backups are still failing due to vanished source files. I've emailed Max about the summary page jobs, since I think they're running remotely. |
11326 | Wed May 27 02:53:57 2015 | ericq | Update | General | ifo recovery log
Given my suspicion of fault with the X Green BBPD, Koji generously provided me with another one that he had confirmed to be working.
However, it turns out I was mistaken. With the existing BBPD, I did indeed witness a beat in the RF output, but it is somehow something like 20 dB smaller than it should be. This is why I missed it the other night. Here's a video of the RF output on a scope, wherein the beat is only barely visible because I've set the trigger level very low. I could not make the beatnote any larger through any alignment motions; I had gotten to this point by doing near/far field overlap on the PSL table.
I'm not sure what could have caused this. Mode mismatch? By eye, the beam spots looked about the same in the near and far fields, and we haven't had to touch the mode matching in quite some time... I've given up on trying to solve this for tonight.
Just for kicks, I hooked up the fiber PD IR beatnotes as inputs to the ALS DFD. The X beat is too small to even really see above the control room analyzer's noise floor, but the Y beat looked big enough. With the arms locked on IR, the phase tracker output RMS was a few kHz, so not even worth thinking about any further. Not so surprising.
Finally, I put back / hooked up everything in its nominal state. |
11325 | Tue May 26 19:57:11 2015 | rana | Update | Computer Scripts / Programs | ifoCoupling
Looks like a very handy code, especially with the real statistical tests.
I would make sure to use much smaller excitation amplitudes. Since the coupling is nonlinear, we expect that it's only a good noise budget estimator when the excitation amplitude is less than a factor of 3 above the quiescent excitation. |
11324 | Tue May 26 11:05:10 2015 | Steve | Update | PEM | worked around ETMY seism.
The cable tray holder cross beam was removed and cut short for good access to the seismometer. |
Attachment 1: ETMYseismic.png
Attachment 2: cut.jpg
11323 | Sun May 24 14:45:02 2015 | Koji | HowTo | General | How to launch StripTools at specified locations
LLO Operator Tips:
koji.arai@cr9:/opt/rtcds/userapps/trunk/asc/l1/scripts/initial_alignment$ cat autostart_strips.sh
#!/bin/bash
# Baffle window setup 1500x480
StripTool -xrm 'StripTool.StripGraph.geometry:1500x470+0+24' /opt/rtcds/userapps/trunk/asc/l1/scripts/initial_alignment/baffle_pd.stp &
sleep 1
# DC signals setup
StripTool -xrm 'StripTool.StripGraph.geometry:1500x470+0+470' /opt/rtcds/userapps/trunk/asc/l1/scripts/initial_alignment/dc_signals.stp &
sleep 1
# WFS prx mich sry setup
StripTool -xrm 'StripTool.StripGraph.geometry:1500x470+0-24' /opt/rtcds/userapps/trunk/asc/l1/scripts/initial_alignment/wfs_prx_mich_sry.stp &
sleep 1
exit
|
11322 | Sat May 23 22:43:10 2015 | ericq | Update | General | ifo recovery log
Running train-of-thought elog:
East N2 cylinder found empty, replaced. West is >2kpsi
Removed Yuta-specific code from damprestore. A grep for 'yuta' in the python files within /scripts/ shows some other occurrences, but nothing that is really in use at this time. New feature of damprestore.py: it remembers the oplev status.
Ran LSCoffsets.
WFS offsets relieved (all <20).
Adjusted FSS offset to minimize MC_FAST_MON
ASS ran (but the arm alignment has been astoundingly stable lately. I haven't touched it all this week)
ITMX is the only optic that got a correction over 20 counts.
BS and *TM oplev spots look well centered, except for ITMX.
I undid the gain reduction rana introduced because X ASS seemed to be really slow. It is currently fine in its older state. What's going on here?
Some network latency stuff is going on, even freezing up terminals when trying to write text files. This may (or may not) be correlated with the summary page rsync jobs on nodus. It occurs to me that we have a DAQ ethernet network separate from the martian network, but the frame transfers need to go through the martian network, since nodus is the only way out to the outside world. If we had a machine/gateway directly from the DAQ network to the caltech network, the martian network wouldn't get bogged down when frames are being uploaded
GTRY = 0.55, ok. Aligned GTRX to 0.52, also ok
Y beatnote was found easily. Have spent >30 minutes looking for X green beatnote. Typical FSS slow and X temperature ranges don't seem to be giving much. Will check the beat alignment with a scope, but if the beat is too high to begin with, it may not work...
I suspect a problem with the X Green BBPD
I could see the IR beatnote between the PSL and AUX X lasers at the input to the frequency counter. (I believe it is a real beatnote because it reacted as expected to the end temperature moving, and stabilizing the end laser to the arm). However, when placing the IR beatnote at a frequency which should've made the green beatnote visible on an analyzer and/or scope, no beatnote was found. I played with the beat alignment to no avail; the DC output of the BBPD behaved as expected, but I never saw anything in the RF output or on the control room analyzer. I also checked the beatnote signal chain by hooking up a 1mV 26MHz signal into the cable that hooks up to the RF out of the BBPD; the signal showed up clearly on the control room analyzer.
I'm not sure what may have happened. ELOG 9996 may be related.
I'm calling it a night. |
11321 | Fri May 22 18:09:58 2015 | ericq | Update | Computer Scripts / Programs | ifoCoupling
I've started working on a general routine to measure noise couplings in our interferometers. Often this is done with swept sine measurements, but this misses the nonlinear part of the coupling, especially if the linear part is already reduced through some compensation or feedforward scheme. Rana suggested using a series of narrow band-limited noise injections.
The structure I'm working on is a python script that uses the AWG interface written by Chris W. to create the excitations. Afterwards, I calculate a series of PSD estimates from the data (i.e. a spectrogram), and apply a two-sample, unequal variance, t-test to test for statistically significant increases in the noise spectra to try and evaluate the nonlinear contributions to the noise. I've started a git repository at github.com/e-q/ifoCoupling with the code.
So far, I've tested one such injection of noise coupling from the ETMX oplev error point to the single arm length error signal. It's completely missing the user interface and structure to do a general series of measurements, but this is just organizational; I'm trying to get the math/science down first.
Here's a result from today (see Attachment 1: ETMX_PIT_L_coupling.png):
Median, instead of the usual mean, PSDs are used throughout, to reject outliers/glitches.
The linear part of the coupling can be estimated using the coherence / spectrum height in the excitation band, but I'm not sure what the best way to present/parameterize the nonlinear part of each individual excitation band's result is.
Also, I anticipate being able to write an excitation auto-leveling routine, gradually increasing the excitation level until the excited spectrum is some amount noisier than the baseline spectrum, up to some maximum amount configurable by the user.
The excitation shaping could probably be improved, too. It's currently an elliptic + Butterworth bandpass for a sharp edge and rolloff.
I'm open to any thoughts and/or suggestions anyone may have! |
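As a rough illustration of the statistical test described above (a minimal sketch only, not the actual ifoCoupling code — see the github repo for that), here is how one could form single-segment PSD estimates, take median baseline/excited spectra, and apply a two-sample, unequal-variance (Welch's) t-test per frequency bin. The sample rate, segment length, and synthetic data are placeholders:

import numpy as np
from scipy.signal import welch
from scipy.stats import ttest_ind

fs = 16384              # sample rate [Hz]; placeholder
nseg = 16 * fs          # 16 s of data per PSD estimate; placeholder

def psd_segments(x):
    # Array of single-segment PSD estimates (a spectrogram in PSD units)
    starts = range(0, len(x) - nseg + 1, nseg)
    return np.array([welch(x[i:i + nseg], fs=fs, nperseg=fs)[1] for i in starts])

# Stand-ins for the baseline and excited stretches of the error signal
rng = np.random.default_rng(0)
baseline = rng.standard_normal(10 * nseg)
excited = rng.standard_normal(10 * nseg)

baseline_psds = psd_segments(baseline)
excited_psds = psd_segments(excited)

# Median PSDs (robust against glitches), and Welch's t-test per frequency bin
median_baseline = np.median(baseline_psds, axis=0)
median_excited = np.median(excited_psds, axis=0)
tstat, pval = ttest_ind(excited_psds, baseline_psds, axis=0, equal_var=False)
significant = (pval < 0.05) & (tstat > 0)    # bins with a significant noise increase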
Attachment 1: ETMX_PIT_L_coupling.png
11320 | Fri May 22 12:09:57 2015 | rana | Update | SUS | DampRestore script problem
I will move it back. We need to fix our scripts to not use any users/ libraries ever again.
Quote: |
PRM watchdog tripped, but the damprestore.py script wouldn't run.
It turns out the script tries to import some ezca stuff from /users/yuta, which had been moved to /users/OLD/yuta.
I've moved the yuta directory back to /users/ until I fix the damprestore script.
|
|
11319 | Fri May 22 11:59:54 2015 | ericq | Update | SUS | DampRestore script problem
PRM watchdog tripped, but the damprestore.py script wouldn't run.
It turns out the script tries to import some ezca stuff from /users/yuta, which had been moved to /users/OLD/yuta.
I've moved the yuta directory back to /users/ until I fix the damprestore script. |
11318 | Wed May 20 11:41:59 2015 | ericq | Update | General | some status
West cylinder is empty, east is at 2000 psi; regulated N2 pressure is 64 psi. I'll replace the west one after the meeting. |
11317 | Wed May 20 03:08:27 2015 | rana | Update | General | some status
I think that the real clue was that the dropouts are in some channels and not in others:
https://nodus.ligo.caltech.edu:30889/detcharsummary/day/20150519/psl/
As it turns out, the channel with no dropouts is the RAW PSL RMTEMP channel. All the others are the minute trends. So something is up with the trend making or the trend reading in the cluster.
Quote: |
There's a few hours so far after today's c1cal shut off that the summary page shows no dropouts. I'm not yet sure that this is related, but it seems like a clue.
|
|
11316 | Tue May 19 19:24:30 2015 | rana | Update | PEM | Seismic BLRMS filters
I was wondering about the design of the BLRMS filters for the seismic channels since the STS ones seem to have so little gain compared to the Guralps.
Here are some plots of the Bode magnitude and impulse responses of the bandpass filters (before the low passing). There's a bunch of entries from Masha on this from her SURF summer. Can anyone comment on why they are all so different?
One of the old Masha entries speaks of designing the lowpass filter in an intelligent way: by adjusting the filter order until the power in the stopband is less than 1% of the power in the passband. Seems like we could do that for bandpass too. For now I have made the names reasonable and changed all of the BP filters to 4th order Butterworth.
Also, it turns out that the Vel2Vel (gain ~0.02) filters were mistakenly on in the STS BP filter banks. The GUR inputs have a gain to scale the counts to velocity, but the STS seem to already be in microns/sec (where is this gain?), so I turned off and deleted the Vel2Vel filters; in any case the gain should not be applied separately in each BP bank, but all at once before the BP filtering. |
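For reference, a minimal sketch of one such BLRMS band in scipy (not the actual front-end filter modules; the sample rate, band edges, and smoothing corner below are placeholders):

import numpy as np
from scipy.signal import butter, sosfilt

fs = 256.0            # seismometer DAQ rate [Hz]; placeholder
band = (0.1, 0.3)     # one BLRMS band [Hz]; placeholder

# butter(2, ...) for a bandpass gives a 4th-order filter overall,
# matching the 4th order Butterworth BPs mentioned above
sos_bp = butter(2, band, btype='bandpass', fs=fs, output='sos')
# Low pass to smooth the squared signal into an RMS estimate; corner is a placeholder
sos_lp = butter(4, 0.05, btype='lowpass', fs=fs, output='sos')

def blrms(x):
    # Band-limited RMS: bandpass, square, lowpass, square root
    bp = sosfilt(sos_bp, x)
    smoothed = sosfilt(sos_lp, bp ** 2)
    return np.sqrt(np.clip(smoothed, 0.0, None))   # clip tiny negative ringing before sqrt

x = np.random.randn(int(600 * fs))   # stand-in for a seismometer time series
y = blrms(x)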
Attachment 1: BLRMS_BP.pdf
Attachment 2: BLRMS_imp.pdf
11315 | Tue May 19 18:55:12 2015 | rana, ericQ | Update | General | some status
After one day the pressures are east/west = 2200/450 PSI
Quote: |
- New east pressure reading is 2500 PSI. Regulated N2 pressure is 68 PSI.
|
|
11314 | Tue May 19 18:38:33 2015 | rana | Update | CDS | MXstream restart script working (beta)
Good catch; that was some seriously bad programming on my part. I had some undeclared variable garbage going on. I fixed it and re-implemented the script in CRON on megatron. The log file shows that it has detected no problems for the last several checks. I'll check it again tomorrow to see whether it's behaving well or not. |
11313 | Tue May 19 17:53:38 2015 | rana | Update | Modern Control | Brushing up on Wiener Filtering
Good to see that misofw.m is still alive and well. Todo:
- Apply weighting to the filter via time domain SOS prefiltering of the signals (see Jenne's code from the LLO Global FF paper)
- Consider using some of the MC SUSPOS signals. It's not strictly legal, but I wonder if it's responsible for our noise below 1 Hz.
- Steve was in the midst of hooking up the Wilcoxons to get better subtraction at 5-15 Hz, but couldn't find cables. We need to hunt them down in W Bridge.
- Attach one of the medium or big Lings to the MC2 chamber platform and see if we can do this in hardware without driving the suspension. We want to use a DAC channel (from where?) and pipe it to the Ling using a medium-high current drive. Might start with the 50 Ohm output of the SR560 and later use a BUF634 a la the Crackle coil driver.
|
11312 | Tue May 19 17:03:34 2015 | Koji | Update | CDS | MXstream restart script working (beta)
AutoMX is resetting mx_stream every 5 minutes. Basically, every time AutoMX is called,
it resets mx_stream. Is mx_stream really stalling that often? Or is the script detecting false alarms?
> tail -200 /opt/rtcds/caltech/c1/scripts/cds/autoMX.log
Tue May 19 16:43:01 PDT 2015
LSC - FB bad. Runnning restart:
* Stopping mx_stream ... [ ok ]
* Starting mx_stream ... [ ok ]
Connection to c1sus closed.
* Stopping mx_stream ... [ ok ]
* Starting mx_stream ... [ ok ]
Connection to c1lsc closed.
* Stopping mx_stream ... [ ok ]
* Starting mx_stream ... [ ok ]
Connection to c1ioo closed.
* Stopping mx_stream ... [ ok ]
* Starting mx_stream ... [ ok ]
Connection to c1iscex closed.
* Stopping mx_stream ... [ ok ]
* Starting mx_stream ... [ ok ]
Connection to c1iscey closed.
0
Tue May 19 16:48:02 PDT 2015
LSC - FB bad. Runnning restart:
* Stopping mx_stream ... [ ok ]
* Starting mx_stream ... [ ok ]
Connection to c1sus closed.
* Stopping mx_stream ... [ ok ]
* Starting mx_stream ... [ ok ]
Connection to c1lsc closed.
ssh_exchange_identification: read: Connection reset by peer
* Stopping mx_stream ... [ ok ]
* Starting mx_stream ... [ ok ]
Connection to c1iscex closed.
* Stopping mx_stream ... [ ok ]
* Starting mx_stream ... [ ok ]
Connection to c1iscey closed.
0
Tue May 19 16:53:01 PDT 2015
LSC - FB bad. Runnning restart:
* Stopping mx_stream ... [ ok ]
* Starting mx_stream ... [ ok ]
Connection to c1sus closed.
* Stopping mx_stream ... [ ok ]
* Starting mx_stream ... [ ok ]
Connection to c1lsc closed.
* Stopping mx_stream ... [ ok ]
* Starting mx_stream ... [ ok ]
Connection to c1ioo closed.
* Stopping mx_stream ... [ ok ]
* Starting mx_stream ... [ ok ]
Connection to c1iscex closed.
* Stopping mx_stream ... [ ok ]
* Starting mx_stream ... [ ok ]
Connection to c1iscey closed.
|
11311 | Tue May 19 16:18:57 2015 | ericq | Update | General | crons fixed
I wrapped rampdown.py in rampdown.sh, which is just these lines:
#!/bin/bash
source /ligo/cdscfg/workstationrc.sh
/opt/rtcds/caltech/c1/scripts/SUS/rampdown.py > /dev/null 2>&1
This is now what megatron's cron runs. It appears to be working.
I also added the workstationrc line to the n2 and chiara HDD checking scripts that run on nodus, which should resolve the issue from ELOG 11249 |
11310 | Tue May 19 14:51:44 2015 | ericq | Update | Modern Control | Brushing up on Wiener Filtering
As part of preparing for the SURF projects this summer, I grabbed ~50 minutes of MCL and STS_1 data from early this morning to do a little MISO wiener filtering. It was pretty straightforward to use the misofw.m code to achieve an offline subtraction factor of ~10 from 1-3Hz. This isn't the best ever, but doesn't compare so unfavorably to older work, especially given that I did no prefiltering, and didn't use all that long of a data stretch.
Code and plot (but not data) are attached. |
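For anyone following along, here is a minimal single-witness FIR Wiener filter sketch in python. This is not misofw.m (which handles multiple inputs and is what was actually used); the tap count and the synthetic stand-in data are placeholders:

import numpy as np
from scipy.linalg import solve_toeplitz
from scipy.signal import lfilter

def fir_wiener(witness, target, ntaps=512):
    # Solve the Wiener-Hopf equations R w = p, with R the witness
    # autocorrelation (Toeplitz) and p the witness-target cross-correlation
    w = np.asarray(witness, float)
    t = np.asarray(target, float)
    n = len(w)
    r = np.array([np.dot(w[:n - k], w[k:]) for k in range(ntaps)]) / n
    p = np.array([np.dot(w[:n - k], t[k:]) for k in range(ntaps)]) / n
    return solve_toeplitz((r, r), p)

# Synthetic stand-ins: witness = seismometer, target = MCL with some coupled witness
rng = np.random.default_rng(0)
sts = rng.standard_normal(2 ** 18)
mcl = 0.5 * np.roll(sts, 3) + 0.1 * rng.standard_normal(2 ** 18)

taps = fir_wiener(sts, mcl, ntaps=256)
prediction = lfilter(taps, [1.0], sts)
residual = mcl - prediction      # subtracted channel; compare its PSD against mcl's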
Attachment 1: mclData.png
Attachment 2: mclWiener.zip
11309 | Tue May 19 11:50:52 2015 | manasa | Update | PEM | No noticeable effect from M4.0 earthquake
There was an earthquake: M4.0 - 40km SSW of South Dos Palos, California
No noticeable effects on the IFO. MC did not lose lock; however the arms did unlock. |
11308 | Tue May 19 11:24:44 2015 | ericq | Update | Computer Scripts / Programs | Notification Scheme
Given some of the things we've been facing lately, it occurs to me that we could be better served by having some sort of unified human-alerting scheme in place, for things like:
- Local/offsite backup failures
- Vacuum system problems
- HDD status for things like /frames/ and /cvs/cds/, whether the disks are full, or their SMART status indicates imminent mechanical failure
Currently, many of these things are just checked sporadically when it occurs to someone to do so, or when debugging random issues. Smoother IFO operation and peace of mind could be gained if we're confident that the relevant people are notified in a timely manner.
Thoughts? Suggestions on other things to monitor, like maybe frontend/model crashes? |
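To make the idea concrete, here is a hypothetical sketch of one such monitor — a cron-able disk-fullness check that emails someone when a disk looks nearly full. The paths, thresholds, addresses, and SMTP host are all placeholders, not existing 40m infrastructure:

#!/usr/bin/env python
import shutil
import smtplib
from email.mime.text import MIMEText

CHECKS = {'/frames': 0.95, '/cvs/cds': 0.95}    # path -> fraction-full alarm threshold

def check_disks():
    problems = []
    for path, threshold in CHECKS.items():
        usage = shutil.disk_usage(path)
        frac = usage.used / usage.total
        if frac > threshold:
            problems.append('%s is %.0f%% full' % (path, 100 * frac))
    return problems

def notify(problems):
    msg = MIMEText('\n'.join(problems))
    msg['Subject'] = '40m monitor alert'
    msg['From'] = 'monitor@example.org'      # placeholder
    msg['To'] = 'someone@example.org'        # placeholder
    with smtplib.SMTP('localhost') as s:     # placeholder mail host
        s.send_message(msg)

if __name__ == '__main__':
    problems = check_disks()
    if problems:
        notify(problems)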
11307 | Tue May 19 11:15:09 2015 | ericq | Update | Computer Scripts / Programs | Chiara Backup Hiccup
Starting on the 14th (five days ago) the local chiara rsync backup of /cvs/cds to an external HDD has been failing:
caltech/c1/scripts/backup/rsync_chiara.backup.log:
2015-05-13 07:00:01,614 INFO Updating backup image of /cvs/cds
2015-05-13 07:49:46,266 INFO Backup rsync job ran successfully, transferred 6504 files.
2015-05-14 07:00:01,826 INFO Updating backup image of /cvs/cds
2015-05-14 07:50:18,709 ERROR Backup rysnc job failed with exit code 24!
2015-05-15 07:00:01,385 INFO Updating backup image of /cvs/cds
2015-05-15 08:09:18,527 ERROR Backup rysnc job failed with exit code 24!
...
Code 24 apparently means "Partial transfer due to vanished source files."
Manually running the backup command on chiara worked fine, returning a code of 0 (success), so we are backed up. For completeness, the command is controls@chiara: sudo rsync -av --delete --stats /home/cds/ /media/40mBackup
Are the summary page jobs moving files around at this time of day? If so, one of the two should be rescheduled to not conflict. |
11306 | Tue May 19 00:19:23 2015 | rana | Update | General | some status
There's a few hours so far after today's c1cal shut off that the summary page shows no dropouts. I'm not yet sure that this is related, but it seems like a clue. |
Attachment 1: Screen_Shot_2015-05-19_at_12.17.39_AM.png
11305 | Mon May 18 18:03:12 2015 | rana | Update | General | some status
The c1cal model was maxing out its CPU meter so I logged onto c1lsc and did 'rtcds c1cal stop'. Let's see if this changes any of our FB / DAQD problems. |
Attachment 1: CPUtrend.png
11304 | Mon May 18 17:44:30 2015 | rana | HowTo | CDS | Bypassing the CDSUTILS prefix issue
Too weird. I undid my changes. We'll have to make the C1: stuff work inside each python script.
Quote: |
This makes things act weird:
controls@pianosa|MC 1> z avg 1 "C1:LSC-TRY_OUT"
IFO environment variable not specified.
|
|
11303 | Mon May 18 17:42:14 2015 | rana, ericQ | Update | General | some status
Today at 5 PM we replaced the east N2 cylinder. The east pressure was 500 and the west cylinder pressure was 1000. Since Steve's elogs say that the consumption can be as high as 800 per day we wanted to be safe.
- We closed the black valve before the regulator and closed the valve on the cylinder.
- We unscrewed the brass fill line to the cylinder.
- We unchained the cylinder and put it on the dolly (and attached the chains there).
- We rolled in a fresh cylinder from outside using the red dolly (it should have chains).
- We put it in place, hooked up the chains, and screwed on the brass nozzle with the large adjustable wrench (need to put a non-adjustable here).
- Opened up the cylinder valve.
- Opened up the black valve.
- New east pressure reading is 2500 PSI. Regulated N2 pressure is 68 PSI.
Quote: |
1) Checked the N2 pressures: the unregulated cylinder pressures are both around 1500 PSI. How long until they get to 1000?
|
|
11302 | Mon May 18 16:56:12 2015 | ericq | HowTo | CDS | Bypassing the CDSUTILS prefix issue
This makes things act weird:
controls@pianosa|MC 1> z avg 1 "C1:LSC-TRY_OUT"
IFO environment variable not specified.
|
11301 | Mon May 18 16:28:18 2015 | ericq | Update | General | some status
Quote: |
4) Noticed that DAQD is restarting once per hour on the hour. Why?
|
It looks like daqd isn't being restarted, but in fact crashing every hour.
Going into the logs in target/fb/logs/old, it looks like at 10 seconds past the hour, every hour, daqd starts spitting out:
[Mon May 18 12:00:10 2015] main profiler warning: 1 empty blocks in the buffer
[Mon May 18 12:00:11 2015] main profiler warning: 0 empty blocks in the buffer
[Mon May 18 12:00:12 2015] main profiler warning: 0 empty blocks in the buffer
[Mon May 18 12:00:13 2015] main profiler warning: 0 empty blocks in the buffer
...
***CRASH***
An ELOG search on this kind of phrase will get you a lot of talk about FB transfer problems.
I noticed the framebuilder had 100% usage on its internal, non-RAID, non /frames/, HDD, which hosts the root filesystem (OS files, home directory, diskless boot files, etc), largely due to a ~110GB directory of frames from our first RF lock that had been copied over to the home directory. The HDD only has 135GB capacity. I thought that maybe this was somehow a bottleneck for files moving around, but after deleting the huge directory, daqd still died at 4PM.
The offsite LDAS rsync happens at ten minutes past the hour, so is unlikely to be the culprit. I don't have any other clues at this point. |
11300 | Mon May 18 14:46:20 2015 | manasa | Summary | General | Delay line frequency discriminator for FOL error signal
Measuring the voltage noise and frequency response of the Analog Delay-line Frequency Discriminator (DFD)
The schematic and an actual photo of the setup are shown below. The setup was checked to be physically sturdy with no loose connections or moving parts.
[Figure: schematic and photo of the DFD setup]
The voltage noise at the output of the DFD was measured using an SR785 signal analyzer while simultaneously monitoring the signal on an oscilloscope.
The noise at the output of the DFD was measured for no RF input and at several RF input frequencies including the zero crossing frequency and the optimum operating frequency of the DFD (20MHz).
The plot below shows the voltage noise for different RF inputs to the DFD. It can be seen that the noise level is slightly lower at the zero crossing frequency, where the amplitude noise is eliminated by the DFD.
[Figure: DFD output voltage noise for several RF input frequencies]
I also did measurements to obtain the frequency response of the setup as the cable length difference has changed from the prior setup. The cable length difference is 21cm and the obtained linear signal at the output of the DFD extends over ~ 380MHz which is good enough for our purposes in FOL. A cosine fit to the data was done as before. //edit- Manasa: The gain of SR560 was set to 20 to obtain the data shown below//
Fit Coefficients (with 95% confidence bounds):
a = -0.8763 (-1.076, -0.6763)
b = 3.771 (3.441, 4.102)
[Figure: DFD frequency response data and cosine fit]
Data and matlab scripts are zipped and attached. |
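For completeness, a cosine fit along these lines could be done with scipy as sketched below. The exact model and frequency units behind the quoted a/b coefficients are not spelled out in this entry, so the two-parameter form and the synthetic data here are only assumptions:

import numpy as np
from scipy.optimize import curve_fit

def dfd_model(f, a, b):
    # Assumed two-parameter cosine response of the delay-line discriminator
    return a * np.cos(b * f)

# Stand-ins for the measured RF frequencies and DFD output voltages
freq = np.linspace(0.0, 0.4, 200)
vout = dfd_model(freq, -0.88, 3.77) + 0.02 * np.random.randn(freq.size)

popt, pcov = curve_fit(dfd_model, freq, vout, p0=[-1.0, 4.0])
a_fit, b_fit = popt
perr = np.sqrt(np.diag(pcov))    # 1-sigma parameter uncertainties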
Attachment 4: DFD.zip
11299 | Mon May 18 14:22:05 2015 | ericq | Update | Computer Scripts / Programs | rsync frames to LDAS cluster
Quote: |
Still seems to be running without causing FB issues.
|
I'm not so sure. I was just experiencing some severe network latency / EPICS channel freezes that were alleviated by killing the rsync job on nodus. It started a few minutes after ten past the hour, when the rsync job started.
Unrelated to this, for some odd reason, there is some weirdness going on with ssh'ing to martian machines from the control room computers. I.e. on pianosa, ssh nodus fails with a failure-to-resolve-hostname message, but ssh nodus.martian succeeds. |
11298 | Mon May 18 11:59:07 2015 | rana | Update | General | some status
Yes - my rampdown.py script correctly ramps down the watchdog thresholds. This replaces the old rampdown.pl Perl script that Rob and Dave Barker wrote.
Unfortunately, cron doesn't correctly inherit the bashrc environment variables, so it's having trouble running.
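A hypothetical sketch of what such a watchdog-threshold rampdown could look like is below. This is not the actual scripts/SUS/rampdown.py; the channel-name pattern, optic list, target, and step size are placeholders, and it uses pyepics rather than whatever the real script imports:

from epics import caget, caput

OPTICS = ['PRM', 'SRM', 'BS', 'ITMX', 'ITMY', 'ETMX', 'ETMY', 'MC1', 'MC2', 'MC3']
TARGET = 220.0   # nominal watchdog trip threshold [counts]; placeholder
STEP = 20.0      # maximum decrement per run; placeholder

def ramp_down_once():
    for optic in OPTICS:
        chan = 'C1:SUS-%s_WATCHDOG_THRESH' % optic   # placeholder channel name
        current = caget(chan)
        if current is None or current <= TARGET:
            continue
        caput(chan, max(TARGET, current - STEP))     # step the threshold back toward nominal

if __name__ == '__main__':
    ramp_down_once()    # run from cron, e.g. every 30 minutes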
On a positive note, I've resurrected the MEDM Screenshot taking cron job, so now this webpage is alive (mostly) and you can check screens from remote:
https://nodus.ligo.caltech.edu:30889/medm/screenshot.html |
11297 | Mon May 18 09:50:00 2015 | ericq | Update | General | some status
Quote: |
Added this to the megatron crontab and commented out the op340m crontab line. IF this works for awhile we can retire our last Solaris machine.
|
For some reason, my email address is the one that megatron complains to when cron commands fail; since 11:15PM last night, I've been getting emails that the rampdown.py line is failing, with the super-helpful message: expr: syntax error |
11296 | Sun May 17 23:46:25 2015 | rana | Update | ASC | IOO / Arm trends
Looking at the summary page trends from today, you can see that the MC transmission is pretty flat after I zeroed the MCWFS offsets. In addition, the transmission from both arms is also flat, indicating that our previous observation of long term drift in the Y arm transmission probably had more to do with bad Y-arm initial alignment than unbalanced ETMY coil-magnets.
Much like checking the N2 pressure, amount of coffee beans, frames backups, etc. we should put MC WFS offset adjustment into our periodic checklist. Would be good to have a reminder system that pings us to check these items and wait for confirmation that we have done so. |
11295 | Sat May 16 21:40:29 2015 | rana | Update | PEM | Guralp maintenance
Tried swapping cables at the Guralp interface box side. It seems that all of our seismic signal problems have to do with the GUR2 cable being flaky (not surprising since it looks like it was patched with Orange Electrical tape!! rather than proper mechanical strain relief).
After swapping the cables today, the GUR2 DAQ channels all look fine: i.e. GUR1 (the one at the Y end) is fine, as is its cable and the GUR2 analog channels inside the interface box.
OTOH, the GUR1 DAQ channels (which have GUR2 (EX) connected into it) are too small by a factor of ~1000. Seems like that end of the cable will need to be remade. Luckily Jenne is still around this week and can point us to the pinout / instructions. Looks like there could be some shorting inside the backshell, so I've left it disconnected rather than risk damaging the seismometer. We should get a GUR1 style backshell to remake this cable. It might also be possible that the end at the seismometer is bad - Steve was supposed to swap the screws on the granite-aluminum plate on Thursday; I'll double check. |
Attachment 1: GurPost_150516.png
11294 | Sat May 16 21:05:24 2015 | rana | Update | General | some status
1) Checked the N2 pressures: the unregulated cylinder pressures are both around 1500 PSI. How long until they get to 1000?
2) The IMC has been flaky for a day or so; don't know why. I moved the gains in the autolocker so now the input gain slider to the MC board is 10 dB higher and the output slider is 10 dB lower. This is updated in the mcdown and mcup scripts and both committed to SVN. The trend shows that the MC was wandering away after ~15 minutes of lock, so I suspected the WFS offsets. I ran the offsets script (after flipping the z servo signs and adding 'C1:' prefix). So far powers are good and stable.
3) pianosa was unresponsive and I couldn't ssh to it. I powered it off and then it came back.
4) Noticed that DAQD is restarting once per hour on the hour. Why?
5) Many (but not all) EPICS readbacks are whiting out every several minutes. I remote booted c1susaux since it was one of the victims, but it didn't change any behavior.
6) The ETMX and ITMX have very different bounce mode responses: should add to our Vent Todo List. Double checked that the bounce/roll bandstop is on and at the right frequency for the bounce mode. Increased the stopband from 40 to 50 dB to see if that helps.
7) op340m is still running! The only reason to keep it alive is its crontab:
op340m:SUS>crontab -l
07 * * * * /opt/rtcds/caltech/c1/burt/autoburt/burt.cron >> /opt/rtcds/caltech/c1/burt/burtcron.log
#46 * * * * /opt/rtcds/caltech/c1/scripts/general/scripto_cron /opt/rtcds/caltech/c1/scripts/PSL/FSS/FSSSlowServo > /cvs/cds/caltech/logs/scripts/FSSslow.cronlog 2>&1
#14,44 * * * * /cvs/cds/caltech/conlog/bin/check_conlogger_and_restart_if_dead
15,45 * * * * /opt/rtcds/caltech/c1/scripts/SUS/rampdown.pl > /dev/null 2>&1
#10 * * * * /opt/rtcds/caltech/c1/scripts/general/scripto_cron /opt/rtcds/caltech/c1/scripts/MC/autolockMCmain40m >/cvs/cds/caltech/logs/scripts/mclock.cronlog 2>&1
#27 * * * * /opt/rtcds/caltech/c1/scripts/general/scripto_cron /opt/rtcds/caltech/c1/scripts/PSL/FSS/RCthermalPID.pl >/cvs/cds/caltech/logs/scripts/RCthermalPID.cronlog 2>&1
00 0 * * * /var/scripts/ntp.sh > /dev/null 2>&1
#00 4 * * * /opt/rtcds/caltech/c1/scripts/RGA/RGAlogger.cron >> /cvs/cds/caltech/users/rward/RGA/RGAcron.out 2>&1
#00 6 * * * /cvs/cds/scripts/backupScripts.pl
00 7 * * * /opt/rtcds/caltech/c1/scripts/AutoUpdate/update_conlog.cron
00 8 * * * /opt/rtcds/caltech/c1/scripts/crontab/backupCrontab
added a new script (scripts/SUS/rampdown.py) which decrements the SUS watchdog thresholds every 30 minutes if needed. Added this to the megatron crontab and commented out the op340m crontab line. If this works for a while we can retire our last Solaris machine.
8) To see if we could get rid of the wandering PCDRIVE noise, I looked into the NPRO temperatures: was - T_crystal = 30.89 C, T_diode1 = 21 C, T_diode2 = 22 C. I moved up the crystal temp to 33.0 C, to see if it could make the noise more stable. Then I used the trimpots on the front of the controller to maximize the laser output at these temperatures; it was basically maximized already. Let's see if there's any qualitative difference after a week. I'm attaching the pinout for the DSUB25 diagnostics connector on the back of the box. Aidan is going to help us record this stuff with AcroMag tech so that we can see if there's any correlation with PCDRIVE. The shifts in FSS_SLOW coincident with PCDRIVE noise correspond to ~100 MHz, so it seems like it could be NPRO related.
|
Attachment 1: 48.png
Attachment 2: 39.png
11293 | Sat May 16 20:37:09 2015 | rana | HowTo | CDS | Bypassing the CDSUTILS prefix issue
The CDSUTILS package has a feature where it substitutes in a C1 or H1 or L1 prefix depending upon what site you are at. The idea is that this should make code portable between LLO and LHO.
Here at the 40m, we have no need to do that, so it's better for us to be able to copy and paste channel names directly from MEDM or whatever without having to remove the "C1:" from all over the place.
the way to do this on the command line is (in bash) to type:
export IFO=''
To make this easier on us, I have implemented this in our shared .bashrc so that it's always the case. This might break some scripts which have been adapted to use the weird CDSUTILS convention, so beware and fix appropriately.
|
11292 | Fri May 15 16:18:28 2015 | Steve | Update | VAC | Vac Operation Guide
The Vacuum Operation Guide has been uploaded to the 40m wiki. This is an old master copy. It is not exact in terms of the real actions, but it is still a good guide to the logic.
Rana has promised to watch the N2 supply and change the cylinder when it is empty. I will be at Hanford next week. |
11291 | Thu May 14 17:41:10 2015 | rana | Update | PEM | weather station and Guralp maintenance
Today Steve and I tried to recenter the Guralps. The breakout box technique didn't work for us, so we just turned the leveling screws until we got the mass position outputs within +/-50 mV for all DoF as read out by the breakout box.
Some points:
- GUR1 is at the ETMY (E/W arm) and GUR2 is at the X-end (South arm)
- The SS containers are good and make a good seal.
- We had to replace the screws on the granite slab interface plate. The heads were too big to allow the connector to snap into place.
- The Guralps had been left way, way off level and the brass locking screws were all the way up. We locked them down after leveling today. Steve was blaming Cathy(?).
- The GUR1_Z channel now looks good - see the summary pages for the before and after behavior. My mistake; the low frequency is still as bad as before.
- GUR2 X/Y still look like there is no whitening, or the masses are stuck, or the interface box is broken.
- When we first powered them up, a few of the channels of both seismometers showed 100-200 Hz oscillations. This has settled down after several minutes.
The attachment shows the 6 channels after our work. You can see that GUR2_X/Y still look deadish. I tried wiggling the cables at the interface box and powering on/off, but no luck. Next, we swap cables.
Tried to bring the weather station back to life, but no luck. The unit on the wall is alive and so is the EPICS IOC (c1pem1). But there is apparently no communication between them. Telnetting into c1pem1, the error message repeating at the prompt is:
Weather Monitor Output: NO COMM
Might be related to the flaky connector situation that Liz and I found there a couple summers ago, but I tried jiggling and reseating that one with no luck. Looks like it stopped working around 8 PM on March 24, 2014. That's the same time as a ~30s power outage, so perhaps we just need some more power cycling? Tried hitting the reset button on the VME card for c1pem1, but didn't change anything.
Let's try power cycling that crate (which has c1pem1, c0daqawg, and some GPS receiver)...nope - no luck.
Also tried power cycling the weather box which is near the BS chamber on the wall. This didn't change the error message at the c1pem1 telnet prompt. |
Attachment 1: GurPost_150514.png
Attachment 2: secretWeatherTrends.png
11290 | Wed May 13 13:33:34 2015 | Steve | Frogs | PEM | Guralp breakout box recovered
COD: a sugar napoleon is due to Steve. Item delivered: model CMG-SCU-0013, sn G9536
Quote: |
Reward being offered for the safe return of this thing:
[photo of the Guralp breakout paddle]
|
|
11289 | Wed May 13 10:07:36 2015 | rana | Frogs | PEM | Guralp breakout paddle
Reward being offered for the safe return of this thing:
[photo of the Guralp breakout paddle]
|
11288 | Wed May 13 09:17:28 2015 | rana | Update | Computer Scripts / Programs | rsync frames to LDAS cluster
Still seems to be running without causing FB issues. One thought is that we could look through the FB status channel trends and see if there is some excess of FB problems at 10 min after the hour, to see if it's causing problems.
I also looked into our minute trend situation. Looks like the files are compressed and have checksums enabled. The size changes sometimes, but it's roughly 35 MB per hour, so 840 MB per day.
According to the wiper.pl script, it's trying to keep the minute-trend directory below some fixed fraction of the total /frames disk. The comment in the script says 0.005%, but I'm dubious, since that's only 13TB*5e-5 = 600 MB, and that would only keep us for a day. Maybe the comment should read 0.5% instead...
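A quick back-of-the-envelope check of those fractions, using the 13 TB disk size and ~840 MB/day of minute trends quoted above:

disk = 13e12        # /frames capacity [bytes]
per_day = 840e6     # minute-trend growth [bytes/day]
for frac in (5e-5, 5e-3):          # 0.005% vs 0.5%
    quota = disk * frac
    print('%g%% -> %.2g GB, ~%.0f days of minute trends'
          % (100 * frac, quota / 1e9, quota / per_day))
# 0.005% -> 0.65 GB, ~1 day ; 0.5% -> 65 GB, ~77 days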
Quote: |
The rsync job to sync our frames over to the cluster has been on a 20 MB/s BW limit for awhile now.
Dan Kozak has now set up a cronjob to do this at 10 min after the hour, every hour. Let's see how this goes.
You can find the script and its logfile name by doing 'crontab -l' on nodus.
|
|
11287 | Tue May 12 14:57:52 2015 | Steve | Update | VAC | CC1 cold cathode gauges are baked now
Baking both CC1 gauges at 85 C for 60 hrs did not help.
The temperature has been increased to 125 C and the bake is being repeated.
|
11286 | Tue May 12 12:04:41 2015 | manasa | Update | General | Some maintenance
* Relocked the IMC. I guess it was stuck somewhere in the autolocker loop. I disabled the autolocker and locked it manually. The autolocker has been re-enabled and seems to be running just fine.
* The X arm has been having trouble staying locked. There seemed to be some amount of gain peaking. I reduced the gain from 0.007 to 0.006.
* I disabled the triggered BounceRG filter : FM8 in the Xarm filter module. We already have a triggered Bounce filter: FM6 that takes care of the noise at bounce/roll frequencies. FM8 was just adding too much gain at 16.5Hz. Once this filter was disabled the X arm lock has been much more stable.
Also, the Y arm doesn't use FM8 for locking either.
|
11285 | Tue May 12 08:51:08 2015 | ericq | Update | CDS | c1lsp and c1sup removed?
Quote: |
was this change not elogged??
|
This is my sin.
Back in February (around the 25th) I modified c1sus.mdl, removing the simulated plant connections we weren't using from c1lsp and c1sup. This was included in the model's svn log, but not elogged.
The models don't start with the rtcds restart shortcut, because I removed them from the c1lsc line in FB:/diskless/root/etc/rtsystab (or c1lsc:/etc/rtsystab ). There is a commented out line in there that can be uncommented to restore them to the list of models c1lsc is allowed to run.
However, I wouldn't suspect that the models not running should affect the suspension drift, since the connections from them to c1sus have been removed. If we still have trends from early February, we could look and see if the drift was happening before I made this change. |
11284 | Mon May 11 18:14:52 2015 | rana | Update | IMC | MC_F calibration
I saw that entry, but it doesn't state what the calibration is in units of Hz/counts. It just gives the final calibrated spectrum. |
11283 | Mon May 11 15:15:12 2015 | manasa | Update | General | Ran ASS for arms
Arm powers had drifted to ~ 0.5 in transmission.
X and Y arms were locked and ASS'd to bring the arm transmission powers to ~1. |
11282 | Mon May 11 14:08:19 2015 | manasa | Update | CDS | c1lsp and c1sup removed?
I just found out that the c1lsp and c1sup models no longer exist on the FE status medm screens. I am assuming some changes were done to the models as well.
Earlier today, I was looking at some of the old medm screens running on Donatella that did not reflect this modification.
Did I miss any elogs about this or was this change not elogged??
Quote: |
I found the c1lsp and c1sup models not running anymore on c1lsc (white blocks for status lights on medm).
To fix this, I ssh'd into c1lsc. c1lsc status did not show c1lsp and c1sup models running on it.
I tried the usual rtcds restart <model name> for both and that returned error "Cannot start/stop model 'c1XXX' on host c1lsc".
I also tried rtcds restart all on c1lsc, but that has NOT brought back the models alive.
Does anyone know how I can fix this??
c1sup runs some of the suspension controls. So I am afraid that the drift and frequent unlocking of the arms we see might be related to this.
P.S. We might also want to add the FE status channels to the summary pages.
|
|
11281 | Mon May 11 13:26:02 2015 | manasa | Update | IMC | MC_F calibration
The last MC_F calibration was done by Ayaka: ELOG 7823
Quote: |
And does anyone know what the MC_F calibration is?
|
|
11280 | Mon May 11 13:21:25 2015 | manasa | Update | CDS | c1lsp and c1sup not running
I found the c1lsp and c1sup models not running anymore on c1lsc (white blocks for status lights on medm).
To fix this, I ssh'd into c1lsc. c1lsc status did not show c1lsp and c1sup models running on it.
I tried the usual rtcds restart <model name> for both and that returned error "Cannot start/stop model 'c1XXX' on host c1lsc".
I also tried rtcds restart all on c1lsc, but that has NOT brought back the models alive.
Does anyone know how I can fix this??
c1sup runs some of the suspension controls. So I am afraid that the drift and frequent unlocking of the arms we see might be related to this.
P.S. We might also want to add the FE status channels to the summary pages. |