ID | Date | Author | Type | Category | Subject
153 | Sun Dec 2 17:37:33 2007 | rana | Omnistructure | Computers | Network Cabling in the Office |
We all know that we've spent many integrated man hours trying to figure out why our network connections
in the office area don't work. Usually it's because of the bad hub around the Tobin/Osamu desk.
I pried open some of the wall conduit today and it looks pretty easy to fish cables through. I think
it's time we finally did that. It may be a little disruptive, but I propose we get Larry to come over
and figure out what needs to happen for us to get regular 100 Mbit ports on the walls. These can
then all go over and get connected to a switch in the rack that holds linux1.
Opinions / comments ? |
179 | Fri Dec 7 11:33:24 2007 | waldman | Omnistructure | OMC | PZT wiring |
The 2-pin LEMO connector has one unmarked pin and one pin marked with a white half-circle.
The unmarked pin is connected to the side of the PZT attached to the mirror.
The marked pin is connected to the side of the PZT attached to the tombstone. |
188 | Wed Dec 12 16:22:22 2007 | alberto | Omnistructure | Electronics | LC filter for the RF-AM monitor circuit |
In the LC configuration (see attached schematic) the resonant frequency is tuned to one of the peaks of our RF-AM monitor, and that peak is amplified by a factor equal to the Q of the filter. As I wrote in one of my recent elog entries, we would like amplifications of about 10-30 dB in order to have negligible couplings. Such values are obtained only with small capacitances (a few pF or less). The drawback is a relatively large inductance (uH or more), which inevitably comes with a low self-resonant frequency (SRF - the resonant frequency of the RLC circuit associated with a real inductor - ~ MHz). Another limit, even before that, is the input impedance of the RF amplifier: quality factors > 1 require megaohms, far from the 50 ohms of the MiniCircuits amplifiers I'm using now. So, if we plan to use these even for the final design of the circuit, we have to abandon the LC configuration.
For the same reason, the only way I could get the expected responses from my several test boards was with a 10 megaohm input probe (see attachment for the measurement with and without the probe). Assuming that impedance, I found these to be the best trade-offs between the attenuation requirements and the inductor values for the peaks at 33, 66, 133, 166 and 199 MHz, respectively:
26 uH, 6.6 uH, 20 uH, 73 uH, 16 uH
If we could find inductors with these values and a high SRF, the configuration should work. The problem is that I couldn't find any: above a few uH they all seem to have an SRF of ~ MHz.
That is why I switched to the Butterworth. It should work despite the input impedance of the amplifier, and with much smaller inductances. I made a totally new test circuit with surface-mount components. I think I still have to fix some things in the measurements, but (this time I got rid of the simulator I was using earlier and designed a new configuration with values from Horowitz's tables) it seems I have the expected peaks. More soon. |
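A rough numerical sketch of the LC trade-off described above, assuming a simple parallel-RLC model in which the amplifier input resistance R loads the tank; the component values below are illustrative, not the ones on the test board:

import numpy as np

def parallel_rlc(L, C, R):
    """Resonant frequency and Q of a parallel LC tank loaded by a resistance R."""
    f0 = 1.0 / (2 * np.pi * np.sqrt(L * C))   # f0 = 1 / (2*pi*sqrt(L*C))
    Q = R * np.sqrt(C / L)                    # Q of a parallel tank loaded by R
    return f0, Q

# Illustrative values: pick a small C, then choose L to resonate near 33 MHz
C = 2e-12                                     # 2 pF
L = 1.0 / ((2 * np.pi * 33e6) ** 2 * C)       # ~11.6 uH

for R in (50.0, 10e6):                        # 50 ohm amplifier input vs 10 Mohm probe
    f0, Q = parallel_rlc(L, C, R)
    print(f"R = {R:10.0f} ohm: f0 = {f0/1e6:5.1f} MHz, Q = {Q:9.2f}, "
          f"peak gain ~ {20*np.log10(Q):6.1f} dB")

With a 50 ohm load the Q (and hence the gain at resonance) collapses, which is the reason given above for abandoning the LC configuration with the MiniCircuits amplifiers.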
190 | Thu Dec 13 12:05:36 2007 | alberto | Omnistructure | Electronics | The new Butterworth seems to work quite well |
It works better, probably because of the smaller inductors I'm using this time.
The peak is at 30 MHz because I didn't have the exact component values to reach 33 MHz.
The bandwidth and the Q could be improved by adding one or two more orders to the filter and by better matching the low-pass corner frequency to the high-pass one.
I also have to see whether it can work at 166 and 199 MHz as well. |
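A minimal sketch of a Butterworth band-pass response centered near 33 MHz, using scipy; the filter order and corner frequencies are placeholders, not the values used on the new test board:

import numpy as np
from scipy import signal

# Placeholder design: analog Butterworth band-pass around 33 MHz
f_lo, f_hi = 28e6, 38e6                                   # assumed corner frequencies [Hz]
b, a = signal.butter(2, [2*np.pi*f_lo, 2*np.pi*f_hi],
                     btype='bandpass', analog=True)

f = np.logspace(6.5, 8.5, 2000)                           # ~3 MHz to ~300 MHz
w, h = signal.freqs(b, a, worN=2*np.pi*f)
print(f"response peaks near {f[np.argmax(np.abs(h))]/1e6:.1f} MHz")

Adding the "one or two more orders" mentioned above just means increasing the first argument to signal.butter.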
200 | Wed Dec 19 11:31:01 2007 | steve | Omnistructure | PEM | aircond filter maintenance |
Jeff is working on all of the air conditioning units in the 40m lab.
We do this every six months. |
272 | Sat Jan 26 02:08:53 2008 | John | Omnistructure | LSC | Fibres |
There is now a fibre running from the SP table to the ISCT at the Y-end. In the coming days I will try to mode match the beam from this fibre into the arm through ETMY. To achieve this I will be altering the optical layout of this table. |
310 | Tue Feb 12 13:53:27 2008 | rob | Omnistructure | VAC | Return of the RGA |
The new RGA head was installed a few days ago. I just ran the RGAlogger script to see if it works, which it does. I also edited the crontab file on op340m to run the RGAlogger script every night at 1:25 AM. It should run tonight. |
365 | Fri Mar 7 19:04:39 2008 | steve | Omnistructure | PSL | laser pointer |
Green laser pointer was found in my desk.
I blamed Rana for not returning it to me after a conference talk.
It is surprisingly bright still.
I will bring sweets for the Wednesday meeting. |
453 | Sat Apr 26 11:21:15 2008 | ajw | Omnistructure | Computers | backup of /cvs/cds restarted |
The backup of /cvs/cds (which runs as a cron job on fb40m; see /cvs/cds/caltech/scripts/backup/000README.txt)
has been down since fb40m was rebooted on March 3.
I was unable to start it because of conflicting ssh keys in /home/controls/.ssh .
With help from Dan Kozak, we got it to work with both sets of keys
( id_rsa, which allows one to ssh between computers in our 113 network without typing a password,
and backup2PB which allows the cron job to push the backup files to the archive in Powell-Booth).
It still goes down every time one reboots fb40m, and I don't have a solution.
A simple solution is for the script to send an email whenever it can't connect via ssh keys
(requiring a restart of ssh-agent with a passphrase), but email doesn't seem to work on fb40m.
I'll see if I can get help on how to have sendmail run on fb40m. |
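A minimal sketch of the email-on-failure idea described above, assuming an SMTP relay is reachable from fb40m; the addresses and the test command here are placeholders:

import smtplib
import subprocess
from email.message import EmailMessage

def ssh_key_ok(host="ldas-cit.ligo.caltech.edu"):
    """Return True if a key-based (passwordless) ssh to the backup host works."""
    result = subprocess.run(
        ["ssh", "-o", "BatchMode=yes", "-o", "ConnectTimeout=10", host, "true"],
        capture_output=True)
    return result.returncode == 0

def send_alert(to_addr="controls@example.org", smtp_host="localhost"):
    """Send a short warning email; assumes a local mail relay is available."""
    msg = EmailMessage()
    msg["Subject"] = "fb40m backup: ssh key auth failed, restart ssh-agent"
    msg["From"] = "backup@fb40m"
    msg["To"] = to_addr
    msg.set_content("The nightly backup could not ssh to the archive host; "
                    "ssh-agent probably needs to be restarted with the passphrase.")
    with smtplib.SMTP(smtp_host) as s:
        s.send_message(msg)

if __name__ == "__main__":
    if not ssh_key_ok():
        send_alert()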
464 | Mon May 5 11:04:30 2008 | rob | Omnistructure | Computers | Network setup |
Mafalda was not connected to the network, and so our DMF-based seisBLRMS has not been running for ~1 week. I traced this to a broken ethernet cable connecting mafalda to the network switch in the rack next to the B&W printer. This cable has a broken connector at the switch side, which means it can't stay connected if there's any tension. It needs to be replaced. |
483 | Fri May 16 17:27:55 2008 | Andrey | Omnistructure | General | Toilets are broken, do not use them !!! |
Both toilets in the 40m were constantly flushing and the leaking water was on the floor inside the restrooms, so
BOTH RESTROOMS ARE CLOSED UNTIL MONDAY
I heard the constant loud sound of flushing water, opened the door, and was unpleasantly surprised: the whole floor was under a layer of water and the toilets were flushing continuously. I called security at X5000; a plumber came in and said that a team of plumbers needs to repair the flushing system after the weekend. For today the plumber just shut off the flushing water, wiped up the floor, and said not to use the restrooms over the weekend. We should expect a team of plumbers on Monday.
The sinks are working, so you can wash your hands. |
780 | Fri Aug 1 11:51:15 2008 | justing | Omnistructure | Computers | added /cvs/cds/site directory |
I added a /cvs/cds/site directory. This is the same as is discussed here. Right now it just has the text file 'cit' in it, but eventually the other scripts should be added. I'll probably use it in the next version of mDV. |
885 | Tue Aug 26 09:58:59 2008 | steve | Omnistructure | COC | ETMX is #03 |
This is a picture of ETMX taken from the upper south-west viewport. |
888 | Tue Aug 26 18:19:16 2008 | rana | Omnistructure | Electronics | Resistor Noise at the 40m |
As Stefan points out in his recent ISS ilog entries at LLO, Daniel Sigg recently wrote a
recommendation memo on resistor and capacitor choices: T070016.
While working on the PMC I have had to use leaded resistors and wondered about the noise. As it turns
out we have the RN series of 1/4 W resistors from Stackpole Electronics. The RN series are
metal film resistors (datasheet attached); metal film is what Sigg recommends for lowest flicker
noise.
So we are OK for using the Stackpole 1/4 W leaded resistors in low noise circuits. |
1038 | Fri Oct 10 00:34:52 2008 | rob | Omnistructure | Computers | FEs are down |
The front-end machines are all down. Another cosmic-ray in the RFM, I suppose. Whoever comes in first in the morning should do the all-boot described in the wiki. |
1039 | Fri Oct 10 10:20:42 2008 | Alberto | Omnistructure | Computers | FEs are down |
Quote: |
The front-end machines are all down. Another cosmic-ray in the RFM, I suppose. Whoever comes in first in the morning should do the all-boot described in the wiki. |
Yoichi and I went along the arms turning off and on all the FE machines. Then, from the control room we rebooted them all following the procedures in the wiki. Everything is now up again.
I restored the full IFO, re-locked the mode cleaner. |
1040 | Fri Oct 10 13:57:33 2008 | Alberto | Omnistructure | Computers | Problems in locking the X arm |
This morning, for a reason I didn't clearly understand, I could not lock the X arm. The Y arm was not a problem and the Restore and Align scripts worked fine.
Looking at the LSC medm screen, something strange was happening on the ETMX output. Even though the input switch for c1:LSC-ETMX_INMON was open, there was still some random output going into c1:LSC-ETMX_INMON, and it was not a residual of the restore script running. Probably something bad happened this morning when we rebooted all the FE computers for the RFM network crash that we had last night.
Restarting the LSC computer didn't solve the problem, so I decided to reboot the scipe25 computer, corresponding to c1dcuepics, which controls the LSC channels.
Somehow rebooting that machine erased the parameters on almost all the medm screens. In particular the mode cleaner mirrors got a kick and took a while to stop. I then burtrestored all the medm screen parameters to yesterday, Thursday October 9, at 16:00. After that everything came back to normal. I had to re-lock the PMC and the MC.
Burtrestoring c1dcuepics.snap required editing the .snap file because of a bug in burtrestore for that computer which adds an extra return before the final quote symbol in the file. That bug should be fixed sometime.
The reboot apparently fixed the problem with ETMX on the LSC screen. The strange output is gone and I was able to easily lock the X arm. I then ran the Align and Restore full IFO scripts. |
1070 | Wed Oct 22 20:50:30 2008 | Alberto | Omnistructure | Computers | GPS |
Today I measured the GPS clock frequency at the output of CLOCK_MON on a board in the same crate where the c1iool0 computer is located. The monitor was connected with a BNC cable to the 10 MHz reference input of the frequency counter on top of that rack, where it is used to check the 166 MHz coming from one of the Marconis.
The frequency was supposed to be 10 MHz but I actually measured 8 MHz. I traced the GPS input cable back to the board and it turned out to come from the 1Y7 rack. There it was connected to a board whose display was showing corrupted digits, and some LEDs on its front panel were red.
I'm not sure the GPS reference is working properly. |
1075 | Thu Oct 23 18:45:18 2008 | Alberto | Omnistructure | Computer Scripts / Programs | Python code for GPIB devices developed for the Absl length experiment |
I wrote two Python scripts for my measurement that can also be used or imitated by others: sweepfrequency.py and HP8590.py. The first is the one to run with the Python interpreter (just type "python <script name> <parameters>" from the terminal). It handles the parameters we pass for the measurement and calls the second one, HP8590.py, which actually does most of the job.
Here is what it does. It scans the frequency of the Marconi and, for each step, searches for the highest peak on the spectrum analyzer (which is centered within 50 kHz of the Marconi frequency). It then associates the amplitude of that peak with the Marconi frequency and writes the two numbers into two columns of a file.
The file name, the GPIB-to-LAN interface IP address, the frequency range, the frequency step size and the number of measurements to average for each step are all set by the parameters passed to sweepfrequency.py.
More details are in the script's help, or just look at the header of the code.
I guess one could perform other similar measurements with only small changes to the code, so it could turn out to be useful to others as well. |
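A rough sketch of the sweep loop described above, with hypothetical callables standing in for the Marconi and HP8590 GPIB wrappers; the real scripts' interfaces may differ:

import numpy as np

def sweep(set_marconi_freq, read_spectrum, outfile,
          f_start=30e6, f_stop=40e6, f_step=10e3, n_avg=5, span=50e3):
    """Step the Marconi frequency; at each step record the highest peak seen by
    the spectrum analyzer, averaged over n_avg reads, into two columns of a file.
    set_marconi_freq and read_spectrum are stand-ins for the GPIB instrument wrappers."""
    with open(outfile, "w") as out:
        for f in np.arange(f_start, f_stop + f_step, f_step):
            set_marconi_freq(f)                      # tune the source
            peaks = []
            for _ in range(n_avg):
                freqs, amps = read_spectrum(center=f, span=span)
                peaks.append(np.max(amps))           # highest peak within the span
            out.write(f"{f:.1f}\t{np.mean(peaks):.6e}\n")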
1135 | Fri Nov 14 17:41:50 2008 | Jenne | Omnistructure | Electronics | Sweet New Soldering Iron |
The fancy new Weller Soldering Iron is now hooked up on the electronics bench.
Accessories for it are in the blue twirly cabinet (spare tips of different types, a CD, and a USB cable to connect it to a computer, should we ever decide to do so).
Rana: the soldering iron has a USB port? |
1198 | Sat Dec 20 23:37:43 2008 | rob | Omnistructure | General | Saturday Night Fever after presumed power failure |
Just came by to pick something up...
... alarm handlers screeching...
... TP1 failure--closing V1... call Steve... Steve says ok till tomorrow...
... all front ends down (red)...
... all suspensions watchdogged...
... all (I think) servos off...
... PSL shutter closed ...
... chiller at 15C ... I turned it off to prevent condensation in PA...
... MOPA shutter closed... turned off key on Lightwave power supply
... good luck all, and happy holidays! |
1215 | Sun Jan 4 13:17:23 2009 | Alan | Omnistructure | Computer Scripts / Programs | New 40mWebStatus |
I have set up some code in /cvs/cds/caltech/scripts/webStatus, along with a cron job on controls@nodus, to generate a webStatus every half hour, at 40mWebStatus.
You are welcome to add/delete lines corresponding to interesting EPICS channels in the template /cvs/cds/caltech/scripts/webStatus/webStatus_template.html . The 2nd number is the "golden" value of the EPICS channel; it can be edited by hand, or one could copy a "golden" webStatus.html to webStatus_template.html . I think it's probably premature to automate this...
I noticed that Yoichi also has a cron job posting 40m medm screen snapshots. Very nice.
controls@nodus also runs a third cronjob, which checks if the nightly backup fails, and if so, sends an email to me.
I guess we need some kind of "official" crontab file for controls@nodus so that we know how/where to add things. So, I put one in /cvs/cds/caltech/crontab/controls@nodus.crontab |
1216 | Mon Jan 5 11:21:05 2009 | Alan | Omnistructure | Computer Scripts / Programs | New 40mWebStatus |
Quote: |
I guess we need some kind of "official" crontab file for controls@nodus so that we know how/where to add things. So, I put one in /cvs/cds/caltech/crontab/controls@nodus.crontab |
Alan and I agreed that we should edit the crontab with the "crontab -e" command rather than editing the "official" crontab in /cvs/cds/caltech/crontab/.
After confirming that the new crontab works as expected, you are encouraged to make a copy of the new crontab into /cvs/cds/caltech/crontab/ as a backup.
Then do "svn ci" in the directory. |
1218 | Thu Jan 8 20:26:17 2009 | rob | Omnistructure | General | Earthquake in San Bernardino |
Magnitude 4.5
Date-Time
* Friday, January 09, 2009 at 03:49:46 UTC
* Thursday, January 08, 2009 at 07:49:46 PM at epicenter
Location 34.113°N, 117.294°W
Depth 13.8 km (8.6 miles)
Region GREATER LOS ANGELES AREA, CALIFORNIA
Distances
* 2 km (1 miles) S (183°) from San Bernardino, CA
* 6 km (4 miles) NNE (25°) from Colton, CA
* 8 km (5 miles) E (89°) from Rialto, CA
* 88 km (55 miles) E (86°) from Los Angeles Civic Center, CA
Location Uncertainty horizontal +/- 0.3 km (0.2 miles); depth +/- 0.8 km (0.5 miles)
Parameters Nph=142, Dmin=1 km, Rmss=0.38 sec, Gp= 14°,
M-type=moment magnitude (Mw), Version=Q
I felt it from home.
All the watchdogs are tripped, vacuum normal. It looks like all the OSEM sensor values are swinging, so presumably no broken magnets. I'm leaving the suspensions off so we can take fine-res spectra overnight.
Watch out for crappy cables coming loose. |
1372 | Mon Mar 9 10:59:05 2009 | Alan | Omnistructure | Computers | ssh agent on fb40m restarted for backup |
After the boot-fest, the nightly backup to Powell-Booth failed, and an automatic email got sent to me. I restarted the ssh agent, following the instructions in /cvs/cds/caltech/scripts/backup/000README.txt . |
1392 | Thu Mar 12 00:29:39 2009 | Jenne | Omnistructure | DMF | DMF being whiny again |
Quote: | The seisBLRMS has been running on megatron via an open terminal ssh'd into there from allegra with matlab running. |
[Yoichi, Jenne]
seisBLRMS was down again. I assumed it was just because the DMF Master Enable was in the 'Disabled' state, but enabling it didn't do the trick. Rana's green terminal window was complaining about not being able to find nodus.ligo.caltech.edu. Yoichi and I stopped it, closed and restarted Matlab, ran mdv_config, then ran seisBLRMS again, and it seems happy now.
On the todo list still is making the DMF / seisBLRMS stuff happy all the time. |
1414 | Fri Mar 20 15:54:29 2009 | steve | Omnistructure | General | 480V crane power switch on MEZ |
The CES mezzanine is being rebuilt to accommodate our new neighbor: the 20 ft high water slide... & jacuzzi.
All our AC power transformers are up there. Yesterday we labelled the 480 VAC power switch on the mezzanine
that we need to keep on to run the 3 cranes in the lab. |
1453 | Fri Apr 3 14:52:38 2009 | Jenne | Omnistructure | PEM | Guralp is finally back! |
After many, many "it'll be there in 2 weeks" from the Guralp people, our seismometer is finally back!
I have it plugged into the Guralp breakout box's Channel 1xyz (so I have unplugged the other Guralp). Both of the Guralps are currently sitting under the MC1/MC3 chamber.
Before we can have both Guralps up and running, I need to stuff the next 3 channels of the breakout box (back in the fall, I only had Caryn do 1x, 1y, 1z, and now I need 2x, 2y and 2z done with the fancy low-noise resistors), so all the gains match between the 2 sets of channels.
I'm leaving the new Guralp plugged in so we can see how it behaves for the next couple days, until I take out the breakout box for stuffing. |
1518 | Fri Apr 24 16:24:25 2009 | rob | Omnistructure | VAC | Paschen |
In response to Steve's elog entry, and for 40m posterity, I provide the Paschen Curve. |
1564 | Fri May 8 10:05:40 2009 | Alan | Omnistructure | Computers | Restarted backup since fb40m was rebooted |
Restarted backup since fb40m was rebooted. |
1594 | Sun May 17 20:50:38 2009 | rob | Omnistructure | Environment | mag 5.0 earthquake in inglewood |
2009 May 18 03:39:36 UTC
Earthquake Details
Magnitude 5.0
Date-Time
- Monday, May 18, 2009 at 03:39:36 UTC
- Sunday, May 17, 2009 at 08:39:36 PM at epicenter
Location 33.940°N, 118.338°W
Depth 13.5 km (8.4 miles)
Region GREATER LOS ANGELES AREA, CALIFORNIA
Distances
- 2 km (1 miles) E (91°) from Lennox, CA
- 2 km (1 miles) SSE (159°) from Inglewood, CA
- 3 km (2 miles) NNE (22°) from Hawthorne, CA
- 7 km (4 miles) ENE (72°) from El Segundo, CA
- 15 km (10 miles) SSW (213°) from Los Angeles Civic Center, CA
Location Uncertainty horizontal +/- 0.4 km (0.2 miles); depth +/- 0.9 km (0.6 miles)
Parameters Nph=139, Dmin=7 km, Rmss=0.42 sec, Gp= 40°, M-type=local magnitude (ML), Version=C
Source
Event ID ci10410337 |
1613 | Wed May 20 10:43:17 2009 | steve | Omnistructure | Environment | accelerometers sensitivity |
Quote: |
2009 May 18 03:39:36 UTC
Earthquake Details
Magnitude 5.0
Date-Time
- Monday, May 18, 2009 at 03:39:36 UTC
- Sunday, May 17, 2009 at 08:39:36 PM at epicenter
Location 33.940°N, 118.338°W
Depth 13.5 km (8.4 miles)
Region GREATER LOS ANGELES AREA, CALIFORNIA
Distances
- 2 km (1 miles) E (91°) from Lennox, CA
- 2 km (1 miles) SSE (159°) from Inglewood, CA
- 3 km (2 miles) NNE (22°) from Hawthorne, CA
- 7 km (4 miles) ENE (72°) from El Segundo, CA
- 15 km (10 miles) SSW (213°) from Los Angeles Civic Center, CA
Location Uncertainty horizontal +/- 0.4 km (0.2 miles); depth +/- 0.9 km (0.6 miles)
Parameters Nph=139, Dmin=7 km, Rmss=0.42 sec, Gp= 40°, M-type=local magnitude (ML), Version=C
Source
Event ID ci10410337 |
Wilcoxon 731A seismic accelerometers and the old Guralp CMG-40T seismometer during the magnitude 5 and 4 earthquakes. |
1614 | Wed May 20 16:03:52 2009 | steve | Omnistructure | Environment | using OSEMs to look at seismic activity |
Rana suggested using the OSEM sensing voltages as a guide to look at seismic activity.
As you can see, today's drilling and thumping activity was nothing compared to the EQs of magnitude 5 and 4.
The optical lever servos are turned back on.
What Steve means is that there is some drilling going on in the CES shop to accommodate the new water flume group. We want to
make sure that the mirrors don't move enough to break the magnets. On the dataviewer we should look to make sure that the
sensor channels stay between 0-2 V. -Rana |
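A minimal sketch of that 0-2 V check, assuming a hypothetical helper that fetches trend data for an OSEM sensor channel; the channel name is only an example:

def osem_within_range(fetch_trend, channel="C1:SUS-ETMX_LLSEN_OUTPUT", lo=0.0, hi=2.0):
    """Return True if the fetched trend stays inside [lo, hi] volts.
    fetch_trend is a stand-in for whatever dataviewer/NDS access is used."""
    times, values = fetch_trend(channel)
    vmin, vmax = min(values), max(values)
    ok = lo <= vmin and vmax <= hi
    print(f"{channel}: min = {vmin:.3f} V, max = {vmax:.3f} V, within range: {ok}")
    return ok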
1722 | Wed Jul 8 11:13:36 2009 | Alberto | Omnistructure | Computers | wireless router disconnected |
Once again, this morning I found the wireless router disconnected from the LAN cable. No martian WiFi was available.
I wonder who has been doing this, and why. |
1731 | Fri Jul 10 19:56:23 2009 | rana, koji | Omnistructure | Environment | Changed office temp |
I have increased the temperature setpoint in the office area by ~0.75 deg F. Figure attached. Also, a few days ago I increased the setpoint of the AC in the control room. It looks like the laser is able to handle the changes in office area temperature so far, but let's see how it fares over the weekend. |
1732 | Sun Jul 12 20:05:06 2009 | rana | Omnistructure | Environment | Changed office temp |
This is a 7-day minute trend. There's no obvious effect in any of the channels looking back 2 days.
Seems like the laser chiller fix has made the laser much more immune to the office area temperature. |
1734 | Sun Jul 12 23:14:56 2009 | Jenne | Omnistructure | General | Web screenshots aren't being updated |
Before heading back to the 40m to check on the computer situation, I thought I'd check the web screenshots page that Kakeru worked on, and it looks like none of the screens have been updated since June 1st. I don't know what the story is on that one, or how to fix it, but it'd be handy if it were fixed. |
1738 | Mon Jul 13 15:48:05 2009 | rana | Omnistructure | Environment | Removal of the cold air deflection device for the MOPA chiller |
Around 2 PM today, I removed the blue flap which has been deflecting the cold air from the AC down into the laser chiller.
Let's watch the laser trends for a few days to see if there's any effect. |
1740 | Mon Jul 13 23:03:14 2009 | rob, alberto | Omnistructure | Environment | Removal of the cold air deflection device for the MOPA chiller |
Quote: | Around 2 PM today, I removed the blue flap which has been deflecting the cold air from the AC down into the laser chiller.
Let's watch the laser trends for a few days to see if there's any effect. |
Alberto has moved us to stage 2 of this experiment: turning off the AC.
The situation at the control room computers with the AC on minus the blue flap is untenable--it's too cold and the air flow has an unpleasant eye-drying effect. |
1741 | Tue Jul 14 00:32:46 2009 | rob, alberto | Omnistructure | Environment | Removal of the cold air deflection device for the MOPA chiller |
Quote: |
Quote: | Around 2 PM today, I removed the blue flap which has been deflecting the cold air from the AC down into the laser chiller.
Let's watch the laser trends for a few days to see if there's any effect. |
Alberto has moved us to stage 2 of this experiment: turning off the AC.
The situation at the control room computers with the AC on minus the blue flap is untenable--it's too cold and the air flow has an unpleasant eye-drying effect. |
I turned the AC back on because the temperature of the room was going up, and so was that of the laser chiller. |
1745 | Tue Jul 14 17:48:20 2009 | Jenne | Omnistructure | Environment | Removal of the cold air deflection device for the MOPA chiller |
Quote: |
Quote: |
Quote: | Around 2 PM today, I removed the blue flap which has been deflecting the cold air from the AC down into the laser chiller.
Let's watch the laser trends for a few days to see if there's any effect. |
Alberto has moved us to stage 2 of this experiment: turning off the AC.
The situation at the control room computers with the AC on minus the blue flap is untenable--it's too cold and the air flow has an unpleasant eye-drying effect. |
I turned the AC back on because the temperature of the room was going up so also that of the laser chiller. |
I reinstalled the blue-flap technology on the AC, because the MOPA power was dropping like a rock. A light-ish rock, since it wasn't going down too fast, but the alarms started going off a little while ago because PMC trans was too low, because the power was getting a little low. The laser water chiller is reading 21.97 C, which is higher than it normally reads/did before the AC shenanigans (it usually reads 20.00 C).
Attached is an 18-hour lookback. In the AMPMON you can see the time Rana removed the blue flap around 2pm yesterday (AMPMON changes a little bit, but not drastically), the time around 11pm when the AC was turned off (AMPMON goes down pretty fast), and about 12:30am, when Alberto turned the AC back on (AMPMON starts to recover). I think AMPMON starts to go down again in the morning because it's been crazy hot here in Pasadena, so the room might be getting warmer, especially with the laser chiller-chiller not actively chilling the laser chiller (by not being pointed at the water chiller), so the water isn't getting as cold, and HTEMP started to go up.
In the few minutes since putting the blue flap back on the AC, the laser chiller is already reading a lower temperature and AMPMON is starting to recover. |
1762 | Sun Jul 19 22:38:24 2009 | rob | Omnistructure | General | Web screenshots aren't being updated |
Quote: |
Before heading back to the 40m to check on the computer situation, I thought I'd check the web screenshots page that Kakeru worked on, and it looks like none of the screens have been updated since June 1st. I don't know what the story is on that one, or how to fix it, but it'd be handy if it were fixed.
|
Apparently I broke this when I added op540m to the webstatus page. It's fixed now. |
1780 | Wed Jul 22 18:04:14 2009 | rob | Omnistructure | Computers | weird noise coming from Gigabit switch |
in the rack next to the printer. It sounds like a fan is hitting something. |
1854 | Fri Aug 7 13:42:12 2009 | ajw | Omnistructure | Computers | backup of frames restored |
Ever since July 22, the backup script that runs on fb40m has failed to ssh to ldas-cit.ligo.caltech.edu to back up our trend frames and /cvs/cds.
This was a new failure mode which the scripts didn't catch, so I only noticed it when fb40m was rebooted a couple of days ago.
Alex fixed the problem (RAID array was configured with the wrong IP address, conflicting with the outside world), and I modified the script ( /cvs/cds/caltech/scripts/backup/rsync.backup ) to handle the new directory structure Alex made.
Now the backup is current and the automated script should keep it so, at least until the next time fb40m is rebooted... |
1858 | Fri Aug 7 16:14:57 2009 | rob | Omnistructure | VAC | UPS failed |
Steve, Rana, Ben, Jenne, Alberto, Rob
UPS in the vacuum rack failed this afternoon, cutting off power to the vacuum control system. After plugging all the stuff that had been plugged into the UPS into the wall, everything came back up. It appears that V1 closed appropriately, TP1 spun down gracefully on its own battery, and the pressure did not rise much above 3e-6 torr.
The UPS fizzed and smelled burnt. Rana will order a new, bigger, better, faster one. |
1863 | Fri Aug 7 18:06:24 2009 | rob | Omnistructure | VAC | opening V1 when PTP1 is broken |
We've had a devil of a time getting V1 to open, due to the Interlock code.
The short story is that if C1:Vac-PTP1_pressure > 1.0, the interlock code won't let you push the button to open V1 (but it won't close V1).
PTP1 is broken, so the interlock was frustrating us. It's been broken for a while, but this hasn't bitten us till now.
We tried swapping out the controller for PTP1 with one of Bob's from the Bake lab, but it didn't work.
It said "NO COMM" in the C1:Vac-PTP1_status, so I figured it wouldn't update if we just used tdswrite to change C1:Vac-PTP1_pressure to 0.0. This actually worked, and V1 is now open. This is a temporary fix. |
1927 | Wed Aug 19 02:17:52 2009 | rana | Omnistructure | Environment | Control Room Workstation desks lowered to human height |
There were no injuries...Now we need to get some new chairs. |
1936 | Mon Aug 24 10:43:27 2009 | Alberto | Omnistructure | Computers | RFM Network Failure |
This morning I found all the front end computers down. A failure of the RFM network had driven them all down.
I was about to restart them all, but it wasn't necessary. After I power cycled and restarted C1SOSVME, all the other computers and the RFM network came back to their green status on the MEDM screen. After that I just had to reset and then restart C1SUSVME1/2. |
1977 | Tue Sep 8 19:36:52 2009 | Jenne | Omnistructure | DMF | DMF restarted |
I (think I) restarted DMF. It's on Mafalda, running in matlab (not the compiled version which Rana was having trouble with back in the day). To start Matlab, I did "nohup matlab", ran mdv_config, then started seisBLRMS.m running. Since I used nohup, I then closed the terminal window, and am crossing my fingers in hopes that it continues to work. I would have used Screen, but that doesn't seem to work on Mafalda. |
1979 | Tue Sep 8 20:25:03 2009 | Jenne | Omnistructure | DMF | DMF restarted |
Quote: |
I (think I) restarted DMF. It's on Mafalda, running in matlab (not the compiled version which Rana was having trouble with back in the day). To start Matlab, I did "nohup matlab", ran mdv_config, then started seisBLRMS.m running. Since I used nohup, I then closed the terminal window, and am crossing my fingers in hopes that it continues to work. I would have used Screen, but that doesn't seem to work on Mafalda. |
Just kidding. That plan didn't work. The new plan: I started a terminal window on Op540, which is ssh-ed into Mafalda, and started up matlab to run seisBLRMS. That window is still open.
Because Unix was being finicky, I had to open an xterm window (xterm -bg green -fg black), and then ssh to mafalda and run matlab there. The symptoms which led to this were that even though in a regular terminal window on Op540, ssh-ed to mafalda, I could access tconvert, I could not make gps.m work in matlab. When Rana ssh-ed from Allegra to Op540 to Mafalda and ran matlab, he could get gps.m to work. So it seems like it was a Unix terminal crazy thing. Anyhow, starting an xterm window on Op540m and ssh-ing to mafalda from there seemed to work.
Hopefully this having a terminal window open and running DMF will be a temporary solution, and we can get the compiled version to work again soon. |