Message ID: 13493
Entry time: Thu Dec 28 17:22:02 2017
In reply to: 13492
Reply to this: 13503
Author: gautam
Type: Update
Category: General
Subject: power outage - CDS recovery
- I had to manually reboot c1lsc, c1sus and c1ioo.
- I edited the line in /etc/rt.sh (specifically, on FB, /diskless/root.jessie/etc/rt.sh) that lists the models running on a given frontend, to exclude c1dnn and c1oaf, as these are the models that have been giving us the most trouble on startup. After this, I was able to bring back all models on these three machines using rtcds restart --all. The original line in this file has just been commented out, and can be restored whenever we wish to do so (see the sketch after this list).
- mx_stream processes are showing failed status on all the frontends. As a result, the daqd processes are still not working. Usual debugging methods didn't work.
- Restored all sus dampings.
- Slow computers all seem to be responsive, so no action was required there.
- Burtrestored c1psl to solve the "sticky slider" problem, and relocked the PMC. I didn't do anything further on the PSL table w.r.t. the manual beam block Steve has placed there until the vacuum situation returns to normal (a command-line sketch of the burtrestore also follows this list).
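For reference, a minimal sketch of the rt.sh edit and restart sequence from the second bullet above. The file path and the rtcds command are as stated; the variable name and model list in the comments are illustrative only:

  # On FB, edit the copy of rt.sh seen by the diskless frontends
  sudo nano /diskless/root.jessie/etc/rt.sh
  #   keep the original line commented out so it can be restored later, e.g.
  #   #start_models="c1x04 c1lsc c1ass c1oaf c1cal c1dnn"
  #   start_models="c1x04 c1lsc c1ass c1cal"    # c1dnn and c1oaf removed
  # Then, on each of c1lsc, c1sus and c1ioo:
  rtcds restart --all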
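The c1psl burtrestore in the last bullet was a standard BURT restore; one hedged command-line equivalent is below (the snapshot path is a placeholder - use whichever autoburt snapshot predates the outage; the burtgooey GUI can be used instead):

  # write back the c1psl EPICS settings from a pre-outage autoburt snapshot (path is illustrative)
  burtwb -f /path/to/autoburt/snapshots/<pre-outage-date>/c1pslepics.snap -l /tmp/c1psl_burt.log -v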
@Steve: I noticed that we are down to our final bottle of N2; I'm not sure if it will last till 2 Jan, which is presumably when the next delivery will come in. Since V1 is closed and the PSL beam is blocked, perhaps this doesn't matter.
From Steve: there are spare full N2 bottles at the south end, both outside and inside. I replaced the N2 on Sunday night, so the system should be OK as is.
I also hard-rebooted megatron and optimus as these were unresponsive to ping.
*It seems the mx_stream errors were due to the mx process not being started on FB. I could fix this by running sudo systemctl start mx on FB, after which I ran sudo systemctl restart daqd_*. But the DC errors persist - not sure how to fix this. Previous elogs suggest that "0x4000" errors are connected to timing problems on FB, but restarting the ntp service on FB (which is the suggested fix in those elogs) didn't fix it. I'm also unsure whether the mx process is supposed to start automatically on FB at boot.
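A consolidated sketch of the FB-side recovery sequence described in the paragraph above; the service names are as quoted there, and the status checks are added only for illustration:

  # on FB: start the mx service that the frontend mx_stream processes depend on
  sudo systemctl start mx
  sudo systemctl status mx
  # then restart the daqd processes (quote the glob so systemctl, not the shell, expands it)
  sudo systemctl restart 'daqd_*'
  sudo systemctl status 'daqd_*'
  # on a frontend, verify that mx_stream has recovered
  sudo systemctl status mx_stream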