Thread:
Sun Aug 5 13:28:43 2018, gautam, Update, CDS, c1lsc flaky
Mon Aug 6 00:26:21 2018, gautam, Update, CDS, More CDS woes
Mon Aug 6 14:38:38 2018, gautam, Update, CDS, More CDS woes
Mon Aug 6 19:49:09 2018, gautam, Update, CDS, More CDS woes
Tue Aug 7 11:30:46 2018, gautam, Update, CDS, More CDS woes
Tue Aug 7 22:28:23 2018, gautam, Update, CDS, More CDS woes
Wed Aug 8 23:03:42 2018, gautam, Update, CDS, c1lsc model started
Thu Aug 9 12:31:13 2018, gautam, Update, CDS, CDS status update
Wed Aug 15 21:27:47 2018, gautam, Update, CDS, CDS status update
Tue Sep 4 10:14:11 2018, gautam, Update, CDS, CDS status update
Wed Sep 5 10:59:23 2018, wgautam, Update, CDS, CDS status update
Thu Sep 6 14:21:26 2018, gautam, Update, CDS, ADC replacement in c1lsc expansion chassis
Fri Sep 7 12:35:14 2018, gautam, Update, CDS, ADC replacement in c1lsc expansion chassis
Mon Sep 10 12:44:48 2018, Jon, Update, CDS, ADC replacement in c1lsc expansion chassis
Thu Sep 20 11:29:04 2018, gautam, Update, CDS, New PCIe fiber housed
Thu Sep 20 16:19:04 2018, gautam, Update, CDS, New PCIe fiber install postponed to tomorrow
Fri Sep 21 16:46:38 2018, gautam, Update, CDS, New PCIe fiber installed and routed
Message ID: 14133
Entry time: Sun Aug 5 13:28:43 2018
Reply to this: 14136
Author: gautam
Type: Update
Category: CDS
Subject: c1lsc flaky

Since the lab-wide computer shutdown last Wednesday, all the realtime models running on c1lsc have been flaky. The error is always the same:
[58477.149254] c1cal: ADC TIMEOUT 0 10963 19 11027
[58477.149254] c1daf: ADC TIMEOUT 0 10963 19 11027
[58477.149254] c1ass: ADC TIMEOUT 0 10963 19 11027
[58477.149254] c1oaf: ADC TIMEOUT 0 10963 19 11027
[58477.149254] c1lsc: ADC TIMEOUT 0 10963 19 11027
[58478.148001] c1x04: timeout 0 1000000
[58479.148017] c1x04: timeout 1 1000000
[58479.148017] c1x04: exiting from fe_code()
This has happened at least four times since Wednesday. The reboot script makes recovery easier, but doing it once every two days is getting annoying, especially since we are running many things (e.g. ASS) in custom configurations that have to be reloaded each time. I wonder why the problem persists even though I've power-cycled the expansion chassis.

I want to do some IFO characterization today, so I'm going to run the reboot script again, but I'll get in touch with J Hanks to see if he has any insight (I don't think there are any logfiles on the FEs that a reboot would wipe out anyway). Could this problem be connected to DuoTone? But if so, why is c1lsc the only FE with this problem? c1sus also does not have the DuoTone system set up correctly.

The last time this happened, the problem apparently fixed itself, so I still don't have any insight into what is causing it in the first place. Maybe I'll try disabling c1oaf, since that's the configuration we've been running in for a few weeks.
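Since the dmesg signature is identical on every crash, one option would be a small watcher that counts "ADC TIMEOUT" lines in the kernel log and flags the failure before a dead model is noticed by hand. A minimal sketch, assuming a grep-based check (the function and its name are hypothetical and are not the actual 40m reboot script; only the log format is taken from the excerpt above):

```shell
#!/bin/sh
# Hypothetical helper: count "ADC TIMEOUT" lines in a chunk of kernel-log
# text. In practice this would be fed from `dmesg` on c1lsc; here a sample
# log, copied from the excerpt above, is used so the sketch is self-contained.
check_adc_timeout() {
    # grep -c prints the number of matching lines (one per crashed model).
    printf '%s\n' "$1" | grep -c "ADC TIMEOUT"
}

log='[58477.149254] c1cal: ADC TIMEOUT 0 10963 19 11027
[58477.149254] c1lsc: ADC TIMEOUT 0 10963 19 11027
[58478.148001] c1x04: timeout 0 1000000'

n=$(check_adc_timeout "$log")
echo "ADC timeouts seen: $n"
```

Run periodically (e.g. from cron) against `dmesg` output, a nonzero count could trigger an alert, or even the reboot script itself, before custom configurations sit unloaded for hours.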