Message ID: 16365
Entry time: Wed Sep 29 17:10:09 2021
Reply to this: 16367, 16372
Author: Anchal
Type: Summary
Category: CDS
Subject: c1teststand problems summary

[anchal, ian]
We went and collected some information for the overlords to fix the c1teststand DAQ network issue.
- From c1teststand, the c1bhd and c1sus2 computers were not accessible through ssh ("No route to host"), so we restarted both computers (the I/O chassis were ON).
- After the computers restarted, we were able to ssh into c1bhd and c1sus2, and we ran rtcds start c1x06 and rtcds start c1x07 (the full command sequence is sketched after this list).
- The first page in the attachment shows a screenshot of the GDS_TP screens of the IOP models after this step.
- Then we started the user models by running rtcds start c1bhd and rtcds start c1su2.
- The second page shows a screenshot of the GDS_TP screens. Note that the DAQ status is red on all the screens and the DC statuses are blank.
- So we checked whether the daqd_ services were running on the fb computer. They were not, so we started them all with sudo systemctl start daqd_*.
- The third page shows the status of all services after this step; daqd_dc.service remained in a failed state.
- open-mx_stream.service was not even loaded on fb. We started it by running sudo systemctl start open-mx_stream.service.
- The fourth page shows the status of this service. It started without any errors.
- However, when we went to check the status of mx_stream.service on c1bhd and c1sus2, it was not loaded on either machine, and when we tried to start it, it went to a failed state and kept trying to restart every 3 seconds without success (see pages 5 and 6).
- Finally, we also took a screenshot of the timedatectl output on the three computers (fb, c1bhd, and c1sus2) to show that their times were not synced at all.
- The ntp service is running on fb but it probably does not have access to any of the servers it is following.
- timesyncd on c1bhd and c1sus2 (the FE machines) is also running but shows status 'Idle', which suggests they are unable to find the NTP signal from fb.
- I believe this issue is similar to what Jamie fixed on fb1 on the martian network in 40m/16302. Since the fb on the c1teststand network was cloned before that fix, it might have the same dysfunctional ntp setup as well.
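For reference, here is the command sequence from the steps above, consolidated into one sketch (hostnames and model names as above; each group is run on the machine noted in the comment):

    # On c1bhd and c1sus2, after rebooting them: start the IOP models, then the user models
    rtcds start c1x06      # IOP model on c1bhd
    rtcds start c1bhd      # user model on c1bhd
    rtcds start c1x07      # IOP model on c1sus2
    rtcds start c1su2      # user model on c1sus2

    # On fb: start the DAQ services and the open-mx stream, then check their status
    sudo systemctl start daqd_*
    systemctl status daqd_*                      # daqd_dc.service remained in failed state
    sudo systemctl start open-mx_stream.service
    systemctl status open-mx_stream.service      # started without errors

    # On c1bhd and c1sus2: mx_stream.service was not loaded and would not start
    systemctl status mx_stream.service

    # On fb, c1bhd, and c1sus2: check time synchronization
    timedatectl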
We will try to get internet access to c1teststand soon. Meanwhile, someone with more experience and knowledge should look into this situation and try to fix it (a few starting-point checks are sketched below). We need to test c1teststand within a few weeks now.
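For whoever picks this up, a minimal sketch of checks for the suspected NTP problem, assuming the FE machines use systemd-timesyncd as seen above and that fb is meant to serve time to them (we have not run these yet; just a starting point):

    # On fb: is the ntp daemon running, and can it reach any of its servers?
    systemctl status ntp
    ntpq -p        # a 'reach' of 0 for every server would mean fb has no NTP access at all

    # On c1bhd and c1sus2: what is timesyncd doing, and which server is it configured to use?
    systemctl status systemd-timesyncd
    timedatectl status
    cat /etc/systemd/timesyncd.conf    # the NTP= line should point at fb if fb is the time server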