Spare optics from the AP table were moved to the glass cabinet in the east arm. I'm not sure this is the right place; we'll see what everybody thinks.
There were two UNMARKED optics. Shame on you! No pencil marks on the optics either. These optics were shipped to the FBI for fingerprint analysis.
SOS optics prepared for hanging were moved from the south flow bench to the S15 clean cabinet.
SRMU 03 (1-25-2010), specification summary E080460-05-D; older vintage SRM 01 and PRM 02 (need more specifications).
There was one UNMARKED SOS with two broken magnets on its face. This is labeled ???
This was done to prepare clean space for TIP-TILT drive- test set up.
The existing cable from 1X5 can reach only the south end: from the whitening filter to the satellite amp. This will be good for future suspension tests.
We need to make a new cable from 1X1 to the south end, ~40 m long.
While I'm not sure what specific optic this is, I think it's an older optic. (a) All of the new optics we got from Ramin were inscribed with their numbers. (b) This optic appears to have a short arrow scribe line (about the length of the guide rod), and no scribe line (that I could see through the glass dish) on the other side. The new optics all have a long arrow scribe line, ~1/2 the full width of the optic, and have clear scribe lines on the opposite side.
Earthquake stops need Viton tips.
Wire standoffs are still aluminum.
Bah, we need ruby slippers for all future suspensions. Prism with curved backside and smooth grooves.
No aluminum, no cry.
The spare M126N-1064-700, SN 5519 (Dec 2006 rebuilt NPRO): power output
measured 750 mW at 2.06 A diode current with an Ophir meter.
Alberto's controller unit 125/126-OPN-PS, SN 516m, was disconnected from the length-measurement NPRO on the AP table.
The 5519 NPRO was clamped to the optical table without a heatsink, and it was on for 15 minutes.
Spare iLIGO electronics are temporarily stored in the east arm. We need cabinet space.
(on tower)
Buy Ni-coated ones for future use from www.electroenergy.com.
OK for the larger ruby.
The unit is not in perfect condition but is useful.
Picked up the FC from Gary; purchase date 7-7-2015.
NOT finished, last edited 7-7
The crane I-beam is now leveled at all degrees of rotation. The lower hinge was moved southward by about 1/4 inch. Performance was tested at 2000 lbs.
Atm1, work in progress
Atm2, load test at 1 Ton
Atm3, service report
The air conditioning was off for the south arm. I turned it on.
Looks like either the LR OSEM is totally misadjusted in its holder or the whitening electronics are broken.
Also looks like the ETMY is just not damped at 1 Hz? How can this be?
I look at the SUS_SUMMARY screen which apparently only Steve and I look at:
Looks like the suspensions have factor of 10-100 different gains. Why?
** The ETMY just doesn't behave correctly when I bias it. Both pitch and yaw seem to make it do yaw. I leave this for Jamie to debug in the morning.
*** Also, the BIAS buttons are still broken - the HOPR/LOPR limits ought to be 5000 and the default slider increment be 100. Also the YAW button readback doesn't correctly show the state of the BIAS.
**** And.....we have also lost the DAQ channels that used to be associated with the _IN1 of the SUSPOS/PIT/YAW filter modules. Please put them back; our templates don't work without them.
The measured change in the REFL DC power with and without the PRM aligned seems unacceptably small. Is something wrong?
The difference in the power with and without PRM aligned should be more than a factor of 300.
[difference in power] = [single bounce off PRM] / [two transmissions through PRM]
= (1 - T) / T^2 ~ 310,
where T is the transmissivity of PRM and T = 5.5% is assumed in the calculation.
Also the reflectivity of MICH is assumed to be 1 for simplicity.
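A quick numerical check of the factor quoted above, using the stated assumptions (PRM power transmissivity T = 5.5%, MICH reflectivity of 1):

```python
# Expected ratio of REFL DC power: PRM misaligned (single bounce, 1 - T)
# versus PRM aligned (carrier transmitted through the PRM twice, T^2).
T = 0.055  # PRM power transmissivity assumed in the entry
ratio = (1 - T) / T**2
print(round(ratio))  # 312, i.e. ~310 as quoted
```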
We now have (with the PRM misaligned):
REFL11: Power incident = 7.60 mW ; DC out = 0.330 V => efficiency = 0.87 A/W
REFL55: Power incident = 23 mW ; DC out = 0.850 V => efficiency = 0.74 A/W
and with the PRM aligned:
REFL11: DC out = 0.35 V => 8 mW is incident
REFL55: DC out = 0.975 V => 26 mW is incident
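The efficiency numbers above are consistent with the DC outputs if one assumes a 50 Ohm DC transimpedance (an assumption here; the actual DC path gain should be checked against the RFPD schematic):

```python
def responsivity(dc_volts, power_w, transimpedance_ohm=50.0):
    """Photodiode responsivity in A/W inferred from the DC output.
    The 50 Ohm DC transimpedance is an assumption, not from the elog."""
    return dc_volts / (transimpedance_ohm * power_w)

print(round(responsivity(0.330, 7.60e-3), 2))  # 0.87 A/W (REFL11)
print(round(responsivity(0.850, 23e-3), 2))    # 0.74 A/W (REFL55)
```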
Just tying up a loose end: the next day Kiwamu and I checked what the trouble was. We concluded that the PRM had not moved during my measurement even though I had 'Misaligned' it from the MEDM screen, so all the power levels measured here were with the PRM aligned. The power level change was subsequently measured and elogged.
Just a quick report.
The AS55 signal contains more noise than the REFL signals.
Why? Is this the reason for the instability in PRMI?
I locked the power-recycled ITMY cavity (PR-ITMY) with REFL33.
As shown in the plot above, I compared the in-loop signal (REFL33) and out-of-loop signals (REFL11 and AS55).
All the signals are calibrated into displacement of the PR-ITMY cavity, using a calibration peak injected at 283 Hz through the PRM actuator.
AS55 (blue curve) showed a structure around 3 Hz and higher flat noise below 1 Hz.
* Temporary strain relief for the heliax cables on 1X2 (Steve)
* RF diagrams and check lists (Suresh)
=> In the lunch meeting we will discuss the details about what we will do for the RF installation.
* Electronics design and plan for Green locking (Aidan / Kiwamu)
=> In the lunch meeting we will discuss the details.
* LSC model (Koji)
* Video cable session (team)
* LPF for the laser temperature control (Larisa)
$TARGET_DIR = /cvs/cds/caltech/target
It remains to (Jon is taking care of these)
1) Checked the N2 pressures: the unregulated cylinder pressures are both around 1500 PSI. How long until they get to 1000?
2) The IMC has been flaky for a day or so; don't know why. I moved the gains in the autolocker so now the input gain slider to the MC board is 10 dB higher and the output slider is 10 dB lower. This is updated in the mcdown and mcup scripts and both committed to SVN. The trend shows that the MC was wandering away after ~15 minutes of lock, so I suspected the WFS offsets. I ran the offsets script (after flipping the z servo signs and adding 'C1:' prefix). So far powers are good and stable.
3) pianosa was unresponsive and I couldn't ssh to it. I powered it off and then it came back.
4) Noticed that DAQD is restarting once per hour on the hour. Why?
5) Many (but not all) EPICS readbacks are whiting out every several minutes. I remote booted c1susaux since it was one of the victims, but it didn't change any behavior.
6) The ETMX and ITMX have very different bounce mode response: should add to our Vent Todo List. Double checked that the bounce/roll bandstop is on and at the right frequency for the bounce mode. Increased the stopband from 40 to 50 dB to see if that helps.
op340m is still running! The only reason to keep it alive is its crontab:
07 * * * * /opt/rtcds/caltech/c1/burt/autoburt/burt.cron >> /opt/rtcds/caltech/c1/burt/burtcron.log
#46 * * * * /opt/rtcds/caltech/c1/scripts/general/scripto_cron /opt/rtcds/caltech/c1/scripts/PSL/FSS/FSSSlowServo > /cvs/cds/caltech/logs/scripts/FSSslow.cronlog 2>&1
#14,44 * * * * /cvs/cds/caltech/conlog/bin/check_conlogger_and_restart_if_dead
15,45 * * * * /opt/rtcds/caltech/c1/scripts/SUS/rampdown.pl > /dev/null 2>&1
#10 * * * * /opt/rtcds/caltech/c1/scripts/general/scripto_cron /opt/rtcds/caltech/c1/scripts/MC/autolockMCmain40m >/cvs/cds/caltech/logs/scripts/mclock.cronlog 2>&1
#27 * * * * /opt/rtcds/caltech/c1/scripts/general/scripto_cron /opt/rtcds/caltech/c1/scripts/PSL/FSS/RCthermalPID.pl >/cvs/cds/caltech/logs/scripts/RCthermalPID.cronlog 2>&1
00 0 * * * /var/scripts/ntp.sh > /dev/null 2>&1
#00 4 * * * /opt/rtcds/caltech/c1/scripts/RGA/RGAlogger.cron >> /cvs/cds/caltech/users/rward/RGA/RGAcron.out 2>&1
#00 6 * * * /cvs/cds/scripts/backupScripts.pl
00 7 * * * /opt/rtcds/caltech/c1/scripts/AutoUpdate/update_conlog.cron
00 8 * * * /opt/rtcds/caltech/c1/scripts/crontab/backupCrontab
Added a new script (scripts/SUS/rampdown.py) which decrements the watchdog thresholds every 30 minutes if needed. Added this to the megatron crontab and commented out the op340m crontab line. If this works for a while, we can retire our last Solaris machine.
8) To see if we could get rid of the wandering PCDRIVE noise, I looked into the NPRO temperatures: T_crystal = 30.89 C, T_diode1 = 21 C, T_diode2 = 22 C. I moved the crystal temp up to 33.0 C to see if it makes the noise more stable. Then I used the trimpots on the front of the controller to maximize the laser output at these temperatures; it was basically maximized already. Let's see if there's any qualitative difference after a week. I'm attaching the pinout for the DSUB25 diagnostics connector on the back of the box. Aidan is going to help us record this stuff with Acromag tech so that we can see if there's any correlation with PCDRIVE. The shifts in FSS_SLOW coincident with the PCDRIVE noise correspond to ~100 MHz, so it seems like it could be NPRO related.
For some reason, my email address is the one that megatron complains to when cron commands fail; since 11:15 PM last night, I've been getting emails that the rampdown.py line is failing, with the super-helpful message:
expr: syntax error
Yes - my rampdown.py script correctly ramps down the watchdog thresholds. This replaces the old rampdown.pl Perl script that Rob and Dave Barker wrote.
Unfortunately, cron doesn't correctly inherit the bashrc environment variables, so it's having trouble running.
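For reference, the decrement rule such a script needs is simple. A minimal sketch (the step size and the pure-Python form are assumptions of this note; the real logic lives in scripts/SUS/rampdown.py and talks to the EPICS channels, which is omitted here):

```python
def ramp_down(current, target, step=100.0):
    """Return the next watchdog threshold: step toward target, never below it.
    (The fixed step of 100 counts is an illustrative assumption.)"""
    if current <= target:
        return current  # already at or below the goal, nothing to do
    return max(target, current - step)

# called from cron every 30 minutes; each call moves the threshold one step
print(ramp_down(1500.0, 1000.0))  # 1400.0
```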
On a positive note, I've resurrected the MEDM Screenshot taking cron job, so now this webpage is alive (mostly) and you can check screens from remote:
It looks like daqd isn't being restarted, but is in fact crashing every hour.
Going into the logs in target/fb/logs/old, it looks like at 10 seconds past the hour, every hour, daqd starts spitting out:
[Mon May 18 12:00:10 2015] main profiler warning: 1 empty blocks in the buffer
[Mon May 18 12:00:11 2015] main profiler warning: 0 empty blocks in the buffer
[Mon May 18 12:00:12 2015] main profiler warning: 0 empty blocks in the buffer
[Mon May 18 12:00:13 2015] main profiler warning: 0 empty blocks in the buffer
An ELOG search on this kind of phrase will get you a lot of talk about FB transfer problems.
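To confirm the on-the-hour pattern, the profiler warnings can be scraped straight out of the logs. A sketch (the exact log format is taken from the excerpt above; the file path would be target/fb/logs/old):

```python
import re

# matches e.g. "[Mon May 18 12:00:10 2015] main profiler warning: 1 empty blocks in the buffer"
PAT = re.compile(r"\[\w+ (\w+ +\d+) (\d+):(\d+):(\d+) \d+\] main profiler warning: (\d+) empty blocks")

def crash_minutes(lines):
    """Return the distinct minutes-past-the-hour at which profiler warnings appear."""
    return sorted({int(m.group(3)) for line in lines if (m := PAT.search(line))})

log = [
    "[Mon May 18 12:00:10 2015] main profiler warning: 1 empty blocks in the buffer",
    "[Mon May 18 13:00:11 2015] main profiler warning: 0 empty blocks in the buffer",
]
print(crash_minutes(log))  # [0] -> the warnings cluster at the top of the hour
```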
I noticed the framebuilder had 100% usage on its internal, non-RAID, non /frames/, HDD, which hosts the root filesystem (OS files, home directory, diskless boot files, etc), largely due to a ~110GB directory of frames from our first RF lock that had been copied over to the home directory. The HDD only has 135GB capacity. I thought that maybe this was somehow a bottleneck for files moving around, but after deleting the huge directory, daqd still died at 4PM.
The offsite LDAS rsync happens at ten minutes past the hour, so is unlikely to be the culprit. I don't have any other clues at this point.
Today at 5 PM we replaced the east N2 cylinder. The east pressure was 500 PSI and the west cylinder pressure was 1000 PSI. Since Steve's elogs say that the consumption can be as high as 800 PSI per day, we wanted to be safe.
The c1cal model was maxing out its CPU meter so I logged onto c1lsc and did 'rtcds c1cal stop'. Let's see if this changes any of our FB / DAQD problems.
In the few hours since today's c1cal shutoff, the summary page shows no dropouts. I'm not yet sure this is related, but it seems like a clue.
After one day the pressures are east/west = 2200/450 PSI
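For bookkeeping: the west cylinder going from 1000 to 450 PSI in one day implies a consumption around 550 PSI/day, within Steve's quoted "up to 800 per day". A trivial check (the assumption is that the rate stays constant):

```python
def days_remaining(pressure_psi, empty_psi=0.0, rate_psi_per_day=550.0):
    """Days until a cylinder reaches empty_psi at the observed consumption rate.
    (Rate inferred from the one-day 1000 -> 450 PSI drop noted above.)"""
    return (pressure_psi - empty_psi) / rate_psi_per_day

rate = 1000 - 450  # PSI consumed in one day
print(rate)                            # 550
print(round(days_remaining(2200), 1))  # 4.0 days for the east cylinder
```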
I think that the real clue was that the dropouts are in some channels and not in others:
As it turns out, the channel with no dropouts is the RAW PSL RMTEMP channel. All the others are the minute trends. So something is up with the trend making or the trend reading in the cluster.
The west cylinder is empty, the east is at 2000 PSI; the regulated N2 pressure is 64 PSI. I'll replace the west one after the meeting.
Some of the sub-suspension screens need labels to describe what those rows and columns are.
Yesterday (Sep 25) evening: I had to reboot c1psl, c1iool0, and c1aux to recover nominal IMC locking
Today megatron was unresponsive and I had to reboot it with the reset button. MCautolocker and FSSSlow were recovered, and the IMC is locking as usual.
Since we have set up the POP22 PD now (elog #8192), we could confirm that the sideband power builds up when PRMI is sideband locked.
Here are some plots of the PRC intra-cavity powers and the MICH and PRCL error signals. As you can see from POP22, we locked at the peak of the 11 MHz sideband. There was an oscillation at ~500 Hz, but we couldn't optimize the gain yet.
Here's a 30 sec movie of AS, POP, and REFL while acquiring (and losing) PRMI sideband lock. It was pretty hard to take a movie because it locks pretty seldom (~1 lock / 10 min).
For MICH lock, we used the ITMs instead of the BS to reduce coupling to PRCL.
Also, the AS55 phase rotation angle was coarsely optimized by minimizing the MICH signal in I.
For PRCL lock, we used REFL55_I_ERR instead of REFL33_I_ERR. It had a better PDH signal, and we coarsely optimized its phase rotation angle by minimizing the PRCL PDH signal in Q.
== PRMI sideband ==
MICH: AS55_Q_ERR, AS55_PHASE_R = -12 deg, MICH_GAIN = -0.1, feedback to ITMX(-1),ITMY(+1)
PRCL: REFL55_I_ERR, REFL55_PHASE_R = 70 deg, PRCL_GAIN = -15, feedback to PRM
We set POP22_PHASE_R = -170 deg by minimizing Q.
- We tried to use REFL55_Q_ERR to lock MICH, but couldn't. It looks like REFL error signals are bad.
- We tried to use POP22_I_ERR to trigger PRCL lock, but it didn't work.
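The phase tuning described above (e.g. setting POP22_PHASE_R by minimizing Q) amounts to rotating the demodulated I/Q pair. A sketch of the arithmetic (the sign convention is an assumption and should be checked against the actual phase rotator in the front end):

```python
import math

def rotate_iq(i, q, phase_deg):
    """Rotate the demodulated (I, Q) pair by phase_deg (sign convention assumed)."""
    c, s = math.cos(math.radians(phase_deg)), math.sin(math.radians(phase_deg))
    return i * c + q * s, -i * s + q * c

def phase_that_nulls_q(i, q):
    """Phase (deg) that puts the whole signal into I, i.e. minimizes |Q|."""
    return math.degrees(math.atan2(q, i))

i2, q2 = rotate_iq(3.0, 4.0, phase_that_nulls_q(3.0, 4.0))
print(round(i2, 6), round(abs(q2), 6))  # 5.0 0.0 -> all of the signal is in I
```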
Good progress in IFO locking tonight, with the arm powers reaching about half the full resonant maximum.
Still to do: check out some weirdness with the OMC DAC, fix the wireless network, and look at the c1susvme2 timing.
Today we did the following work to get ready for the new CDS test.
- Solved the DAC issue.
- Checked all the channel assignments of the ADCs and the DACs.
- Prepared for modification of the AA filter chassis.
- Checked the DAC cable lengths.
- Connected the power cables of the BO boards to the Sorensens.
Despite this work, we still couldn't do the actual damping tests.
To do the damping tests, we have to modify the AA chassis so that the SCSI cables can go into it. Joe and Steve are now working on this.
We also found that we need to make three more 37-pin D-sub to 40-pin IDC cables.
This is not a critical issue because the cables and the connectors are already in hand, so we can make them any time.
Now all the DAC channels are working correctly.
There had been a combination of issues.
When I posted my last elog entry, the DAC was not working at all (see here).
But last week Joe found that the IOP process wasn't running correctly. He modified the IOP file 'c1x02.mdl', compiled and installed it, and ran it.
This improved the situation: we were then able to see most of the signals coming out of the DACs.
However, we never saw any signals associated with the SIDE_COILs.
We checked the DAC cards, their slots, and their timing signals, but they were all fine.
At that point we were a bit confused, and we lost a couple of days because the DAC signals appeared at a different slot some time after we rebooted the computer. This issue actually remains unsolved...
Finally we found that the SIDE_COILs had an input matrix which didn't show up in the MEDM screen.
We put a 1 in the matrix and successfully got the signal coming out of the DAC.
We checked all the channel assignments of the DACs and the ADCs.
All the channels are assigned correctly (i.e. pin number and channel name).
We have been planning to put the SCSI cables into the AA chassis to get the ADC signals.
As Joe said in a past entry (see here), we need to modify the AA chassis to let the SCSI cables go in.
Joe and Steve will add an extension piece so that the chassis becomes wider and the SCSI cables can eventually fit.
(DAC cable length)
In the default plan we are going to reuse some DAC cables which are connected to the existing systems.
To reuse them, we had to make sure that those cables are long enough for the new CDS.
After stopping the watchdogs, we disconnected the DAC cables and confirmed they are long enough.
The cables are now reconnected to their original places.
The same test will be performed for the binary outputs.
(power cables to Sorensens)
Since the binary output boards need +/- 15V power, we hooked up the power cables to Sorensens sitting on the new 1X5 rack.
After cabling them, we turned on the power and successfully saw the green LEDs shining on the back panel of the boards.
Here I show two photos of the latest ABSL (ABSolute Length measurement) setup.
Figure 1: A picture of the ABSL setup on the AP table.
The setup has been slightly modified from before (#4923).
As I said in entry #4923, the old way of sampling the ABSL laser wasn't good because the sampled beam didn't go through the Faraday.
In this latest configuration the laser is sampled after the faraday with a 90% beam splitter.
The transmitted light from the 90% BS (drawn in pink) is sent to the PSL table through the access tube which connects the AP and PSL tables.
Figure 2: A picture of the ABSL setup on the PSL table.
The 10% sampled beam ( pink beam in the picture) eventually comes to the PSL table via the access tube (the hole on the left hand side of the picture).
Then the ABSL beam goes through a mode matching telescope, which consists of a combination of a concave and a convex lens.
The PSL laser (red line in the picture) is sampled from a point after the doubling crystal.
The beams are combined at a 50% BS, which has been set up for several purposes (see for example #3759 and #4339).
A fast response PD (~1 GHz) is used for the beat-note detection.
For the EY table, instead of leveling it, I just moved the weight approximately so that the ETMY OSEMs were at half light; I didn't check the level since ETMY is the only optic there.
Some notes on OMC/AS work (Aaron/Gautam can amend/correct):
- Beam is now well centered in OMC MMT. Hits input coupling mirror and cleanly exits the vacuum to the AS table.
- Didn't see much on OMC trans, but PDs are good based on flashlight test.
- Just before closing, we re-aligned the beam in yaw so that it gets close to the east screw on the input coupler. Aaron and I think we maybe saw a flash there with the OMC length PZT being driven over its full range by a triangle wave.
- with OMC Undulators (aka tip/tilt PZT mirrors) energized, the beam was low on PZT1 mirror. We pitched ITMY by ~150 micro-rad and that centered the beam on PZT1 mirror. ITMY-OL is probably not better than 100 urad as a DC reference?
- We checked the range of Undulator 1: we got ~5 mrad of beam yaw over the full range, and perhaps half of that in pitch. Rob Ward emailed us from Oz to say that the range is probably 2.7 mrad, so that checks out.
Even if the ITMY has to be in the wrong position to get the beam to the OMC, we can still do the heater tests in one position and then do the OMC checkout stuff in the other position.
Gautam suspects that there is a possible hysterical behaviour in the Undulators which is related to the MC3 glitching and the slow machine hangups and also possibly the illuminati.
- We noticed a ghost beam from MC REFL (MMT2) that should be dumped during the next vent; it travels parallel to the OMC's long axis and nearly hits one of the steering mirrors for OMC REFL.
- We measured the level of the table and found it ~3 divisions off from level, with the south end tilted up.
-Gautam rotated and slightly translated OM5 to realign the optic, as expected. No additional optics were added.
- Gautam and I tested the TT piezo driver. We found that 3.6 V at the driver's input gave 75 V (of 150 V) at the output, at least for yaw on piezo 1. However, as Gautam mentioned, during testing it seemed that the other outputs may have different (nonzero) offset voltages, or some hysteresis.
[Koji / Kiwamu]
We did several tests to figure out what could be a source of the computer issue.
The Dolphin switch box looks suspicious, but we are not 100% sure.
(what we did)
+ Removed the pciRfm part from the c1x04 model to disable the Dolphin connection in software.
+ Found no difference in the resulting Makefile, which is supposed to comment out the Dolphin connection lines.
==> So we had to edit the Makefile ourselves.
+ Did a hand-compile by editing the Makefile and running make.
+ Restarted the c1x04 process, and it ran without problems.
==> The Dolphin connection was somehow preventing the c1x04 process from running.
+ Unplugged the Dolphin cables on the back of the Dolphin box and re-plugged them into other ports.
==> This didn't improve the issue.
+ During these tests, c1lsc frequently froze. We disabled the automatic start of c1lsc, c1ass, and c1oaf by editing rtsystab.
==> after the test we reverted it.
+ We reverted everything to the previous configuration.
[Rana, Suresh, Kiwamu]
We did the following things:
* taking the VCO stability data from the error signal instead of the feedback
* tried calibrating the signal but confused
* increased the modulation depth of the green end PDH.
We found that a cable coming out from the VCO box was quite touchy. This cable was used for taking the feedback signal.
When we touched the cable it made a big noise in the feedback, so we decided to remove the cable and take the signal from the error point (i.e. just after the mixer and the LPF).
In order to convert that signal to the equivalent feedback signal, we put in a digital filter which is exactly the same as that of the PLL (pole at 1.5 Hz, zero at 40 Hz, G = 1).
However, for some reason the signal shown on the digital side looked completely mis-calibrated, by ~100. We have no idea what is going on.
Anyway, we are taking the data overnight because we can correct the signal later. The 2nd round of data started at 1:40 AM.
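To sanity-check the correction filter (pole at 1.5 Hz, zero at 40 Hz), its magnitude can be evaluated directly; note that with unity DC gain (my reading of "G=1", which is an assumption) the high-frequency gain is only 1.5/40 ~ 0.0375, nowhere near a factor of ~100, so the filter itself doesn't explain the discrepancy:

```python
import math  # not strictly needed; complex() below does the work

def filt_mag(f, f_pole=1.5, f_zero=40.0, g_dc=1.0):
    """|H(f)| for H(s) = g_dc * (1 + s/w_zero) / (1 + s/w_pole), DC gain g_dc."""
    num = complex(1.0, f / f_zero)
    den = complex(1.0, f / f_pole)
    return g_dc * abs(num) / abs(den)

print(round(filt_mag(0.01), 3))  # 1.0    -> unity at DC
print(round(filt_mag(1e4), 4))   # 0.0375 -> = 1.5/40 at high frequency
```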
What is the point of using the error signal instead of the feedback? It does not make sense to me.
If the cable is flaky, why don't we solder it to the circuit? Why don't we put a buffer just after the test point?
It does not make sense to obtain the error signal in order to estimate the free-running noise without a precise loop characterization.
(i.e. THE FEEDBACK LOOP TRINITY: Spectrum, Openloop, Calibration)
RA: I agree that feedback would be better because we could use it without much calibration. But the only difference between the "error signal" and the "feedback signal" in this case is a 1.6:40 pole:zero stage with DC gain of 0 dB. So we can't actually use either one without calibration and the gain between these two places is almost the same so they are both equally bad for the SNR of the measurement. I think that Suresh and Kiwamu are diligently reading about PLLs and will have a more quantitative result on Monday afternoon.
[Koji and Kiwamu]
We did some more vacuum work today. It is getting ready for pumpdown.
(what we did)
- Aligned the POY mirrors. The beam now comes out of the ITMY chamber successfully.
- Leveled the tables (except for the IOO and OMC chambers).
- Realigned the beam axis down the Y arm, because leveling the BS table changed the alignment.
- Installed the IP_POS mirrors.
- Aligned the green beam and made it overlap with the IR beam path.
- Repositioned the green steering mirrors, since one of them was too close to the dark beam path.
I want to collect some data with the arms locked to investigate the possibility/usefulness of implementing seismic feedforward for the arms (it is already known to help the IMC length and PRC angular stability at low frequencies). To facilitate diagnostics, I modified the file /users/Templates/Seismic/Seismic_vs_TRXTRYandMC.xml to have the correct channel names in light of Lydia's channel name changes in 2016. Looking at the coherence data, the alignment between the cartesian coordinate system of the seismometers at the ends and the global interferometer coordinate system can be improved.
I don't know if, for the MISO filter design, there is any difference between using TRX/TRY as the target and using the arm length control signal.
Data collection started at 1249018179. I've set up a script running in a tmux session to turn off the LSC enable in 2 hours.
About the analog CARM control with ALS:
We're looking at using a Sigg-designed, remotely switchable delay line box on the currently undelayed side of the ALS DFD beat. For a beat frequency of 50 MHz one cycle is 20 ns, and this box has 24 ns of total delay capability, so we should be able to get pretty close to a zero crossing of the analog I or Q outputs of the demod board. This can be used as IN2 for the common mode board.
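A quick check of the claim that 24 ns suffices at a 50 MHz beat: a cable delay advances the demod phase by 360 x f x tau, so 24 ns spans more than a full cycle and some zero crossing of I or Q is always reachable (assuming the delay steps are fine enough):

```python
def phase_deg(f_hz, delay_s):
    """Demodulation phase shift from a cable delay: 360 * f * tau, in degrees."""
    return 360.0 * f_hz * delay_s

print(round(phase_deg(50e6, 20e-9), 6))  # 360.0 -> one full cycle at 50 MHz is 20 ns
print(round(phase_deg(50e6, 24e-9), 6))  # 432.0 -> the box covers more than 360 deg
```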
Gautam is testing the functionality of the delay and switching, and should post a link to the DCC page of the schematic. Rana and Koji have been discussing the implementation of the remote switching (RCG vs. VME).
I spent some time this afternoon trying to lock the X arm in this way, but instead of at IR resonance, just wherever the I output of the DFD had a zero crossing. However, I didn't give enough thought to the loop shapes; Koji helped me think it through. Tomorrow, I'll make a little pomona box to go before the CM IN2 that will give the ALS loop shape a pole where we expect the CARM coupled cavity pole to be (~120Hz), so that the REFL11 and ALS signals have a similar shape when we're trying to transition.
The common mode board does have a filter for this kind of thing for single-arm tests, but it puts in a zero as well, since it expects the single-arm pole, which isn't present in the ALS sensing; so maybe I'll whip up something appropriate for this, too.
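For the pomona box, a single RC section gives the ~120 Hz pole; a sketch of the component choice (these values are illustrative assumptions, not what was actually built):

```python
import math

def rc_pole_hz(r_ohm, c_farad):
    """Corner frequency of a single-pole RC low-pass: 1 / (2*pi*R*C)."""
    return 1.0 / (2.0 * math.pi * r_ohm * c_farad)

# e.g. 13.3 kOhm with 100 nF lands close to the ~120 Hz target pole
print(round(rc_pole_hz(13.3e3, 100e-9), 1))  # 119.7 Hz
```

In practice the source impedance of the DFD output and the CM board input impedance load the network and shift the corner, which is consistent with the loading problem described in the next entry.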
Something odd is happening with the CM board. Measuring from either input to OUT1 (the "slow output") shows a nice flat response up to many tens of kHz.
However, when I connect my independently confirmed 120 Hz LPF to either input, the pole frequency moves up to ~360 Hz and the DC gain falls some 10 dB. This happens regardless of whether the input is used or not; I saw this shape at a tee on the output of the LPF when the other leg of the tee was connected to a CM board input.
This has sabotaged my high bandwidth ALS efforts. I will investigate the board's input situation tomorrow.
First, things that were done:
Things that I noticed:
I think there are two things that could be happening here, given the above information:
When I came to the 40m, I found most of the FB signals were dead.
The suspensions were not damped, but not too much excited. Used the watchdog switches to cut off the coil actuators.
Restarted mxstream from the CDS_FE_STATUS screen. The c1lsc processes became fine, but the FB indicators for c1sus, c1ioo, and c1iscex/y were still red.
SSHed into c1sus/c1ioo and ran rtcds restart all. This brought them back under control.
Same treatment for c1iscex and c1iscey. This made c1sus stall again; also, c1iscey did not come back.
At this point I decided to kill all of the RT processes on c1sus/c1ioo/c1iscex/c1iscey to avoid interference between them.
Then I started restarting from the end machines.
c1iscex did not come back with rtcds restart all.
Ran lsmod on c1iscey and found the c1x05 module was still loaded in the kernel; rmmod did not remove it.
Ran a software reboot of c1iscey. => c1iscey came back online.
c1iscey did not come back with rtcds restart all.
Ran a software reboot of c1iscex. => c1iscex came back online.
c1ioo came back with just rtcds restart all.
c1sus did not come back with rtcds restart all.
Ran a software reboot of c1sus. => c1sus came back online.
This series of restarts screwed up the FB connections of some of the c1lsc processes.
Ran the following restart commands => all of the processes are running with FB connections.
rtcds restart c1sup
rtcds restart c1ass
rtcds restart c1lsc
Enabled the damping loops by reverting the watchdog switches.
All of the FE status are green except for the c1rfm bit 2 (GE FANUC RFM CARD 0).
Found that some LSC scripts don't run on pianosa; in particular, none of the scripts on the C1:IFO_CONFIGURE screen run.
They need to be fixed.
Somehow some DAQ channels for C1SUS have disappeared from the DAQ channel list.
Indeed, only a few DAQ channels are listed in the C1SUS.ini file.
I ran activateDQ.py and restarted daqd.
Everything looks okay. C1SUS and C1PEM were restarted because they became frozen.
I found that the ini files had been refreshed again.
I ran the activateDQ.py script (link to the script wiki page) and restarted the daqd process on fb.
The activateDQ.py script should be included in the recompile or rebuild scripts so that we don't have to run it by hand every time.
I am going to add this topic to the CDS todo list (wiki page).