ID   Date   Author   Type   Category   Subject
  3982   Tue Nov 23 23:13:40 2010   kiwamu   Summary   CDS   plan: we will install C1LSC

 [Joe, Suresh, Kiwamu]

 We will fully install and run the new C1LSC front end machine tomorrow.

It will eventually take care of the IOO PZT mirrors as well as the LSC code.

 


 (background story)

 During the in-vac work today, we tried to energize and adjust the PZT mirrors to their midpoints.

However it turned out that C1ASC, which controls the voltage applied to the PZT mirrors, was not running.

We tried rebooting C1ASC by keying the crate but it didn't come back.

 The error message we got in telnet  was :

   memory init failure !!

 

 We discussed how to control the PZT mirrors from the point of view of both short-term and long-term operation.

We decided to quit using C1ASC and use new C1LSC instead.

A good thing about this approach is that it will bring the CDS closer to the final configuration. 

 

(things to do)

 - move C1LSC to the proper rack (1X4).

 - pull out the stuff associated with C1ASC from the 1Y3 rack.

 - install an IO chassis in the 1Y3 rack.

- string a fiber from C1LSC to the IO chassis.

- timing cable (?)

- configure C1LSC for Gentoo

- run a simple model to check the health

- build a model for controlling the PZT mirrors

  3983   Tue Nov 23 23:52:49 2010   rana   Update   CDS   Updated apps

Wow. I typed DTT on rossa and it actually worked! No complaints about testpoints, etc. I was also able to use its new 'NDS2' function to get data off of the CIT cluster (L1:DARM_ERR from February). You have to use the kinit/kdestroy stuff to use NDS2 as usual (look up NDS2 in DASWG if you don't know what I mean).

  3986   Thu Nov 25 02:49:39 2010   kiwamu   Update   CDS   installation of C1LSC: still going on

 [Joe, Kiwamu]

 We tried installing C1LSC but it's not completely done yet due to the following issues.

    (1)  The PCIe optical fiber which is supposed to connect C1LSC and its IO chassis is most likely broken.

    (2)  Two DAC boards (blue and golden board) are missing.

 We will ask the CDS people at Downs and get replacements from them.


( works we did )

 - took the whole C1ASC crate out from the 1Y3 rack.

 - installed an IO chassis in the place where C1ASC was.

 - strung a timing optical fiber to the IO chassis.

 - checked the functionality of the PCIe optical fiber and found that it doesn't work.

 

 

Fig.1  c1asc taken out of the rack                                                                         Fig.2  IO chassis installed in the rack

DSC_2723_ss.jpg      DSC_2724_ss.jpg

 

Fig.3  PCIe extension fiber (red arrow marks an obvious kink) 

DSC_2727_ss.jpg 

  3995   Tue Nov 30 12:25:08 2010   josephb   Update   CDS   LSC computer to chassis cable dead

Problem:

We seem to have a broken fiber link for use between the LSC and its IO chassis.  It is unclear to me when this damage occurred.  The cable had been sitting in a box with styrofoam padding, and the kink is in the middle of the fiber, with no other obvious damage nearby.  However, the cable may have previously been used by the people in Downs for testing and been damaged then, or we may have caused the kink while stringing it.

Tried Solutions:

I talked to Alex yesterday, and he suggested unplugging the power on both the computer and the IO chassis completely, then plugging in the new fiber connector, as he had to do that once with a fiber connection at Hanford.  We tried this this morning; still no joy.  At this point I plan to toss the fiber, as I don't know of any way to rehabilitate kinked fibers.

Note this means that I rebooted c1sus and then did a burt restore from the Nov/30/07:07 directory for c1susepics, c1rmsepics, c1mcsepics.  It looks like all the filters switched on.

Current Plan:

We do, however, have a Dolphin fiber which was originally intended to go between the LSC and its IO chassis, before Rolf was told it doesn't work well that way.  In any case, we were planning to connect the LSC machine to the rest of the network via Dolphin.

We can put the LSC machine next to its chassis in the LSC rack, and connect the chassis to the rest of the front ends by the Dolphin fiber.  In that case we just need the usual copper style cable going between the chassis and the computer.

 

  3999   Tue Nov 30 16:02:18 2010   josephb   Update   CDS   status

Issues:

1) Turns out the /opt/rtcds/caltech/c1/target/gds/param/testpoint.par file had been emptied or deleted at one point, and the only entry in it was c1pem.  This had been causing us a lack of test points for the last few days.  It is unclear when or how this happened.  The file has been fixed to include all the front end models again.  (Fixed)

2) Alex and I worked on tracking down why there's a GPS difference between the front ends and the frame builder, which is why we see a 0x4000 error on all the front end GDS screens. This involved several rebuilds of the front end codes and reboots of the machines involved. (Broken)

3) Still working on understanding why the RFM communication isn't working, which I think is related to the timing issues we're seeing.  I know the data is being transferred on the card, but it seems to be rejected after being read in, suggesting a time stamp mismatch. (Broken)

4) The c1iscex binary output card still doesn't work.  (Broken)

Plan:

Alex and I will be working on the above issues tomorrow morning.

Status:

Currently, the c1ioo, c1sus and c1iscex computers are running with their front ends. They all still have the 0x4000 error.  You can still look at channels in dataviewer, for example, but there is a possibility of inconsistent timing between computers (although all models on a single computer will be in sync).

All the front ends were burt restored to 07:07 this morning.  I spot-checked several optic filter banks and they look to have been turned on.

  4003   Wed Dec 1 12:02:49 2010   josephb, alex   Update   CDS   Rebuilding frame builder with latest code

Problem:

The front ends seem to have different gps timestamps on the data than the frame builder has when receiving them.

One theory is that we have been doing SVN checkouts of the front end code fairly regularly, once a week or every two weeks, but the frame builder has not been rebuilt for about a month. 

Current Action:

Alex is currently rebuilding the frame builder with the latest code changes.

This also suggests that I should rebuild the frame builder on a semi-regular basis as updates come in.

 

 

  4004   Wed Dec 1 13:41:21 2010   josephb, alex, rolf   Update   CDS   Timing is back

Problem:

We had timing problems across the front ends.

Solution:

We noticed that the 1PPS reference was not blinking on the Master Timing Distribution box.  It was supposed to be getting a signal from the c0dcu1 VME crate computer, but this was not happening.

We disconnected the timing signal going into c0dcu1, coming from c0daqctrl, and connected the 1PPS directly from c0daqctrl to the Ref In for the Master Timing distribution box (blue box with lots of fibers coming out of it in 1X5).

We now have agreement in timing between front ends.

After several reboots we now have working RFM again, and the computers agree with the frame builder on the current GPS time.

Status:

RFM is back and testpoints should be happy.

We still don't have a working binary output for the X end.  I may need to get a replacement backplane with more than 4 slots if the 1st slot of this board has the same problem as the large boards.

I have burt restored the c1ioo, c1mcs, c1rms, c1sus, and c1scx processes, and optics look to be damped.

 

  4008   Fri Dec 3 14:34:23 2010   rana   Update   CDS   fooling around in the FB rack

This morning (~0100) I started to redo some of the wiring in the rack with the FB in it. This was in an effort to activate the new Megatron (Sun Fire 4600) which we got from Rolf.

It's sitting right above the Frame Builder (FB). The fibers in there are a rat's nest. Someone needs to team up with Joe to neaten up the cabling in that rack - it's a mini-disaster.

While fooling around in there I most probably disturbed something, leading to the FB troubles today.

  4009   Fri Dec 3 15:37:10 2010   josephb   Update   CDS   fb, front ends fixed - tested RFM between c1ioo and c1iscex

Problem:

The front ends and fb computers were unresponsive this morning.

This was due to the fb machine having its ethernet cable plugged into the wrong input.   It should be plugged into the port labeled 0.

Since all the front end machines mount their root partition from fb, this caused them to also hang.

Solution:

The cable has been relabeled "fb" on both ends, and plugged into the correct jack.  All the front ends were rebooted.

 

Testing RFM for green locking:

I tested the RFM connection between c1ioo and c1scx.  Unfortunately, on the first test, it turned out the c1ioo machine had its GPS time off by 1 second compared to c1sus and c1iscex.  A second reboot seems to have fixed the issue.

However, it bothers me that the code didn't come up with the correct time on the first boot.

The test was done using the c1gcv model and by modifying the c1scx model.  At the moment, the MC_L channel is being passed to the MC_L input of the ETMX suspension.  In the final configuration, this will be a properly shaped error signal from the green locking.

The MC_L signal is currently not actually driving the optic, as the ETMX POS MATRIX currently has a 0 for the MC_L component.

  4014   Mon Dec 6 11:59:41 2010   josephb   Update   CDS   New c1lsc computer moved to lsc rack

Computer moved:

The c1lsc computer has been moved over to the 1Y3 rack, just above the c1lsc IO chassis. 

It will talk to the c1sus computer via a Dolphin PCIe reflected memory card.  The cards were installed into c1lsc and c1sus this morning.

It will talk to its IO chassis via the usual short IO chassis cable.

 

To Do:

The Dolphin fiber still needs to be strung between c1sus and c1lsc.

The DAQ cable between c1lsc and the DAQ router (which lets the frame builder talk directly with the front ends) also needs to be strung.

c1lsc needs to be configured to use fb as a boot server, and the fb needs to be configured to handle the c1lsc machine.

  4015   Mon Dec 6 16:49:43 2010   josephb   Update   CDS   c1lsc halfway to working

C1LSC Status:

The c1lsc computer is running Gentoo off of the fb server. It has been connected to the DAQ network and is handling mx_streams properly (so we're not flooding the network with error messages like we used to with c1iscex).  It is using the old c1lsc IP address (192.168.113.62) and can be ssh'd into.

However, it is not talking properly to the IO chassis.  The IO chassis turns on when the computer turns on, but the host interface board in the IO chassis only has 2 red lights on (as opposed to many green lights on the host interface boards in the c1sus, c1ioo, and c1iscex IO chassis).  The c1lsc IO processor (called c1x04) doesn't see any ADCs, DACs, or Binary cards.  The timing slave is receiving 1PPS and is locked to it, but because the chassis isn't communicating, c1x04 is running off the computer's internal clock, causing it to be several seconds off. 

Need to investigate why the computer and chassis are not talking to each other.

General Status:

The c1sus and c1ioo computers are not talking properly to the frame builder.  A reboot of c1iscex fixed the same problem earlier; however, since Kiwamu and Suresh are working in the vacuum, I'm leaving those computers alone for the moment.  A reboot and burt restore should probably be done later today for c1sus and c1ioo.

 

Current CDS status:

MC damp, dataviewer, diaggui, AWG, c1ioo, c1sus, c1iscex, RFM, Dolphin RFM, Sim.Plant, Frame builder, TDS (the per-item status marks in the original table are not captured in this text export)
  4019   Tue Dec 7 12:12:40 2010   kiwamu   Update   CDS   added some more DAQ channels

[Joe and Kiwamu]

We added some more DAQ channels on c1sus.

We wanted to try diagonalizing the input matrices of the ITMX OSEMs because the motion of ITMX looked noisier than the others

So for this purpose we tried adding DAQ channels so that we can take spectra anytime.

After some debugging, now they are happily running.

 


(DAQ activation code)

There is a script which activates DAQ channels, written by Yuta this October.

       /cvs/cds/rtcds/caltech/c1/chans/daq/activateDAQ.py

If you just execute this script, it is supposed to activate the DAQ channels automatically by editing the C1AAA.ini files.

However there were some small bugs in the code, so we fixed them.

Now the code seems fine.

 

(reboot fb DAQ process)

When new DAQ channels are added, one has to reboot the DAQ process running on fb.

To do this, connect to the daqd command port on fb and issue a shutdown:

          telnet fb 8088

     shutdown

The daqd process will then restart by itself.
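If you find yourself doing this often, the same restart can be scripted.  Below is a minimal sketch (not an existing 40m tool) using Python's standard telnetlib, assuming the daqd command port is 8088 as above:

import telnetlib

# connect to the daqd command port on fb and ask it to shut down;
# daqd is then respawned automatically, as noted above
tn = telnetlib.Telnet('fb', 8088)
tn.write('shutdown\n')
tn.close()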

After doing the above reboot, we found that tpman on C1IOO had gone down.

We don't fully understand why only C1IOO was affected, but in any case rebooting the c1ioo front end machine fixed the problem.

 

  4020   Tue Dec 7 16:09:53 2010   josephb   Update   CDS   c1iscex status

I swapped out the IO chassis which could only handle 3 PCIe cards with another chassis which has space for 17, but which previously had timing issues.  A new cable going between the timing slave and the rear board seems to have fixed the timing issues. 

I'm hoping to get a replacement PCI extension board which can handle more than 3 cards this week from Rolf and then eventually put it in the Y-end rack.  I'm also still waiting for a repaired Host interface board to come in for that as well.

At this point, RFM is working to c1iscex, but I'm still debugging the binary outputs to the analog filters.  As of this time they are not working properly: turning the digital filters on and off seems to have no effect on the transfer function measured from an excitation in SUSPOS all the way around to IN1 of the sensor inputs (but before the digital filters).  Ideally I should see a difference when I switch the digital filters on and off (since the analog ones should also switch on and off), but I do not.

  4023   Tue Dec 7 19:34:58 2010   kiwamu   Update   CDS   rebooted DAQ and all the front end machines

I found that all the front end machines showed red DAQ indicator lights on the XXX_GDS_TP.adl screens.

Also I could not get any data from either test points or DAQ channels.

First I tried fixing it by telneting to fb and restarting it, but that didn't help.

So I rebooted all the front end machines, and then everything became fine.

 

  4025   Wed Dec 8 12:26:56 2010   josephb   Update   CDS   megatron set up - as a test front end

[josephb, Osamu]

Megatron Setup:

To show Osamu how to set up a front end, as well as to provide a test computer for his use, we used the new megatron (Sun Fire X4600 with 16 cores and 8 gigabytes of memory) as a front end without an IO chassis.

The steps we followed are in the wiki, here.

The new megatron's IP address is 192.168.113.209.  It is running the c1x99 front end code.

  4027   Wed Dec 8 14:46:19 2010   josephb, kiwamu   Update   CDS   Why the ETMX daq channels were not recorded last night

When adding the ETMX DAQ channels using the daqconfig gui (located in /opt/rtcds/caltech/c1/scripts/) on C1SCX.ini, we forgot to set the acquire flag from 0 to 1.

So the frame builder was receiving the data, but not recording it.

We have since added ETMX and the C1SCX.ini file to Yuta's useful "activateDAQ.py" script in /opt/rtcds/caltech/c1/chans/daq/, so that it now sets the sensor and SUSPOS-like channels to be acquired at 2k when run.  You still need to restart the frame builder (telnet fb 8088 and then shutdown) for these changes to take effect.

The script now also properly handles files which already have had channels activated, but not acquired.
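For reference, a quick way to spot that situation is to scan a DAQ .ini file for channels that are defined but still have acquire=0.  A rough sketch (not part of activateDAQ.py; the file path and the acquire= field layout are assumed from the description above):

ini = '/opt/rtcds/caltech/c1/chans/daq/C1SCX.ini'
current = None
for line in open(ini):
    line = line.strip()
    if line.startswith('[') and line.endswith(']'):
        current = line[1:-1]                      # remember the channel name
    elif current and line.replace(' ', '').startswith('acquire=0'):
        print current, 'is defined but not acquired'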

  4028   Wed Dec 8 14:51:09 2010   josephb   Update   CDS   c1pem now recording data

Problem:

c1pem model was reporting all zeros for all the PEM channels.

Solution:

Twofold.  On the software end, I added ADCs 0, 1, and 2 to the model.  ADC 3 was already present and is the actual ADC taking in PEM information.

Alex and Rolf noted a while back that there's a problem with the way the DACs and ADCs are numbered internally in the code.  Missing ADCs or DACs prior to the one you're actually using can cause problems.

At some point that problem should be fixed by the CDS crew, but for now, always include all ADCs and DACs up to and including the highest number ADC/DAC you need to use for that model.

On the physical end, I checked the AA filter chassis and found the power was not plugged in.  I plugged it in.

Status:

We now have PEM channels being recorded by the FB, which should make Jenne happier.

  4029   Wed Dec 8 17:05:39 2010   josephb   Update   CDS   Put in dolphin fiber between c1sus and c1lsc

[josephb,Suresh]

We put in the fiber for use with the Dolphin reflected memory between c1sus and c1lsc (rack 1X4 to rack 1Y3).  I still need to setup the dolphin hub in the 1X4 rack, but once that is done, we should be able to test the dolphin memory tomorrow.

  4037   Thu Dec 9 12:28:52 2010   josephb, alex   Update   CDS   The Dolphin is in (Reflected memory that is)

Setting the Configurations files:

On the fb machine in /etc/dis/ there are several configuration files that need to be set for our dolphin network.

First, we modify networkmanager.conf.

We set  "-dimensionX 2;" and leave the dimensionY and dimensionZ as 0.  If we had 3 machines on a single router, we'd set X to 3, and so forth.

We then modify dishosts.conf.

We add an entry for each machine that looks like:

#Keyword name nodeid adapter link_width
HOSTNAME: c1sus
ADAPTER:  c1sus_a0 4 0 4

The nodeids (the first number after the name)  increment by 4 each time, so c1lsc is:

HOSTNAME: c1lsc
ADAPTER:  c1lsc_a0 8 0 4

The file cluster.conf is automatically updated by the code by parsing the dishosts.conf and networkmanager.conf files.
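As a sanity check on the node-id rule, here is a small illustrative sketch that prints dishosts.conf entries for a list of hosts (the adapter name and link width follow the examples above; this is not an official tool):

hosts = ['c1sus', 'c1lsc']                # extend if more Dolphin nodes are added
for n, host in enumerate(hosts):
    print 'HOSTNAME: %s' % host
    print 'ADAPTER:  %s_a0 %d 0 4' % (host, 4 * (n + 1))   # node ids 4, 8, 12, ...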

Getting the code to automatically start:

We uncommented the following lines in the rc.local file in /diskless/root/etc on the fb machine:

# Initialize Dolphin
sleep 2
# Have to set it first to node 4 with dxconfig or dis_nodemgr fails. Unexplained.
/opt/DIS/sbin/dxconfig -c 1 -a 0 -slw 4 -n 4
/opt/DIS/sbin/dis_nodemgr -basedir /opt/DIS

For the moment we left the following lines commented out:

# Wait for Dolphin to initialize on all nodes
#/etc/dolphin_wait
We were unsure of the effect of the dolphin_wait script on the front ends without Dolphin cards.  It looks like the script it calls waits until there are no dead nodes.

In /etc/conf.d/ on the fb machine we modified the local.start file by uncommenting:

/opt/DIS/sbin/dis_networkmgr&

This starts the Dolphin network manager on the fb machine.  The fb machine is not using a Dolphin connection, but controls the front end Dolphin connections via ethernet.

The Dolphin network manager can be interacted with by using the dxadmin program (located in /opt/DIS/sbin/ on the fb machine).  This is a GUI program so use ssh -X when logging into the fb before use.

Setting up the front ends models:

Each IOP model (c1x02, c1x04) that runs on a machine using the Dolphin RFM cards needs to have the flag pciRfm=1 set in the configuration box (usually located in the upper left of the model in Simulink).  Similarly, the models actually making use of the Dolphin connections should have it set as well.  Use the PCIE_SignalName parts from IO_PARTS in the CDS_PARTS.mdl file to send and receive communications via the Dolphin RFM.

  4045   Mon Dec 13 11:56:32 2010   josephb, alex   Update   CDS   Dolphin is working

Problem:

The dolphin RFM was not sending data between c1lsc and c1sus.

Solution:

Dig into the controller.c code located in /opt/rtcds/caltech/c1/core/advLigoRTS/src/fe/.  Find this bit of code on line 2173:

 

2173 #ifdef DOLPHIN_TEST
2174 #ifdef X1X14_CODE
2175         static const target_node = 8; //DIS_TARGET_NODE;
2176 #else
2177         static const target_node = 12; //DIS_TARGET_NODE;
2178 #endif
2179         status = init_dolphin(target_node);

Replace it with this bit of code:

2173 #ifdef DOLPHIN_TEST
2174 #ifdef C1X02_CODE
2175         static const target_node = 8; //DIS_TARGET_NODE;
2176 #else
2177         static const target_node = 4; //DIS_TARGET_NODE;
2178 #endif
2179         status = init_dolphin(target_node);

Basically this was hard coded for use at the site on their test stands.  When starting up, the dolphin adapter would look for a target node to talk to that could not be itself.  So all the dolphin adapters would normally try to talk to target_node 12, except the X1X14 front end code, which happened to be the one with dolphin node id 12; that one would talk to node 8 instead.

Unfortunately, in our setup, we only had nodes 4 and 8.  Thus, both our codes would try to talk to a nonexistent node 12.  This new code has everyone talk to node 4, except the c1x02 process which talks to node 8 (since it is node 4 and can't talk to itself).

I'm told this is going away in the next revision and shouldn't be hard coded anymore.

 
Different Dolphin Problem and Fix:

Apparently, the only models which should have pciRfm=1 are the IOP models which have a dolphin connection.  Front end models that are not IOP models (like c1lsc and c1rfm) should not have this flag set.  Otherwise they include the dolphin drivers, which causes them and the IOP to refuse to unload when using rmmod.

So pciRfm=1 only in IOP models using Dolphin, everyone else should not have it or should have pciRfm=-1.

 

Current CDS status:

MC damp, dataviewer, diaggui, AWG, c1ioo, c1sus, c1iscex, RFM, Dolphin RFM, Sim.Plant, Frame builder, TDS (the per-item status marks in the original table are not captured in this text export)
  4046   Mon Dec 13 17:18:47 2010   josephb   Update   CDS   Burt updates

Problem:

Autoburt wouldn't restore settings for front ends on reboot

What was done:

First I moved the burt directory over to the new directory structure.

This involved moving /cvs/cds/caltech/burt/ to /opt/rtcds/caltech/c1/burt.

Then I updated the burt.cron file in the new location, /opt/rtcds/caltech/c1/burt/autoburt/, so that it points to the new autoburt.pl script.

I created an autoburt directory in the /opt/rtcds/caltech/c1/scripts directory and placed the autoburt.pl script there.

I modified the autoburt.pl script so that it pointed to the new snapshot location.  I also modified it so it updates a directory called "latest" located in the /opt/rtcds/caltech/c1/burt/autoburt directory.  In there is a set of soft links to the latest autoburt backup.

Lastly, I edited the crontab on op340m (using crontab -e) to point to the new burt.cron file in the new location.

This was the easiest solution since the start script is just a simple bash script and I couldn't think of a quick and easy way to have it navigate the snapshots directory reliably.

I then modified the Makefile located in /opt/rtcds/caltech/c1/core/advLigoRTS/ which actually generates the start scripts, to point at the "latest" directory when doing restores.  Previously it had been pointing to /tmp/ which didn't really have anything in it.

So in the future, when building code, the start scripts should point to the correct snapshots.  Using sed, I also modified all the existing start scripts to point to the latest directory when grabbing snapshots.
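For reference, the bookkeeping for the "latest" directory amounts to something like the following sketch (the snapshot directory layout here is an assumption, not copied from autoburt.pl):

import os, glob

autoburt = '/opt/rtcds/caltech/c1/burt/autoburt'
# assumed year/month/day/time layout under snapshots/
snaps = sorted(glob.glob(autoburt + '/snapshots/*/*/*/*'))
newest = snaps[-1]
for snap in glob.glob(newest + '/*.snap'):
    link = os.path.join(autoburt, 'latest', os.path.basename(snap))
    if os.path.lexists(link):
        os.remove(link)
    os.symlink(snap, link)                # point "latest" at the newest snapshot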

Future:

According to Keith's directory documentation (see T1000248), the burt restores should live in the individual target system directory, i.e. /target/c1sus/burt, /target/c1lsc/burt, etc.  This is a distinctly different paradigm from what we've been using in the autoburt script, and would require a fairly extensive rewrite of that script to handle properly.  For the moment I'm keeping the old style, everything in one directory by date.  It would probably be worth discussing if and how to move over to the new system.

  4053   Tue Dec 14 11:24:35 2010   josephb   Update   CDS   burt restore

I had updated the individual start scripts, but forgotten to update the rc.local file on the front ends to handle burt restores on reboot.

I went to the fb machine and into /diskless/root/etc/ and modified the rc.local file there.

Basically in the loop over systems, I added the following line:

/opt/epics-3.14.9-linux/base/bin/linux-x86/burtwb -f /opt/rtcds/caltech/c1/burt/autoburt/latest/${i}epics.snap  -l /opt/rtcds/caltech/c1/burt/autoburt/logs/${i}epics.log.restore -v

The ${i} gets replaced with the system name in the loop (c1sus, c1mcs, c1rms, etc)
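Written out as a standalone sketch (the real version is the shell loop in rc.local), the restore amounts to:

import os

burtwb = '/opt/epics-3.14.9-linux/base/bin/linux-x86/burtwb'
burt = '/opt/rtcds/caltech/c1/burt/autoburt'
for sys in ['c1sus', 'c1mcs', 'c1rms']:   # the ${i} loop variable above; the list here is illustrative
    os.system('%s -f %s/latest/%sepics.snap -l %s/logs/%sepics.log.restore -v'
              % (burtwb, burt, sys, burt, sys))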

  4057   Wed Dec 15 13:36:44 2010   josephb   Update   CDS   ETMY IO chassis update

I gave Alex a sob story over lunch about having to go and try to resurrect dead VME crates.  He and Rolf then took pity on me and handed me their last host interface board from their test stand, although I was warned by Rolf that this one (the latest generation board from One Stop) seems to be flakier than previous versions, and may require reboots if it starts in a bad state.

Anyways, with this in hand I'm hoping to get c1iscey damping by tomorrow at the latest.

  4058   Wed Dec 15 14:23:32 2010   Koji   Update   CDS   ETMY IO chassis update

Great!

I wish this board works fine at least for several days...

Quote:

I gave Alex a sob story over lunch about having to go and try to resurrect dead VME crates.  He and Rolf then took pity on me and handed me their last host interface board from their test stand, although I was warned by Rolf that this one (the latest generation board from One Stop) seems to be flakier than previous versions, and may require reboots if it starts in a bad state.

Anyways, with this in hand I'm hoping to get c1iscey damping by tomorrow at the latest.

 

  4060   Wed Dec 15 17:21:20 2010   josephb   Update   CDS   ETMY controls status

Status:

The c1iscey machine was converted over to be a diskless Gentoo machine like the other front ends, following the instructions found here.  Its front end model, c1scy, was copied from the c1scx model and appropriately changed, along with the filter banks.  A new IOP, c1x05, was created and assigned to c1iscey.

The c1iscey IO chassis had the small 4 PCI slot board removed and a large 17 PCI slot board put in.  It was repopulated with an ADC/DAC/BO and RFM card.  The host interface board from Rolf was also put in. 

On start up, the IOP process did not see or recognize any of the cards in the IO chassis.

Four reboots later, the IOP code had seen the ADC/DAC/BO/RFM cards once.  And on that reboot, there was a timeout on the ADC which caused the IOP code to exit.

In addition to the chassis not seeing the PCI cards most of the time, several cables still need to be put together for plugging into the adapter boards, and a box needs to be made for the DAC adapter electronics.

 

  4065   Thu Dec 16 15:10:18 2010   josephb, kiwamu   Update   CDS   ETMY working at the expense of ETMX

I acquired a second full pair of Host interface board cards (one for the computer and one for the chassis) from Rolf (again, 2nd generation - the bad kind).

However, they exhibited the same symptoms as the first one that I was given. 

Rolf gave a few more suggestions on getting it to work: pull the power plugs; if it's got slow flashing green lights, just soft cycle, don't power cycle.  Alex suggested turning the IO chassis on before the computer.

None of it seemed to help in getting the computer talking to the IO chassis.

 

I finally decided to simply take the ETMX IO chassis and place it at the Y end.  So for the moment, ETMY is working, while ETMX is temporarily out of commission. 

We also made the necessary cables (2x 37 d-sub female to 40 pin female, and 40 pin female to 40 pin female).  Kiwamu also did nice work creating a DAC adapter box, since Jay had given me a spare board but nothing to put it in.

  4068   Fri Dec 17 02:22:06 2010   kiwamu   Update   CDS   ETMY damping: not good

  I made some efforts to damp ETMY; however, it still doesn't work happily.

 It looks like something is wrong around the whitening filters and the AA filter board.

I will briefly check those analog parts tomorrow morning.

 

- - -(symptom)

The signs of the UL and SD readouts are flipped; I don't know why.

At the testpoints on the analog PD interface board, all the signs are the same. This is good.

But after the signals go through the whitening filters and AA filters, UL and SD become sign-flipped.

I tried compensating for the sign flips in software, but it didn't help the damping. 

In fact the suspension went crazy when I activated the damping, so I have no idea whether we are looking at exactly the right readouts or some other signals.

 

- - -(fixing DAC connector)

 I fixed a connector on the DAC ribbon cable, since the solderless connector was only loosely locked onto its cable.

Before fixing this connector I couldn't apply voltages to some of the coils, but now it is working well.

  4075   Mon Dec 20 10:06:36 2010   kiwamu   Update   CDS   ETMY damped

  Last Saturday I finally succeeded in damping the ETMY suspension.

This means now ALL the suspensions are happily damped. 

It looks like some combination of gains and control filters had created unstable conditions.

 2010Dec18.png

  I actually was playing with the on/off switches of the control filters and the gain values just for fun.

Then I finally found that it worked when the Chebyshev filters were off. This is the same situation that Yuta told me about two months ago.

Other things, like the input and output matrices, looked fine, except for the sign flips at ULSEN and SDSEN that I mentioned in the last entry (see here).

So we should still take a look at the analog filters to find out why the signs are flipped.

  4097   Fri Dec 24 09:01:33 2010   josephb   Update   CDS   Borrowed ADC

Osamu has borrowed an ADC card from the LSC IO chassis (which currently has a flaky generation 2 Host interface board).  He has used it to get his temporary Dell test stand running daqd successfully as of yesterday.

This is mostly a note to myself so I remember this in the new year, assuming Osamu hasn't replaced the evidence by January 7th.

  4101   Sat Jan 1 19:13:40 2011   rana   Update   CDS   c1pem now recording data

 I found that there was no PEM data, nor any other data (no SUS or otherwise; no testpoints, no DAQ).

I went through the procedure that Jenne has detailed in the Wiki but it didn't work.

1) Firstly, the 'telnet fb 8088' step doesn't work. It says "Connected to fb.martian" but then just hangs. To replicate the effect of this step I tried ssh'ing to fb and doing a 'pkill daqd'. That works to restart the daqd process.

2) The wiki instructions had a problem. In the GUI step, it should say 'Save' after the Acquire bit has been set to 1. Even so, this works to get the .ini file right and DTT can see the correct channel list, but none of the channels are available. It just says 'Unable to obtain measurement data'.

3) I tried running 'startc1pem', but no luck. I also tried rebooting c1sus from the command line. That worked so far as to come back up with all the right processes running, but still no data. The actual /frames directory shows that there are frames, but we just can't see the data. I also tried to get data using the DTT-NDS2 method, but still no luck. (*** ITMX and ITMY both came back with all their filters off; worth checking if their BURTs are working correctly.)

Using DataViewer, however, I AM able to see the data (although the channel name is RED). In fact, I am able to see the trend data ever since I changed the Acquire bit to 1. Plot attached as evidence. Why does DTT not work anymore???

Attachment 1: Untitled.png
  4126   Sat Jan 8 21:12:12 2011   rana   Update   CDS   Megatron is back

 I started reverting Megatron into a standard Ubuntu workstation after Joe/Osamu's attempt to steal it for their real time mumbo jumbo.

First, I installed a hard drive that was sitting around on top of it. That whole area is still a mess; I'm not surprised that we have so many CDS problems in such a chaotic state. There's another drive sitting around there called 'RT Linux' which I didn't use yet.

Second, I removed the ethernet cables and installed a monitor/keyboard/mouse on it.

Then I popped in the Ubuntu 10.04 LTS DVD, wiped the existing CentOS install and started the standard graphical installation of Ubuntu.

megatron.jpg

Megatron's specs attached: 

Attachment 2: sysinfo.text
  4129   Mon Jan 10 16:39:36 2011   josephb, alex, rolf   Update   CDS   New Time server for frame builder and 1PPS

Alex and Rolf came over today with a Tempus LX  GPS network timing server.  This has an IRIG-B output and a 1PPS output.  It can also be setup to act as an NTP server (although we did not set that up).

This was placed at waist height in the 1X7 rack.  We took the cable running to the presumably roof mounted antenna from the VME timing board and connected it to this new timing server.  We also moved the source of the 1PPS signal going to the master timer sequencer (big blue box in 1X7 with fibers going to all the front ends) to this new time server.  This system is currently working, although it took about 5 minutes to actually acquire a timing signal from the GPS satellites.  Alex says this system should be more stable, with no time jumps. 

I asked Rolf about the new timing system for the front ends, he had no idea when that hardware would be available to the 40m.

Currently, all the front ends and the frame builder agree on the time.  Front ends are running so the 1 PPS signal appears to be working as well.

  4130   Mon Jan 10 16:47:08 2011   josephb, alex, rolf   Update   CDS   Fixed c1lsc dolphin reflected memory

While Alex and Rolf were visiting, I pointed out that the Dolphin card was not sending any data, not even a time stamp, from the c1lsc machine.

After some poking around, we realized the IOP (input/output processor) was coming up before the Dolphin driver had even finished loading. 

We uncommented the line

#/etc/dolphin_wait

in the /diskless/root/etc/rc.local file on the frame builder.  This waits until the dolphin module is fully loaded, so it can hand off a correct pointer to the memory location that the Dolphin card reads and writes to.  Previously, the IOP had been receiving a bad pointer since the Dolphin driver had not finished loading.

So now the c1lsc machine can communicate with c1sus via Dolphin, and from there with the rest of the network via the traditional GE Fanuc RFM.

  4132   Tue Jan 11 11:19:13 2011   josephb   Summary   CDS   Storing FE harddrives down Y arm

Lacking a better place, I've chosen the cabinet down the Y arm which had ethernet cables and various VME cards as a location to store some spare CDS computer equipment, such as harddrives.  I've added (or will add in 5 minutes) a label "FE COMPUTER HARD DRIVES" to this cabinet.

  4134   Tue Jan 11 13:32:52 2011   josephb, kiwamu   Update   CDS   Updated some DAQ channel names

[Joe, Kiwamu]

We modified the activateDAQ.py script which lives in /opt/rtcds/caltech/c1/chans/daq/ and updates the C1SUS.ini, C1MCS.ini, C1RMS.ini, C1SCX.ini and C1SCY.ini files.  These files contain the DAQ channels for all the optics.

It has been modified so that channels like C1:SUS-ITMX_ULSEN_OUT_DAQ become C1:SUS-ITMX_SENSOR_UL.  Similarly the oplev signals go from C1:SUS-ITMX_OLPIT_OUT to C1:SUS-ITMX_OPLEV_PERROR.

After some debugging, we ran the script successfully and checked the output was correct.  We then restarted the frame builder (telnet fb 8088 and then shutdown) and also hit the DAQ reload button for all the front ends.

I tested in dataviewer that I could go back several years, as well as just 1 hour, in the history and see data for C1:SUS-ITMX_SENSOR_LL as well as C1:SUS-ITMX_OPLEV_YERROR.  I also tested that realtime is working for these channels.

 

The contents of the script are below.

 

inputfiles=["C1SUS.ini","C1RMS.ini","C1MCS.ini","C1SCX.ini","C1SCY.ini"]
prefix="[C1:SUS-"
optics=["BS_","ITMX_","ITMY_","PRM_","SRM_","MC1_","MC1_","MC2_","MC3_","ETMX_"]
#channels=["SUSPOS_IN1","SUSPIT_IN1","SUSYAW_IN1","SUSSIDE_IN1","ULSEN_OUT","URSEN_OUT","LRSEN_OUT","LLSEN_OUT","SDSEN_OUT","OL_SUM_IN1","OLPIT_IN1","OLYAW_IN1"]
channels_dict = {'SUSPOS_IN1':'SUSPOS_IN1_DAQ',
'SUSPIT_IN1':'SUSPIT_IN1_DAQ',
'SUSYAW_IN1':'SUSYAW_IN1_DAQ',
'SUSSIDE_IN1':'SUSSIDE_IN1_DAQ',
'ULSEN_OUT':'SENSOR_UL',
'URSEN_OUT':'SENSOR_UR',
'LRSEN_OUT':'SENSOR_LR',
'LLSEN_OUT':'SENSOR_LL',
'SDSEN_OUT':'SENSOR_SIDE',
'OLPIT_OUT':'OPLEV_PERROR',
'OLYAW_OUT':'OPLEV_YERROR',
'OL_SUM_OUT':'OPLEV_SUM'}

suffix="_DAQ]\n"

## set datarate
datarate=2048

## read the ini files
for inputfile in inputfiles:
    print inputfile
    outputfile=inputfile
    ifile = open(inputfile,'r')
    lines = ifile.readlines()
    ifile.close()

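    # when an old-style channel definition like [C1:SUS-ITMX_ULSEN_OUT_DAQ] is found,
    # rename it and un-comment the four parameter lines that follow, setting the first
    # of them (the acquire flag) to 1 and the third to the chosen datarate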
    for k in range(len(lines)):
        for op in optics:
            for ch in channels_dict:
                if (prefix+op+ch+suffix) in lines[k]:
                    lines[k]=prefix + op + channels_dict[ch] + "]\n"
                    lines[k+1]=lines[k+1].lstrip("#").rstrip(lines[k+1].split("=")[1])+"1\n"
                    lines[k+2]=lines[k+2].lstrip("#")
                    lines[k+3]=lines[k+3].lstrip("#").rstrip(lines[k+3].split("=")[1])+str(datarate)+"\n"
                    lines[k+4]=lines[k+4].lstrip("#")
    ofile = open(outputfile,'w')
    for k in range(len(lines)):
        ofile.write(lines[k])
        #print lines[k]
    ofile.close()

  4136   Tue Jan 11 16:04:17 2011   josephb   Update   CDS   Script to update web views of models for all installed front ends

I wrote a new script that is in /opt/rtcds/caltech/c1/scripts/AutoUpdate/ called  webview_simlink_update.m. 

This m-file when run in matlab will go to the /opt/rtcds/caltech/c1/target directory and for each c1 front end, generate the corresponding webview files for that system and place them in the AutoUpdate directory. 

Afterwards the files can be moved on Nodus to the /users/public_html/FE/ directory with:

mv /opt/rtcds/caltech/c1/scripts/AutoUpdate/*slwebview* /users/public_html/FE/

This was run today, and the files can be viewed at:

https://nodus.ligo.caltech.edu:30889/FE/

Long term, I'd like to figure out a way of automating this to produce automatically updated screens without having to run it manually.  However, simulink seems to stubbornly require an X window to work.

  4144   Wed Jan 12 17:50:21 2011   josephb   Update   CDS   Worked on c1lsc, MC2 screens

[josephb, osamu, kiwamu]

We worked over by the 1Y2 rack today, trying to debug why we didn't get any signal to the c1lsc ADC.

We turned off the power to the rack several times while examining cards, including the whitening filter board, AA board, and the REFL 33 demod board.  I will note, I incorrectly turned off power in the 1Y1 rack briefly. 

We noticed a small wire on the whitening filter board in the channel 5 path.  Rana suggested this was part of a fix for channels 4 and 5 having too much cross talk.  A trace was cut and this jumper added to fix that particular problem.

We confirmed we could pass signals through each individual channel on the AA and whitening filter boards.  When we put them back in, we did notice a large offset when the inputs were not terminated.  After terminating all inputs, the values at the ADC were reasonable, measuring from 0 to about -20 counts.  We applied a 1 Hz, 0.1 Vpp signal and confirmed we saw the digital controls respond with the correct sine wave.

We examined the REFL 33 demod board and confirmed it would work for demodulating 11 MHZ, although without tuning, the I and Q phases will not be exactly 90 degrees apart.

The REFL 33 I and Q outputs have been connected to the whitening board's 1 and 2 inputs, respectively.  Once Kiwamu adds appropriate LO and PD signals to the REFL 33 demod board he should be able to see the resulting I and Q signals digitally on the PD1 I and Q channels.

 

In an unrelated fix, we examined the suspension screens, specifically the Dewhitening lights.  It turns out the lights were still looking at SW2 bit 7 instead of SW2 bit 5.  The actual front end models were using the correct bit (bit 21, which corresponds to the 9th filter bank), so this was purely a display issue.  Tomorrow I'll take a look at the binary outputs and see why the analog filters aren't actually changing.

 

 

 

  4146   Wed Jan 12 22:33:24 2011   kiwamu   Update   CDS   MC2 dewhitening are healthy except for UR

I briefly checked the MC2 analog dewhitening filters.

It turned out that the switching of the dewhitening filters from epics worked correctly except for the UR path.

I couldn't get a healthy transfer function for the UR path, probably because the UR monitor on the front panel of either the AI filter or the dewhitening filter is broken.

It needs another check.

Quote:  #4144

Tomorrow I'll take a look at the binary outputs and see why the analog filters aren't actually changing.

  4150   Thu Jan 13 14:21:13 2011   josephb   Update   CDS   Webview of front end model files automated

After Rana pointed me to Yoichi's MEDM snapshot script, I learned how to use Xvfb, which is what Yoichi used to write screens without a real screen.  With this I wrote a new cron script, which I added to Mafalda's cron tab to be run once a day at 6am.

The script is called webview_update.cron and is in /opt/rtcds/caltech/c1/scripts/AutoUpdate/.

#!/bin/bash
DISPLAY=:6
export DISPLAY
#Check if Xvfb server is already running
pid=`ps -eaf|grep vfb | grep $DISPLAY | awk '{print $2}'`
if [ $pid ]; then
        echo "Xvfb already running [pid=${pid}]" >/dev/null
else
# Start Xvfb
echo "Starting Xvfb on $DISPLAY"
Xvfb $DISPLAY -screen 0 1600x1200x24 >&/dev/null &
fi
pid=$!
echo $pid > /opt/rtcds/caltech/c1/scripts/AutoUpdate/Xvfb.pid
sleep 3

#Running the matlab process
/cvs/cds/caltech/apps/linux/matlab/bin/matlab -display :6 -logfile /opt/rtcds/caltech/c1/scripts/AutoUpdate/webview.log -r webview_simlink_update

  4152   Thu Jan 13 16:41:07 2011   josephb   Update   CDS   Channel names for LSC updated

I renamed most of the filter banks in the c1lsc model.  The input filters are now labeled based on the RF photodiode's name, plus I or Q.  The last set of filters in the OM subsystem (output matrix) have had the TO removed, and are now sensibly named ETMX, ETMY, etc.

We also removed the redundant filter banks between the LSCMTRX and the LSC_OM_MTRX.  There is now only one set, the DARM, CARM, etc ones.

The webview of the LSC model can be found here.

  4160   Fri Jan 14 20:39:20 2011   rana   Update   CDS   Updated some DAQ channel names

I like this activateDAQ script, but someone (Jenne with Joe's help) still needs to add the PEM channels - we still cannot see any seismic trends.

  4171   Thu Jan 20 00:39:22 2011   kiwamu   HowTo   CDS   DAQ setup : another trick

Here is another trick for the DAQ setup when you add a DAQ channel associated with a new front end code.

 

 Once you finish setting things up properly according to this wiki page (this page), you have to go to 

      /cvs/cds/rtcds/caltech/c1/target/fb

and then edit the file called master

This file contains the paths that fb needs to look at for the daqd initialization.

Add the paths associated with your new front end code to this file, for example:

        /opt/rtcds/caltech/c1/chans/daq/C1LSC.ini

       /opt/rtcds/caltech/c1/target/gds/param/tpchn_c1lsc.par

After editing the file, restart the daqd on fb by the usual commands:

             telnet fb 8088

             shutdown
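The same addition can also be done programmatically.  A minimal sketch, assuming the master file is just a list of paths, one per line, as described above:

master = '/cvs/cds/rtcds/caltech/c1/target/fb/master'
new_entries = ['/opt/rtcds/caltech/c1/chans/daq/C1LSC.ini',
               '/opt/rtcds/caltech/c1/target/gds/param/tpchn_c1lsc.par']
existing = open(master).read().splitlines()
f = open(master, 'a')
for entry in new_entries:
    if entry not in existing:             # don't add duplicate lines
        f.write(entry + '\n')
f.close()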

  4173   Thu Jan 20 04:03:02 2011   kiwamu   Update   CDS   c1scy error

 I found that c1scy was not running due to a daq initialization error.

 I couldn't figure out how to fix it, so I am leaving it to Joe.


 Here are the error messages in the dmesg on c1iscey:
[   39.429002] c1scy: Invalid num daq chans = 0
[   39.429002] c1scy: DAQ init failed -- exiting
 
 
Before I found this fact, I rebooted c1iscey in order to recover the synchronization with fb.
The synchronization had probably been lost because I shut down the daqd on fb.
  4175   Thu Jan 20 10:15:50 2011   josephb   Update   CDS   c1scy error

This is caused by an insufficient number of active DAQ channels in the C1SCY.ini file located in /opt/rtcds/caltech/c1/chans/daq/.  A quick look (grep -v # C1SCY.ini) indicates there are no active channels.  Experience tells me you need at least 2 active channels.
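A rough Python equivalent of that grep check, which just counts un-commented channel definition lines (purely illustrative):

ini = '/opt/rtcds/caltech/c1/chans/daq/C1SCY.ini'
defs = [l for l in open(ini) if l.startswith('[') and ':' in l]   # commented lines start with '#'
print '%d active channel definitions (need at least 2)' % len(defs)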

Taking a look at the activateDAQ.py script in the daq directory, it looks like the C1SCY.ini file is included, but the loop over optics is missing ETMY.  This caused the file to be improperly updated when the activateDAQ.py script was run.  I have fixed the C1SCY.ini file (I ran a modified version of the activate script on just C1SCY.ini).

I have restarted the c1scy front end using the startc1scy script, and it is currently working.

Quote:
 Here are the error messages in the dmesg on c1iscey:
[   39.429002] c1scy: Invalid num daq chans = 0
[   39.429002] c1scy: DAQ init failed -- exiting
 

 

  4179   Thu Jan 20 18:20:55 2011   josephb   Update   CDS   c1iscex computer and c1sus computer swapped

Since the 1U sized computers don't have enough slots to hold the host interface board, RFM card, and a Dolphin card, we had to move the 2U computer from the end to the middle to replace c1sus.

We're hoping this will reduce the time associated with reads off the RFM card compared to when it's in the IO chassis.  Previous experience on c1ioo shows this change provides about a factor of 2 improvement, with 8 microseconds per read dropping to 4 microseconds per read, per this elog.

So the dolphin card was moved into the 2U chassis, as well as the RFM card.  I had to swap the PMC to PCI adapter on the RFM card since the one originally on it required an external power connection, which the computer doesn't provide.  So I swapped with one of the DAC cards in the c1sus IO chassis.

But then I forgot to hit submit on this elog entry..............

  4183   Fri Jan 21 15:26:15 2011   josephb   Update   CDS   c1sus broken yesterday and now fixed

[Joe, Koji]
Yesterday's CDS swap of c1sus and c1iscex left the interferometer in a bad state due to several issues.

The first was the need to actually power down the IO chassis completely when switching computers (I eventually waited for a green LED to stop glowing and then plugged the power back in).  I also unplugged and replugged the interface cable between the IO chassis and the computer while everything was powered down.  This let the computer actually see the IO chassis (previously the host interface card was glowing just red, with no green lights).

Second, the former c1iscex computer (now the new c1sus computer) only has 6 CPUs, not 8 like most of the other front ends.  Because it was running 6 models (c1sus, c1mcs, c1rms, c1rfm, c1pem, c1x02) and 1 CPU needed to be reserved for the operating system, 2 models were not actually running (the recycling mirrors and PEM).  This meant the recycling mirrors were left swinging uncontrolled.

To fix this I merged the c1rms model with the c1sus model.  The c1sus model now controls BS, ITMX, ITMY, PRM, SRM.  I merged the filter files in the /chans/ directory, and reactivated all the DAQ channels.  The master file for the fb in the /target/fb directory had all references to c1rms removed, and then the fb was restarted via "telnet fb 8088" and then "shutdown".

My final mistake was starting the work late in the day.

So the lesson for Joe is, don't start changes in the afternoon.

Koji has been helping me test the damping and confirm things are really running.  We were having some issues with some of the matrix values.  Unfortunately I had to add them by hand since the previous snapshots no longer work with the models.

  4184   Fri Jan 21 17:59:27 2011   josephb, alex   Update   CDS   Fixed Dolphin transmission

The orientation of the Dolphin cards seems to be opposite on c1lsc and c1sus.  The wide part is on top on c1lsc and on the bottom on c1sus.  This means the cable is plugged into the left Dolphin port on c1lsc and into the right Dolphin port on c1sus.  Otherwise you get a weird state where you receive but do not transmit.

  4200   Tue Jan 25 15:20:38 2011   josephb   Update   CDS   Updated c1rfm model plus new naming convention for RFM/Dolphin

After sitting down for 5 minutes and thinking about it, I realized the names I had been using for internal RFM communication were pretty bad, because looking at a model didn't let you know where the RFM connection was coming from or going to.  So to correct my previous mistakes, I'm instituting the following naming convention for reflected memory, PCIe reflected memory (Dolphin) and shared memory names.  These don't actually get used anywhere but the models, and thus don't show up as channel names anywhere else.  They are replaced by raw hex memory locations in the actual code through the use of the IPC file (/opt/rtcds/caltech/c1/chans/ipc/C1.ipc).  However, this convention will make the models easier to understand for anyone looking at them or modifying them.

 

The new naming convention for RFM and Dolphin channels is as follows.

SITE:Sending Model-Receiving Model_DESCRIPTION_HERE

The description should be unique to the data being transferred, and reused if it's the same data.  Thus if it's transferred to another model, it's easy to identify it as the same information.

The model should be the .mdl file name, not the subsystem it's part of.  So SCX is used instead of SUS.  This is to make it easier to track where data is going.

In the unlikely case of multiple models receiving, it should be of the form SITE:Sending Model-Receiving Model 1-Receiving Model 2_DESCRIPTION_HERE.  Separate models by dashes and the description by underscores.

Example:

C1:LSC-RFM_ETMX_LSC

This channel goes from the LSC model (on c1lsc) to the RFM model (on c1sus).  It transfers ETMX LSC position feedback.  The second LSC may seem redundant until we look at the next channel in the chain.

C1:RFM-SCX_ETMX_LSC

This channel goes from the RFM model to the SCX model (on c1iscex). It contains the same information as the first channel, i.e. ETMX LSC position feedback.
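Just to make the fields explicit, here is a small illustrative sketch that unpacks a name following this convention (it is not used by any real code):

def parse_ipc_name(name):
    site, rest = name.split(':', 1)
    models, description = rest.split('_', 1)
    parts = models.split('-')
    return site, parts[0], parts[1:], description    # site, sender, receiver(s), description

print parse_ipc_name('C1:RFM-SCX_ETMX_LSC')
# -> ('C1', 'RFM', ['SCX'], 'ETMX_LSC')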

 

I have updated all the models that had RFM and SHMEM connections, as well as adding all the LSC communication connections to c1rfm.  This includes c1sus, c1rfm, c1mcs, c1ioo, c1gcv, c1lsc, c1scx, c1scy.  I have not yet built all the models since I didn't finish the updates until this afternoon.  I will build and test the code tomorrow morning.

 

 

 

  4203   Tue Jan 25 22:49:13 2011   Koji   Update   CDS   Front End multiple crash

STATUS:

  • Rebooted c1lsc and c1sus. Restarted fb many times.
  • c1sus seems working.
  • All of the suspensions are damped / Xarm is locked by the green
  • Thermal control for the green is working
  • c1lsc is frozen
  • FB status: c1lsc 0x4000, c1scx/c1scy 0x2bad
  • dataviewer not working

1. DataViewer did not work for the LSC channels (like TRX)

2. Rebooted LSC. There were no instructions for the reboot on the Wiki. But somehow the rebooting automatically launched the processes.

3. However, rebooting LSC stopped the C1SUS processes from working

4. Rebooted C1SUS. Despite the rebooting description on the wiki, none of the FE processes came up.

5. Probably I was not patient enough to wait for the completion of dolphin_wait? Rebooted C1SUS again.

6. Yes, that was true. This time I waited for everything to come up automatically. Now all of c1pemfe, c1rfmfe, c1mcsfe, c1susfe, c1x02fe are running.
FB status for c1sus processes all green.

7. burtrestored c1pemfe,c1rfmfe,c1mcsfe,c1susfe,c1x02fe with the snapshot on Jan 25 12:07, 2010.

8. All of the OSEM filters are off, and the servo switches are incorrectly on. Pushing many buttons to restore the suspensions.

9. I asked Suresh to restore half of the suspensions.

10. The suspensions were restored and damped. However, c1lsc is still frozen.

11. Rebooting c1lsc froze the front ends on c1sus. We redid steps No. 5 to No. 10.

12. c1x04 seems to be working. c1lsc, however, is still frozen. We decided to leave C1LSC in this state.

 

  4206   Wed Jan 26 10:58:48 2011   josephb   Update   CDS   Front End multiple crash

Looking at dmesg on c1lsc, it looks like the model is starting, but then eventually times out due to a long ADC wait. 

[  114.778001] c1lsc: cycle 45 time 23368; adcWait 14; write1 0; write2 0; longest write2 0
[  114.779001] c1lsc: ADC TIMEOUT 0 1717 53 181

I'm not sure what caused the timeout, although there are about 20 messages indicating a failed time stamp read from c1sus (it's sending TRX information to c1lsc via the dolphin connection) before the timeout.

Not seeing any other obvious error messages, I killed the dead c1lsc model by typing:

sudo rmmod c1lscfe

I then tried starting just the front end model again by going to the /opt/rtcds/caltech/c1/target/c1lsc/bin/ directory and typing:

sudo insmod c1lscfe.ko

This started up just the FE again (I didn't use the restart script because the EPICS processes were running fine, since we had non-white channels).  At the moment, c1lsc is running and I see green lights and 0x0 for the FB0 status on the C1LSC_GDS_TP screen.

At this point I'm not sure what caused the timeout.  I'll be adding some more troubleshooting steps to the wiki though.  Also, c1scx and c1scy are probably in need of a restart to get them properly sync'd to the framebuilder.

I did a quick test on dataviewer and can see LSC channels such as C1:LSC-TRX_IN1, as well other channels on C1SUS such as BS sensors channels.

Quote:

STATUS:

  • Rebooted c1lsc and c1sus. Restarted fb many times.
  • c1sus seems working.
  • All of the suspensions are damped / Xarm is locked by the green
  • Thermal control for the green is working
  • c1lsc is frozen
  • FB status: c1lsc 0x4000, c1scx/c1scy 0x2bad
  • dataviewer not working 

 
