ID   Date   Author   Type   Category   Subject
  4545   Wed Apr 20 11:02:18 2011   josephb   Update   CDS   MEDM screens and Front Ends updated to new Matrices
We simply didn't have any matrices larger than 16x16. If we had, then that matrix would not have worked properly since the beginning.

Quote:

Just a curiosity:

I just wonder how you distinguished the difference between _111 and _111.

They are equivalent by themselves. Have you looked at the contexts of the lines?
Or did you just not have any matrix larger than 16x16?

 

  4580   Thu Apr 28 10:53:50 2011   josephb   Update   CDS   Adventures in Hyper-threading

What was done:

1) Turn off MC1, MC2, MC3, BS, ITMX, ITMY, PRM, SRM watchdogs.

2) Turn c1sus computer off (sudo shutdown now)

3) Go connect monitor and keyboard to c1sus.  Turn c1sus on.

4) Hit "del" key at the right time to go to setup (BIOS).

5) Go to BIOS advanced tab, CPU options, enable Multi-threading.

6) Hit F10 to save and let the computer continue booting.

What went wrong:

Once c1sus was up, I noticed several red lights and dead keep alives for the c1sus models.

Typing dmesg on c1sus revealed many messages like:

[  107.583420] c1x02: cycle 33737 time 20; adcWait 10; write1 0; write2 0; longest write2 0
[  107.583771] c1x02: cycle 33760 time 19; adcWait 11; write1 0; write2 0; longest write2 0

This indicates the Input/Output Processor (IOP) is not completing its duties within the 15 microseconds (1/64 kHz) that it has.  These lines indicate it is taking 19 or 20 microseconds (I saw messages ranging from 16 to 22 microseconds).

So this seems to agree with Rolf's observations that hyperthreading can cause a 5-10 microsecond increase in computation time.

So the next thing to do is modify which core the codes are running on, and try to get them paired up on the same physical core.

  4581   Thu Apr 28 12:25:11 2011   josephb   Update   CDS   Further adventures in Hyper-threading

First, I disabled front end starts on boot up, and brought c1sus up.  I rebuilt the models for the c1sus computer so they had new specific_cpu numbers, on the assumption that CPUs 0-1 were one real core, 2-3 were another, etc.
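
To check that pairing assumption on the actual machine, the sysfs topology files can be read directly.  Here is a minimal C sketch of such a check (it assumes a standard Linux /sys layout and is just a diagnostic aid, not part of the RCG tools):

/* Print which physical core each logical CPU belongs to, so we can tell
 * which logical CPUs are hyper-threaded siblings of each other. */
#include <stdio.h>

int main(void)
{
    char path[128], buf[64];
    FILE *f;
    int cpu;

    for (cpu = 0; cpu < 64; cpu++) {
        snprintf(path, sizeof(path),
                 "/sys/devices/system/cpu/cpu%d/topology/core_id", cpu);
        f = fopen(path, "r");
        if (!f)
            break;                /* no more logical CPUs */
        if (fgets(buf, sizeof(buf), f))
            printf("logical cpu %2d -> physical core %s", cpu, buf);
        fclose(f);
    }
    return 0;
}

Logical CPUs that report the same core_id share one physical core, which is what the specific_cpu pairing is trying to exploit.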

Then I ran the startc1SYS scripts one by one to bring up the models.  Upon just loading c1x02 (the IOP) on "core 2", I saw its cycle time fluctuate from about 5 to 12 microseconds.  After bringing up c1sus on "core 3", I saw the IOP settle down to about 7 consistently.  Prior to hyper-threading it was generally 5.

Unfortunately, the c1sus model was taking between 60 and 70 microseconds, and was producing error messages a few times a second:

[ 1052.876368] c1sus: cycle 14432 time 65; adcWait 0; write1 0; write2 0; longest write2 0
[ 1052.936698] c1sus: cycle 15421 time 74; adcWait 0; write1 0; write2 0; longest write2 0

Bringing up the rest of the models (c1mcs on 4, c1rfm on 5, and c1pem on 6), I saw c1mcs occasionally jump above the 60 microsecond line, perhaps once a minute.  It generally hovered around 45 microseconds; prior to hyper-threading it was around 25-28 microseconds.

c1rfm was rock solid at 38 microseconds, the same as prior to hyper-threading.  This is most likely because it does almost no calculation and only RFM reads slow it down.

c1pem continued to use negligible time, 3 microseconds out of its 480.

I tried moving c1sus to core 8 from core 3, which seemed to bring it to the 58 to 65 microsecond range, with long cycles every few seconds.

 

I built 5 dummy models (dua on 7, dub on 9, duc on 10, dud on 11, due on 1) to ensure that each virtual core had a model on it, to see if it helped with stabilizing things.  The models were basically copies of the c1pem model.

Interestingly, c1mcs seemed to get somewhat better, only taking 30-32 microseconds, although still not as good as its pre-hyper-threading 25-28.  Over the course of several minutes of watching it had no further long cycles.

c1sus got worse again, and was running long cycles 4-5 times a second.

 

At this point, without surgery on which models are controlling which optics (i.e. splitting the c1sus model up), I am not able to have hyper-threading on and have things working.  I am proceeding to revert the control models and the c1sus computer to the non-hyper-threaded state.

 

 

  4608   Tue May 3 10:41:35 2011   josephb   Update   CDS   Morning maintenance

1) Filled in the C1SUS_BS_OLMATRIX properly so as to make the BS oplev work for Steve.

2) Turned on the ITMX damping.  Apparently it had tripped this morning, possibly due to work in the lab area.

3) The ETMX FE controller (c1scx) had an ADC timeout and died sometime around 8:30 am.  The c1x01 model (the IOP on the ETMX computer) was also indicating an FB status error (mismatch in DAQ channels).

The reported error in dmesg on c1iscex was:

[1628690.250002] c1spx: ADC TIMEOUT 0 3541 21 3605
[1628690.250002] c1scx: ADC TIMEOUT 0 3541 21 3605

Just to be safe, I rebuilt the c1x01 and c1scx models, ran ./activateDAQ.py, and used the scripts killc1spx, killc1scx, and killc1x01.

I finally restarted the processes with startc1x01, startc1scx, and startc1spx.  Everything is currently alive and indicating all green.

  4609   Tue May 3 10:59:31 2011   josephb   Update   CDS   1Y2 binary output adapter board now powered

I temporarily turned off the power to the 1Y2 rack this morning while wiring in the binary output adapter board power (+/- 15V) into the cross connects.

The board is now powered and we can proceed to testing whether we can actually control the LSC whitening filters.

  4665   Mon May 9 13:14:48 2011   josephb   Update   LSC   C1:LSC-TRIG_MTRX : wrong matrix size

[Joe, Kiwamu]

There is a feature/bug of the RCG code: you can only have 1 receiving tag for every sending tag.  There were 5 sending tags which were each being received by two receiving tags, for two different matrices.  Only the first receiver was getting the data; the second was apparently ignored.

This has been fixed temporarily by putting in direct lines in place of these 5 tags.

Quote:

I found that C1:LSC-TRIG_MTRX has a wrong matrix size. It needs to be fixed.

It is designed to have an 11x8 matrix in the simulink model file, but it's been compiled as a 6x8 matrix.

 

  4666   Mon May 9 15:21:36 2011   josephb   Update   PSL   Fixed channel names for PSL QPDs, fixed saturation, changed signs

[Valera, Joe]

Software Changes:

First we changed all the C1:IOO-QPD_*_* channels to C1:PSL-QPD_*_* channels in the /cvs/cds/caltech/target/c1iool0/c1ioo.db file, as well as the /opt/rtcds/caltech/c1/chans/daq/C0EDCU.ini file.  We then rebooted the frame builder via "telnet fb 8087" and then "shutdown".

This change breaks continuity for these channels prior to today.

The C1:PSL-QPD_POS_HOR and C1:PSL-QPD_POS_VERT channels were found to be backwards as well.  So we modified the /cvs/cds/caltech/target/c1iool0/c1ioo.db file to switch them.

Lastly, we changed the ASLO and AOFF values for the C1:PSL-QPD_POS_SUM and the C1:PSL-QPD_ANG_SUM so as to provide positive numbers.  This was done by flipping the sign for each entry.

ASLO went from 0.004883 to -0.004883, and AOFF went from -10 to 10 for both channels.
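
As a sanity check on that sign flip: assuming the usual EPICS linear conversion VAL = RVAL * ASLO + AOFF, negating both ASLO and AOFF simply negates the converted value.  A small illustrative C snippet (not the actual EPICS record code):

/* Illustration only: with VAL = RVAL * ASLO + AOFF, flipping the sign of
 * both ASLO and AOFF flips the sign of the converted value. */
#include <stdio.h>

static double convert(double rval, double aslo, double aoff)
{
    return rval * aslo + aoff;
}

int main(void)
{
    double rval   = -3000.0;                        /* example raw ADC counts */
    double before = convert(rval,  0.004883, -10.0);
    double after  = convert(rval, -0.004883,  10.0);

    printf("before: %.3f   after: %.3f\n", before, after);
    /* after == -before, so a channel that used to read negative
     * now reads positive, as intended. */
    return 0;
}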

Hardware Changes:

The C1:PSL-QPD_ANG_SUM channel had been saturated at -10V.  Valera reduced the power on the QPD to drop it to about 4V by placing an ND attenuator in the ANG QPD path.

  4677   Tue May 10 10:06:23 2011   josephb   Update   PSL   Fixed channel names for PSL QPDs, fixed saturation, changed signs

I added calculation entries to the /cvs/cds/caltech/target/c1iool0/c1ioo.db file which are named C1:IOO-QPD_*_*, as the channels were originally named.  These calculation channels carry data identical to the C1:PSL-QPD_*_* channels.  I then added the channels to the C0EDCU.ini file, so as to once again have continuity for the channels, in addition to having the newer, better-named channels.

The c1iool0 machine ("telnet c1iool0", "reboot") and the framebuilder process ("telnet fb 8087", "shutdown") were both restarted after these changes.

These channels were brought up in dataviewer and compared.  The appropriate channels were identical.

Quote:

[Valera, Joe]

Software Changes:

First we changed all the C1:IOO-QPD_*_* channels to C1:PSL-QPD_*_* channels in the /cvs/cds/caltech/target/c1iool0/c1ioo.db file, as well as the /opt/rtcds/caltech/c1/chans/daq/C0EDCU.ini file.  We then rebooted the frame builder via "telnet fb 8087" and then "shutdown".

This change breaks continuity for these channels prior to today.

 

  4679   Tue May 10 16:42:49 2011   josephb   Update   CDS   New upconversion model (c1uct)

[Ryan, Joe]

Ryan added the c1uct (upconversion tester) model to the c1ioo machine.   It is DCU_ID 32, CPU 6.  The relevant wiki page has been updated. It has been added to /diskless/root/etc/rtsystab on the fb machine and automatically comes up when the c1ioo computer is turned on. 

Joe still needs to add its status to the status screen.

It is soft linked from:

/opt/rtcds/caltech/c1/userapps/trunk/CDS/c1/models/c1uct.mdl

Ryan will expand upon its uses later.

  4680   Tue May 10 16:45:19 2011   josephb   Update   CDS   c1ass now receiving AS55I from c1lsc

[Valera, Joe]

We added a cdsPCIx_SHMEM connection between the c1lsc and c1ass models.  This connection is called C1:LSC-ASS_AS55I, and sends the normalized AS55I data to Lockin 11 of the c1ass model.

In addition, in order to get the c1ass model to compile, we had to place all the non-IO parts inside a subsystem block, which we called ASS and to which we gave the top_names tag.

The c1lsc and c1ass models were rebuilt, the frame builder restarted, and the models restarted.

  4748   Thu May 19 12:09:41 2011   josephb   Update   CDS   AA filter box pulled from 1X5, optic suspensions currently off

[Steve, Joe]

Steve pulled the top AA filter box from 1X5, which handled some of the suspension channels.  We turned off all the watchdogs before pulling it out, as well as recorded which cables were connected to which inputs.

The case is undergoing a structural modification to have the ADC adapter card, which previously was loosely connected via cables, securely attached to the case.

Steve still wants to do some cabling in the rack while the box is out, and will return it this afternoon once he has finished that.

  4770   Tue May 31 11:26:29 2011   josephb   Update   CDS   CDS Maintenance

1) Checked in the changes I had made to the c1mcp.mdl model just before leaving for Elba.

2) The c1x01 and c1scx kernel modules had stopped running due to an ADC timeout. 

According to dmesg on c1iscex, they died at 3426838 seconds after starting (which corresponds to ~39 days).  "uptime" indicates c1iscex was up for 46 days, 23 hours.  So my guess is that about 8 days ago (last Monday or Tuesday) they both died when the ADCs failed to respond quickly enough, for an unknown reason.

I used the kill scripts (in /opt/rtcds/caltech/c1/scripts/) to kill c1spx, c1scx, and c1x01.  I then used the start scripts to start c1x01, then c1scx, and then finally c1spx.  They all came up fine.

Status screen is now all green.  I re-enabled damping on ETMX and it seems to be happy.  A small kick of the optic shows the appropriately damped response.

  4776   Wed Jun 1 11:31:50 2011   josephb   Update   CDS   MC1 LR digital reading close to zero, readback ~0.7 volts

There appears to be a bad cable connection somewhere on the LR sensor path for the MC1 optic.

The channel C1:SUS-MC1_LRPDMon is reading back 0.664 volts, but the digital sensor channel, C1:SUS-MC1_LRSEN_INMON, is reading about -16.  This should be closer to +1000 or so.

We've temporarily turned off the LRSEN filter module output while this is being looked into.

I briefly went out and checked the cables around the whitening and AA boards for the suspension sensors, but even after wiggling them and making sure everything was plugged in solidly, nothing changed.  There was one semi-loose connection, though it wasn't on the MC1 board; I pushed it all the way in anyway.  The monitor point on the AA board looks correct for the LR channels, although ITMX LR struck me as being very low at about -0.05 Volts.

According to data viewer, the MC1 LR sensor channel went bad roughly two weeks ago, around 00:40 on 5/18 UTC, or 17:40 on 5/17 PDT.

 

UPDATE:

It appears the AA board (or possibly the SCSI cable connected to it) is the problem in the chain.

  4800   Thu Jun 9 16:18:03 2011   josephb   Update   CDS   Second trends only go back 12 days

While answering a quick question by Kiwamu, I noticed we only had second trends going back to 99050000 GPS time, May 27th 2011. 

Trends (I thought) were intended to be kept forever, and certainly longer than full data, which currently goes back several months.

Jamie will need to look into this.

  4918   Thu Jun 30 06:54:07 2011   josephb   Update   CDS   Modified the automated scripts for producing model webviews

Dave Barker pointed out last week that the webview of our simulink model files, generated from the installed models (i.e. in /opt/rtcds/caltech/c1/target/<system name>/simLink/) was not handling libraries properly.  Essentially the web pages generated couldn't see inside library parts.

This was caused by 2 problems.  The first was that the userapps directories were not in the Matlab path when the slwebview call was done, so it couldn't even find the libraries.  The second was that the slwebview code by default doesn't follow libraries and links, and needs to be explicitly told to do so.

I added the following lines to the webview_simlink_update.m file:

addpath('/opt/rtcds/caltech/c1/core/trunk/src/epics/simLink/lib')
for sub = {'cds','isc','isi','sus','psl'}
 for spath = {'common/models','c1/models/lib'}
   addpath(['/opt/rtcds/caltech/c1/userapps/release/' sub{1} '/' spath{1}]);
 end
end

I also changed the following:

temp = slwebview(final_files{x},'viewFile',false);

became

temp = slwebview(final_files{x},'viewFile',false,'FollowLinks','on','FollowModelReference','on');

After confirming these changes worked, I have sent a corrected version to Dave and Keith.

The webview results can be found at: https://nodus.ligo.caltech.edu:30889/FE/

 

 

  6051   Wed Nov 30 11:04:26 2011   josephb   Update   CDS   Filtering Noise issue tracked down ???

Quote:

For now, I suppose we can just change this number to 1e-40 or so. I don't know how to calculate what the right number should be. Not sure why this underflow is not an issue for the BiQuad, however.

According to the RCG SVN logs, the reason it was removed was a more general change made to the compiled code, not specific to just the biquad.  Basically, the ability to have an underflow number (subnormal) has been turned off completely by having any number that underflows set to zero.  I'm not positive, but from a quick search it looks like the smallest double before hitting underflow (i.e. the smallest normal double) is 2.2250738585072014e-308.

Alex's entry from the SVN log for 2663:

Added new fz_daz() function to turn on two bits in the FPU SSE control register.
Bits FZ (flush underflows to zero) and DOZ (denorms are zeros) are set to
avoid runaway code on float/double denorms (really small numbers).
Ref: http://software.intel.com/en-us/articles/how-to-avoid-performance-penalties-for-gradual-underflow-behavior/

SVN log 2664:

Removed +- 1e-20 limiting code, this is taken care of by setting FZ/DOZ bits
in the CPU SEE control register (see mathInline.h)

SVN log 2665:

Kill the underflows and roll down float denorms to zero,
see fz_doz() in mathInline.h.
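
For reference, the same two MXCSR bits (FZ, plus the denormals-are-zero bit that Intel calls DAZ) can be set from user code with the SSE intrinsics.  This is just an illustrative sketch of the mechanism, not the RCG's actual fz_daz() in mathInline.h:

/* Sketch of the FZ/DAZ mechanism described above.  On x86-64, double math
 * goes through SSE, so with both bits set any denormal result or input is
 * treated as zero. */
#include <stdio.h>
#include <xmmintrin.h>   /* _MM_SET_FLUSH_ZERO_MODE */
#include <pmmintrin.h>   /* _MM_SET_DENORMALS_ZERO_MODE */

int main(void)
{
    volatile double tiny = 2.2250738585072014e-308;  /* smallest normal double */

    printf("default:     tiny/1e10 = %g\n", tiny / 1e10);   /* a denormal, ~2e-318 */

    _MM_SET_FLUSH_ZERO_MODE(_MM_FLUSH_ZERO_ON);              /* FZ bit  */
    _MM_SET_DENORMALS_ZERO_MODE(_MM_DENORMALS_ZERO_ON);      /* DAZ bit */

    printf("with FZ/DAZ: tiny/1e10 = %g\n", tiny / 1e10);    /* flushed to 0 */
    return 0;
}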

  1892   Wed Aug 12 13:35:03 2009   josephb, Alex   Configuration   Computers   Tested old Framebuilder 1.5 TB raid array on Linux1

Yesterday, Alex attached the old frame builder 1.5 TB raid array to linux1, and tested to make sure it would work on linux1.

This morning he tried to start a copy of the current /cvs/cds structure, however he realized that at the rate it was going it would take roughly 5 hours, so he stopped.

Currently, it is planned to perform this copy on this coming Friday morning.

  3128   Mon Jun 28 13:40:53 2010   josephb, Alex   Update   CDS   Changes added to CDS SVN, new checkout, new features, some changes made

Last week Alex merged in the changes I had made to the local 40m copy of the Real Time Code Generator.  These were to add a new part, called FiltMuxMatrix, which is a matrix of filter banks, as well as fixing the filter medm generation code so the filter banks actually have working time stamps.

I checked out a new version of the CDS SVN with these changes merged in.  Changes that will be added in the near future by Rolf and Alex include the addition of "tags".  These are pieces in simulink which act as a bridge between two points, so you can reduce the amount of wire clutter on diagrams.  Otherwise they have no real effect on the generated C code.  Also the ADC/DAC channel selector and in fact the ADC/DAC parts will be changing.  The MIT group has requested the channel selector be freed up for its original purpose in matlab, so Rolf is working on that.

The new checkout includes the new directory scheme Rolf is pushing.  So when you run the code generator and more specifically, install SYS, it places code in /opt/rtcds/caltech/c1/ type directories, like medm, chans, target, scripts.

For the time being, Alex has created a directory /rtcds on Linux1 under /home/cds.  He then created softlinks to that directory on megatron, c1iscex, and allegra in the /opt directory.  This was an easy way to have a shared path.

However, it does mean that on each new FE machine, after setting up the mounting of /home/cds from Linux1, we also need to create the /opt/rtcds link to /cvs/cds/rtcds.

After checking out the CDS SVN, we discovered there were some files missing that Alex had added to his version but not to the main branch.  Alex came over to the 40m and proceeded to get all those files checked in.  We then checked it out again.  Changes were made to the awg, framebuilder, and nds codes, which needed to be rebuilt.

There's a new naming scheme for models.  You need to include the site before the 3-letter system name, so lsc.mdl becomes c1lsc.mdl.

Certain other file name conventions were also changed.  Instead of tpchn_c1.par, tpchn_c2.par, etc., it's now tpchn_c1lsc.par, tpchn_c2lsp.par, etc.  The system name is included at the end of the filename, to make it clearer which file goes with which system.

This required an edit of the chnconf file, which has explicit calls to those file names.  Once we edited that file, we had to reload the xinetd service, which it is apparently a subpart of (this can be accomplished by /etc/init.d/xinetd stop, then /etc/init.d/xinetd start).

/etc/rc.d/rc.local also had to be edited for the new model names (c1lsc, c1lsp, etc).

The daqdrc file (for the framebuilder) now parses which dcu_rate to use from the tpchn_c1lsc.par type files, so the dcu_rate 20 = 16384 lines have been removed.  set gds_server has also been removed, and replaced with tpconfig "/opt/rtcds/caltech/c1/target/gds/param/testpoint.par";  from which it can get the hostname.  This information is now derived from the c1SYS.mdl file.

Hostname needs to be added to the .mdl files, in the cdsParameter block (i.e. host=megatron).

After that Alex informed me the IOP processor needs to be running for the other models to work properly, as well as for the Framebuilder to work.

The models and framebuilder now get their timing signal from the IOP (input/output processor).  This must be running in order for the other models or the FB to run properly.  It's generally named c1x00 or c1x01 or similar; the last two numbers ideally are unique to each FE computer.

Initially there was a problem running on Megatron, because the IOP gets its timing signal from the IO chassis, and there was none connected to megatron.  However, he has since modified the code so that if there's no IO chassis, the IOP processor just uses the system clock.  It has been tested and runs on megatron now.

 

  506   Fri May 30 12:03:08 2008   josephb, Andrey   Configuration   Cameras   Head to head comparison of cameras
Andrey and I (Joseph B.) examined the output of the GC650 (CCD) and GC750 (CMOS) Prosilica cameras.  We did several live motion tests (e.g. rotating the turning mirror, moving and rotating the camera) and also used a microscope slide to try to eliminate back reflections and interference.

Both the GC650 and GC750 produce dark lines in the images, some of which look parallel, while others are in much stranger shapes, such as circles and arcs.

Moving the GC750 camera physically, the spot moves around while the dark lines appear to be fixed to the camera itself and remain in the same location on the detector; i.e. coming back to the same spot keeps showing a circle.  In reasonably well behaved sections, these lines are about 10% dips in power, and could in principle be subtracted out.  It's possible that the camera was damaged by too much incident light in the past, although going back to the pmc_trans images that were taken, similar lines are still visible.

Moving the GC650 camera physically seems to change the position of the lines (if one also rotates the turning mirror to get to the same spot on the CCD).  It seems as if a slight change in angle has a large effect on these dark bands, which can either be thin or very large, bordering on the spot size.  My guess (as the vendor suggested) is that the light is interacting with the electronics behind the surface layer, rather than a surface defect producing these lines.  Using a microscope slide in between the turning mirror and the GC650, we were able to produce new fringes, but this didn't affect the underlying ones.

Placing a microscope slide in between the last turning mirror and the GC750 does not affect the dark lines (although it does seem to add some), nor does turning the final turning mirror, so it seems unlikely to be caused by back reflection in this case.

So it seems the CMOS may be more consistent, although we need to determine whether the current line problems are due to exposure to too much light at some point in the past (i.e. I broke it) or whether they come that way from the factory.

Attached are the results of image processing of the images from our two cameras using Andrey's new Matlab script.
Attachment 1: Waveform_Reconstruction_May30-2008.pdf
  4406   Fri Mar 11 18:32:45 2011   josephb, Chris, Jamie   Update   CDS   Debugging simplant damping

The FM1 filter module change for XXSEN was propagated to the ETMX suspension.  This was a change from a 30,100:3 with a DC gain of 1 to a 3:30 which just compensates the hardware filter.

The good gains for the Sim damping were found to be: 100 for ETMX_SUSPOS, 0.1 for ETMX_SUSPIT, 0.1 for ETMX_SUSYAW, and -70 for ETMX_SUSSIDE.  Much higher gains tended to saturate the simulated coils (i.e. hitting the 10 V limit), after which the histories for the RESPONSE matrix had to be cleared.

These seem to work to damp the real ETMX as well.

  558   Tue Jun 24 17:12:10 2008   josephb, Eric   Configuration   Cameras   GC750 setup, 1X4 Hub connected, ETMX images
The GC750 camera has been set up to look at ETMX. In addition, the new 1X4 rack mounted switch (131.215.113.200) has been connected via new cat6 cable to the control room hub (131.215.113.1?), thanks to Eric. The camera is now plugged into the 1X4 rack switch and has a gigabit connection to the control room computers as well as Mafalda (131.215.113.23).

By using ssh -X mafalda or ssh -X 131.215.113.23, then typing:

target
cd Prosilica/bin-pc/x86/
./Sampleviewer

A viewer will be brought up. Clicking on the 3rd icon from the left (it looks like an eye) will bring up a live view.

Close the viewer, then cd ../../40mCode and run ./Snap --help; it will tell you how to use a simple program for taking .tiff images, as well as for setting things such as exposure length and the size of the image (in pixels) to send.

When the interferometer was set to an X-arm only configuration, we took two series of 200 images each, with two different exposure lengths.

Attached are three pdf images. The first is just a black and white single image, the second is an average of 100 images, and the third is the standard deviation of the 100 images.
Attachment 1: GC750_ETMX_E30000_single.pdf
Attachment 2: GC750_ETMX_E30000_avg.pdf
Attachment 3: GC750_ETMX_E30000_std.pdf
  681   Wed Jul 16 15:59:04 2008   josephb, Eric   Configuration   Cameras   PMC trans camera path
In order to reduce saturation, we placed a Y1 plate (a spare from the SP table) in transmission just before the GC650 camera looking at the PMC transmission. The reflection (most of the light) was dumped to a convenient razor blade dump. We also removed the 0.3 and 0.5 ND filters and placed them in the 24 hour loan ND filter box.

Good exposure values to view are now around 3000 for that camera.
  693   Fri Jul 18 12:24:15 2008   josephb, Eric   Configuration   Cameras   Changed Lenses on GC750 at ETMX
We removed the giant TV zoom lens and replaced it with a much smaller fixed zoom lens. Currently it views the entire optic. We have another (also small) zoom lens which focuses much better on the spot itself. With how far back the camera is currently placed, neither of these fixed zoom lenses can touch or hit the view port or the chamber while still attached to the camera and mount, even using all of the mount's motion range. So this should be less of a safety issue.

Ideally, we'd like to get some images of the full optic (including osems and so forth) with the X-arm locked, and then use the higher zoom lens while still locked, to get images we can use to calibrate the x and y length scales.
  767   Wed Jul 30 13:09:40 2008   josephb, Eric   Configuration   PSL   PMC scan experiment
We turned the PSL power down by a factor of 4, blocked one half of the Mach Zehnder, and scanned the PMC by applying a ramp signal to the PMC PZT. Eric will be adding plots of those results later today.

We returned the power to close to original level and removed the block on the Mach Zehnder, and then relocked the PMC.
  899   Fri Aug 29 12:41:26 2008   josephb, Eric   Configuration   Computers   More front ends moved to new network
Used Cat6 cables to finish moving all the front ends in 1Y4 and 1Y5 over to the new GigE network switches, specifically to the switch in 1Y6. This included the ones labeled c1susvme2, c1sosvme, and c1dscl1epics0.
  932   Fri Sep 5 09:56:14 2008   josephb, Eric   Configuration   Computers   Funny channels, reboots, and ethernet connections
1) Apparently the IOO-ICS type channels had gotten into a funny state last night, where they were showing just noise, exactly when Rana changed the accelerometer gains and did major reboots. A power cycle of the c1ioo crate and appropriate restarts fixed this.

2) c1asc looks like it was down all night. When I walked out to look at the terminal, it claimed to be unable to read the input file from the command line I had entered the previous night ( < /cvs/cds/caltech/target/c1asc/startup.cmd). In addition we were unable to telnet in, suggesting an ethernet breakdown and inability to mount the appropriate files. So we have temporarily run a new cat6 cable from the c1asc board to the ITMX prosafe switch (since there's a nice knee high cable tray right there). One last power cycle and we were able to telnet in and get it running.
  922   Thu Sep 4 11:33:25 2008   josephb, Eric, Jenne   Configuration   Computers   Attempt to increase gain for C1:PSL-ISS_INMONPD_F via 110B
We were attempting to increase the gain on the channel C1:PSL-ISS_INMONPD_F in preparation to do a scan of the PMC at very low input power.

We started by adding a line to the C1:IOOF.ini file in /cvs/cds/caltech/chans/daq/ under that channel that said "gain=10.0". Before touching anything, the channel was outputting around 4000 counts.

We hit the reconfig button for c1iovme16k, then rebooted c1iovme (which turned out to do nothing) and then the framebuilder, in a method consistent with the wiki. This turned out to put the channel in an odd state, where it was showing very rapid, random spikes, but still around 4000-ish counts. We returned the file back to its original format, hit reconfig, and then rebooted the framebuilder. The channel, however, was still behaving in the same broken way.

After poking around the PSL table, looking at some direct outputs, we came back and rebooted c1iovme and the framebuilder again, which fixed the channel, such that it was reading out correctly. Taking this as a sign that maybe we should reboot the framebuilder, then c1iovme to get the channel to load changes, we changed the file again to have "gain=10.0". Upon reboot of the framebuilder, the channel was still reading out fine, but at the same level. So we continued with the reboot of c1iovme. This still had no effect on the channel output.

The ini file has been set back at this point, however since Yoichi is working, I'm holding off doing a reconfig and reboot on the framebuilder until later.
  4509   Mon Apr 11 13:30:04 2011   josephb, Jamie   Update   CDS   No Wiper script - Frames full over weekend

Problem:

The daqd process was dying every minute or so when it couldn't write frames.  This was slowing down the network by writing a 2.9G core dump over NFS every minute or so (in /opt/rtcds/caltech/c1/target/fb/).

The problem was /frames/ was 100% full.

Apparently, when we switched the fb over to Gentoo, we forgot to install crontab and a wiper script.

Solution:

We will install crontab and get the wiper script installed.

  822   Mon Aug 11 11:36:11 2008   josephb, Steve   Configuration   Computers   c1susvme1 minor problems
Around 11 am c1susvme1 started having issues. Namely, C1:SUS-PRM_FE_SYNC was railing at some large value like 16384 (2^14). I presume this means the computer was running catastrophically late.

I turned off the BS and ITM watch dogs (the PRM was already off), tried hitting reset and sshing in, and running startup, but this didn't help. I then turned off the c1susvme2 associated watch dogs (MC1-3, SRM) and went out to do a hard reboot by switching the crate power off. c1susvme2 came back up fine, was restarted and associated watch dogs turned back on. However, c1susvme1 came back up without mounting /cvs/cds/.

As a test, I replaced the ethernet connection with a CAT6 cable to the Prosafe switch in 1Y6, and then ran reboot on c1susvme1. When it came back up, it had mounted properly, and I was able to run the ./startup.cmd file. At this point it seems to be happy. The new cable is in the trays coming in from the tops of 1Y4 and 1Y6 and appropriately labeled.

Edit: Apparently ITMX and ITMY became excited after the reboot (perhaps I turned the watchdogs back on too early? Although that was after the DAQ light was listed as green for c1susvme). Steve noticed this when the alarms went off again (I had turned them off after the reboot seemed successful), and he damped them. Interestingly, the BS remained unexcited.
  1673   Mon Jun 15 15:17:33 2009   josephb, Steve   Configuration   VAC   Vacuum control and monitor screens

We updated the vacuum control and monitor screens  (C0VAC_MONITOR.adl and C0VAC_CONTROL.adl).  We also updated the /cvs/cds/caltech/target/c1vac1/Vac.db file.

1) We changed the C1:Vac-TP1_lev channel to the C1:Vac-TP1_ala channel, since it now is an alarm readback on the new turbo pump rather than an indication of levitation.  The logic for printing the "X" was changed from "X is printed on a 1 (OK status)" to "X is printed on a 0 (problem status)".  All references within the Vac.db file to C1:Vac-TP1_lev were changed.  The medm screens are also now labeled Alarm instead of Levitating.

2) We changed the text displayed by the CP1 channel (C1:Vac-CP1_mon in Vac.db) from "On" and "Off" to "Cold - On" and "Warm - OFF".

3) We restarted the c1vac1 front end as well as the framebuilder after these changes.

  1258   Thu Jan 29 16:50:53 2009   josephb, alberto   Configuration   Computers   Megatron fixed
The warning light on megatron and the fans at full speed were fixed by not just power cycling, but completely unplugging megatron from power, waiting for a minute, and then reconnecting the power cables.

Apparently, the Sunfire X4600s at Hanford have also had intermittent problems, which required unplugging the machines completely. In their case, they were unable to access the machine normally, nor could they access the Integrated Lights Out Manager (ILOM).

Here, we could interact normally with megatron, but were unable to connect to the ILOM. We were able to get into the BIOS, but unable to change any of the settings for the ILOM connection. Since the ILOM is a separate processor and effectively always on, even when the power light is off, a normal shutdown won't reset it. Hence the need to completely disconnect the power supply.
  1555   Thu May 7 15:22:19 2009   josephb, alberto   Configuration   Computers   fb40m

Quote:

Having determined that Rana (the computer) was having too many issues with testing the new RAID array due to the age of the system, we proceeded to test on fb40m.

 

We brought it down and up several times between 11 and noon.  We  eventually were able to daisy chain the old raid and the new raid so that fb40m sees both.  At this time, the RAID arrays are still daisy chained, but the computer is setup to run on just the original raid, while the full 14 TB array is initialized (16 drives, 1 hot spare, RAID level 5 means 14 TB out of the 16 TB are actually available).  We expect this to take a few hours, at which point we will copy the data from the old RAID to the new RAID (which I also expect to take several hours).  In the meantime, operations should not be affected.  If it is, contact one of us.

 

 

 

 

This afternoon the alignment script crashed after returning syntax errors. We found that the tpman wasn't running on the framebuilder because it had probably failed to get restarted in one of the several reboots executed in the morning by Alex and Joe.

Restarting the tpman was then sufficient for the alignment scripts to get back to work.

  1668   Thu Jun 11 14:54:18 2009   josephb, alberto   Update   Computers   Wireless network

After poking around for a few minutes several facts became clear:

1) At least one GPIB interface has a hard ethernet connection (and does not currently go through the wireless).

2) The wireless on the laptop works fine, since it can connect to the router.

3) The rest of the martian network cannot talk to the router.

This led to me replugging the ethernet cord back into the wireless router, which at some point in the past had been unplugged.  The computers now seem to be happy and can talk to each other.

 

  1554   Thu May 7 12:21:36 2009   josephb, alex   Configuration   Computers   fb40m

Having determined that Rana (the computer) was having too many issues with testing the new RAID array due to the age of the system, we proceeded to test on fb40m.

 

We brought it down and up several times between 11 and noon.  We  eventually were able to daisy chain the old raid and the new raid so that fb40m sees both.  At this time, the RAID arrays are still daisy chained, but the computer is setup to run on just the original raid, while the full 14 TB array is initialized (16 drives, 1 hot spare, RAID level 5 means 14 TB out of the 16 TB are actually available).  We expect this to take a few hours, at which point we will copy the data from the old RAID to the new RAID (which I also expect to take several hours).  In the meantime, operations should not be affected.  If it is, contact one of us.

 

 

  2215   Mon Nov 9 14:59:34 2009   josephb, alex   Update   Computers   The saga of Megatron continues

Apparently the random file system failure on megatron was unrelated to the RFM card (or at least unrelated to the physical card itself; it's possible I did something while installing it, however unlikely).

We installed a new hard drive, with a duplicate copy of RTL and assorted code stolen from another computer.  We still need to get the host name and a variety of little details straightened out, but it boots and can talk to the internet.  For the moment though, megatron thinks its name is scipe11.

You still use ssh megatron.martian to log in though.

We installed the RFM card again, and saw the exact same error as before.  "NMI EVENT!" and "System halted due to fatal NMI".

Alex has hypothesized that the interface card the actual RFM card plugs into (which provides the PCI-X connection) might be the wrong type, so he has gone back to Wilson house to look for a new interface card.  If that doesn't work out, we'll need to acquire a new RFM card at some point.

After removing the RFM card, megatron booted up fine, and had no file system errors.  So the previous failure was in fact coincidence.

 

  2225   Tue Nov 10 10:51:00 2009   josephb, alex   Update   Computers   Megatron on, powercycled c1omc, and burt restored from 3am snapshot

Last night around 5pm or so, Alex had remotely logged in and made some fixes to megatron.

First, he changed the local name from scipe11 to megatron.  There were no changes to the network; this was a purely local change.  The name server running on Linux1 is what provides the name to IP conversions, and scipe11 and megatron both resolve to distinct IPs.  Given that c1auxex wasn't reported to have any problems (and I didn't see any problems with it yesterday), this was not a source of conflict.  It's possible that megatron could get confused while in that state, but it would not have affected anything outside its box.

Just to be extra secure, I've switched megatron's personal router over from a DMZ setup to only forwarding port 22.  I have also disabled the dhcp server on the gateway router (131.215.113.2).

Second, he turned the mdp and mdc codes on.  This should not have conflicted with c1omc.

This morning I came in and turned megatron back on around 9:30 and began trying to replicate the problems from last night between c1omc and megatron.  I called Alex and we rebooted c1omc while megatron was on, but not running any code, and without any changes to the setup (routers, etc).  We were able to burt restore.  Then we turned the mdp, mdc and framebuilder codes on, and again rebooted c1omc, which appeared to burt restore as well (I restored from 3 am this morning, which looks reasonable to me). 

Finally, I made the changes mentioned above to the router setups in the hope that this will prevent future problems, but without being able to replicate the issue I'm not sure.

  2266   Fri Nov 13 10:28:03 2009   josephb, alex   Update   Computers   Megatron is back to its old self

I called Alex this morning and explained the problems with megatron.

Turns out that when he had been setting up megatron, he thought a startup script file, rc.local, was missing from the /etc directory.  So he created it.  However, the rc.local file in the /etc directory is normally just a link to the /etc/rc.d/rc.local file.  So on startup (basically when we rebooted the machine yesterday), it was running an incorrect startup script file.  The real rc.local includes the line:

/usr/bin/setup_shmem.rtl mdp mdc&

Hence the errors we were getting with shm_open().  We changed the file into a soft link and re-sourced the rc.local script, and mdp started right up.  So we're back to where we were 2 nights ago (although we do have an RFM card in hand).

Update:  The tst module wouldn't start, but after talking to Alex again, it seems that I need to add the module tst to the "/usr/bin/setup_shmem.rtl mdp mdc&" line in order for it to have a shared memory location set up for it.  I have edited the file (/etc/rc.d/rc.local), adding tst at the end of the line.  On reboot and running starttst, the code actually loads, although for the moment I'm still getting blank white blocks on the medm screens.

  2305   Fri Nov 20 11:01:58 2009   josephb, alex   Configuration   Computers   Where to find RFM offsets

Alex checked out the old rts (which he is no longer sure how to compile) from CVS to megatron, to the directory:

/home/controls/cds/rts/

In /home/controls/cds/rts/src/include you can find the various h files used.  Similarly, /fe has the c files.

In the h files, you can work out the memory offset by noting the primary offset in iscNetDsc40m.h

A line like suscomms.pCoilDriver.extData[0] determines an offset to look for.

0x108000 (from suscomms)

Then pCoilDriver.extData[#] determines a further offset.

sizeof(extData[0]) = 8240  (for the 40m - you need to watch the ifdefs; we were looking at the wrong structure for a while, which was much smaller).

DSC_CD_PPY is the structure you need to look in to find the final offset to add to get any particular channel you want to look at.

The number for ETMX is 8, ETMY 9 (this is the index into extData), so the extData offset from 0x108000 for ETMY should be 9 * 8240.  These numbers (i.e. 8 = ETMX, 9 = ETMY) can be found in losLinux.c in /home/controls/cds/rts/src/fe/40m/.  There's a bunch of #ifdef and #endif which define ETMX, ETMY, RMBS, ITM, etc.  You're looking for the offset in those.

So for the ETMY LSC channel (which is a double) you add 0x108000 (a hex number) + (9 * 8240 + 24) (not in hex, so it needs converting) to get the final value of 0x11a1c8 (in hex).
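
The arithmetic above in one place (purely illustrative; the numbers are the ones quoted in this entry):

/* Toy version of the offset bookkeeping: base 0x108000 from suscomms,
 * sizeof(extData[0]) = 8240, ETMY index 9, plus a 24-byte offset for the
 * LSC field within the per-optic block. */
#include <stdio.h>

int main(void)
{
    unsigned long base       = 0x108000;  /* suscomms offset          */
    unsigned long elem_size  = 8240;      /* sizeof(extData[0])       */
    unsigned long etmy_index = 9;         /* ETMX = 8, ETMY = 9       */
    unsigned long field_off  = 24;        /* offset of the LSC double */

    unsigned long addr = base + etmy_index * elem_size + field_off;
    printf("ETMY LSC offset = 0x%lx\n", addr);   /* prints 0x11a1c8 */
    return 0;
}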

-----------

A useful program to interact with the RFM network can be found on fb40m.  If you log in and go to:

/usr/install/rfm2g_solaris/vmipci/sw-rfm2g-abc-005/util/diag

you can then run rfm2g_util, give it a 3, then type help.

You can use this to read data.  Just type help read.  We had played around with some offsets and various channels until we were sure we had the offsets right.  For example, we fixed an offset into the ETMY LSC input, and saw the corresponding memory location change to that value.  This utility may also be useful for when we do the RFM test to check the integrity of the ring, as there are some diagnostic options available inside it.

  2306   Fri Nov 20 11:14:22 2009   josephb, alex   Configuration   Computers   test points working on megatron and we may have filters with switch outputs built in

Alex looked at the channel definitions (they can be seen in tpchn_C1.par) and noticed the rmid was 0. 

However, in testpoint.par we had set the tst system to C-node1 instead of C-node0.  The final number in that and the rmid need to be equal.  We have changed this, and the test points appear to be working now.

However, the confusing part is that in the tst model, the gds_node_id is set to 1.  Apparently, the model starts counting at 1, while the code starts counting at 0, so when you edit the testpoint.par file by hand, you have to subtract one from whatever you set in the model.

In other news, Alex pointed me at a part in CDS_PARTS.mdl, under filters, called "IIR FM with controls".  It's a light green module with 2 inputs and 2 outputs.  While the 2nd input and output look like they connect to ground, they should be interpreted by the RCG to do the right thing (although Alex wasn't positive it works, it's worth trying it and seeing if the 2nd output corresponds to a usable filter on/off switch to connect to the binary I/O to control analog DW).  However, I'm not sure it has the sophistication to wait for a zero crossing or anything like that - at the moment, it just looks like a simple on/off switch based on what filters are on/off.

  2487   Fri Jan 8 11:43:22 2010   josephb, alex   Update   Computers   RFM and Megatron

Alex came over with a short RFM cable this morning.  We used it to connect the RFM card in c1iscey to the RFM card in megatron.

Alex renamed startup.cmd in /cvs/cds/caltech/target/c1iscey/ to startup.cmd.sav, so it doesn't come up automatically.  At the end we moved it back.

Alex used the vxworks command d to look at memory locations on c1iscey, such as d 0xf0000000, which is the start of the RFM code location.  So to look at 0x11a1c8 (lscPos) in the RFM memory, he typed "d 0xf011a1c8".  After doing some poking around, we looked at the raw tst front end code (in /home/controls/cds/advLigo/src/fe/tst) and realized it was trying to read doubles.  The old rts code uses floats, so the code was reading incorrectly.

As a quick fix, we changed the code to floats for that part.  They looked like:

etmy_lsc = filterModuleD(dsp_ptr,dspCoeff,ETMY_LSC,cdsPciModules.pci_rfm[0]? *((double *)(((void *)cdsPciModules.pci_rfm[0]) + 0x11a1c8)) : 0.0,0);

And we simply changed the double to float in each case.  In addition, we changed the RCG scripts locally as well (if we do an update at some point, it'll get overwritten).  The file we updated was /home/controls/cds/advLigo/src/epics/util/lib/RfmIO.pm

Line 57 and Line 84 were changed, with double replaced with float.

return "cdsPciModules.pci_rfm[0]? *((float *)(((void *)cdsPciModules.pci
_rfm[$card_num]) + $rfmAddressString)) : 0.0";

. "  *((float *)(((char *)cdsPciModules.pci_rfm[$card_nu
m]) + $rfmAddressString)) = $::fromExp[0];\n"

This fixed our ability to read the RFM card, which now can read the LSC POS channel, for example.
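
The underlying issue is just a size mismatch: the old front ends write a 4-byte float into the RFM location, so interpreting those bytes as an 8-byte double also pulls in four unrelated bytes.  A standalone sketch of the effect (ordinary memory standing in for the RFM window):

/* The writer stores a 4-byte float; reading the same location back as an
 * 8-byte double picks up four extra bytes and returns garbage. */
#include <stdio.h>
#include <string.h>

int main(void)
{
    unsigned char rfm[16] = {0};    /* pretend shared-memory window */
    float written = 1.2345f;

    memcpy(rfm, &written, sizeof(written));       /* old RTS code writes a float */

    float  as_float;
    double as_double;
    memcpy(&as_float,  rfm, sizeof(as_float));    /* correct: read it as a float    */
    memcpy(&as_double, rfm, sizeof(as_double));   /* wrong: reinterpret as a double */

    printf("read as float : %g\n", as_float);     /* 1.2345 */
    printf("read as double: %g\n", as_double);    /* not 1.2345 (a tiny denormal here; in general garbage) */
    return 0;
}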

Unfortunately, when we were putting everything back the way it was with the RFM fibers and so forth, c1iscey started to get garbage (all the RFM memory locations were reading ffff).  We eventually removed the VME board, removed the RFM card, looked at it, put the RFM card back in a different slot on the board, and returned c1iscey to the rack.  After this it started working properly.  It's possible that in all the plugging and unplugging the card had somehow become loose.

The next step is to add all the channels that need to be read into the .mdl file, as well as testing and adding the channel which need to be written.

 

  2488   Fri Jan 8 15:40:14 2010   josephb, alex   Update   Computers   RFM and RCG

Alex added a new module to the RCG, for generating RFMIO using floats.  This has been committed to CVS.

  2542   Fri Jan 22 12:33:37 2010   josephb, alex   Update   Computers   Modified CDS_PARTS for Binary output

Turns out the CDSO32 part (representing the Contec BO-32L-PE binary output) requires two inputs: one for the first 16 bits, and one for the second set of 16 bits.  So Alex added another input to the part in the library.  It's still a bit strange, as it seems that In1 represents the second set of 16 bits and In2 represents the first set of 16 bits.
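
In other words, the 32 output bits are fed to the part as two 16-bit halves.  A small C sketch of that packing (which half is In1 versus In2 follows the description above and is worth double-checking against the generated code):

/* Pack two 16-bit values into one 32-bit binary-output word.  Per the note
 * above, In1 apparently carries the second (upper) 16 bits and In2 the
 * first (lower) 16 bits. */
#include <stdio.h>
#include <stdint.h>

static uint32_t pack_bo32(uint16_t in2_low16, uint16_t in1_high16)
{
    return ((uint32_t)in1_high16 << 16) | in2_low16;
}

int main(void)
{
    uint16_t low  = 0x00FF;   /* e.g. value from the first slider  */
    uint16_t high = 0x0003;   /* e.g. value from the second slider */

    printf("BO word = 0x%08X\n", (unsigned)pack_bo32(low, high));  /* 0x000300FF */
    return 0;
}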

I added two sliders on the CustomAdls/C1TST_ETMY.adl control screen (upper left), along with a bit readout display, which shows the bitwise AND of the two slider channels. For the moment, I still can't see any output voltage on any of the DO pins, no matter what output I set.

 

  2591   Thu Feb 11 18:33:54 2010   josephb, alex   Update   Computers   Status of the IP change over

A few machines have still not been changed over, including a few laptops, mafalda, ottavia, and c0rga.

All the front ends have been changed over.

fb40m died during a reboot and was replaced with a spare Sun blade 1000 that Larry had.  We had to swap in our old hard drive and memory.

All the front ends, belladonna, aldabella, and the control room machines have been switched over. Nodus was changed over after we realized we hosed the elog and svn by switching linux1's IP.

At this point, 90% of the machines seem to be working, although c0daqawg seems to be having some issues with its startup.cmd code.

  2603   Sat Feb 13 18:58:31 2010   josephb, alex   Update   Computers   fb40m testpoints fixed

I received an e-mail from Alex indicating he found the testpoint problem and fixed it today:

Quote from Alex: "After we swapped the frame builder computer it has reconfigured all device files and I needed to create some symlinks on /dev/ to make tpman work again. I test the testpoints and they do work now."

 

  2983   Tue May 25 16:40:27 2010   josephb, alex   Update   CDS   Finally tracked down why new models wouldn't talk to each other

The problem was with the new models using the new shared memory/dolphin/RFM connections, which are defined as names in a single .ipc file.

The first issue is that the no_oversampling flag should not be used.  Since we have a single IO processor handling the ADCs and DACs at 64k while the models run at 16k, there is some oversampling occurring.  This was causing problems syncing between the models and the IOP.
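
For what it's worth, here is a cartoon of the 4:1 rate relationship that makes the oversampling necessary.  This is only an illustration of the bookkeeping, not the RCG's actual decimation filter:

/* The IOP collects 4 ADC samples per user-model cycle (65536 Hz vs
 * 16384 Hz); the user model consumes one value per cycle (here a simple
 * average stands in for the real decimation filter). */
#include <stdio.h>

#define IOP_RATE   65536
#define MODEL_RATE 16384
#define RATIO      (IOP_RATE / MODEL_RATE)   /* = 4 */

int main(void)
{
    double adc_64k[16];
    int i, j;

    for (i = 0; i < 16; i++)
        adc_64k[i] = (double)i;              /* stand-in ADC data */

    for (i = 0; i + RATIO <= 16; i += RATIO) {
        double acc = 0.0;
        for (j = 0; j < RATIO; j++)
            acc += adc_64k[i + j];
        printf("model cycle %d sees %.2f\n", i / RATIO, acc / RATIO);
    }
    return 0;
}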

It also didn't help that I had a typo in two channels which I happened to use as a test case to confirm they were talking.  However, that has been fixed.

  3055   Tue Jun 8 15:58:25 2010   josephb, alex   Update   CDS   New multi-filter matrix part added to RCG (at the 40m at least)

A new webview of the LSP model is available at:

https://nodus.ligo.caltech.edu:30889/FE/lsp_slwebview_files/

This model includes a couple of example noise generators as well as the new matrix of filter banks (5 inputs x 15 outputs = 75 filters!).  The attached png shows where these parts can be found in the CDS_PARTS library.  I'm still working on the automatic generation of the matrix and filter bank medm screens for this part.  The plan is to have a matrix screen similar to the current ones, except that the value entry points to the gain setting of the associated filter.  In addition, underneath each value there will be a link to the full filter bank screen.  Ideally, I'd like to have the filter adl files located in a sub-directory of the system, to keep clutter down.

I've cut and pasted the new Foton file generated by the LSP model below.  The first number following MTRX is the input the filter takes data from, and the second number is the output it pushes data to.  This means that for the script parsing Valera's transfer functions, I need to specify which channel corresponds to which number, e.g. DARM = 0, MICH = 1, etc. (a small sketch of this mapping follows the listing below).  So the next step is to write this script and populate the filter banks in this file.

# FILTERS FOR ONLINE SYSTEM
#
# Computer generated file: DO NOT EDIT
#
# MODULES DOF2PD_AS11I DOF2PD_AS11Q DOF2PD_AS55I DOF2PD_AS55Q
# MODULES DOF2PD_ASDC DOF2PD_POP11I DOF2PD_POP11Q DOF2PD_POP55I
# MODULES DOF2PD_POP55Q DOF2PD_POPDC DOF2PD_REFL11I DOF2PD_REFL11Q
# MODULES DOF2PD_REFL55I DOF2PD_REFL55Q DOF2PD_REFLDC Mirror2DOF_f2x1
# MODULES Mirror2DOF_f2x2 Mirror2DOF_f2x3 Mirror2DOF_f2x4 Mirror2DOF_f2x5
# MODULES Mirror2DOF_f2x6 Mirror2DOF_f2x7 DOF2PD_MTRX_0_0 DOF2PD_MTRX_0_1
# MODULES DOF2PD_MTRX_0_2 DOF2PD_MTRX_0_3 DOF2PD_MTRX_0_4 DOF2PD_MTRX_0_5
# MODULES DOF2PD_MTRX_0_6 DOF2PD_MTRX_0_7 DOF2PD_MTRX_0_8 DOF2PD_MTRX_0_9
# MODULES DOF2PD_MTRX_0_10 DOF2PD_MTRX_0_11 DOF2PD_MTRX_0_12 DOF2PD_MTRX_0_13
# MODULES DOF2PD_MTRX_0_14 DOF2PD_MTRX_1_0 DOF2PD_MTRX_1_1 DOF2PD_MTRX_1_2
# MODULES DOF2PD_MTRX_1_3 DOF2PD_MTRX_1_4 DOF2PD_MTRX_1_5 DOF2PD_MTRX_1_6
# MODULES DOF2PD_MTRX_1_7 DOF2PD_MTRX_1_8 DOF2PD_MTRX_1_9 DOF2PD_MTRX_1_10
# MODULES DOF2PD_MTRX_1_11 DOF2PD_MTRX_1_12 DOF2PD_MTRX_1_13 DOF2PD_MTRX_1_14
# MODULES DOF2PD_MTRX_2_0 DOF2PD_MTRX_2_1 DOF2PD_MTRX_2_2 DOF2PD_MTRX_2_3
# MODULES DOF2PD_MTRX_2_4 DOF2PD_MTRX_2_5 DOF2PD_MTRX_2_6 DOF2PD_MTRX_2_7
# MODULES DOF2PD_MTRX_2_8 DOF2PD_MTRX_2_9 DOF2PD_MTRX_2_10 DOF2PD_MTRX_2_11
# MODULES DOF2PD_MTRX_2_12 DOF2PD_MTRX_2_13 DOF2PD_MTRX_2_14 DOF2PD_MTRX_3_0
# MODULES DOF2PD_MTRX_3_1 DOF2PD_MTRX_3_2 DOF2PD_MTRX_3_3 DOF2PD_MTRX_3_4
# MODULES DOF2PD_MTRX_3_5 DOF2PD_MTRX_3_6 DOF2PD_MTRX_3_7 DOF2PD_MTRX_3_8
# MODULES DOF2PD_MTRX_3_9 DOF2PD_MTRX_3_10 DOF2PD_MTRX_3_11 DOF2PD_MTRX_3_12
# MODULES DOF2PD_MTRX_3_13 DOF2PD_MTRX_3_14 DOF2PD_MTRX_4_0 DOF2PD_MTRX_4_1
# MODULES DOF2PD_MTRX_4_2 DOF2PD_MTRX_4_3 DOF2PD_MTRX_4_4 DOF2PD_MTRX_4_5
# MODULES DOF2PD_MTRX_4_6 DOF2PD_MTRX_4_7 DOF2PD_MTRX_4_8 DOF2PD_MTRX_4_9
# MODULES DOF2PD_MTRX_4_10 DOF2PD_MTRX_4_11 DOF2PD_MTRX_4_12 DOF2PD_MTRX_4_13
# MODULES DOF2PD_MTRX_4_14  
# MODULES 
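
As a sketch of the input-index bookkeeping that parsing script will need (DARM = 0 and MICH = 1 are just the example assignments mentioned above; this is illustrative, not the actual script):

/* Map a DOF name to its matrix input number and build the corresponding
 * filter-module name, DOF2PD_MTRX_<input>_<output>. */
#include <stdio.h>
#include <string.h>

static const char *dof_names[] = { "DARM", "MICH" };   /* example mapping only */

static int dof_index(const char *dof)
{
    unsigned i;
    for (i = 0; i < sizeof(dof_names) / sizeof(dof_names[0]); i++)
        if (strcmp(dof, dof_names[i]) == 0)
            return (int)i;
    return -1;
}

int main(void)
{
    char module[64];
    int in  = dof_index("MICH");   /* -> 1 */
    int out = 3;                   /* some PD output column */

    snprintf(module, sizeof(module), "DOF2PD_MTRX_%d_%d", in, out);
    printf("%s\n", module);        /* DOF2PD_MTRX_1_3 */
    return 0;
}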

Attachment 1: CDS_Library.png
  3534   Tue Sep 7 15:31:28 2010   josephb, alex   Update   CDS   Binary Output working / IO chassis fixed

As noted previously, we were having problems with getting multiple 32 channel binary output cards working.  Alex came by and we eventually tracked the problem down to an incorrect counter in the c code.  This has been fixed and checked into the CDS svn repository.  I tested the actual hardware and we are in fact able to turn our test LEDs on with multiple binary output boards.

 

Alex and I also looked at the non-functional IO chassis (the one which wouldn't sync with the 1PPS signal and wasn't turning on when the computer turned on).  We discovered one corner of the Trenton board wasn't screwed down and was in fact slightly warped.  I screwed it down properly, straightening the board out in the process.  After this, the IO chassis worked with a host interface board to the computer and started properly.  We were also able to see the attached boards with lspci.  So that chassis looks to be in working condition now.

Onwards to the RFM test.

  3600   Thu Sep 23 12:05:20 2010   josephb, alex   Update   CDS   fb40m down, new fb in progress

Alex came over this morning and we began work on the frame builder change over.  This required fb40m be brought down and disconnected from the RAID array, so the frame builder is not available.

He brought a Netgear switch which we've installed at the top of the 1X7 rack.  This will eventually be connected, via Cat 6 cable, to all the front ends.  It is connected to the new fb machine via a 10G fiber.

Alex has gone back to Downs to pick up a Symmetricom card for getting timing information into the frame builder.  He will also be bringing back a hard drive with the necessary framebuilder software to be copied onto the new fb machine.

He said he'd also like to put a Gentoo boot server on the machine.  This boot server will not affect anything at the moment, but it's apparently the style the sites are moving towards: a single boot server with diskless front end computers running Gentoo.  For the moment, however, we are sticking with our current CentOS real time kernel (which is still compatible with the new frame builder code).  This would make a switch over to the new system possible in the future.

At the moment, the RAID array is doing a file system check, and is going slowly while it checks terabytes of data.  We will continue work after lunch. 

 Punchline: things still don't work.

  3602   Thu Sep 23 21:01:11 2010   josephb, alex   Update   CDS   fb40m still down, new fb still in progress
Unfortunately, copying the data to the USB/SATA drive over at Downs took longer than expected for Alex. We will be installing the new code on the new fb machine tomorrow and running it. We will be running off of a timer on that machine until Monday. On Monday, a Symmetricom card will be arriving from LLO so that we can connect an IRIG-B timing signal into the frame builder and use a proper time signal. There is no running frame builder tonight, and thus there will be no trends until we get the new FB running tomorrow morning.
  3620   Wed Sep 29 12:08:28 2010   josephb, alex   Summary   CDS   Last burt save of old controls

This is being recorded for posterity so we know where to look for the old controls settings.

The last good burt snapshot saved before turning off scipe25 (aka c1dcuepics) was on September 29 at 11:07.
