  40m Log, Page 38 of 344
ID   Date   Author   Type   Category   Subject
  3926   Mon Nov 15 16:26:46 2010   josephb   Update   CDS   c1iscex is now running and the network hasn't died

Problem:

c1iscex was spamming the network with error messages.

Solution:

Updated the front end code to current standards (it was on the order of months out of date). After fixing and rebuilding the code on c1iscex, it no longer had problems connecting to the frame builder.

Status:

I can look at test points for ETMX.  It is not currently damping however.

To Do:

Move filters for ETMX into the correct files. 

Need to add a Binary output blue and gold box to the end rack, and plug it into the binary output card.  Confirm the binary output logic is correct for the OSEM whitening, coil dewhitening, and QPD whitening boards. 

Get ETMX damped.

Figure out what we're going to do with the aux crate which is currently running y-end code at the new x-end. Koji suggested simply swapping the auxiliary crates - this may be the easiest. The other option would be to change the IP address, so that when it PXE boots it grabs the x-end code instead of the y-end code.

Current CDS status:

Status table (indicator colors not reproduced in text): MC damp, dataviewer, diaggui, AWG, c1ioo, c1sus, c1iscex, RFM, Sim.Plant, Frame builder, TDS
  3935   Tue Nov 16 21:42:31 2010   rana   Update   CDS   Screen Time Fix
I learned today that the following python code will do a find/replace to fix the TIME string on any MEDM screen which has a whited-out time field.
Previously, this field was sourced from the c1dscepics or c1losepics process. Now we have to get it from the IOO or SUS front ends.

Here's the python code:

import re
o = open("output.adl","w")
data = open("test.adl").read()
o.write( re.sub("C0:TIM-PACIFIC_STRING","C1:FEC-34_TIME_STRING",data)  )
o.close()

Where 'output.adl' could be the same name as 'test.adl' if you want to
replace the existing file. Also FEC-34 just refers to which FE you're running.
It could, in principle, be any one of them.
 

The next step is to figure out how to apply this to all the files in a directory.

  3938   Wed Nov 17 10:39:20 2010   josephb   Update   CDS   Screen Time Fix

An improved python code to apply a replacement to all *.adl files in a directory would be:

import re, os
files = os.listdir("./")
for file in files:
    if ".adl" in str(file):
        data = open(file).read()
        o = open(file, "w")
        o.write(re.sub("C0:TIM-PACIFIC_STRING", "C1:FEC-34_TIME_STRING", data))
        o.close()

Of course, this entire python script can be replaced with a single sed command:

sed -i 's/C0:TIM-PACIFIC_STRING/C1:FEC-34_TIME_STRING/g' *.adl

A more complicated script could be written which looks for key identifiers either in the file header or inside the file to determine which front end is appropriate, using a dictionary like:

dcuid_dict = {"BS":21,"PRM":37,"SRM":37,"ITMX":21,"ITMY":21,"MC1":36,"MC2":36,"MC3":36,"ETMX":24,"ETMY":26}

and then using for loops and if statements.
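That dictionary-and-loop idea might look like the following sketch. The rule of picking the front end by matching an optic name in the filename is an assumption for illustration; actual screens may encode the optic differently.

```python
import os
import re

# DCUID per optic, from the entry above.
dcuid_dict = {"BS": 21, "PRM": 37, "SRM": 37, "ITMX": 21, "ITMY": 21,
              "MC1": 36, "MC2": 36, "MC3": 36, "ETMX": 24, "ETMY": 26}

def fix_time_strings(adl_dir="."):
    """Rewrite the TIME string in each .adl file, choosing the FEC number
    by looking for an optic name in the filename (assumed convention)."""
    for fname in os.listdir(adl_dir):
        if not fname.endswith(".adl"):
            continue
        for optic, dcuid in dcuid_dict.items():
            if optic in fname:
                path = os.path.join(adl_dir, fname)
                data = open(path).read()
                open(path, "w").write(
                    re.sub("C0:TIM-PACIFIC_STRING",
                           "C1:FEC-%d_TIME_STRING" % dcuid, data))
                break
```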

 

  3940   Wed Nov 17 16:02:30 2010   josephb   Update   CDS   Modified feCodeGen.pl to fix filtMuxMatrix name generation

Problem:

Sometime in the last 3 weeks, probably when Alex brought his latest changes from Hanford to the 40m and did an SVN update, the code which generates the names of the filter .adl file links for the overall matrix view broke.

Fix:

I modified the FE code generator to use $basename instead of the base name after the top name transform (this changes _ to - after the first 3 letters):

@@ -3520,11 +3522,11 @@
 
                  my $tn = top_name_transform($basename);
                  my $basename1 = $usite . ":" . $tn . "_";
-                 my $filtername1 = $usite . $tn;
+                 my $filtername1 = $usite . $basename;

Still having problems:

The filter modules built with the matrix of filter modules run (offsets/gains work), but will not load filter coefficients/filter names. All the other filter modules outside the matrix seem to load fine. At this point, doing a rebuild of any of the front end machines may cause the A2L filter banks to be unloadable.

 

  3941   Wed Nov 17 20:44:59 2010   yuta   Summary   CDS   no QPD channels on c1sus machine today

(Joe, Suresh, Yuta)

Currently, only 2 ADC cards work on the c1sus machine.
No QPD inputs (e.g. MC2 trans), and no RFM.


Summary:
  We wanted to have PEM (physical environment monitor) channels, so we moved an ADC card in the c1sus machine.
  This ended up breaking one of the 3 ADCs.

What we did:
  1. Moved the ADC card at PCIe expansion board slot 0 to another empty slot.
     What we call PCI slot 0 was marked "DO NOT USE" in LIGO-T10005230-v1, so we moved it.

  2. Connected that ADC card to PEM channel box at 1X7 via SCSI cable.

  3. The ADC card order changed, so we checked the ADC number assignment and re-labeled the cable.

  4. Found that the RFM is not working (c1sus and c1ioo not talking) and fb is in a weird state (Status: 0x4000 in the GDS screens).

  5. Swapped the cabling so that ADC card 0 would be connected to the timing interface card at slot 1, but it didn't help.
     Worse, we then suffered ADC timeouts.

  6. Tried swapping ADC cards, changing slot positions, taking out some of the ADC cards, etc.
     We found that the ADC timeout doesn't happen with 2 ADC cards.
     But if we connect one of the ADC cards to the timing interface card at slot 8, c1sus gets ADC timeouts with 2 ADC cards, too.
     So, I think that timing interface card is bad.

  7. Stopped rebooting c1sus again and again. We decided to investigate the problem tomorrow.
     We only need ADC cards 0 and 1 for MC damping (see this wiki page):
       ADC card 0: all UL/UR/LR/LL SENs
       ADC card 1: all SD SENs     
       ADC card 2: all QPDs

Result:
  We can damp optics and lock MC.
  We can't do A2L because RFM is not working.
  We can't see MC2 trans because we currently don't have ADC card 2.

  3945   Thu Nov 18 11:06:20 2010   josephb   Update   CDS   c1sus and ADCs

Problem:

ADCs are timing out on c1sus when we have more than 3.

Talked with Rolf:

Alex will be back tomorrow (he took yesterday and today off), so I talked with Rolf.

He said ordering shouldn't make a difference and he's not sure why we would be having a problem. However, when he loads a chassis, he tends to put all the ADCs on the same PCI bus (the back plane apparently contains multiple buses). Slot 1 is its own bus, slots 2-9 should be the same bus, and slots 10-17 should be the same bus.

He also mentioned that when you use dmesg and see a line like "ADC TIMEOUT # ##### ######", the first number should be the ADC number, which is useful for determining which one is reporting back slow.
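Based on Rolf's description of the dmesg line, a small helper to pull out the offending ADC numbers might look like this sketch (the exact line format is assumed from the description above):

```python
import re

def timed_out_adcs(dmesg_text):
    """Return the ADC numbers from "ADC TIMEOUT # ##### ######" lines;
    per Rolf, the first number identifies the ADC reporting back slow."""
    return [int(m.group(1))
            for m in re.finditer(r"ADC TIMEOUT (\d+) \d+ \d+", dmesg_text)]
```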

Plan:

Disconnect the c1sus IO chassis completely, pull it out, pull out all the cards, check the connectors, and repopulate it following Rolf's suggestions, keeping this elog in mind.

In regards to the RFM, it looks like one of the fibers had been disconnected from the c1sus chassis RFM card (it's plugged in in the middle of the chassis, so it's hard to see) during all the plugging in and out of cables and cards last night.

  3946   Thu Nov 18 14:05:06 2010   josephb, yuta   Update   CDS   c1sus is alive!

Problem:

We broke c1sus by moving ADC cards around.

Solution:

We pulled all the cards out, examined all contacts (which looked fine), found 1 poorly connected cable internally, going between an ADC and ADC timing interface card  (that probably happened last night), and one of the two RFM fiber cables pulled out of its RFM card.

We then placed all of the cards back in with a new ordering, tightened down everything, and triple checked all connections were on and well fit.

 

Gotcha!

Joe forgot that slot 1 and slot 2 of the timing interface board have their last channels reserved for duotone signals. Thus, they shouldn't be used for any ADCs or DACs that need their last channel (such as the MC3_LR sensor input). We saw a perfect timing signal come in through the MC3_LR sensor input, which prevented damping.

We moved the ADC timing interface card out of the 1st slot  of the timing interface board and into slot 6 of the timing interface board, which resolved the problem.

Final Configuration:

 

Timing Interface Board:

  Slot  1 (Duotone):  None
  Slot  2 (Duotone):  DAC interface (can't use last channel)
  Slots 3-6:          ADC interface
  Slots 7-9:          None
  Slots 10-11:        DAC interface
  Slots 12-13:        None

PCIe Chassis:

  Slot  0  (PCIe #: Do Not Use):  None
  Slot  1  (PCIe # 1):   ADC
  Slot  2  (PCIe # 6):   DAC
  Slot  3  (PCIe # 5):   ADC
  Slot  4  (PCIe # 4):   ADC
  Slot  5  (PCIe # 9):   ADC
  Slot  6  (PCIe # 8):   BO
  Slot  7  (PCIe # 7):   BO
  Slot  8  (PCIe # 3):   BO
  Slot  9  (PCIe # 2):   BO
  Slot 10  (PCIe # 14):  DAC
  Slot 11  (PCIe # 13):  DAC
  Slot 12  (PCIe # 12):  BIO
  Slot 13  (PCIe # 17):  RFM
  Slots 14-17 (PCIe # 16, 15, 11, 10):  None

Still having Issues with:

ITM West damps. ITM South damps, but its coil gains are opposite in sign to those of the other optics in order to damp properly.

We also need to look into switching the channel names for the watchdogs on ITMX/Y in addition to the front end code changes.

  3947   Thu Nov 18 14:19:01 2010   josephb   Update   CDS   Swapped c1auxex and c1auxey codes

Problem:

We had not switched the c1aux crates when we renamed the arms, thus the watchdogs labeled ETMX were really watching ETMY and vice-versa.

Solution:

I used telnet to connect to c1auxey, and then c1auxex.

I used the bootChange command to change the IP address of c1auxey to 192.168.113.59 (c1auxex's IP), and its startup script.  Similarly c1auxex was changed to c1auxey and then both were rebooted.

 

c1auxey > bootChange

'.' = clear field;  '-' = go to previous field;  ^D = quit

boot device          : ei
processor number     : 0
host name            : linux1
file name            : /cvs/cds/vw/mv162-262-16M/vxWorks
inet on ethernet (e) : 192.168.113.60:ffffff00 192.168.113.59:ffffff00
inet on backplane (b):
host inet (h)        : 192.168.113.20
gateway inet (g)     :
user (u)             : controls
ftp password (pw) (blank = use rsh):
flags (f)            : 0x0
target name (tn)     : c1auxey c1auxex
startup script (s)   : /cvs/cds/caltech/target/c1auxey/startup.cmd /cvs/cds/caltech/target/c1auxex/startup.cmd
other (o)            :

value = 0 = 0x0

c1auxex > bootChange

'.' = clear field;  '-' = go to previous field;  ^D = quit

boot device          : ei
processor number     : 0
host name            : linux1
file name            : /cvs/cds/vw/mv162-262-16M/vxWorks
inet on ethernet (e) : 192.168.113.59:ffffff00 192.168.113.60:ffffff00
inet on backplane (b):
host inet (h)        : 192.168.113.20
gateway inet (g)     :
user (u)             : controls
ftp password (pw) (blank = use rsh):
flags (f)            : 0x0
target name (tn)     : c1auxex c1auxey
startup script (s)   : /cvs/cds/caltech/target/c1auxex/startup.cmd /cvs/cds/caltech/target/c1auxey/startup.cmd
other (o)            :

value = 0 = 0x0

  3948   Thu Nov 18 16:32:21 2010   yuta   Summary   CDS   current damping status for all optics c1sus handles

Summary:
   I set the Q-values for the ringdowns of PRM, BS, ITMX, ITMY, MC1, MC2, and MC3 to ~5 using QAdjuster.py.
   Here are the results:
c1susdampings.png

  Red ringdowns indicate the second try after gain setting.

Note:

  - ITMX and ITMY are referred to according to the MEDM screens in this entry.
  - ITMX (south) OSEM positions are currently quite bad (LL and SD are all the way in/out).
        I had to change the IFO_ALIGN slider values to check the damping servo. For SIDE, I couldn't do that. I reverted the slider changes after checking the damping.
  - ITMY (west) somehow has the opposite coil gain signs.
       Usually, for the other optics, UL,UR,LR,LL is 1,-1,1,-1. But for ITMY to damp, they are -1,1,-1,1.
  - PRM damps, but the ringdown doesn't look nice. There must be something funny going on.
  - SRM doesn't have its OSEMs put in at the moment.

  3954   Fri Nov 19 12:53:50 2010   josephb   Update   CDS   Testpoints on c1iscex now working

Problem:

c1iscex did not have test points working last night.

Solution:

The diag -i command indicated that:

awg 19 0 192.168.113.80 822095891 1 192.168.113.80

awg 45 0 192.168.113.80 822095917 1 192.168.113.80

The first number after "awg" should be the DCUID number. The IP address 192.168.113.80 corresponds to c1iscex. So we had awg and testpoints set up for DCUIDs 19 and 45 on c1iscex. DCUID 19 is c1x01 (the IOP), but 45 was used for a test a while back.

Turns out that in the testpoint.par file located in /cvs/cds/rtcds/caltech/c1/target/gds/param, there were two entries for c1scx, one with DCUID 24 and also DCUID 45.  The model at the time was running with DCUID 24.

So I changed the model DCUID to 45, deleted the [C-node24] entry in the testpoint.par file, and restarted the machine, and also did a "telnet fb 8088" and "shutdown" to restart the frame builder.
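A quick way to spot this kind of stale duplicate in the future might be a sketch like the following. It assumes testpoint.par uses `[C-nodeNN]` section headers (as in the entry above) with a `hostname=` line in each section; that second detail is an assumption.

```python
import re
from collections import defaultdict

def duplicate_hosts(par_text):
    """Map hostname -> list of DCUIDs parsed from [C-nodeNN] sections;
    return only hosts claimed by more than one DCUID (stale entries)."""
    hosts = defaultdict(list)
    dcuid = None
    for line in par_text.splitlines():
        m = re.match(r"\[C-node(\d+)\]", line.strip())
        if m:
            dcuid = int(m.group(1))
        elif dcuid is not None and line.strip().startswith("hostname="):
            hosts[line.strip().split("=", 1)[1]].append(dcuid)
    return {h: d for h, d in hosts.items() if len(d) > 1}
```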

  3957   Fri Nov 19 17:12:22 2010   yuta   Update   CDS   ETMX damped, but with weird TO_COIL matrix

Background:
  The c1iscex machine is currently being set up and the RT model c1scx is running.
  But ETMX (south) didn't seem to be damped, so I checked it.

What I did:
  1. Checked the wiring. It seemed to be OK.
    Looked at the LEMO monitor outputs of the SUS PD Whitening Board (D000210) with an oscilloscope; they seemed to be getting sensor signals, except SDSEN.
      SDSEN is funny. C1:SUS-ETMX_SPDMon decreases slowly when the PD input cable is disconnected, and increases slowly when connected.
      There might be some problem in the circuits.
    Looked at the LEMO monitor outputs of the SOS Coil Driver Module (D010001) with an oscilloscope; they seemed to be receiving the correct signals from the DAC.
      When a ULCOIL offset is added, ch1 increases, and so on.

  2. Checked the direction of SUSDOF motion when kicked with one coil.
    The result was:

    Kick (+)   POS   PIT   YAW
    ULCOIL      +     +     +
    URCOIL      +     -     +
    LRCOIL      +     -     -
    LLCOIL      +     +     -


    This table tells you that when ULCOIL_OFFSET increases, SUSPOS increases, and so on.
    If URCOIL and LLCOIL were swapped, the table would look correct.
    Also, they have the opposite sign to the usual optics (e.g. MCs, BS, PRM).

  3. Changed the TO_COIL matrix according to the table above (see Attachment #1). Changed the signs of the XXCOIL_GAINs.

  4. ETMX damped!

Plan:
  - Check the wiring after the SOS Coil Driver Module and the circuit around SDSEN
  - Check the whitening and dewhitening filters. We connected a binary output cable, but didn't check them yet.
  - Make a script for step 2
  - Activate new DAQ channels for ETMX (what is the current new fresh up-to-date latest fb restart procedure?)
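A script for step 2 (kicking each coil and recording the sign of each DOF response) could be sketched as below. The EPICS get/put functions are passed in (e.g. from pyepics), and the channel name patterns, while modeled on the names used in this entry (ULCOIL offsets, SUSPOS, etc.), are assumptions.

```python
import time

COILS = ("UL", "UR", "LR", "LL")
DOFS = ("POS", "PIT", "YAW")

def kick_signs(caget, caput, optic="ETMX", offset=1000, settle=5.0):
    """Kick each coil with an offset and record the sign of each DOF response.
    Channel names like C1:SUS-ETMX_ULCOIL_OFFSET are assumed, not verified."""
    table = {}
    for coil in COILS:
        chan = "C1:SUS-%s_%sCOIL_OFFSET" % (optic, coil)
        before = dict((d, caget("C1:SUS-%s_SUS%s_IN1" % (optic, d)))
                      for d in DOFS)
        caput(chan, offset)
        time.sleep(settle)
        table[coil] = dict(
            (d, "+" if caget("C1:SUS-%s_SUS%s_IN1" % (optic, d)) > before[d]
                else "-")
            for d in DOFS)
        caput(chan, 0)          # remove the kick before the next coil
        time.sleep(settle)
    return table
```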

Attachment 1: ETMXdamping.png
  3959   Sat Nov 20 01:58:56 2010   yuta   HowTo   CDS   editing RT models and MEDM screens

(Suresh, Yuta)

If you come up with a good idea and want to add new things to current RT model;

 1. Go to simLink directory and open matlab;
    cd /cvs/cds/rtcds/caltech/c1/core/advLigoRTS/src/epics/simLink
    matlab


 2. In matlab command line, type;
    addpath lib

 3. Open a model you want to edit.
    open modelname

 4. Edit! CDS_PARTS has useful CDS parts.
    open CDS_PARTS
    There are some traps. For example, you cannot put cdsOsc in a subsystem.

 5. Compile your new model. See my elog #3787.
 
 6. If you want to burt restore things;
    cd /cvs/cds/caltech/burt/autoburt/snapshots/YEAR/MONTH/DATE/TIME/
    burtgooey


 7. Edit MEDM screens
    cd /cvs/cds/rtcds/caltech/c1/medm
    medm


 8. Useful wiki page on making a new suspension MEDM screens;
    http://lhocds.ligo-wa.caltech.edu:8000/40m/How_to_make_new_suspension_medm_screens

  3960   Sat Nov 20 02:25:30 2010   yuta   Update   CDS   2 LOCKINs for suspension models

(Suresh, Koji, Yuta)

Background:
  No AWG. No tdssine.
  ...... LOCKIN!

What we did:
  1. Added 2 LOCKINs for c1sus model.
   Currently, we cannot put cdsOsc in a subsystem.
   So, we put the LOCKINs just on BS for a test.
   The signal going into a LOCKIN can be anything. For now, we just put in a matrix for selecting the signal and connected the input signals to ground.

   See the following page for the current simlink diagram of c1sus model.
     https://nodus.ligo.caltech.edu:30889/FE/c1sus_slwebview_files/index.html

  2. Edited MEDM screens. (see Attachment #1)

Result:
  We succeeded in putting 2 LOCKINs and exciting BS.
  During the update, we might have broken things. For example, the fb status is red in the GDS screens.
  We will wait for Joe to fix them.

Plan:
 - Fix cdsOsc and put LOCKINs for all the other optics
 - Come up with a good idea what to do with this LOCKIN. Remember, LOCKIN is not just a replacement for excitation points.
 - Enhance the oscillator so that we can inject random noise

Attachment 1: LockinRoll.png
LockinRoll.png
  3961   Sat Nov 20 03:37:11 2010   yuta   Summary   CDS   CDS time delay measurement - the ripple

(Koji, Joe, Yuta)

Motivation:
  We wanted to know more about CDS.

Setup:
  Same as in elog #3829.

What we did:

  1. Made test RT models c1tst and c1nio for c1iscex.
     c1tst has only 2 filter modules (the minimum for a model), 2 inputs, and 2 outputs, and it runs with the IOP c1x01.
     c1nio is the same as c1tst except that it runs (or should run) without an IOP.

  2. Measured the time delay from ADC through DAC on different machines and at different sampling rates by measuring transfer functions.

  3. c1nio (without an IOP) didn't seem to be running correctly and we couldn't measure the TF.
     A "1 PPS" error appeared in the GDS screen (C1:FEC-39_TIME_ERR).
     It looks like c1nio is receiving the signal, as we could see in the MEDM screen, but the signal doesn't come out of the DAC.

TF we expected:
  All the filters and gains are set to 1.

  We have the D/A TF when putting the 64K signal out to the analog world:
    D(f)=exp(-i*pi*f*Ts)*sin(pi*f*Ts)/(pi*f*Ts)  (Ts: sample time)

  We have AA filter and AI filter when downsampling and upsampling.
    A(f)=G*(1+b11/z+b12/z/z)/(1+a11/z+a12/z/z)*(1+b21/z+b22/z/z)/(1+a21/z+a22/z/z)       z=exp(i*2*pi*f*Ts)
  Coefficients can be found in /cvs/cds/rtcds/caltech/c1/core/advLigoRTS/src/fe/controller.c.

/* Coeffs for the 2x downsampling (32K system) filter */
static double feCoeff2x[9] =
        {0.053628649721183,
        -1.25687596603711,    0.57946661417301,    0.00000415782507,    1.00000000000000,
        -0.79382359542546,    0.88797791037820,    1.29081406322442,    1.00000000000000};
/* Coeffs for the 4x downsampling (16K system) filter */
static double feCoeff4x[9] =
    {0.014805052402446, 
    -1.71662585474518,    0.78495484219691,   -1.41346289716898,   0.99893884152400,
    -1.68385964238855,    0.93734519457266,    0.00000127375260,   0.99819981588176};


  For the 64K system, there is no down/upsampling, so we expect A(f)=1.

  We also have a delay.
    S(f)=exp(-i*2*pi*f*dt)   (dt: delay time)

  So, total TF we expect is;
    H(f)=a*A(f)^2*D(f)*S(f)
  a is a constant depending on the range of the ADC and DAC (I think). Currently, a=1/4.

  We may need to think about the TF when upsampling. (D(f) is the TF of upsampling from 64K to analog.)
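The expected TF above can be evaluated numerically. This is a sketch for the 16K system; the mapping of the feCoeff4x entries onto {gain, a11, a12, b11, b12, a21, a22, b21, b22} is an assumption based on the A(f) formula above.

```python
import numpy as np

# Expected TF H(f) = a * A(f)^2 * D(f) * S(f) for the 16K system,
# using feCoeff4x from controller.c (coefficient ordering assumed).
fs = 65536                    # native 64K rate [Hz]
Ts = 1.0 / fs                 # IOP / DAC sample time
feCoeff4x = [0.014805052402446,
             -1.71662585474518, 0.78495484219691, -1.41346289716898, 0.99893884152400,
             -1.68385964238855, 0.93734519457266, 0.00000127375260, 0.99819981588176]

def A(f, c=feCoeff4x):
    """AA/AI filter: two cascaded biquads evaluated at z = exp(i*2*pi*f*Ts)."""
    z = np.exp(2j * np.pi * f * Ts)
    G, a11, a12, b11, b12, a21, a22, b21, b22 = c
    return (G * (1 + b11/z + b12/z**2) / (1 + a11/z + a12/z**2)
              * (1 + b21/z + b22/z**2) / (1 + a21/z + a22/z**2))

def D(f):
    """Zero-order-hold response of the 64K DAC output."""
    return np.exp(-1j * np.pi * f * Ts) * np.sinc(f * Ts)  # sinc(x)=sin(pi x)/(pi x)

def S(f, dt):
    """Pure processing delay dt."""
    return np.exp(-2j * np.pi * f * dt)

def H(f, dt, a=0.25):
    """Total expected ADC-through-DAC transfer function."""
    return a * A(f)**2 * D(f) * S(f, dt)

f = np.logspace(1, np.log10(7e3), 300)      # up to near the 16K Nyquist
mag_dB = 20 * np.log10(np.abs(H(f, dt=84.3e-6)))
```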

Result:

  Example plot is attached.
  For other plots and the raw data, see /cvs/cds/caltech/users/yuta/scripts/CDSdelay2/ directory.
  As you can see, TFs are slightly different from what we expect.
  They show ripple we don't understand at near cut off frequency.

  If we ignore the ripple, here is the result of delay time at each condition;

data file      host     FE     IOP       rate  sample time  delay       delay/Ts
c1rms16K.dat   c1sus    c1rms  adcSlave  16K   61.0 usec    110.4 usec  1.8
c1scx16K.dat   c1iscex  c1scx  adcSlave  16K   61.0 usec     85.5 usec  1.4
c1tst16K.dat   c1iscex  c1tst  adcSlave  16K   61.0 usec     84.3 usec  1.4
c1tst32K.dat   c1iscex  c1tst  adcSlave  32K   30.5 usec     53.7 usec  1.8
c1tst64K.dat   c1iscex  c1tst  adcSlave  64K   15.3 usec     38.4 usec  2.5

  The delay times shown above do not include the delay of the D/A. To include it, add 7.6 usec (Ts/2).

  - the delay time is different for different machines
  - the number of filters (c1scx is full of filters for the ETMX suspension, c1tst has only 2) doesn't seem to affect the delay time much
  - the higher the sampling rate, the larger the (delay time)/(sample time) ratio
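One way to extract the quoted delay numbers from a measured TF (a sketch: divide out the modeled part, then fit the slope of the residual phase):

```python
import numpy as np

def fit_delay(f, tf_meas, tf_model):
    """Estimate a pure delay dt from the residual phase of tf_meas/tf_model,
    using phase(f) = -2*pi*f*dt (least-squares slope fit)."""
    phase = np.unwrap(np.angle(np.asarray(tf_meas) / np.asarray(tf_model)))
    slope = np.polyfit(np.asarray(f), phase, 1)[0]
    return -slope / (2 * np.pi)
```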

Plan:

 - figure out how to run a model without an IOP
 - where do the ripples come from?
 - why didn't we see a significant ripple in the previous measurement on c1sus?

Attachment 1: c1tst16Kdelay.png
  3962   Mon Nov 22 12:00:18 2010   josephb   Update   CDS   Updated Computer Restart Procedures for FB

I've updated the  Computer Restart Procedures  page in the wiki with the latest fb restart procedure.

To restart just the daqd (frame builder) process, do:

1) telnet fb 8088

2) shutdown

The init process will take care of the rest and restart daqd automatically.
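The two steps above can be scripted; this is a sketch using a raw socket in place of an interactive telnet session, with the host and port taken from the procedure above:

```python
import socket

def restart_daqd(host="fb", port=8088):
    """Restart the daqd (frame builder) process: connect to the daqd
    command port (as with "telnet fb 8088") and issue "shutdown";
    init then restarts daqd automatically."""
    s = socket.create_connection((host, port), timeout=10)
    try:
        s.sendall(b"shutdown\n")
    finally:
        s.close()
```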

 

Background (quoting the plan from elog 3957, which asked the question answered above):

  - Check the wiring after the SOS Coil Driver Module and the circuit around SDSEN
  - Check the whitening and dewhitening filters. We connected a binary output cable, but didn't check them yet.
  - Make a script for step 2
  - Activate new DAQ channels for ETMX (what is the current new fresh up-to-date latest fb restart procedure?)

  3963   Mon Nov 22 13:16:52 2010   josephb   Summary   CDS   CDS Plan for the week

CDS Objectives for the Week:

Monday/Tuesday:

1) Investigate ETMX SD sensor problems

2) Fully check out the ETMX suspension and get that to a "green" state.

3) Look into cleaning up target directories (merge old target directory into the current target directory) and update all the slow machines for the new code location.

4) Clean up GDS apps directory (create link to opt/apps on all front end machines).

5) Get Rana his SENSOR, PERROR, etc channels.

Tuesday/Wednesday:

6) Install the LSC IO chassis and necessary cabling/fibers.

7) Get the LSC computer talking to its remote IO chassis.

Wednesday:

8) If time allows, connect and start debugging the Dolphin connection between the LSC and SUS machines.

 

  3964   Mon Nov 22 16:16:04 2010   josephb   Update   CDS   Did an SVN update on the CDS code

Problem:

The CDS oscillator part doesn't work inside subsystems.

Solution:

Rolf checked in an older version of the CDS oscillator which includes an input (which you just connect to a ground).  This makes the parser work properly so you can build with the oscillator in a subsystem.

So I did an SVN checkout and confirmed that the custom changes we have here were not overwritten.

Edit:

Turns out the latest svn version requires new locations for certain codes, such as the EPICS installs. I reverted back to version 2160, which is just before the new EPICS and other rtapps directory locations, but late enough to pick up the temporary fix to the CDS oscillator part.

  3965   Mon Nov 22 17:48:11 2010   josephb   Update   CDS   c1iscex is not seeing its Binary Output card

Problem:

c1iscex does not even see its 32 channel Binary output card.  This means we have no control over the state of the analog whitening and dewhitening filters.  The ADC, DAC, and the 1616 Binary Input/Output cards are recognized and working.

Things tried:

Tried recreating the IOP code from the known working c1x02 (from the c1sus front end), but that didn't help.

Checked seating of the card, but it seems correctly socketed and tightened down nicely with a screw.

Tomorrow will try moving cards around and see if there's an issue with the first slot, which the Binary Output card is in.

Current Status:

The ETMX is currently damping, including the POS, PIT, YAW and SIDE degrees of freedom. However, the GDS screen is showing a 0x2bad status for the c1scx front end (the IOP seems fine, with a 0x0 status). So for the moment, I can't bring up c1scx testpoints. I was able to do so earlier when I was testing the status of the binary outputs, so during one of the rebuilds something broke. I may have to undo the SVN update and/or a change made by Alex today that allows filter bank names longer than 19 characters.

  3974   Tue Nov 23 10:53:20 2010   josephb   Update   CDS   timing issues

Problem:

Front ends seem to be experiencing a timing issue.  I can visibly see a difference in the GPS time ticks between models running on c1ioo and c1sus. 

In addition, the fb is reporting 0x2bad to all front ends. The 0x2000 means a mismatch in config files, and the 0xbad indicates an out-of-sync problem between the front ends and the frame builder.

Plan:

As there are plans to work on the optic tables today and suspension damping is needed for that, we are holding off on attacking the problem until this afternoon/evening, since the suspensions are still damping. It does mean the RFM connections are not available.

At that point I'd like to do a reboot of the front ends and framebuilder and see if they come back up in sync or not.

  3975   Tue Nov 23 11:20:30 2010   josephb   Update   CDS   Cleaning up old target directory

Winter Cleaning:

I cleaned up the /cvs/cds/caltech/target/ directory of all the random models we had built over the last year, in preparation for the move of the old /cvs/cds/caltech/target slow control machine code into the new /opt/rtcds/caltech/c1/target directories.

I basically deleted all the directories generated by the RCG code that were put there, including things like c1tst, c1tstepics, c1x00, c1x00epics, and so forth.  Pre-RCG era code was left untouched.

  3978   Tue Nov 23 16:55:14 2010   josephb   Update   CDS   Updated apps

Updated Apps:

I created a new setup script for the newest build of the gds tools (DTT, foton, etc), located in /opt/apps (which is a soft link from /cvs/cds/apps) called gds-env.csh.

This script is now sourced by cshrc.40m for linux 64 bit machines.  In addition, the control room machines have a soft link in the /opt directory to the /cvs/cds/apps directory.

So now when you type dtt or foton, it will bring up the CentOS-compiled code Alex copied over from Hanford last month.

  3982   Tue Nov 23 23:13:40 2010   kiwamu   Summary   CDS   plan: we will install C1LSC

 [Joe, Suresh, Kiwamu]

 We will fully install and run the new C1LSC front end machine tomorrow.

And finally it is going to take care of the IOO PZT mirrors as well as LSC codes. 

 


 (background story)

 During the in-vac work today, we tried to energize the PZT mirrors and adjust them to their midpoints.

However, it turned out that C1ASC, which controls the voltage applied to the PZT mirrors, was not running.

We tried rebooting C1ASC by keying the crate, but it didn't come back.

 The error message we got in telnet was:

   memory init failure !!

 

 We discussed how to control the PZT mirrors from the point of view of both short-term and long-term operation.

We decided to quit using C1ASC and use the new C1LSC instead.

A good thing about this action is that it will bring the CDS closer to the final configuration.

 

(things to do)

 - move C1LSC to the proper rack (1X4).

 - pull out the stuff associated with C1ASC from the 1Y3 rack.

 - install an IO chassis in the 1Y3 rack.

- string a fiber from C1LSC to the IO chassis.

- timing cable (?)

- configure C1LSC for Gentoo

- run a simple model to check the health

- build a model for controlling the PZT mirrors

  3983   Tue Nov 23 23:52:49 2010   rana   Update   CDS   Updated apps

Wow. I typed DTT on rossa and it actually worked! No complaints about testpoints, etc. I was also able to use its new 'NDS2' function to get data off of the CIT cluster (L1:DARM_ERR from February). You have to use the kinit/kdestroy stuff to use NDS2 as usual (look up NDS2 in DASWG if you don't know what I mean).

  3986   Thu Nov 25 02:49:39 2010   kiwamu   Update   CDS   installation of C1LSC: still going on

 [Joe, Kiwamu]

 We tried installing C1LSC, but it's not completely done yet due to the following issues:

    (1) The PCIe optical fiber which is supposed to connect C1LSC and its IO chassis is most likely broken.

    (2) Two DAC boards (blue and golden boards) are missing.

 We will ask the CDS people at Downs and get some more of that stuff from there.


( works we did )

 - took the whole C1ASC crate out of the 1Y3 rack.

 - installed an IO chassis in the place where C1ASC was.

 - strung a timing optical fiber to the IO chassis.

 - checked the functionality of the PCIe optical fiber and found it doesn't work.

 

 

Fig.1: c1asc taken out of the rack (DSC_2723_ss.jpg)
Fig.2: IO chassis installed in the rack (DSC_2724_ss.jpg)
Fig.3: PCIe extension fiber; the red arrow marks an obvious bend (DSC_2727_ss.jpg)

  3995   Tue Nov 30 12:25:08 2010   josephb   Update   CDS   LSC computer to chassis cable dead

Problem:

We seemed to have a broken fiber link between the LSC and its IO chassis. It is unclear to me when this damage occurred. The cable had been sitting in a box with styrofoam padding, and the kink is in the middle of the fiber, with no other obvious damage nearby. The cable, however, may have previously been used by the people in Downs for testing, and possibly damaged then. Or we caused the kink when we were stringing it.

Tried Solutions:

I talked to Alex yesterday, and he suggested completely unplugging the power on both the computer and the IO chassis, then plugging in the new fiber connector, as he had to do that once with a fiber connection at Hanford. We tried this this morning; however, still no joy. At this point I plan to toss the fiber, as I don't know of any way to rehabilitate kinked fibers.

Note this means that I rebooted c1sus and then did a burt restore from the Nov/30/07:07 directory for c1susepics, c1rmsepics, and c1mcsepics. It looks like all the filters switched on.

Current Plan:

We do, however, have a Dolphin fiber which was originally intended to go between the LSC and its IO chassis, before Rolf was told it doesn't work well that way. Instead, we were going to connect the LSC machine to the rest of the network via Dolphin.

We can put the LSC machine next to its chassis in the LSC rack, and connect it to the rest of the front ends by the Dolphin fiber. In that case we just need the usual copper-style cable going between the chassis and the computer.

 

  3999   Tue Nov 30 16:02:18 2010   josephb   Update   CDS   status

Issues:

1) Turns out the /opt/rtcds/caltech/c1/target/gds/param/testpoint.par file had been emptied or deleted at some point, and the only entry in it was c1pem. This had been causing our lack of test points for the last few days. It is unclear when or how this happened. The file has been fixed to include all the front end models again. (Fixed)

2) Alex and I worked on tracking down why there's a GPS difference between the front ends and the frame builder, which is why we see a 0x4000 error on all the front end GDS screens. This involved several rebuilds of the front end codes and reboots of the machines involved. (Broken)

3) Still working on understanding why the RFM communication is broken, which I think is related to the timing issues we're seeing. I know the data is being transferred on the card, but it seems to be rejected after being read in, suggesting a time stamp mismatch. (Broken)

4) The c1iscex binary output card still doesn't work.  (Broken)

Plan:

Alex and I will be working on the above issues tomorrow morning.

Status:

Currently, the c1ioo, c1sus and c1iscex computers are running with their front ends. They all still have the 0x4000 error, but you can still look at channels in dataviewer, for example. However, there's a possibility of inconsistent timing between computers (although all models on a single computer will be in sync).

All the front ends were burt restored to 07:07 this morning. I spot-checked several optic filter banks and they look to have been turned on.

  4003   Wed Dec 1 12:02:49 2010   josephb, alex   Update   CDS   Rebuilding frame builder with latest code

Problem:

The front ends seem to have different GPS timestamps on the data than the frame builder has when receiving them.

One theory is that we have been doing SVN checkouts of the front end code every week or two, but the frame builder has not been rebuilt for about a month.

Current Action:

Alex is currently rebuilding the frame builder with the latest code changes.

It also suggests I should rebuild the frame builder on a semi-regular basis as updates come in.

 

 

  4004   Wed Dec 1 13:41:21 2010   josephb, alex, rolf   Update   CDS   Timing is back

Problem:

We had timing problems across the front ends.

Solution:

We noticed that the 1PPS reference light was not blinking on the Master Timing Distribution box.  It was supposed to be getting a signal from the c0dcu1 VME crate computer, but this was not happening.

We disconnected the timing signal going into c0dcu1, coming from c0daqctrl, and connected the 1PPS directly from c0daqctrl to the Ref In for the Master Timing distribution box (blue box with lots of fibers coming out of it in 1X5).

We now have agreement in timing between front ends.

After several reboots we have working RFM again, and all the computers agree with the frame builder on the current GPS time.

Status:

RFM is back and testpoints should be happy.

We still don't have a working binary output for the X end.  I may need to get a replacement backplane with more than 4 slots if the 1st slot of this board has the same problem as the large boards.

I have burt restored the c1ioo, c1mcs, c1rms, c1sus, and c1scx processes, and optics look to be damped.

 

  4008   Fri Dec 3 14:34:23 2010 | rana | Update | CDS | fooling around in the FB rack

This morning (~0100) I started to redo some of the wiring in the rack with the FB in it. This was in an effort to activate the new Megatron (Sun Fire 4600) which we got from Rolf.

It's sitting right above the Frame Builder (FB). The fibers in there are a rat's nest. Someone needs to team up with Joe to neaten up the cabling in that rack - it's a mini-disaster.

While fooling around in there I most probably disturbed something, leading to the FB troubles today.

  4009   Fri Dec 3 15:37:10 2010 | josephb | Update | CDS | fb, front ends fixed - tested RFM between c1ioo and c1iscex

Problem:

The front ends and fb computers were unresponsive this morning.

This was due to the fb machine having its ethernet cable plugged into the wrong input.   It should be plugged into the port labeled 0.

Since all the front end machines mount their root partition from fb, this caused them to also hang.

Solution:

The cable has been relabeled "fb" on both ends and plugged into the correct jack.  All the front ends were rebooted.

 

Testing RFM for green locking:

I tested the RFM connection between c1ioo and c1scx.  Unfortunately, on the first test it turned out the c1ioo machine had its GPS time off by 1 second compared to c1sus and c1iscex.  A second reboot seems to have fixed the issue.

However, it bothers me that the code didn't come up with the correct time on the first boot.

The test was done using the c1gcv model and by modifying the c1scx model.  At the moment, the MC_L channel is being passed to the MC_L input of the ETMX suspension.  In the final configuration, this will be a properly shaped error signal from the green locking.

The MC_L signal is currently not actually driving the optic, as the ETMX POS MATRIX currently has a 0 for the MC_L component.

  4014   Mon Dec 6 11:59:41 2010 | josephb | Update | CDS | New c1lsc computer moved to lsc rack

Computer moved:

The c1lsc computer has been moved over to the 1Y3 rack, just above the c1lsc IO chassis. 

It will talk to the c1sus computer via a Dolphin PCIe reflected memory card.  The cards were installed into c1lsc and c1sus this morning.

It will talk to its IO chassis via the usual short IO chassis cable.

 

To Do:

The Dolphin fiber still needs to be strung between c1sus and c1lsc.

The DAQ cable between c1lsc and the DAQ router (which lets the frame builder talk directly with the front ends) also needs to be strung.

c1lsc needs to be configured to use fb as a boot server, and the fb needs to be configured to handle the c1lsc machine.

  4015   Mon Dec 6 16:49:43 2010 | josephb | Update | CDS | c1lsc halfway to working

C1LSC Status:

The c1lsc computer is running Gentoo off of the fb server. It has been connected to the DAQ network and is handling mx_streams properly (so we're not flooding the network with error messages like we used to with c1iscex).  It is using the old c1lsc IP address (192.168.113.62) and can be ssh'd into.

However, it is not talking properly to the IO chassis.  The IO chassis turns on when the computer turns on, but the host interface board in the IO chassis only has 2 red lights on (as opposed to many green lights on the host interface boards in the c1sus, c1ioo, and c1iscex IO chassis).  The c1lsc IO processor (called c1x04) doesn't see any ADCs, DACs, or Binary cards.  The timing slave is receiving 1PPS and is locked to it, but because the chassis isn't communicating, c1x04 is running off the computer's internal clock, causing it to be several seconds off. 

Need to investigate why the computer and chassis are not talking to each other.

General Status:

The c1sus and c1ioo computers are not talking properly to the frame builder.  A reboot of c1iscex fixed the same problem earlier; however, as Kiwamu and Suresh are working in the vacuum, I'm leaving those computers alone for the moment.  A reboot and burt restore should probably be done later today for c1sus and c1ioo.

 

Current CDS status:

[Status table: MC damp | dataviewer | diaggui | AWG | c1ioo | c1sus | c1iscex | RFM | Dolphin RFM | Sim.Plant | Frame builder | TDS (per-column status indicators not reproduced)]
  4019   Tue Dec 7 12:12:40 2010 | kiwamu | Update | CDS | added some more DAQ channels

[Joe and Kiwamu]

We added some more DAQ channels on c1sus.

We wanted to try diagonalizing the input matrices of the ITMX OSEMs, because the motion of ITMX looked noisier than that of the other optics.

So for this purpose we tried adding DAQ channels so that we can take spectra anytime.

After some debugging, now they are happily running.

 


(DAQ activation code)

There is a script, written by Yuta this October, which activates DAQ channels.

       /cvs/cds/rtcds/caltech/c1/chans/daq/activateDAQ.py

If you just execute this code, it is supposed to activate the DAQ channels automatically by editing C1AAA.ini files.

However there were some small bugs in the code, so we fixed them.

Now the code seems fine.

 

(reboot fb DAQ process)

When new DAQ channels are added, one has to reboot the DAQ process running on fb.

To do this, connect to the daqd command port on fb:

          telnet fb 8088

     shutdown

The daqd process will then restart by itself.
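For anyone who wants to script this restart instead of typing the telnet commands, here is a minimal Python sketch. The host fb and port 8088 are the ones quoted above; the exact text daqd replies with isn't recorded here, so the function just returns whatever comes back:

```python
import socket

def daqd_shutdown(host="fb", port=8088, timeout=5.0):
    """Send the 'shutdown' command to the daqd command port.

    daqd then restarts itself, picking up any newly activated
    DAQ channels.  Returns whatever reply text daqd sends back.
    """
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.sendall(b"shutdown\n")
        try:
            return s.recv(1024).decode(errors="replace")
        except OSError:
            # daqd may drop the connection while restarting
            return ""
```

Usage would be `daqd_shutdown()` from any machine that can reach fb on the DAQ network.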

After doing the above reboot, we found the tpman process on c1ioo had gone down.

We don't fully understand why only c1ioo was affected, but rebooting the c1ioo front end machine fixed the problem.

 

  4020   Tue Dec 7 16:09:53 2010 | josephb | Update | CDS | c1iscex status

I swapped out the IO chassis which could only handle 3 PCIe cards with another chassis which has space for 17, but which previously had timing issues.  A new cable between the timing slave and the rear board seems to have fixed the timing issues.

I'm hoping to get a replacement PCI extension board which can handle more than 3 cards this week from Rolf and then eventually put it in the Y-end rack.  I'm also still waiting for a repaired Host interface board to come in for that as well.

At this point, RFM is working to c1iscex, but I'm still debugging the binary outputs to the analog filters.  As of this time they are not working properly: turning the digital filters on and off seems to have no effect on the transfer function measured from an excitation in SUSPOS all the way around to IN1 of the sensor inputs (before the digital filters).  Ideally I should see a difference when I switch the digital filters on and off (since the analog ones should also switch on and off), but I do not.

  4023   Tue Dec 7 19:34:58 2010 | kiwamu | Update | CDS | rebooted DAQ and all the front end machines

I found that all the front end machines showed red DAQ indicator lights on the XXX_GDS_TP.adl screens.

Also I could not get any data from both test points and DAQ channels.

First I tried to fix it by telnetting in and rebooting fb, but that didn't help.

So I rebooted all the front end machines, and then everything became fine.

 

  4025   Wed Dec 8 12:26:56 2010 | josephb | Update | CDS | megatron set up - as a test front end

[josephb, Osamu]

Megatron Setup:

To show Osamu how to set up a front end, as well as to provide a test computer for his use, we set up the new megatron (a Sun Fire X4600 with 16 cores and 8 gigabytes of memory) as a front end without an IO chassis.

The steps we followed are in the wiki, here.

The new megatron's IP address is 192.168.113.209.  It is running the c1x99 front end code.

  4027   Wed Dec 8 14:46:19 2010 | josephb, kiwamu | Update | CDS | Why the ETMX daq channels were not recorded last night

When adding the ETMX DAQ channels to C1SCX.ini using the daqconfig GUI (located in /opt/rtcds/caltech/c1/scripts/), we forgot to set the acquire flag from 0 to 1.

So the frame builder was receiving the data, but not recording it.

We have since then added ETMX and the C1SCX.ini file to Yuta's useful "activateDAQ.py" script in /opt/rtcds/caltech/c1/chans/daq/, so that it now sets the sensor and SUSPOS like channels to be acquired at 2k when run.  You still need to restart the frame builder (telnet fb 8087 and then shutdown) for these changes to take effect.

The script now also properly handles files which already have had channels activated, but not acquired.
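For reference, the heart of this kind of activation script can be sketched in Python. This is only an illustration of the .ini edit described above (the section names and the acquire/datarate keys follow the usual daqd channel-file layout), not the actual activateDAQ.py:

```python
import re

def activate_channels(ini_text, channels, rate=2048):
    """Return a daqd-style .ini file with acquire=1 and the given
    datarate set for the named channel sections.

    Illustrative sketch only -- the real script is activateDAQ.py
    in /opt/rtcds/caltech/c1/chans/daq/.
    """
    out, current = [], None
    for line in ini_text.splitlines():
        m = re.match(r"\[(.+)\]$", line.strip())
        if m:
            # entering a new channel section
            current = m.group(1)
        elif current in channels:
            key = line.split("=", 1)[0].strip()
            if key == "acquire":
                line = "acquire=1"
            elif key == "datarate":
                line = "datarate=%d" % rate
        out.append(line)
    return "\n".join(out)
```

Remember that, as noted above, daqd must still be restarted before any such edit takes effect.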

  4028   Wed Dec 8 14:51:09 2010 | josephb | Update | CDS | c1pem now recording data

Problem:

c1pem model was reporting all zeros for all the PEM channels.

Solution:

Twofold.  On the software end, I added ADCs 0, 1, and 2 to the model.  ADC 3 was already present and is the actual ADC taking in PEM information.

Alex and Rolf noted a while back that there is a problem with the way the DACs and ADCs are numbered internally in the code: missing ADCs or DACs prior to the one you're actually using can cause problems.

At some point that problem should be fixed by the CDS crew, but for now, always include all ADCs and DACs up to and including the highest number ADC/DAC you need to use for that model.

On the physical end, I checked the AA filter chassis and found the power was not plugged in.  I plugged it in.

Status:

We now have PEM channels being recorded by the FB, which should make Jenne happier.

  4029   Wed Dec 8 17:05:39 2010 | josephb | Update | CDS | Put in dolphin fiber between c1sus and c1lsc

[josephb,Suresh]

We put in the fiber for use with the Dolphin reflected memory between c1sus and c1lsc (rack 1X4 to rack 1Y3).  I still need to setup the dolphin hub in the 1X4 rack, but once that is done, we should be able to test the dolphin memory tomorrow.

  4037   Thu Dec 9 12:28:52 2010 | josephb, alex | Update | CDS | The Dolphin is in (Reflected memory that is)

Setting the Configurations files:

On the fb machine in /etc/dis/ there are several configurations files that need to be set for our dolphin network.

First, we modify networkmanager.conf.

We set  "-dimensionX 2;" and leave the dimensionY and dimensionZ as 0.  If we had 3 machines on a single router, we'd set X to 3, and so forth.

We then modify dishosts.conf.

We add an entry for each machine that looks like:

#Keyword name nodeid adapter link_width
HOSTNAME: c1sus
ADAPTER:  c1sus_a0 4 0 4

The nodeids (the first number after the name)  increment by 4 each time, so c1lsc is:

HOSTNAME: c1lsc
ADAPTER:  c1lsc_a0 8 0 4

The file cluster.conf is automatically updated by the code by parsing the dishosts.conf and networkmanager.conf files.
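The increment-by-4 numbering above is easy to generate programmatically. Here is a hypothetical Python helper (not part of the Dolphin tools) that emits dishosts.conf stanzas in the format shown:

```python
def dishosts_entries(hosts, link_width=4):
    """Generate dishosts.conf stanzas for a list of front ends.

    Node ids start at 4 and increment by 4 per host, matching the
    c1sus (4) and c1lsc (8) entries above.
    """
    lines = []
    for i, host in enumerate(hosts):
        nodeid = 4 * (i + 1)
        lines.append("HOSTNAME: %s" % host)
        # fields: adapter_name nodeid adapter link_width
        lines.append("ADAPTER:  %s_a0 %d 0 %d" % (host, nodeid, link_width))
    return "\n".join(lines)
```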

Getting the code to automatically start:

We uncommented the following lines in the rc.local file in /diskless/root/etc on the fb machine:

# Initialize Dolphin
sleep 2
# Have to set it first to node 4 with dxconfig or dis_nodemgr fails. Unexplained.
/opt/DIS/sbin/dxconfig -c 1 -a 0 -slw 4 -n 4
/opt/DIS/sbin/dis_nodemgr -basedir /opt/DIS

For the moment we left the following lines commented out:

# Wait for Dolphin to initialize on all nodes
#/etc/dolphin_wait

We were unsure of the effect of the dolphin_wait script on the front ends without Dolphin cards.  It looks like the script it calls waits until there are no dead nodes.

In /etc/conf.d/ on the fb machine we modified the local.start file by uncommenting:

/opt/DIS/sbin/dis_networkmgr&

This starts the Dolphin network manager on the fb machine.  The fb machine is not using a Dolphin connection, but controls the front end Dolphin connections via ethernet.

The Dolphin network manager can be interacted with by using the dxadmin program (located in /opt/DIS/sbin/ on the fb machine).  This is a GUI program so use ssh -X when logging into the fb before use.

Setting up the front ends models:

Each IOP model (c1x02, c1x04) that runs on a machine using the Dolphin RFM cards needs to have the flag pciRfm=1 set in the configuration box (usually located in the upper left of the model in Simulink).  Similarly, the models actually making use of the Dolphin connections should have it set as well.  Use the PCIE_SignalName parts from IO_PARTS in the CDS_PARTS.mdl file to send and receive communications via the Dolphin RFM.

  4045   Mon Dec 13 11:56:32 2010 | josephb, alex | Update | CDS | Dolphin is working

Problem:

The dolphin RFM was not sending data between c1lsc and c1sus.

Solution:

Dig into the controller.c code located in /opt/rtcds/caltech/c1/core/advLigoRTS/src/fe/.  Find this bit of code on line 2173:

 

2173 #ifdef DOLPHIN_TEST
2174 #ifdef X1X14_CODE
2175         static const target_node = 8; //DIS_TARGET_NODE;
2176 #else
2177         static const target_node = 12; //DIS_TARGET_NODE;
2178 #endif
2179         status = init_dolphin(target_node);

Replace it with this bit of code:

2173 #ifdef DOLPHIN_TEST
2174 #ifdef C1X02_CODE
2175         static const target_node = 8; //DIS_TARGET_NODE;
2176 #else
2177         static const target_node = 4; //DIS_TARGET_NODE;
2178 #endif
2179         status = init_dolphin(target_node);

Basically this was hard coded for use at the site on their test stands.  When starting up, the dolphin adapter would look for a target node to talk to that could not be itself.  So all the dolphin adapters would normally try to talk to target node 12, except the X1X14 front end code, which happened to be the one with dolphin node id 12; it would try to talk to node 8 instead.

Unfortunately, in our setup, we only had nodes 4 and 8.  Thus, both our codes would try to talk to a nonexistent node 12.  This new code has everyone talk to node 4, except the c1x02 process which talks to node 8 (since it is node 4 and can't talk to itself).

I'm told this hard coded stuff is going away in the next revision.

 
Different Dolphin Problem and Fix:

Apparently, the only models which should have pciRfm=1 are the IOP models which have a dolphin connection.  Front end models that are not IOP models (like c1lsc and c1rfm) should not have this flag set.  Otherwise they include the dolphin drivers, which causes them and the IOP to refuse to unload when using rmmod.

So pciRfm=1 only in IOP models using Dolphin, everyone else should not have it or should have pciRfm=-1.

 

Current CDS status:

[Status table: MC damp | dataviewer | diaggui | AWG | c1ioo | c1sus | c1iscex | RFM | Dolphin RFM | Sim.Plant | Frame builder | TDS (per-column status indicators not reproduced)]
  4046   Mon Dec 13 17:18:47 2010 | josephb | Update | CDS | Burt updates

Problem:

Autoburt wouldn't restore settings for front ends on reboot

What was done:

First I moved the burt directory over to the new directory structure.

This involved moving /cvs/cds/caltech/burt/ to /opt/rtcds/caltech/c1/burt.

Then I updated the burt.cron file in the new location, /opt/rtcds/caltech/c1/burt/autoburt/.  This pointed to the new autoburt.pl script.

I created an autoburt directory in the /opt/rtcds/caltech/c1/scripts directory and placed the autoburt.pl script there.

I modified the autoburt.pl script so that it pointed to the new snapshot location.  I also modified it so it updates a directory called "latest" located in the /opt/rtcds/caltech/c1/burt/autoburt directory.  In there is a set of soft links to the latest autoburt backup.

Lastly, I edited the crontab on op340m (using crontab -e) to point to the new burt.cron file in the new location.

This was the easiest solution since the start script is just a simple bash script and I couldn't think of a quick and easy way to have it navigate the snapshots directory reliably.

I then modified the Makefile located in /opt/rtcds/caltech/c1/core/advLigoRTS/ which actually generates the start scripts, to point at the "latest" directory when doing restores.  Previously it had been pointing to /tmp/ which didn't really have anything in it.

So in the future, newly built code should point to the correct snapshots.  Using sed, I modified all the existing start scripts to point to the latest directory when grabbing snapshots.
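The "latest" soft-link scheme amounts to something like the following Python sketch (hypothetical; the real logic is in autoburt.pl, and the actual snapshot directory layout may differ):

```python
import os

def update_latest(snapshot_dir, latest_dir):
    """Repopulate latest_dir with soft links to every file in the
    most recent snapshot directory.

    Start scripts can then always restore from a fixed path like
    .../burt/autoburt/latest/<system>epics.snap regardless of when
    the last autoburt ran.
    """
    os.makedirs(latest_dir, exist_ok=True)
    for name in os.listdir(latest_dir):        # drop stale links
        os.unlink(os.path.join(latest_dir, name))
    for name in os.listdir(snapshot_dir):      # link current snapshots
        os.symlink(os.path.join(snapshot_dir, name),
                   os.path.join(latest_dir, name))
```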

Future:

According to Keith's directory documentation (see T1000248), the burt restores should live in the individual target system directories, i.e. /target/c1sus/burt, /target/c1lsc/burt, etc.  This is a distinctly different paradigm from what we've been using in the autoburt script, and would require a fairly extensive rewrite of that script to handle properly.  For the moment I'm keeping the old style, everything in one directory by date.  It would probably be worth discussing if and how to move over to the new system.

  4053   Tue Dec 14 11:24:35 2010 | josephb | Update | CDS | burt restore

I had updated the individual start scripts, but forgotten to update the rc.local file on the front ends to handle burt restores on reboot.

I went to the fb machine and into /diskless/root/etc/ and modified the rc.local file there.

Basically in the loop over systems, I added the following line:

/opt/epics-3.14.9-linux/base/bin/linux-x86/burtwb -f /opt/rtcds/caltech/c1/burt/autoburt/latest/${i}epics.snap  -l /opt/rtcds/caltech/c1/burt/autoburt/logs/${i}epics.log.restore -v

The ${i} gets replaced with the system name in the loop (c1sus, c1mcs, c1rms, etc)
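Equivalently, the command line built inside that loop can be expressed as a small Python helper (just a sketch of the rc.local line above; the burtwb path and snapshot locations are copied from it):

```python
def burt_restore_cmd(system,
                     burt_root="/opt/rtcds/caltech/c1/burt/autoburt"):
    """Build the burtwb restore command line used in rc.local for
    one front end system (e.g. 'c1sus', 'c1mcs', 'c1rms')."""
    return ("/opt/epics-3.14.9-linux/base/bin/linux-x86/burtwb"
            " -f %s/latest/%sepics.snap"
            " -l %s/logs/%sepics.log.restore -v"
            % (burt_root, system, burt_root, system))
```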

  4057   Wed Dec 15 13:36:44 2010 | josephb | Update | CDS | ETMY IO chassis update

I gave Alex a sob story over lunch about having to go and try to resurrect dead VME crates.  He and Rolf then took pity on me and handed me their last host interface board from their test stand, although I was warned by Rolf that this one (the latest generation board from One Stop) seems to be flakier than previous versions, and may require reboots if it starts in a bad state.

Anyways, with this in hand I'm hoping to get c1iscey damping by tomorrow at the latest.

  4058   Wed Dec 15 14:23:32 2010 | Koji | Update | CDS | ETMY IO chassis update

Great!

I wish this board works fine at least for several days...

Quote:

I gave Alex a sob story over lunch about having to go and try to resurrect dead VME crates.  He and Rolf then took pity on me and handed me their last host interface board from their test stand, although I was warned by Rolf that this one (the latest generation board from One Stop) seems to be flakier than previous versions, and may require reboots if it starts in a bad state.

Anyways, with this in hand I'm hoping to get c1iscey damping by tomorrow at the latest.

 

  4060   Wed Dec 15 17:21:20 2010 | josephb | Update | CDS | ETMY controls status

Status:

The c1iscey was converted over to be a diskless Gentoo machine like the other front ends, following the instructions found here.  Its front end model, c1scy, was copied and appropriately changed from the c1scx model, along with the filter banks.  A new IOP, c1x05, was created and assigned to c1iscey.

The c1iscey IO chassis had the small 4 PCI slot board removed and a large 17 PCI slot board put in.  It was repopulated with an ADC/DAC/BO and RFM card.  The host interface board from Rolf was also put in. 

On start up, the IOP process did not see or recognize any of the cards in the IO chassis.

Four reboots later, the IOP code had seen the ADC/DAC/BO/RFM card once.  And on that reboot, there was a time out on the ADC which caused the IOP code to exit.

In addition to not seeing the PCI cards most of the time, several cables still need to be put together for plugging into the adapter boards, and a box needs to be made for the DAC adapter electronics.

 

  4065   Thu Dec 16 15:10:18 2010 | josephb, kiwamu | Update | CDS | ETMY working at the expense of ETMX

I acquired a second full pair of Host interface board cards (one for the computer and one for the chassis) from Rolf (again, 2nd generation - the bad kind).

However, they exhibited the same symptoms as the first one that I was given. 

Rolf gave a few more suggestions on getting it to work: pull the power plugs; if it's got slow flashing green lights, just soft cycle, don't power cycle.  Alex suggested turning the IO chassis on before the computer.

None of it seemed to help in getting the computer talking to the IO chassis.

 

I finally decided to simply take the ETMX IO chassis and place it at the Y end.  So for the moment, ETMY is working, while ETMX is temporarily out of commission. 

We also made the necessary cables (2x 37 d-sub female to 40 pin female, and 40 pin female to 40 pin female).  Kiwamu also did nice work creating a DAC adapter box, since Jay had given me a spare board but nothing to put it in.

  4068   Fri Dec 17 02:22:06 2010 | kiwamu | Update | CDS | ETMY damping: not good

  I made some efforts to damp ETMY, but it still doesn't work happily.

 It looks like something is wrong around the whitening filters and the AA filter board.

I will briefly check those analog parts tomorrow morning.

 

- - -(symptom)

The signs of the UL and SD readouts are flipped; I don't know why.

At the testpoints on the analog PD interface board, all the signs are the same. This is good.

But after the signals go through the whitening filters and AA filters, UL and SD become sign-flipped.

I tried compensating for the sign flips in software, but it didn't help the damping.

In fact, the suspension went crazy when I activated the damping, so I have no idea whether we are looking at exactly the right readouts or some other signals.

 

- - -(fixing DAC connector)

 I fixed a connector on the DAC ribbon cable, since the solderless connector was only loosely locked to its cable.

Before fixing this connector I couldn't apply voltages to some of the coils, but now it is working well.

  4075   Mon Dec 20 10:06:36 2010 | kiwamu | Update | CDS | ETMY damped

  Last Saturday I finally succeeded in damping the ETMY suspension.

This means now ALL the suspensions are happily damped. 

It looked like some combination of gains and control filters had created unstable conditions.

 [Attachment: 2010Dec18.png]

  I actually was playing with the on/off switches of the control filters and the gain values just for fun.

Then finally I found that it worked when the Chebyshev filters were off. This is the same situation Yuta told me about two months ago.

Other things, like the input and output matrices, looked fine, except for the sign flips at ULSEN and SDSEN as I mentioned in the last entry (see here).

So we should still take a look at the analog filters to figure out why the signs are flipped.

  4097   Fri Dec 24 09:01:33 2010 | josephb | Update | CDS | Borrowed ADC

Osamu has borrowed an ADC card from the LSC IO chassis (which currently has a flaky generation 2 Host interface board).  He has used it to get his temporary Dell test stand running daqd successfully as of yesterday.

This is mostly a note to myself so I remember this in the new year, assuming Osamu hasn't replaced the evidence by January 7th.
