ID   Date   Author   Type   Category   Subject
  12599   Fri Nov 4 18:31:05 2016   Lydia   Update   CDS   c1auxex channels/pins for Acromag

Here are the channels we are planning to switch over from c1auxex to Acromag, and their current pin numbers on the existing VME boards. 

Analog inputs: 

C1:SUS-ETMX_UL_AIOut    #C0 S0
C1:SUS-ETMX_LL_AIOut    #C0 S1
C1:SUS-ETMX_UR_AIOut    #C0 S2
C1:SUS-ETMX_LR_AIOut    #C0 S3
C1:SUS-ETMX_Side_AIOut    #C0 S4
C1:SUS-ETMX_OL_SEG1    #C0 S5
C1:SUS-ETMX_OL_SEG2    #C0 S6
C1:SUS-ETMX_OL_SEG3    #C0 S7
C1:SUS-ETMX_OL_SEG4    #C0 S8
C1:SUS-ETMX_OL_X    #C0 S9
C1:SUS-ETMX_OL_Y    #C0 S10
C1:SUS-ETMX_OL_S    #C0 S11
C1:SUS-ETMX_ULPD    #C0 S12
C1:SUS-ETMX_LLPD    #C0 S13
C1:SUS-ETMX_URPD    #C0 S14
C1:SUS-ETMX_LRPD    #C0 S15
C1:SUS-ETMX_SPD    #C0 S16
C1:SUS-ETMX_ULV    #C0 S17
C1:SUS-ETMX_LLV    #C0 S18
C1:SUS-ETMX_URV    #C0 S19
C1:SUS-ETMX_LRV    #C0 S20
C1:SUS-ETMX_SideV    #C0 S21
C1:SUS-ETMX_ULPD_MEAN    #C0 S12
C1:SUS-ETMX_LLPD_MEAN    #C0 S13
C1:SUS-ETMX_SDPD_MEAN    #C0 S16

Analog Outputs:

C1:ASC-QPDX_S1WhiteGain    #C0 S0
C1:ASC-QPDX_S2WhiteGain    #C0 S1
C1:ASC-QPDX_S3WhiteGain    #C0 S2
C1:ASC-QPDX_S4WhiteGain    #C0 S3
C1:SUS-ETMX_ULBiasAdj    #C0 S4
C1:SUS-ETMX_LLBiasAdj    #C0 S5
C1:SUS-ETMX_URBiasAdj    #C0 S6
C1:SUS-ETMX_LRBiasAdj    #C0 S7
C1:LSC-EX_GREENLASER_TEMP    #C0 S0 This appears to have the same pin as another channel-- is it not being used? 

Binary Outputs:

C1:SUS-ETMX_UL_ENABLE    #C0 S0
C1:SUS-ETMX_LL_ENABLE    #C0 S1
C1:SUS-ETMX_UR_ENABLE    #C0 S2
C1:SUS-ETMX_LR_ENABLE    #C0 S3
C1:SUS-ETMX_SD_ENABLE    #C0 S4
C1:ASC-QPDX_GainSwitch1    #C0 S7
C1:ASC-QPDX_GainSwitch2    #C0 S8
C1:ASC-QPDX_GainSwitch3    #C0 S9
C1:ASC-QPDX_GainSwitch4    #C0 S10
C1:AUX-GREEN_X_Shutter2    #C0 S15

  12600   Sat Nov 5 15:45:44 2016   rana   Update   CDS   c1auxex channels/pins for Acromag

We don't need to record any of the AIOut channels, the OL channels (since we record them fast), or the _MEAN channels (I think they must be CALC records or just bogus).

  9438   Wed Dec 4 13:37:34 2013   Jenne   Update   CDS   c1auxex down again

Quote:

1) c1auxex - fixed

Tried telnet c1auxex => rejected by the host

Went down to the south end. Power cycled the target. Came back to the control room.
=> Confirmed the epics read/write is back.
Burtrestored the epics vars for the target to the snapshot of Oct 31st at 5:07.

 When I came in this morning, in addition to the fb being unhappy [elog 9436] (which Koji later fixed [elog 9437]), c1auxex was down / not talking to the world nicely. 

I tried telnet-ing, but was rejected, so EricQ and I went down to the Xend and pushed the reset button on the computer.  The computer came back up just fine, and I did a burt restore to 03:07 on Nov 30th.
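(For reference, a burt restore to a dated snapshot amounts to something along these lines; the dated snapshot directory layout shown here is an assumption for illustration, not the exact command that was run:)

# Hypothetical sketch of restoring the c1auxex EPICS values from a dated autoburt snapshot
burtwb -f /opt/rtcds/caltech/c1/burt/autoburt/snapshots/2013/Nov/30/03:07/c1auxex.snap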

  13681   Tue Mar 13 20:03:16 2018   johannes   Configuration   Computers   c1auxex replacement

I assembled the rack-mount server that will long-term replace c1auxex, so we can return the borrowed unit to Larry.

SUPERMICRO SYS-5017A-EP Specs:

  • Intel Atom N2800 (2 cores, 1.8GHz, 1MB, 64-bit)
  • 4GB (2x2GB) DDR3 RAM
  • 128 GB SSD


I installed a standard Debian Jessie distribution with the LXDE option for minimal resource usage. Steps taken after the fresh install:

  1. Give controls sudo permission: usermod -aG sudo controls
  2. mkdir /cvs/cds
  3. apt-get install nfs-common
  4. Added line "chiara:/home/cds              /cvs/cds        nfs     rw,bg,nfsvers=3" to end of /etc/fstab
  5. Configured network adapter in /etc/network/interfaces
            iface eth0 inet static
            address 192.168.113.48
            netmask 255.255.255.0
            gateway 192.168.113.2
            dns-nameservers 192.168.113.104 131.215.125.1 131.215.139.100
            dns-search martian

    I first assigned the IP 192.168.113.59 of the original c1auxex, but for some reason my ssh connections kept failing mid-session. After I switched to a different IP the disruption no longer happened.
  6. Add lines "search martian" and "nameserver 192.168.113.104" to /etc/resolv.conf
  7. apt-get install openssh-server
    At this point the unit was ready for remote connections on the martian network, and I moved it to the XEND.
  8. Added lines to /home/controls/.bashrc to set paths and environment variables:
    export PATH=/cvs/cds/rtapps/epics-3.14.12.2_long/base/bin/linux-x86_64:/cvs/cds/rtapps/epics-3.14.12.2_long/extensions/bin/linux-x86_64:$PATH
    export HOST_ARCH=linux-x86_64
    export EPICS_HOST_ARCH=linux-x86_64
    export RPN_DEFNS=~/.defns.rpn
    export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/cvs/cds/rtapps/epics-3.14.12.2_long/base/lib/linux-x86_64:/cvs/cds/rtapps/epics-3.14.12.2_long/modules/modbus/lib/linux-x86_64/:/cvs/cds/rtapps/epics-3.14.12.2_long/modules/asyn/lib/linux-x86_64
  9. apt-get install libmotif-common libmotif4 libxp6 (required to run burtwb utility)

The server is ready to take over from c1auxex2 and does not need a locally compiled EPICS, since it can run the 3.14.12.2_long binaries in /cvs/cds.
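As a quick sanity check (a sketch under the paths listed above, not a required procedure), the NFS mount and the shared EPICS tools can be verified from a controls shell:

# Hypothetical verification that the shared EPICS install is usable on the new server
mount | grep /cvs/cds                  # the chiara NFS export should show up here
which caget caput burtwb               # should resolve to binaries under /cvs/cds/rtapps/epics-3.14.12.2_long
caget C1:SUS-ETMX_ULBiasAdj            # read back one of the slow channels as an end-to-end test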

Attachment 1: IMG_20180313_105154890.jpg
Attachment 2: IMG_20180313_133031002.jpg
  13682   Wed Mar 14 23:58:30 2018   johannes   Configuration   Computers   c1auxex replacement

I replaced the borrowed server with the permanent one today. Before removing the current server, I performed several additional preparations:

  • Updated the Chiara host tables to IP 192.168.113.48 for c1auxex
  • apt-get install procserv
  • copied ETMXaux2.* files in /cvs/cds/caltech/target/c1auxex2 to ETMXaux.* and changed references from /opt/rtcds/epics (which was a local directory on c1auxex2) to /cvs/cds/rtapps/epics-3.14.12.2_long in the copied files
  • Added instruction
    Environment="LD_LIBRARY_PATH=/cvs/cds/rtapps/epics-3.14.12.2_long/base/lib/linux-x86_64:/cvs/cds/rtapps/epics-3.14.12.2_long/modules/modbus/lib/linux-x86_64/:/cvs/cds/rtapps/epics-3.14.12.2_long/modules/asyn/lib/linux-x86_64"
    to /etc/systemd/system/modbusIOC.service  (required for burtwb dependencies)

Then I replaced the server:

  1. IFO was in LSC mode with both arms locked
  2. Backed up ETMX alignment using save feature in IFOalign screen
  3. Disengaged LSC mode
  4. Shut down ETMX watchdog
  5. Disconnected ETMX satellite box
  6. Shut down c1auxex2 and c1auxex
  7. Performed the server swap
  8. Booted c1auxex
  9. Made sure EPICS channels were back online and channel defaults were restored
  10. Reconnected satellite box
  11. Turned on watchdog
  12. Turned on OpLevs
  13. Engaged LSC mode -> both arms were instantly locked

I returned c1auxex2 to Larry, who needed it back asap because of some hardware failure.

Steve: Acromag XT1221 ordered 3-15-18

  13687   Mon Mar 19 14:39:09 2018   johannes   Configuration   Computers   c1auxex replacement

[gautam, johannes]

The temperature control output channel for the XEND seismometer wasn't working properly. The EPICS channel existed, could be written to and read from, but no physical voltage was observed on the (confirmed properly) wired connector.

The Acromag DAC that outputs this channel was completely spare in the original scheme and does not serve any other channels at the moment. We found it to be unresponsive to ping from the host machine (reminder: the Acromags are on their own subnet with IPs 192.168.114.xxx connected to the secondary ethernet adapter of c1auxex), while all others returned the ping just fine. The modules have daisy-chained ethernet connections, and the one Acromag unit behind the unresponsive one in the chain was still responding to ping and its channels were working, so it couldn't have been a problem with the (ethernet) cabling.

Gautam and I power-cycled the chassis and server, which resolved the issue. The channel is now outputting the requested voltage on the Out1 BNC connector on the front of the chassis. When I was setting up the whole system and did frequent rebooting and IP redefinitions, I saw network issues arise between the server and the Acromags; in particular, when changing the network settings server-side, the Acromags occasionally needed to be rebooted. So this whole problem was probably due to the recent server swap, since the chassis had not been power-cycled since then.
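For future debugging, a quick sweep of the private subnet along these lines (a sketch; the exact unit IPs are an assumption) makes it easy to spot which Acromag has dropped off the network:

# Hypothetical ping sweep of the Acromag subnet, run from c1auxex
for i in $(seq 1 20); do
    ip=192.168.114.$i
    if ping -c 1 -W 1 $ip > /dev/null 2>&1; then
        echo "$ip responds"
    else
        echo "$ip NO RESPONSE"
    fi
done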

 

During the debugging we also found that the c1psl2 channels were not working. This was because I had overlooked updating the EPICS environment variables for the modbus path defined in /cvs/cds/caltech/target/c1psl2/npro_config.cmd from the local installation /opt/epics/ (which no longer exists on the new server) to the network location /cvs/cds/rtapps/epics-3.14.12.2_long/. This has been fixed and the slow diagnostic PSL channels are recording again.

  15250   Wed Mar 4 16:54:43 2020   gautam   Update   CDS   c1auxex temporarily disconnected

To debug a problem with the new c1psl (see a later elog), we needed a Supermicro EPICS server that was using the shared EPICS/modbus/asyn binaries rather than a local install. Of those available in the lab (c1iscaux, c1vac and c1susaux being the others), this was the only one which uses the shared install. So I:

  • turned the slow bias voltages to 0
  • shutdown the watchdog
  • disconnected the Acromag crate in 1X9 from the 192.168.114.xxx subnet at the supermicro end
  • connected a test ADC to the local subnet using a different ethernet cable (leaving the original one dangling)
  • ran some software tests to see if we could open up a communication line to the test ADC using modbus without any errors being thrown
  • removed the test ADC and restored the ethernet connection.

At that point Jon reset the software end, I restored the slow bias voltages, and re-enabled the local damping. The optic seems to have damped okay. The oplev spot is back in roughly the center of the QPD and the green beam can be locked to a TEM00 mode (so the alignment is okay; the IR beam is unavailable while the c1psl issues are being sorted out, but I judge that things are back to the nominal state now).

  13469   Fri Dec 8 12:06:59 2017   johannes   Omnistructure   Computers   c1auxex2 ready - but need more cables

The new slow machine c1auxex2 is ready to deploy. Unfortunately we don't have enough 37-pin D-sub cables to connect all channels. In fact, we need a total of 8, and I found only three male-male cables and one gender changer. I asked Steve to buy more.

Over the past week I have transferred all EPICS records - soft channels and physical ones - from c1auxex to c1auxex2, making changes where needed. Today I started the in-situ testing:

  1. Unplugged ETMX's satellite box
  2. Unplugged the eurocrate backplane DIN cables from the SOS Driver and QPD Whitening filter modules (the ones that receive ao channels)
  3. Measured output voltages on the relevant pins for comparison after the swap
  4. Turned off c1auxex by key, removed ethernet cable
  5. Started the modbus ioc on c1auxex2
  6. Slow machine indicator channels came online, ETMX Watchdog was responsive (but didn't have anything to do due to missing inputs) and reporting. PIT/YAW sliders function as expected
  7. Restoring the previous settings gives output voltages close to the previous values, in fact the exact values requested (due to fresh calibration)
  8. Last step is to go live with c1auxex2 and confirm the remaining channels work as expected.

I copied the relevant files to start the modbus server to /cvs/cds/caltech/target/c1auxex2, although I kept local copies in /home/controls/modbusIOC/, from which they're still run.

I wonder what's the best practice for this. Probably to store the database files centrally and load them over the network on server start?

  13578   Wed Jan 24 19:17:06 2018   johannes   Update   DAQ   c1auxex2 startup behavior

I compiled the burt binaries on c1auxex2, which took a little fiddling with dependencies and paths but nothing too major. The complete local EPICS folder (/opt/epics/), which contains the base EPICS binaries, modbus and burt for 32-bit Linux, has been copied to the shared drive at /opt/rtapps/epics-3.15.5. These belong to the most recent stable release. This was done so that we can now automatically call burt after the IOC initialization on c1auxex2 to restore the backed-up channel values.

I also copied the database definition and modbus instruction files to /cvs/cds/caltech/target/c1auxex2, from where they are now being read upon IOC initialization. This is an excerpt of the service file:

#ExecStart=/usr/bin/procServ -f -L /home/controls/modbusIOC/modbusIOC.log -p /run/modbusioc.pid 8008 /opt/epics/modules/modbus/bin/linux-x86/modbusApp /cvs/cds/caltech/target/c1auxex2/ETMXaux2.cmd   <-- Contains logging to file, see note 1)
ExecStart=/usr/bin/procServ -f -p /run/modbusioc.pid 8008 /opt/epics/modules/modbus/bin/linux-x86/modbusApp /cvs/cds/caltech/target/c1auxex2/ETMXaux2.cmd <-- Initializes the EPICS IOC with Modbus support
ExecStop=/bin/kill -9 ` cat /run/modbusioc.pid` <-- Kills the detached process by its process ID
ExecStartPost=/bin/bash -c "/opt/epics/extensions/bin/linux-x86/burtwb -f /opt/rtcds/caltech/c1/burt/autoburt/latest/c1auxex.snap" <-- Restores general channel values
ExecStartPost=/bin/bash -c "/opt/epics/extensions/bin/linux-x86/burtwb -f /opt/rtcds/caltech/c1/medm/MISC/ifoalign/burt/ETMX.snap" <-- Restores PIT and YAW values from align MEDM screen
ExecStartPost=/bin/bash -c ". /home/controls/modbusIOC/ETMXaux2.sh" <-- Enables writing to PIT and YAW DAC channels, see note 2)

Note 1) I removed the logging to file for now because I noticed that if there are Acromag communication issues the logfile tends to grow in size VERY fast. In the cryo lab it had gotten to over 70GB just over the winter break. I don't think it's absolutely necessary to have it, and if diagnostics are needed we can easily uncomment it temporarily.

Note 2) I modified the static EPICS records of the four OSEM bias adjust channels so they won't start updating as soon as the IOC starts up (and before the channel defaults are restored by burt). This was done by setting the OMSL (output mode select) field from "closed_loop" to "supervisory". Sample record:

record(ao,"C1:SUS-ETMX_ULBiasAdj")
{
        field(DESC,"Bias Adjust for ETMX UL Coil Output")
        field(DTYP,"asynInt32")
        field(OUT, "@asynMask(C1AUXEX_XT1541A_DAC, 0, -16)MODBUS_DATA")
        field(SCAN,".1 second")
        field(OMSL,"supervisory")  <-- Used to be "closed_loop"
        field(DOL, "C1:SUS-ETMX_ULBiasSet  PP")
        field(PREC,"3")
        field(EGUF,"10.923")
        field(EGUL,"-10.923")
        field(EGU, "Volts")
        field(LINR,"LINEAR")
        field(DRVH,"10")
        field(DRVL,"-10")
        field(HOPR,"10")
        field(LOPR,"-10")
}

Now, on reboot/IOC re-initialization the physical DAC channels perform a one-time readback of the last stored value in the Acromag's register, then idle until the last StartPost statement executes the script ETMXaux.sh, which changes their OMSL field back to "closed_loop". This causes them to start updating their output from the calc records defined in their DOL field (which have by then recovered their default values courtesy of burt). The result is a smooth transition from idling to the controlled state with no sudden or large offset changes.
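For concreteness, the OMSL-switching step can be as simple as the following sketch (channel names taken from the records above; the actual contents of ETMXaux.sh may differ):

# Hypothetical sketch of the post-burt OMSL switch for the four OSEM bias adjust channels
for coil in UL LL UR LR; do
    caput C1:SUS-ETMX_${coil}BiasAdj.OMSL closed_loop
done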

  13580   Wed Jan 24 23:13:30 2018   johannes   Update   DAQ   c1auxex2 startup behavior
Quote:

The result is a smooth transition from idling to the controlled state with no sudden or large offset changes.

[Gautam, Johannes]

While checking how smooth the transition is, we still noticed significant motion of ETMX by looking at the locked green laser and oplevs. We found that this motion was not caused by interruption of the slow offset adjust, but rather by the watchdog being re-initialized to its OFF state, which cuts the fast channels OFF. This is observed on other optics too, but not as severely. The cause is a rather large offset on the LR coil coming from the fast DAQ, which was reported as 50mV by the slow readback channel (while the other readback values are <10mV). It is present even when turning the output of the CDS model OFF, but vanishes when the watchdog is triggered. This helped us trace it to an offset of the DAC output itself: it is present at the output of the AI board but vanishes when the DAC is disconnected. The actual offset is ~40mV, as opposed to other channels on the same board, which have offsets in the range 3-7mV.

While we can compensate for this offset in software, it made us wonder if the DAC channel is somehow busted and if that's what is causing the 'wandering' of ETMX that we have been observing recently. There are two free DAC channels on the AI chassis that carries the side coil and the green temperature control signals. We could re-route the LR signal through a different DAC channel to fix this.

gautam: 40mV offset at the AI board output gets multiplied by 3 in the dewhitening board, so there is a 120mV DC offset going to the coil (measured at dewhite board output with DMM). The offset itself isn't hurting us, but the fact that it is several times larger than other channels led us to wonder if it could be drifting around as well. From my SOS pitch balancing forays, in my head I have the number 30mrad as being the full range of the OSEM actuation - so if the offset swings by 120mV, that's ~150urad of motion, which is quite large, and is of the order of magnitude I'm used to seeing ETMX move around by.

  15948   Fri Mar 19 19:15:13 2021   Jon   Update   CDS   c1auxey assembly

Today I helped Yehonathan get started with assembly of the c1auxey (slow controls) Acromag chassis. This will replace the final remaining VME crate. We cleared the far left end of the electronics bench in the office area, as discussed on Wed. The high-voltage supplies and test equipment were moved together to the desk across the aisle.

Yehonathan has begun assembling the chassis frame (it required some light machining to mount the DIN rails that hold the Acromag units). Next, he will wire up the switches, LED indicator lights, and Acromag power connectors following the documented procedure.

  15962   Thu Mar 25 12:11:53 2021   Yehonathan   Update   CDS   c1auxey assembly

I finished prewiring the new c1auxey Acromag chassis (see attached pictures). I connected all grounds to the DIN rail to save some wiring. The power switches and LEDs work as expected.

I configured the DAQ modules using the old Windows machine. I configured the gateway to be 192.168.114.1. The host machine still needs to be set up.

Next, the feedthroughs need to be wired and the channels need to be bench tested.

Attachment 1: 20210325_115500_HDR.jpg
Attachment 2: 20210325_123033.jpg
  15963   Thu Mar 25 14:16:33 2021   gautam   Update   CDS   c1auxey assembly

It might be a good idea to configure this box for the new suspension config - modern Satellite Amp, HV coil driver etc. It's a good opportunity to test the wiring scheme, "cross-connect" type adapters etc.

Quote:

Next, the feedthroughs need to be wired and the channels need to be bench tested.

  15978   Tue Mar 30 17:27:04 2021   Yehonathan   Update   CDS   c1auxey assembly

{Yehonathan, Jon}

We poked around the c1auxex chassis (looked in situ with a flashlight, not disturbing any connections) to better understand the wiring scheme.

To our surprise, we found that nothing was connected to the RTNs of the analog input Acromag modules. From previous experience and the Acromag manual, there can't be any meaningful voltage measurement without it.

I also did some rewiring in the Acromag chassis to improve its reliability. In particular, I removed the ground wires from the DIN rail and connected them using crimp-on butt splices.

 

  16297   Wed Aug 25 11:48:48 2021   Yehonathan   Update   CDS   c1auxey assembly

After confirming that, indeed, leaving the RTN connection floating can cause reliability issues we decided to make these connections in the c1auxex analog input units.

According to Johannes' wiring scheme (excluding the anti-image and OPLEV since they are decommissioned), Acromag unit 1221b accepts analog inputs from two modules. All of these channels are single-ended according to their schematics.

One option is to use the Acromag ground and connect it to the RTNs of both 1221b and 1221c. Another is to connect the minus wire of one module, which is tied to the module's ground, to the RTN. We shouldn't tie the grounds of the different modules together by connecting them to the same RTN point.

We should take some OSEM spectra of the X end arm before and after this work to confirm we didn't produce more noise by doing so. Right now, it is impossible due to issues caused by the recent power surge.

Quote:

{Yehonathan, Jon}

We poked (looked in situ with a flashlight, not disturbing any connections) around c1auxex chassis to understand better what is the wiring scheme.

To our surprise, we found that nothing was connected to the RTNs of the analog input Acromag modules. From previous experience and the Acromag manual, there can't be any meaningful voltage measurement without it.

 

  16321   Mon Sep 13 14:32:25 2021   Yehonathan   Update   CDS   c1auxey assembly

So we agreed that the RTN points on the c1auxex Acromag chassis should just be grounded to the local Acromag ground, as they just need a stable reference. Normally the RTNs are not connected to any ground, so there should be no danger of forming ground loops by doing that. It is probably best to use the common wire from the 15V power supplies, since it also powers the VME crate. I took spectra of the ETMX OSEMs (attachment) for reference before proceeding with the grounding work.

 

Attachment 1: ETMX_OSEMS_Noise.png
  16332   Wed Sep 15 11:27:50 2021   Yehonathan   Update   CDS   c1auxey assembly

{Yehonathan, Paco}

We turned off the ETMX watchdogs and oplevs. We went to the X end and shut down the Acromag chassis. We labeled the chassis feedthroughs and disconnected all the cables from it.

We took it out and tied the common wire of the power supplies (the commons of the 20V and 15V power supplies were shorted so there is no difference which we connect) to the RTNs of the analog inputs.

The chassis was put back in place. All the cables were reconnected. Power was turned on.

We rebooted c1auxex and the channels went back online. We turned on the watchdogs and watched the ETMX motion get damped. We turned on the OpLev. We waited until the beam position got centered on the ETMX.

Attachment shows a comparison between the OSEM spectra before and after the grounding work. Seems like there is no change.

We were able to lock the arms with no issues.

 

Attachment 1: c1auxex_Grounding_OSEM_comparison1.pdf
Attachment 2: c1auxex_Grounding_OSEM_comparison2.pdf
  5974   Tue Nov 22 00:19:10 2011   kiwamu   Update   SUS   c1auxey hardware rebooted

I found that the slow machine c1auxey, which controls and monitors the ETMY suspension electronics, was not responding.

The machine responded to ping but I wasn't able to telnet to it.

I went down there and power-cycled it by keying the power of the VME rack, and then it came back and seems working properly.

I have no idea why it ran into such a condition.

  16734   Thu Mar 17 19:12:44 2022   Anchal   Summary   CDS   c1auxey1 slow controls acromag chassis installed, not powered

[Anchal, Tega]

We installed the c1auxey1 computer and the Acromag chassis in 1Y4. The computer has been configured properly for the nfs mounts, and we have initialized a git repo for the /cvs/cds/caltech/target/c1auxey1 directory, which stores all files for running the modbusIOC service on this computer. We connected the 18V power source but have not connected the 24V power yet, as we need to make a new connector for it. Following Koji's recommendation, we'll connect the 24V power input to the 18V strip as well, since the Acromags can run on that voltage too.
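A minimal sketch of setting up such a target-directory repo (only the directory path is from above; the exact file set is whatever lives there):

# Hypothetical initialization of the c1auxey1 target directory as a git repo
cd /cvs/cds/caltech/target/c1auxey1
git init
git add .                                  # db/cmd/service files for the modbusIOC
git commit -m "Initial import of c1auxey1 modbusIOC configuration"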

  16736   Fri Mar 18 18:39:13 2022   Yehonathan   Summary   CDS   c1auxey1 slow controls acromag chassis installed, powered

{Yehonathan, Anchal}

We connected the c1auxey1 chassis to the different boxes (coil drivers, SAT amp, etc.) using DB9 cables, labeling them in the process. We ran out of 2.5-foot DB9 cables so we used 5-foot ones as a temporary solution.

The chassis was powered, but two issues arose:

1. The Acromags didn't turn on.

2. When connecting the green laser shutter BNC cable, the power supply overloaded.

We took the chassis back to the bench. The wire that powers the Acromags was disconnected. We made a new, longer wire and made sure it is securely connected.

The issue with the BNC turned out to be a much deeper problem: the GND and EXC wires on the DIN rail connector were switched! This put the shield of the BNC at a high voltage compared to the shield of the green shutter, causing a large current to flow when the BNC was connected.

We switched the EXC and GND wires back. Not trusting the digital I/O tests that were done before this mistake was found, we re-tested some of the I/Os using a spare coil driver. We tested both the inputs and the outputs and they all seemed to work.

Finally, we also noticed that the 2 RTS DB9s were wrongly of female type, so we switched them to males. We closed the lid on the chassis and installed it back in the rack. We connected the cables and saw that the green shutter BNC cable was no longer shorting the power supply.

  16737   Fri Mar 18 19:10:51 2022   Anchal   Summary   CDS   c1auxey1 slow controls issues

I started the modbusIOC service on c1auxey1 and added PD variance channels for UR and SD as well.  There are unfortunately two issues here:

  • The enable monitors read the logical NOT of what they should. The optical isolator circuit might need to be changed.
  • ETMY is not damping now. This is strange and was also seen with the other Acromag chassis, where AS4 and PR2 are unable to damp. This is weird since the Acromag chassis is not part of the damping loop; maybe it is a coincidence. Next time we should check whether we still have this issue when the Acromag chassis is disconnected from ETMY.

 

  16741   Mon Mar 21 18:42:06 2022   Anchal   Summary   CDS   c1auxey1 slow controls issues

Another issue: the Green Y shutter cannot be controlled with the EPICS controls right now. This needs to be investigated.

  15659   Wed Nov 4 17:14:49 2020   gautam   Update   CDS   c1bhd setup

I am working on the setup of a CDS FE, so please do not attempt any remote login to the IPMI interface of c1bhd until I'm done.

  15663   Fri Nov 6 14:27:16 2020   gautam   Update   CDS   c1bhd setup - diskless boot

I was able to boot one of the 3 new Supermicro machines, which I christened c1bhd, in a diskless way (with the boot image hosted on fb, as is the case for all the other realtime FEs in the lab). This is just a first test, but it is reassuring that we can get this custom linux kernel to boot on the new hardware. Some errors about dolphin drivers are thrown at startup but this is to be expected since the server isn't connected to the dolphin network yet. We have the Dolphin adaptor card in hand, but since we have to get another PCIe card (supposedly from LLO according to the BHD spreadsheet), I defer installing this in the server chassis until we have all the necessary hardware on hand.

I also have to figure out the correct BIOS settings for this to really run effectively as a FE (we have to disable all the "un-necessary" system level services) - these machines have BIOS v3.2 as opposed to the older vintages for which there are instructions from K.T. et al.

There may yet be issues with drivers, but this is all the testing that can be done without getting an expansion chassis. After the vent and recovering the IFO, I may try experimenting with the c1ioo chassis, but I'd much prefer if we can do the testing offline on a subnet that doesn't mess with the regular IFO operation (until we need to test the IPC).

Quote:

I am working on the setup of a CDS FE, so please do not attempt any remote login to the IPMI interface of c1bhd until I'm done.

Attachment 1: Screenshot_2020-11-06_14-26-54.png
  10136   Mon Jul 7 13:55:26 2014   Jenne   Update   LSC   c1cal model restarted

I'm not sure why the c1cal model didn't come up the last time c1lsc was rebooted, but I did an "rtcds start c1cal" on the lsc machine, and it's up and running now.

  11565   Thu Sep 3 02:30:46 2015   rana   Update   CDS   c1cal time reduced by deleting LSC sensing matrix

I experimented with removing somethings here and there to reduce the c1cal runtime. Eventually I deleted the LSC Sensing Matrix from it.

  • Ever since the upgrade, the c1cal run time has gone from 60 to 68 usec, so it's constantly over.
  • Back when Jenne set it up back in Oct 2013, it was running at 39 usec.
  • The purely CAL stuff had some wacko impossible filters in it: please don't try to invert the AA filters making a filter with multiple zeros in it Masayuki.
  • I removed the weird / impossible / unstable filters.
  • I'm guessing that the sensing matrix code had some hand-rolled C-code blocks which are just not very speedy, so we need to rethink how to do the lockin / oscillator stuff so that it doesn't overload the CPU. I bet it's somewhere in the weird way the I/Q signals were untangled. My suggestion is to change this stuff to use the standard CDS lockin modules and just record the I/Q stuff. We don't need to try to make magnitude and phase in the front end.

After removing sensing matrix, the run time is now down to 6 usec.

  2512   Wed Jan 13 12:01:06 2010   Alberto   Update   Computers   c1dcuepics, c1lsc rebooted this morning
Since last night the alignment scripts haven't worked.
c1lsc wasn't working properly because attempts to lock the X arm would try to control ETMY, and attempts to lock the Y arm wouldn't actuate any optics.
Also, another sign of a malfunctioning c1lsc was that one of the LSC filter modules, FM6, couldn't get loaded properly. It looked only half loaded on the LSC MEDM screen.
On the other hand, plotting the trend of the last month, c1lsc's CPU didn't look more loaded than usual.

Rebooting and restarting c1lsc wasn't enough, and I also had to reboot c1dcuepics a couple of times before getting things back to work.
  7011   Mon Jul 23 19:50:43 2012   Jamie   Update   CDS   c1gcv model renamed to c1als

I decided to rename the c1gcv model to be c1als.  This is in an ongoing effort to rename all the ALS stuff as ALS, and get rid of the various GC{V,X,Y} named stuff.

Most of what was in the c1gcv model was already in a subsystem with an ALS top name, but there were a couple of channels outside of that which had funky names, namely the "GCV_GREEN" channels. This fixes that, and makes things more consistent and simple.

Of course this required a bunch of other little changes:

  • rename model in userapps svn
  • target/fb/master had to be modified to point to the new chans/daq/C1ALS.ini channel file and gds/param/tpchn_c1als.par testpoint file
  • rename RFM channels appropriately, and fix in receiver models (c1scx, c1scy, c1mcs)
  • move custom medm screens in userapps svn (isc/c1/medm/c1als), and link to it at medm/c1als/master
  • moved old medm/c1gcv directory into a subdirectory of medm/c1als
  • update all medm screens that point to c1gcv stuff (mostly just ALS screens)

The above has been done.  Still todo:

  • FIX SCRIPTS!  There are almost certainly scripts that point to GC{V,X,Y} channels.  Those will have to be fixed as we come across them.
  • Fix the c1sc{x,y}/master/C1SC{X,Y}_GC{X,Y}_SLOW.adl screens.  I need to figure out a more consistent place for those screens.
  • Fix the C1ALS_COMPACT screen
  • ???

 

  6802   Tue Jun 12 11:54:50 2012   Jenne   Update   Green Locking   c1gcv recompiled

Yuta added channels so we can get the Q phase of all the beat PDs to the c1gcv model.  I showed him how to recompile/install/start.

During the install, it reported: "Unable to find the following file in CDS_MEDM_PATH: LOCKIN_FILTER.adl".

On all the screens (ALS and SUS), lockin parts are white.  Someone changed something, then didn't go back to fix the screens.

Otherwise, things look to be working fine.

Attachment 1: c1gcv20120612.png
  6808   Tue Jun 12 20:35:46 2012   yuta   Update   Green Locking   c1gcv recompiled

[Jamie, Yuta]

We recompiled c1gcv because the order of the channels was confusing. We found some changes in the phase rotation module when we did this.

I did some cabling and checked that each signal is actually going to the right channel. I labeled all the cables I know, which go into the AA chassis for ADC1 of the c1ioo machine.

Below is the list of the channels. If you know anything about "unknown" channels, please let me know.

Current channel assignments for ADC1 of c1ioo machine:
  Red ones were added today. Green ones existed in the past, but their channel assignments were changed.

cable                  # on AA chassis   name in Simulink   channel name
---------------------  ----------------  -----------------  -------------------------
connected but unknown  J1A
not connected          J1B
not connected          J2                adc_1_2            C1:ALS-XARM_BEAT_DC
not connected                            adc_1_3            C1:ALS-YARM_BEAT_DC
connected but unknown  J3
connected but unknown  J4
connected but unknown  J5
connected but unknown  J6
connected but unknown  J7
beat Y arm fine I      J8A               adc_1_14           C1:ALS-BEATY_FINE_I
beat Y arm fine Q                        adc_1_15           C1:ALS-BEATY_FINE_Q
not connected          J8B
connected but unknown  J9A
not connected          J9B
connected but unknown  J10
connected but unknown  J11
not connected          J12               adc_1_22           C1:ALS-BEATX_COARSE_I
not connected                            adc_1_23           C1:ALS-BEATX_COARSE_Q
not connected          J13               adc_1_24           C1:ALS-BEATX_FINE_I
not connected                            adc_1_25           C1:ALS-BEATX_FINE_Q
beat Y arm coarse I    J14               adc_1_26           C1:ALS-BEATY_COARSE_I
beat Y arm coarse Q                      adc_1_27           C1:ALS-BEATY_COARSE_Q
not connected          J15               adc_1_28           Broken! Don't use this!!
                                         adc_1_29           (not broken)
not connected          J16A              adc_1_30           (not broken)
                                         adc_1_31           Broken? Funny signal.
not connected          J16B

Memorandum for me:
  Recompiling procedure;

ssh c1ioo

rtcds make c1gcv
rtcds install c1gcv
rtcds start c1gcv

Attachment 1: c1gcv20120612-2.png
  6149   Mon Dec 26 12:04:41 2011   kiwamu   Update   CDS   c1gcy.ini hand edited

I have edited c1scx.ini by hand in order to acquire some green locking related channels.

Somehow c1sus.ini, c1mcs.ini, c1scx.ini and c1scy.ini are not accessible via the daqconfig script.

As far as I remember it had been accessible via daqconfig a week ago when I edited c1scy.ini.

Anyway, I had to edit it by hand. They need to be fixed at some point.

  7057   Tue Jul 31 15:17:58 2012   Jamie   Update   CDS   c1ifo medm screens checked into CDS userapps svn

I moved the medm/c1ifo directory into the CDS userapps svn at cds/c1/medm/c1ifo, and then linked it back into the medm directory:

controls@rossa:~ 0$ ls -al /opt/rtcds/caltech/c1/medm/c1ifo
lrwxrwxrwx 1 controls controls 56 2012-07-31 11:53 /opt/rtcds/caltech/c1/medm/c1ifo -> /opt/rtcds/caltech/c1/userapps/release/cds/c1/medm/c1ifo
controls@rossa:~ 0$

I then committed whatever was useful in there.  We need to remember to commit when we make changes.
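For reference, the move-and-link amounts to something like the following sketch (paths from above; the exact svn commands used are an assumption):

# Hypothetical reconstruction of moving medm/c1ifo into the userapps svn and linking it back
cd /opt/rtcds/caltech/c1
cp -r medm/c1ifo userapps/release/cds/c1/medm/c1ifo    # copy the screens into the svn working copy
mv medm/c1ifo medm/c1ifo.bak                           # keep the old directory as a backup
ln -s /opt/rtcds/caltech/c1/userapps/release/cds/c1/medm/c1ifo medm/c1ifo
(cd userapps/release/cds/c1/medm && svn add c1ifo && svn commit -m "add c1ifo medm screens")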

  6498   Fri Apr 6 16:35:37 2012   Den   Update   Computers   c1ioo

The c1ioo computer cannot connect to the framebuilder and everything is red in the status for this machine; C1:FEC-33_CPU_METER is not moving.

EDIT by KI:

 We rebooted the c1ioo machine, but none of the front end models came back. It looked like they failed the burt process for some reason, according to dmesg.

Then we restarted each front end model one by one, and each time, immediately after restarting it, we hit the 'BURT' button on the GDS screen.

Everything came back to normal operation.

  13349   Mon Oct 2 18:08:10 2017   gautam   Update   CDS   c1ioo DC errors

I was trying to set up a DAC channel to interface with the AOM driver on the PSL table.

  • It would have been most convenient to use channels from c1ioo given proximity to the PSL table.
  • Looking at the 1X2 rack, it looked like there were indeed some spare DAC channels available.
  • So I thought I'd run a test by adding some TPs to the c1als model (because it seems to have the most head room in terms of CPU time used).
  • I added the DAC_0 block from CDS_PARTS library to c1als model (after confirming that the same part existed in the IOP model, c1x03).
  • Model recompiled fine (I ran rtcds make c1als and rtcds install c1als on c1ioo).
  • However, I got a bunch of errors when I tried to restart the model with rtcds restart c1als. The model itself never came up.
  • Looking at dmesg, I saw stuff like
    [4072817.132040] c1als: Failed to allocate DAC channel.
    [4072817.132040] c1als: DAC local 0 global 16 channel 4 is already allocated.
    [4072817.132040] c1als: Failed to allocate DAC channel.
    [4072817.132040] c1als: DAC local 0 global 16 channel 5 is already allocated.
    [4072817.132040] c1als: Failed to allocate DAC channel.
    [4072817.132040] c1als: DAC local 0 global 16 channel 6 is already allocated.
    [4072817.132040] c1als: Failed to allocate DAC channel.
    [4072817.132040] c1als: DAC local 0 global 16 channel 7 is already allocated.
    [4073325.317369] c1als: Setting stop_working_threads to 1
  • Looking more closely at the log messages, it seemed like rtcds could not find any DAC cards on c1ioo.
  • I went back to 1X2 and looked inside the expansion chassis. I could only find two ADC cards and 1 BIO card installed. The SCSI cable labelled ("DAC 0") running from the rear of the expansion chassis to the 1U SCSI->40pin IDE breakout chassis wasn't actually connected to anything inside the expansion chassis.
  • I then undid my changes (i.e. deleted all parts I added in the simulink diagram), and recompiled c1als.
  • This time the model came back up but I saw a "0x2000" error in the GDS overview MEDM screen.
  • Since there are no DACs installed in the c1ioo expansion chassis, I thought perhaps the problem had to do with the fact that there was a "DAC_0" block in the c1x03 simulink diagram - so I deleted this block, recompiled c1x03, and for good measure, restarted all (three) models on c1ioo.
  • Now, however, I get the same 0x2000 error on both the c1x03 and c1als GDS overview MEDM screens (see Attachment #1).
  • An elog search revealed that perhaps this error is related to DAQ channels being specified without recording rates (e.g. 16384, 2048 etc). There were a few DAQ channels inside c1als which didn't have recording rates specified, so I added the rates, and restarted the models, but the errors persist.
  • According to the RCG runtime diagnostics document, T1100625 (which admittedly is for RCG v 2.7 while we are running v3.4), this error has to do with a mismatch between the DAQ config files read by the RTS and the DAQD system, but I'm not sure how to debug this further.
  • I also suspect there is something wrong with the mx processes:
    controls@c1ioo:~ 130$ sudo systemctl status mx
    ● open-mx.service - LSB: starts Open-MX driver
       Loaded: loaded (/etc/init.d/open-mx)
       Active: failed (Result: exit-code) since Tue 2017-10-03 00:27:32 UTC; 34min ago
      Process: 29572 ExecStop=/etc/init.d/open-mx stop (code=exited, status=1/FAILURE)
      Process: 32507 ExecStart=/etc/init.d/open-mx start (code=exited, status=1/FAILURE)
    Oct 03 00:27:32 c1ioo systemd[1]: Starting LSB: starts Open-MX driver...
    Oct 03 00:27:32 c1ioo open-mx[32507]: Loading Open-MX driver (with  ifnames=eth1 )
    Oct 03 00:27:32 c1ioo open-mx[32507]: insmod: ERROR: could not insert module /opt/3.2.88-csp/open-mx-1.5.4/modules/3.2.88-csp/open-mx.ko: File exists
    Oct 03 00:27:32 c1ioo systemd[1]: open-mx.service: control process exited, code=exited status=1
    Oct 03 00:27:32 c1ioo systemd[1]: Failed to start LSB: starts Open-MX driver.
    Oct 03 00:27:32 c1ioo systemd[1]: Unit open-mx.service entered failed state.
  • Not sure if this is related to the DC error though.
Attachment 1: c1ioo_CDS_errors.png
  13350   Mon Oct 2 18:50:55 2017   jamie   Update   CDS   c1ioo DC errors
Quote:

 

  • This time the model came back up but I saw a "0x2000" error in the GDS overview MEDM screen.
  • Since there are no DACs installed in the c1ioo expansion chassis, I thought perhaps the problem had to do with the fact that there was a "DAC_0" block in the c1x03 simulink diagram - so I deleted this block, recompiled c1x03, and for good measure, restarted all (three) models on c1ioo.
  • Now, however, I get the same 0x2000 error on both the c1x03 and c1als GDS overview MEDM screens (see Attachment #1).

From page 21 of T1100625, DAQ status "0x2000" means that the channel list is out of sync between the front end and the daqd.  This usually happens when you add channels to the model and don't restart the daqd processes, which sounds like it might be applicable here.

It looks like open-mx is loaded fine (via "rtcds lsmod"), even though the systemd unit is complaining.  I think this is because the open-mx service is old style and is not intended for module loading/unloading with the new style systemd stuff.
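A quick way to make that check (a sketch; the module name as loaded is an assumption):

ssh c1ioo
lsmod | grep -i mx     # the open_mx module should be listed here if the driver is actually loaded
rtcds lsmod            # the RCG wrapper mentioned above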

  5030   Mon Jul 25 13:01:24 2011   kiwamu   Update   CDS   c1ioo Make problem

[Suresh / Kiwamu]

HELP US Jamieeeeeeee !! We are unable to compile c1ioo.

 

It looks like something wrong with Makefile.

We ran make c1ioo -- this was successful every time. However make install-c1ioo doesn't run.

The below is the error messages we got.

        make install-target-c1ioo
        make[1]: Entering directory `/opt/rtcds/caltech/c1/core/branches/branch-2.1'
        Please make c1ioo first

Then we looked at the Makefile and tried to find what was wrong. We found the line (the 36th from the top) saying

        if test $(site)no = no; then echo Please make $$system first; exit 1; fi;\

We thought the lack of the site-name specification caused the error.

So then we tried to compile it again with the site name specified by typing

     export site=c1

in the terminal window.

It went ahead a little bit further, but it still doesn't run through all the Make commands.

 

  5031   Mon Jul 25 13:09:39 2011   Jamie   Update   CDS   c1ioo Make problem

> It looks like something wrong with Makefile.

Sorry, this was my bad.  I was making a patch to the makefile to submit back upstream and I forgot to revert my changes.  I've reverted them now, so everything should be back to normal.

  11848   Fri Dec 4 13:12:41 2015   ericq   Update   CDS   c1ioo Timing Overruns Solved

I had noticed for a while that the c1ioo frontend model had much higher variability than any of the other 16k models, and would run longer than 60us multiple times an hour. This struck me as odd, since all it does is control the WFS loops. (You can see this on the Nov 17 summary page. But somehow, the CDS tab seems broken since then, and I'm not sure why...)

This has happily now been solved! While poking around the model, I noticed that the MC2 transmission QPD data streams being sent over from c1mcs were using RFM blocks. This seemed weird to me, since I wasn't even aware that c1ioo had an RFM card. Since the c1sus and c1ioo frontends are both on the Dolphin network, I changed them to Dolphin blocks and voila! The cycle time now holds steady at 21usec.


Update: I think I figured out the problem with the CDS summary pages. Looking at the .err files in /home/40m/public_html/summary/logs on the 40m LDAS account showed that C1:FEC-33_CPU_METER wasn't found in the frame files. Indeed, this channel was commented out in c1/chans/daq/C0EDCU.ini. I enabled it and restarted daqd. Hopefully the CDS tab will come back soon...

  4335   Tue Feb 22 00:18:47 2011   valera   Configuration   c1ioo and c1ass work and related fb crashes/restarts

I have been editing and reloading the c1ioo model for the last two days. I have restarted the frame builder several times. After one of the restarts on Sunday evening the fb started having problems, which initially showed up as dtt reporting a synchronization error. This morning Kiwamu and I tried to restart the fb again and it stopped working altogether. We called Joe and he fixed the fb problem by fixing the time stamps (Joe will add details describing the fix when he sees this elog).

The following changes were made to c1ioo model:

- The angular dither lockins were added for each optic to do the beam spot centering on the MC mirrors. The MCL signal is demodulated digitally at 3 pitch and 3 yaw frequencies. (The MCL signal was reconnected to the first input of the ADC interface board).

- The outputs of the lockins go through the sensing matrix, DOF filters, and control matrix to the MC1,2,3 SUS-MC1(2,3)_ASCPIT(YAW) filter inputs where they sum with dither signals (CLOCK output of the oscillators).

- The MCL_TEST_FILT was removed

The arm cavity dither alignment (c1ass) status:

- The demodulated signals were minimized by moving the ETMX/ITMX optic biases and simultaneously keeping the arm buildup (TRX) high by using the BS and PZT2. The minimization of the TRX demodulated signals has not been successful for some reason.

- The next step is to close the servo loops REFL11I demodulated signals -> TMs and TRX demodulated signals -> combination of BS and PZTs.

The MC dither alignment (c1ioo) status:

- The demodulated signals were obtained and sensing matrix (MCs -> lockin outputs) was measured for pitch dof.

- The inversion of the matrix is in progress.

- The additional c1ass and c1ioo medm screens and up and down scripts are being made.

  3846   Tue Nov 2 15:24:18 2010   josephb   Update   CDS   c1ioo and c1mcs only sending MC_L, MC1_PIT, MC1_YAW

In order to have the c1mcs model run, we're running with only 3 RFM channels between c1ioo and c1mcs at the moment.  This leaves the model at around 45 microseconds, and at least lets us damp.

Alex and I still need to track down why the RFM read calls are taking so much time to execute.

  6577   Fri Apr 27 08:11:19 2012   steve   Update   CDS   c1ioo condition

 

 

Attachment 1: c1ioo.png
  9915   Tue May 6 10:22:28 2014   steve   Update   CDS   c1ioo dolphin fiber

Quote:

Steve and I nicely routed the dolphin fiber from c1ioo in the 1X2 rack to the dolphin switch in the 1X4 rack.  I shutdown c1ioo before removing the fiber, but still all the dolphin connected models crashed.  After the fiber was run, I brought back c1ioo and restarted all wedged models.  Everything is green again:

green.png

 I put a label at the dolphin fiber end at 1X2 today. After this I had to reset it, but it failed.

Attachment 1: dolphin1x2.png
  9916   Tue May 6 10:31:58 2014   jamie   Update   CDS   c1ioo dolphin fiber

Quote:

I put a label at the dolphin fiber end at 1X2 today. After this I had to reset it, but it failed.

 If by "fail" you're talking about the c1oaf model being off-line, I did that yesterday (see log 9910).  That probably has nothing to do with whatever you did today, Steve.

  9890   Thu May 1 10:23:42 2014   jamie   Update   CDS   c1ioo dolphin fiber nicely routed

Steve and I nicely routed the dolphin fiber from c1ioo in the 1X2 rack to the dolphin switch in the 1X4 rack.  I shutdown c1ioo before removing the fiber, but still all the dolphin connected models crashed.  After the fiber was run, I brought back c1ioo and restarted all wedged models.  Everything is green again:

green.png

  9896   Fri May 2 01:01:28 2014   rana   Update   CDS   c1ioo dolphin fiber nicely routed

This C1IOO business seems to be wiping out the MC2_TRANS QPD servo settings each day.   What kind of BURT is being done to recover our settings after each of these activities?

(also we had to do mxstream restart on c1sus twice so far tonight -- not unusual, just keeping track)

  9903   Fri May 2 11:14:47 2014   jamie   Update   CDS   c1ioo dolphin fiber nicely routed

Quote:

This C1IOO business seems to be wiping out the MC2_TRANS QPD servo settings each day.   What kind of BURT is being done to recover our settings after each of these activities?

(also we had to do mxstream restart on c1sus twice so far tonight -- not unusual, just keeping track)

I don't see how the work I did would affect this stuff, but I'll look into it.  I didn't touch the MC2 trans QPD signals.  Also nothing I did has anything to do with BURT.  I didn't change any channels, I only swapped out the IPCs.

  6580   Fri Apr 27 12:12:14 2012   Den   Update   CDS   c1ioo is back

Rolf came to the 40m today and managed to figure out what the problem was. Reading just dmesg was not enough to solve the problem. The useful log was in

>> cat /opt/rtcds/caltech/c1/target/c1x03/c1x03epics/iocC1.log

Starting iocInit
The CA server's beacon address list was empty after initialization?
iocRun: All initialization complete
sh: iniChk.pl: command not found
Failed to load DAQ configuration file

iniChk.pl checks the .ini file of the model.

>> cat /opt/rtcds/rtscore/release/src/drv/param.c


int loadDaqConfigFile(DAQ_INFO_BLOCK *info, char *site, char *ifo, char *sys)
{

  strcpy(perlCommand, "iniChk.pl ");
  .........
  strcat(perlCommand, fname); // fname - name of the .ini file
  ..........
}

So the problem was not in C1X03.ini. The code could not find the perl script even though it was in the /opt/rtcds/caltech/c1/scripts directory; some environment variables were not set. Rolf added /opt/rtcds/caltech/c1/scripts/ to the $PATH variable and the c1ioo models (c1x03, c1ioo, c1gcv) started successfully. He is not sure whether this is the right way to do it, because the other machines also do not have the "scripts" directory in their PATH variable.
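The addition amounts to something like the following line (where exactly it gets exported, e.g. an environment script versus a login file, is an assumption):

# Hypothetical form of the PATH addition so the RCG startup can find iniChk.pl
export PATH=$PATH:/opt/rtcds/caltech/c1/scripts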

>> cat /opt/rtcds/caltech/c1/target/c1x03/c1x03epics/iocC1.log

Starting iocInit
The CA server's beacon address list was empty after initialization?
iocRun: All initialization complete

Total count of 'acquire=0' is 2
Total count of 'acquire=1' is 0
Total count of 'acquire=0' and 'acquire=1' is 2

Counted 0 entries of datarate=256     for a total of 0
Counted 0 entries of datarate=512     for a total of 0
Counted 0 entries of datarate=1024     for a total of 0
Counted 0 entries of datarate=2048     for a total of 0
Counted 0 entries of datarate=4096     for a total of 0
Counted 0 entries of datarate=8192     for a total of 0
Counted 0 entries of datarate=16384     for a total of 0
Counted 0 entries of datarate=32768     for a total of 0
Counted 2 entries of datarate=65536     for a total of 131072

Total data rate is 524288 bytes - OK

Total error count is 0

Rolf mentioned something about an automatic setup of these variables - kiis or something like that - probably that script is not working correctly. Rolf will add this problem to his list.

  6861   Sat Jun 23 19:57:22 2012   yuta   Summary   Computers   c1ioo is down

I tried to restart c1ioo because I can't live without him.

I couldn't ssh or ping c1ioo, so I did hardware reboot.
c1ioo came back, but now ADC/DAC stats are all red.

c1ioo was OK until 3am when I left the control room last night. I don't know what happened, but StripTool from zita tells me that MC lock went off at around 4pm.

  6865   Mon Jun 25 10:35:59 2012   Jenne   Summary   Computers   c1ioo is down

Quote:

I tried to restart c1ioo because I can't live without him.

I couldn't ssh or ping c1ioo, so I did hardware reboot.
c1ioo came back, but now ADC/DAC stats are all red.

c1ioo was OK until 3am when I left the control room last night. I don't know what happened, but StripTool from zita tells me that MC lock went off at around 4pm.

 c1ioo was still all red on the CDS status screen, so I tried a couple of things.

mxstreamrestart (which aliases on the front ends to sudo /etc/init.d/mx_stream restart) didn't help

sudo shutdown -r now didn't change anything either....c1ioo came back with red everywhere and 0x2bad on the IOP

eventually doing as Jamie did for c1sus in elog 6742, rtcds stop all, then rtcds start all fixed everything.  Interestingly, when I tried rtcds start iop, I got the error
Cannot start/stop model 'iop' on host c1ioo, so I just tried rtcds start all, and that worked fine....started with c1x03, then c1ioo, then c1gcv.

  6867   Mon Jun 25 11:27:13 2012   Jamie   Summary   Computers   c1ioo is down

Quote:

Quote:

I tried to restart c1ioo because I can't live without him.

I couldn't ssh or ping c1ioo, so I did hardware reboot.
c1ioo came back, but now ADC/DAC stats are all red.

c1ioo was OK until 3am when I left the control room last night. I don't know what happened, but StripTool from zita tells me that MC lock went off at around 4pm.

 c1ioo was still all red on the CDS status screen, so I tried a couple of things.

mxstreamrestart (which aliases on the front ends to sudo /etc/init.d/mx_stream restart) didn't help

sudo shutdown -r now didn't change anything either....c1ioo came back with red everywhere and 0x2bad on the IOP

eventually doing as Jamie did for c1sus in elog 6742, rtcds stop all, then rtcds start all fixed everything.  Interestingly, when I tried rtcds start iop, I got the error
Cannot start/stop model 'iop' on host c1ioo, so I just tried rtcds start all, and that worked fine....started with c1x03, then c1ioo, then c1gcv.

There seems to be a problem with how models are coming up on boot since the upgrade.  I think the IOP isn't coming up correctly for some reason, which is then preventing the rest of the models from starting since they depend on the IOP.

The simple way to fix this is to run the following:

ssh c1ioo
rtcds restart all

The "restart" command does the smart thing, by stopping all the models in the correct order (IOP last) and then restarting them also in the correct order (IOP first).
