ID   Date   Author   Type   Category   Subject
  12718   Sat Jan 14 12:12:03 2017   rana   Update   DAQ   minute trends missing

Did we turn off minute trend writing in one of the recent FrameBuilder debug sessions? It seems we only have second trends in 2016. Maybe this explains why it's so slow to get minute trends? Dataviewer has to rebuild them from the second trends.

controls@nodus|frames > l
total 64
drwx------   2 root     root     16384 Jun  8  2009 lost+found/
drwxr-xr-x   2 controls controls  4096 Jul 14  2015 tmp/
-rw-r--r--   1 controls controls     0 Jul 14  2015 test-file
drwxr-xr-x   5 controls controls  4096 Apr  7  2016 trend/
drwxr-xr-x   4 root     root      4096 Apr 11  2016 archive/
drwxr-xr-x 789 controls controls 36864 Jan 13 19:34 full/
controls@nodus|frames > cd trend
controls@nodus|trend > l
total 3340
drwxr-xr-x 258 controls controls 3342336 Jul  6  2015 minute_raw/
drwxr-xr-x 387 controls controls   36864 Nov  5  2015 minute/
drwxr-xr-x 969 controls controls   36864 Jan 13 19:49 second/
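
For reference, trend data are requested from an NDS2 server by appending a trend suffix to the channel name; below is a minimal sketch using the python nds2 client (the server, port, channel name, and GPS span are all placeholders, not taken from this entry):

import nds2

conn = nds2.connection('nds40.ligo.caltech.edu', 31200)      # placeholder server/port
start, stop = 1167627618, 1167714018                         # one example day of GPS time
minute = conn.fetch(start, stop, ['C1:PEM-SEIS_BS_X_OUT_DQ.mean,m-trend'])   # placeholder channel
second = conn.fetch(start, stop, ['C1:PEM-SEIS_BS_X_OUT_DQ.mean,s-trend'])
print(len(minute[0].data), len(second[0].data))              # 1440 vs 86400 points for one day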

  12719   Sat Jan 14 12:36:57 2017   ericq   Update   DAQ   minute trends missing

Yes, writing minute trends causes hourly FB crashes in the current state of things. The "raw" minute trending is turned on, but I think that these are unknown to nds.

  12829   Wed Feb 15 00:26:44 2017   Johannes   Update   DAQ   panels and pcbs

I finished designing the PCBs for the VME crate back sides (see attached). The project files now live on the DCC at https://dcc.ligo.org/LIGO-D1700058. I ordered a prototype quantity (9) of the PCB and bought the corresponding connectors; all will arrive within the next two weeks. Also attached are the front panels for the Acromag DAQ chassis and Lydia's RF amplifier unit (the lone +24V slot confuses me: I don't see a ground connector?). On the Acromag panel, six (3x2) of the DB37 connectors are reserved for VME hardware, two are spares, and I filled the remaining space with general purpose BNC connectors for whatever comes up.

Attachment 1: acromag_chassis_panel.pdf
Attachment 2: vme_backplane_panel.pdf
Attachment 3: rfAmp.pdf
  12830   Wed Feb 15 09:06:13 2017   ericq   Update   DAQ   panels and pcbs

The amplifier unit should use the three pin dsub connectors (3w3?) that we use on many of the other units for DC power, and preferably go through the back panel. You can leave out the negative pin, since you just need +24 and ground.

  12832   Wed Feb 15 22:21:12 2017   Lydia   Update   DAQ   panels and pcbs

This is already how it's hooked up. The hole on the front that says +24 V is for an indicator light.

Quote:

The amplifier unit should use the three pin dsub connectors (3w3?) that we use on many of the other units for DC power, and preferably go through the back panel. You can leave out the negative pin, since you just need +24 and ground.

 

  12942   Thu Apr 13 19:54:07 2017   rana   Update   DAQ   checkup on minute trends

Our minute trends are still not available through NDS2 from the outside world due to the bad config of the DAQ, but I can confirm that we still have the minute-raw capability. This is 111 days of Seismic BLRMS.

However, it seems we're only able to get ~1 week of lookback on our second trends, and that is a low-down dirty shame. We used to have over a month of second trend lookback before the last decade of 'upgrades'.

Attachment 1: BRLMS-trend.png
  13478   Thu Dec 14 23:27:46 2017   johannes   Update   DAQ   aux chassis design

Made a front and back panel and slot panels for DSub and IDC breakouts. I want to send this out soon; are there any comments? Any preferences for color schemes?

Attachment 1: auxdaq_40m_4U_front.pdf
Attachment 2: auxdaq_40m_4U_rear.pdf
Attachment 3: auxdaq_40m_4U_DSub37x2.pdf
Attachment 4: auxdaq_40m_4U_IDC50.pdf
  13517   Tue Jan 9 00:07:03 2018   johannes   Update   DAQ   etmx slow daq chassis

All parts have been received and assembly is nearly complete. A small problem was detected: the two DSub connectors are too close together for two cables to fit at the same time. Gautam and I will make some additional slot panels tomorrow using a waterjet cutter, so we can spread the breakout boards out and remedy this.

Fast binary channels need to be spliced into DSub connectors. Aaron is on this. All other (slow) connections are already wired from before and have been tested for correct pins on the backplane DIN connectors.

 

The Acromag modules require only a positive supply voltage between +12 and +30 VDC and draw a maximum of 2.8W at that. This raises the question of whether we want this supply rail to be regulated or take the raw power from the Sorensens. Consulting with Ben Abbott: the Acromags in LIGO do not operate with regulated power. We could easily accommodate the standard regulator boards D1000217 in the chassis, which is probably a good idea if we want to place any custom electronics inside the chassis in the future, for example for whitening or active lowpass filtering.

  13529   Wed Jan 10 22:24:28 2018   johannes   Update   DAQ   etmx slow daq chassis

This evening I transitioned the slow controls to c1auxex2.

  1. Disconnected satellite box
  2. Turned off c1auxex
  3. Disconnected DIN cables from backplane connectors
  4. Attached purple adapter boards
  5. Labeled DSub cables for use
  6. Connected DSub cables to adapter boards and chassis
  7. Initiated modbus IOC on c1auxex2

Gautam and I then proceeded to test basic functionality

  1. Pitch bias sliders move pitch, yaw moves yaw. (yes)
  2. Coil enable and monitoring channels work. (yes)
  3. Watchdog seems to work. (yes) We set the threshold for tripping low, excited the optic, and the watchdog didn't disappoint and triggered.
  4. All channels initialize to "0" upon machine/server script restart. This means the watchdog comes up OFF, which is good. (yes) It would be great if we could also initialize PIT and YAW to retain their values from before, to avoid kicking the optic. This is not straightforward with EPICS records, but there must be a way.
  5. We got the local damping going. (yes)
  6. There is some problem with the routing of the fast BIO channels through the new chassis: the ANALOG de-whitening filter seems to be always engaged, despite our toggling the software BIO bits. (no) Something must be wrongly wired, as we confirmed by returning only the FAST BIO wiring to the pre-Acromag state (everything else is still controlled by the Acromag), which made the problem go away. Or some electrical connection is not made (I had to use gender changers on these connectors due to a lack of proper cabling).
  7. The switches for the QPD gain stages did not work. (no) I suspect a wiring problem, since the switching of the coil enables did work.

Arms are locked, and have been for ~1 hour with no hiccups. We will leave it like this overnight to observe, and debug further tomorrow.

  13530   Thu Jan 11 09:57:17 2018   Steve   Update   DAQ   acromag at ETMX

Good going Johannes!

Quote:

This evening I transitioned the slow controls to c1auxex2.

  1. Disconnected satellite box
  2. Turned off c1auxex
  3. Disconnected DIN cables from backplane connectors
  4. Attached purple adapter boards
  5. Labeled DSub cables for use
  6. Connected DSub cables to adapter boards and chassis
  7. Initiated modbus IOC on c1auxex2

Gautam and I then proceeded to test basic functionality

  1. Pitch bias sliders move pitch, yaw moves yaw. (yes)
  2. Coil enable and monitoring channels work. (yes)
  3. Watchdog seems to work. (yes) We set the threshold for tripping low, excited the optic, and the watchdog didn't disappoint and triggered.
  4. All channels initialize to "0" upon machine/server script restart. This means the watchdog comes up OFF, which is good. (yes) It would be great if we could also initialize PIT and YAW to retain their values from before, to avoid kicking the optic. This is not straightforward with EPICS records, but there must be a way.
  5. We got the local damping going. (yes)
  6. There is some problem with the routing of the fast BIO channels through the new chassis: the ANALOG de-whitening filter seems to be always engaged, despite our toggling the software BIO bits. (no) Something must be wrongly wired, as we confirmed by returning only the FAST BIO wiring to the pre-Acromag state (everything else is still controlled by the Acromag), which made the problem go away. Or some electrical connection is not made (I had to use gender changers on these connectors due to a lack of proper cabling).
  7. The switches for the QPD gain stages did not work. (no) I suspect a wiring problem, since the switching of the coil enables did work.

Arms are locked, and have been for ~1 hour with no hiccups. We will leave it like this overnight to observe, and debug further tomorrow.

 

Attachment 1: Acromg_in_action.png
  13535   Thu Jan 11 20:59:41 2018   gautam   Update   DAQ   etmx slow daq chassis

Some suggestions of checks to run, based on the rightmost column in the wiring diagram here - I guess some of these have been done already, just noting them here so that results can be posted.

  1. Oplev quadrant slow readouts should match their fast DAQ counterparts.
  2. Confirm that EX Transmon QPD whitening/gain switching are working as expected, and that quadrant spectra have the correct shape.
  3. Watchdog tripping under different conditions.
  4. Coil driver slow readbacks make sense - we should also confirm which of the slow readbacks we are monitoring (there are multiple on the SOS coil driver board) and update the MEDM screen accordingly.
  5. Confirm that shadow sensor PD whitening is working by looking at spectra.
  6. Confirm de-whitening switching capability - both to engage and disengage - maybe the procedure here can be repeated.
  7. Monitor DC alignment of ETMX - we've seen the optic wander around (as judged by the Oplev QPD spot position) while sitting in the control room, would be useful to rule out that this is because of the DC bias voltage stability (it probably isn't).
  8. Confirm that burt snapshot recording is working as expected - this is not just for c1auxex, but for all channels, since, as Johannes pointed out, the 2018 directory was totally missing and hence no snapshots were being made.
  9. Confirm that systemd restarts IOC processes when the machine currently called c1auxex2 gets restarted for whatever reason.

 

  13537   Fri Jan 12 10:02:05 2018   johannes   Update   DAQ   etmx slow daq chassis
Quote:

There is some problem with the routing of the fast BIO channels through the new chassis: the ANALOG de-whitening filter seems to be always engaged, despite our toggling the software BIO bits. (no) Something must be wrongly wired, as we confirmed by returning only the FAST BIO wiring to the pre-Acromag state (everything else is still controlled by the Acromag), which made the problem go away. Or some electrical connection is not made (I had to use gender changers on these connectors due to a lack of proper cabling).

The switches for the QPD gain stages did not work. (no) I suspect a wiring problem, since the switching of the coil enables did work.

Both issues were fixed. In both cases, two separate causes prevented them from working.

The QPD gain stage switch software channels were assigned to the wrong physical pins of the Acromag, and additionally their DSub cable was swapped with a different one.

The BIO switching had its signal and ground wires swapped on ALL connections, and part of it was also suffering from the cable mix-up.

With both fixed, all backplane signals are now routed through the Acromag chassis.

 

Gautam and I did notice that occasionally the ETMX alignment will start drifting, as is evident from the OpLev. I want to set up a diagnostic channel to see if the DAC voltages coming from the Acromag are responsible for this.

  13543   Fri Jan 12 19:15:34 2018   johannes   Update   DAQ   etmx slow daq chassis

Steve and I removed c1auxex from 1X9 today to make space for the DAQ chassis. Steve installed rails for mounting. To install the box I had to remove all cabling, for which I used the usual precautions (disconnect satellite box etc.)

On reconnecting, c1auxex2 didn't initialize the physical EPICS channels (the 'actual' Acromag channels); apparently it had trouble communicating. A reboot fixed this. It's possible that this is because of the direct cable connection (with no network switch) that exists between the Acromags and c1auxex. The EPICS server was started automatically on reboot.

Currently the channel defaults need to be loaded manually with burt after every EPICS server restart. I'm looking for a good way to automate this, but the only compiled burt binaries for x86 (which we could in principle run on c1auxex2 itself) on the martian network are from EPICS version 3.14.10 and throw a missing shared object error. It could be that some path variable is simply missing.

The burt binaries are not distributed by the lscsoft or cdssoft packages, so alternatively we would need to compile it ourselves for x86 or get it working with the older epics version.

  13553   Wed Jan 17 14:32:51 2018   gautam   Update   DAQ   Acromag checks
  1. I take back what I said about the OSEM PD mon at the meeting - there does seem to be some overall calibration factor (Attachment #1) that has scaled the OSEM PD readback channels, by a factor of (20000/2^15), which Johannes informs me is some strange feature of the ADC, which he will explain in a subsequent post.
  2. The coil readback fields on the MEDM screen have a "30Hz HPF" text field below them - I believe this is misleading. Judging by the schematic, we are monitoring, on the backplane (which is what these channels read back from), the voltage to the coil with a gain of 0.5. We can reconfirm by checking the ETMX coil driver board, after which we should remove the misleading label on the MEDM screens.
Quote:

Some suggestions of checks to run, based on the rightmost column in the wiring diagram here - I guess some of these have been done already, just noting them here so that results can be posted.

  1. Oplev quadrant slow readouts should match their fast DAQ counterparts.
  2. Confirm that EX Transmon QPD whitening/gain switching are working as expected, and that quadrant spectra have the correct shape.
  3. Watchdog tripping under different conditions.
  4. Coil driver slow readbacks make sense - we should also confirm which of the slow readbacks we are monitoring (there are multiple on the SOS coil driver board) and update the MEDM screen accordingly.
  5. Confirm that shadow sensor PD whitening is working by looking at spectra.
  6. Confirm de-whitening switching capability - both to engage and disengage - maybe the procedure here can be repeated.
  7. Monitor DC alignment of ETMX - we've seen the optic wander around (as judged by the Oplev QPD spot position) while sitting in the control room, would be useful to rule out that this is because of the DC bias voltage stability (it probably isn't).
  8. Confirm that burt snapshot recording is working as expected - this is not just for c1auxex, but for all channels, since, as Johannes pointed out, the 2018 directory was totally missing and hence no snapshots were being made.
  9. Confirm that systemd restarts IOC processes when the machine currently called c1auxex2 gets restarted for whatever reason.

 

 

 

Attachment 1: OSEMPDmon_Acro.png
  13554   Wed Jan 17 22:44:14 2018   johannes   Update   DAQ   Acromag checks

This happened because there are multiple ways to scale the raw value of an EPICS channel to the desired output range. In the CryoLab I was using one way, but the EPICS records I copied from c1auxex were doing it differently. Basically this:

DTYP: data type
LINR: "NO CONVERSION" vs. "LINEAR"
RVAL: raw value
EGUF: engineering units full scale
EGUL: engineering units low
ASLO: manual scaling factor
AOFF: manual offset
VAL: value

If the "LINR" field is set to "LINEAR", the fields EGUF and EGUL are used to convert the raw value to the channel value VAL. To use them, one needs to enter the voltages that return the maximum and minimum values expected for the given data type. It used to be +10V and -10V, respectively, and was copied that way but that doesn't work with the data type required for the Acromag units. For -some- reason, while the the ADC range is -10V to +10V, this corresponds to values -20000 to +20000, while for the DAC channels it's -30000 to +30000. I had observed this before when setting up the DAQ in the CryoLab, but there we were using "NO CONVERSION", which skips the EGUF and EGUL fields, and used the ASLO and AOFF for manual scaling to get it right. When I mixed the records from there with the old ones from c1auxex this got lost in translation. Gautam and I confirmed by eye that this indeed explains the different levels well. This means that the VMon channelsfor the coils are also showing the wrong voltages, which will be corrected, but the readback still definitely works and confirms that the enable switches do their job.

Quote:
  1. I take back what I said about the OSEM PD mon at the meeting - there does seem to be some overall calibration factor (Attachment #1) that has scaled the OSEM PD readback channels, by a factor of (20000/2^15), which Johannes informs me is some strange feature of the ADC, which he will explain in a subsequent post.

 

  13565   Sun Jan 21 13:11:25 2018   johannes   Update   DAQ   Acromag checks

After some research: the reason for the reduced +/- 20,000 swing in raw values is a default setting that enables support for legacy devices when using the Acromag proprietary i2o peer-to-peer protocol. So this is doubly unnecessary, because a) we don't have any legacy devices at all and b) we're using pure modbus/TCP and no i2o. To change the setting I have to connect via the USB configuration utility. In addition, I want to understand the averaging feature of the Acromag units better, which is also configured via USB, and lets one set a fixed number of samples to be averaged before the read register is updated. The documentation says that the 8 channels are multiplexed into a single ADC, and that new input data is available after 10 ms for each channel, suggesting a sampling rate of 100 Hz per channel (and that the multiplexing happens faster), but it is not super clear about this, so I want to test it in the cryo lab first before unleashing it onto c1auxex2.

Furthermore, the standard timing options for updating EPICS records are 10s, 5s, 2s, 1s, 0.5s, 0.2s, and 0.1s. On the previous c1auxex, the monitoring channels were set to 0.1s, but that clashes with the 16 Hz global EPICS rate, resulting in partial double-sampling. One can manually provide the option 0.0625s for a 16 Hz update rate. I will test this, and how it deals with the averaging, in the cryo lab too.

  13576   Wed Jan 24 18:12:31 2018   johannes   Update   DAQ   ETMX auxiliary DAQ work

I replaced the two remaining D-Sub M/M cables that still had gender changers with M/F cables today, completing the mechanical and wiring work on the ETMX rack. All backplane adapter boards were secured to a cross-strut of the crate using zip ties. This was necessary because the adapter boards don't fit the crate with their panels attached (the ETMX eurocrate is the only one with slightly different dimensions from all the others), and we can't mount them to the strut using the panels. This won't be an issue on any of the other crates.

   

   

In other news:

I disabled the legacy support in the three Acromag ADC units and set the input averaging to 10 samples via the USB configuration utility. The documentation is unfortunately a little sparse about what this actually means. The manual states that "fresh input data is available to the network every 10ms", so the sampling rate is for sure faster than 100 Hz. Since the IOC updates its channels every 0.1 seconds, I assume that an averaging value of 10 to reduce the input noise is safe. The maximum value the configuration tool permits is 200. I tried this using the CryoLab DAQ: I set all input channels to 200, used StripTool to look at the time series of a slow oscillation (0.1 Hz) with a large amplitude (16 Vpp), and looked for missed data points, which would indicate overly long wait times between channel updates. There was no such qualitative difference between 1 sample, 10 samples, and 200 samples, so even pushing the averaging value to the maximum seemed okay. I went with the conservative value of 10 for the ETMX DAQ, but we can likely increase this if noise on the slow inputs becomes an issue.
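
As a back-of-the-envelope check of these settings (this assumes the quoted 10 ms per-channel figure is the time to produce one new sample per channel; the real multiplexing may well be faster, as the StripTool test suggests):

t_sample = 0.010                      # s, "fresh input data every 10 ms" per channel
scan = 0.1                            # s, SCAN period used by the IOC records
for n_avg in (1, 10, 200):
    t_update = n_avg * t_sample       # naive worst-case time to accumulate one average
    print(n_avg, t_update, "s per register update vs", scan, "s SCAN")
# n_avg = 10 keeps the register update (~0.1 s) matched to the SCAN period; 200 would
# naively imply ~2 s per update, so the absence of any visible difference in the test
# suggests the averaging happens faster than this worst case.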

The input scaling of the ADC channels has been corrected. I changed the conversion method in the EPICS records from manual scaling using the ASLO and AOFF fields to engineering units via EGUF and EGUL. This required a little attention. The Acromags scale the dynamic input range of +/- 10V to a raw value of +/- 30,000, but the EPICS IOC interprets the data type as ranging from -32768 to +32767, so the EGUF and EGUL fields must be set to +10.923 and -10.923, respectively, to achieve proper scaling. I also changed the SCAN field on all ADC channels to 0.1 seconds. These changes have been made for all ADC and DAC channel records.
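
For the record, the +/-10.923 V numbers follow from simple arithmetic on the values above (a sketch; the 32768 full scale is the signed 16-bit limit assumed by the IOC):

raw_full_scale = 30000                                  # Acromag raw value at +10 V (legacy mode off)
dtype_full_scale = 32768                                # signed 16-bit full scale as seen by the IOC
EGUF = 10.0 * dtype_full_scale / raw_full_scale         # = 10.9227 -> field set to +10.923
EGUL = -EGUF
print(EGUF, EGUL)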

  13578   Wed Jan 24 19:17:06 2018   johannes   Update   DAQ   c1auxex2 startup behavior

I compiled the burt binaries on c1auxex2, which took a little fiddling with dependencies and paths but nothing too major. The complete local EPICS folder (/opt/epics/), which contains the base EPICS binaries, modbus, and burt for 32-bit Linux, has been copied to the shared drive at /opt/rtapps/epics-3.15.5. They belong to the most recent stable release. This was done so we can now automatically call burt after the IOC initialization on c1auxex2 to restore the backed-up channel values.

I also copied the database definition and modbus instruction files to /cvs/cds/caltech/target/c1auxex2, from where they are now being read upon IOC initialization. This is an excerpt of the service file:

#ExecStart=/usr/bin/procServ -f -L /home/controls/modbusIOC/modbusIOC.log -p /run/modbusioc.pid 8008 /opt/epics/modules/modbus/bin/linux-x86/modbusApp /cvs/cds/caltech/target/c1auxex2/ETMXaux2.cmd   <-- Contains logging to file, see note 1)
ExecStart=/usr/bin/procServ -f -p /run/modbusioc.pid 8008 /opt/epics/modules/modbus/bin/linux-x86/modbusApp /cvs/cds/caltech/target/c1auxex2/ETMXaux2.cmd <-- Initializes the EPICS IOC with Modbus support
ExecStop=/bin/kill -9 ` cat /run/modbusioc.pid` <-- Kills the detached process by its process ID
ExecStartPost=/bin/bash -c "/opt/epics/extensions/bin/linux-x86/burtwb -f /opt/rtcds/caltech/c1/burt/autoburt/latest/c1auxex.snap" <-- Restores general channel values
ExecStartPost=/bin/bash -c "/opt/epics/extensions/bin/linux-x86/burtwb -f /opt/rtcds/caltech/c1/medm/MISC/ifoalign/burt/ETMX.snap" <-- Restores PIT and YAW values from align MEDM screen
ExecStartPost=/bin/bash -c ". /home/controls/modbusIOC/ETMXaux2.sh" <-- Enables writing to PIT and YAW DAC channels, see note 2)

Note 1) I removed the logging to file for now because I noticed that if there are Acromag communication issues the logfile tends to grow in size VERY fast. In the cryo lab it had gotten to over 70 GB just over the winter break. I don't think it's absolutely necessary to have it, and if diagnostics are needed we can easily uncomment it temporarily.

Note 2) I modified the static EPICS records of the four OSEM bias adjust channels so they won't start updating as soon as the IOC starts up (and before the channel defaults are restored by burt). This was done by setting the OMSL (output mode select) field from "closed_loop" to "supervisory". Sample record:

record(ao,"C1:SUS-ETMX_ULBiasAdj")
{
        field(DESC,"Bias Adjust for ETMX UL Coil Output")
        field(DTYP,"asynInt32")
        field(OUT, "@asynMask(C1AUXEX_XT1541A_DAC, 0, -16)MODBUS_DATA")
        field(SCAN,".1 second")
        field(OMSL,"supervisory")  <-- Used to be "closed_loop"
        field(DOL, "C1:SUS-ETMX_ULBiasSet  PP")
        field(PREC,"3")
        field(EGUF,"10.923")
        field(EGUL,"-10.923")
        field(EGU, "Volts")
        field(LINR,"LINEAR")
        field(DRVH,"10")
        field(DRVL,"-10")
        field(HOPR,"10")
        field(LOPR,"-10")
}

Now, on reboot/IOC re-initialization the physical DAC channels perform a one-time readback of the last stored value in the Acromag's register, then idle until the last ExecStartPost statement executes the script ETMXaux2.sh, which changes their OMSL field back to "closed_loop". This causes them to start updating their output from the calc records defined in their DOL field (which have by then recovered their default values courtesy of burt). The result is a smooth transition from idling to the controlled state with no sudden or large offset changes. (yes)
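
For illustration only (the actual re-enable is done by the shell script called from the systemd unit, and the exact channel list below is an assumption), the equivalent channel-access operation with pyepics would be a caput to the OMSL field of each bias record:

import epics   # pyepics, assumed to be available

# hypothetical list of the four OSEM bias-adjust records mentioned above
bias_records = ['C1:SUS-ETMX_%sBiasAdj' % c for c in ('UL', 'UR', 'LL', 'LR')]
for rec in bias_records:
    # switch the record back to following its DOL link (the calc record restored by burt)
    epics.caput(rec + '.OMSL', 'closed_loop')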

  13580   Wed Jan 24 23:13:30 2018   johannes   Update   DAQ   c1auxex2 startup behavior
Quote:

The result is a smooth transition from idling to the controlled state with no sudden or large offset changes. (yes)

[Gautam, Johannes]

While checking how smooth the transition is, we still noticed significant motion of ETMX by looking at the locked green laser and the OpLevs. We found that this motion was not caused by interruption of the slow offset adjust, but rather by the watchdog being re-initialized to its OFF state, which cuts the fast channels OFF. This is observed on other optics too, but not as severely. The cause is a rather large offset on the LR coil coming from the fast DAQ, which was reported as 50mV by the slow readback channel (while other readback values are <10mV). It is present even when turning the output of the CDS model OFF, but vanishes when the watchdog is triggered. This helped us trace it to an offset of the DAC output itself: it is present at the output of the AI board but vanishes when the DAC is disconnected. The actual offset is ~40mV, as opposed to other channels on the same board, which have offsets in the range 3-7mV.

While we can compensate for this offset in software, it made us wonder if the DAC channel is somehow busted, and if that's what is causing the 'wandering' of ETMX that we have been observing recently. There are two free DAC channels on the AI chassis that carries the side coil and the green temperature control signals. We could re-route the LR signal through a different DAC channel to fix this.

gautam: 40mV offset at the AI board output gets multiplied by 3 in the dewhitening board, so there is a 120mV DC offset going to the coil (measured at dewhite board output with DMM). The offset itself isn't hurting us, but the fact that it is several times larger than other channels led us to wonder if it could be drifting around as well. From my SOS pitch balancing forays, in my head I have the number 30mrad as being the full range of the OSEM actuation - so if the offset swings by 120mV, that's ~150urad of motion, which is quite large, and is of the order of magnitude I'm used to seeing ETMX move around by.
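
A rough order-of-magnitude check of the numbers above (the +/-10 V coil-drive span is an assumption here; 30 mrad is the full OSEM actuation range quoted):

offset_at_coil = 0.120          # V, the 40 mV DAC offset times 3 from the dewhitening board
full_range_v = 20.0             # V, assuming the drive spans -10 V to +10 V
full_range_rad = 30e-3          # rad, full OSEM actuation range
print(offset_at_coil / full_range_v * full_range_rad)   # ~2e-4 rad, the same ~150 urad scale quoted above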

  13590   Wed Jan 31 15:29:44 2018   johannes   Update   DAQ   PSL acromag server moved from megatron to c1auxex2

I moved the epics IOC server process for the single Acromag ADC that monitors the PSL signals from megatron to c1auxex2.

First, I disabled the legacy support on all channels as explained in elog 13565. Then I copied the files npro_config.cmd and NPRO.db from /opt/rtcds/caltech/c1/scripts/Acromag to /cvs/cds/caltech/target/c1psl2/ following the pattern of the old Motorola machines and the new c1auxex2. I had to make some edits for correct paths and expanded the epics records to the standard we're using for ETMX.

I then added a service to systemd on c1auxex2 that runs the epics IOC for the Acromag PSL channels: /etc/systemd/system/modbusPSL.service. No more tmux on megatron.

Running two IOCs on a single machine at the same time did not produce any errors and seems fine so far.

  13742   Mon Apr 9 23:28:49 2018   johannes   Configuration   DAQ   c1psl channel list

I made a list of all the physical c1psl channels to get a better idea of how many Acromags we need to eventually replace it. The 3123 unit is the one whose failure had prevented c1psl from booting, which is why it was unplugged (elog post 12852), and its channels have been inactive since. Are the 126MOPA channels used for the current Mephisto? '126' tells me it's for an old Lightwave laser, but I was checking a few and found that they have non-zero, changing values, so they may have been rewired.

It also hosts some virtual channels for the ISS with root C1:PSL-ISS_ defined in iss.db and dc.db, the PSL particle counter with root C1:PEM- defined in PCount.db, and a whole lot of PSL status channels defined in pslstatus.db. Transferring these virtual channels to a different machine is almost trivial, but the serial readout of the particle counter would have to find a new home.

Long story short - we need:

Function   Type     # Channels   # Channels (no MOPA)   # Units   # Units (no MOPA)
ADC        XT1221   34           21                     5         3
DAC        XT1541   17           14                     3         2
BIO        XT1111   19           10                     2         1
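
The unit counts follow from dividing the channel counts by the channels per module (a sketch; the per-module counts of 8/8/16 are assumptions about the XT1221/XT1541/XT1111, not taken from this entry):

from math import ceil

channels_per_unit = {'XT1221': 8, 'XT1541': 8, 'XT1111': 16}
channels_needed = {'XT1221': (34, 21), 'XT1541': (17, 14), 'XT1111': (19, 10)}
for module, (n_all, n_no_mopa) in channels_needed.items():
    print(module, ceil(n_all / channels_per_unit[module]),
          ceil(n_no_mopa / channels_per_unit[module]))
# -> 5/3, 3/2, 2/1 units, matching the table above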

 



3113 - ADC

C1:PSL-126MOPA_126PWR
C1:PSL-126MOPA_DTMP
C1:PSL-126MOPA_LTMP
C1:PSL-126MOPA_DMON
C1:PSL-126MOPA_LMON
C1:PSL-126MOPA_CURMON
C1:PSL-126MOPA_DTEC
C1:PSL-126MOPA_LTEC
C1:PSL-126MOPA_CURMON2
C1:PSL-126MOPA_HTEMP
C1:PSL-126MOPA_HTEMPSET
C1:PSL-FSS_RFPDDC
C1:PSL-FSS_LODET
C1:PSL-FSS_FAST
C1:PSL-FSS_PCDRIVE
C1:PSL-FSS_MODET
C1:PSL-FSS_VCODETPWR
C1:PSL-FSS_TIDALOUT
C1:PSL-PMC_RFPDDC
C1:PSL-PMC_LODET
C1:PSL-PMC_PZT
C1:PSL-PMC_MODET


3123 - ADC (failed)

C1:PSL-126MOPA_AMPMON
C1:PSL-126MOPA_126MON
C1:PSL-FSS_RCTRANSPD
C1:PSL-FSS_MINCOMEAS
C1:PSL-FSS_RMTEMP
C1:PSL-FSS_RCTEMP
C1:PSL-FSS_MIXERM
C1:PSL-FSS_SLOWM
C1:PSL-FSS_TIDALINPUT
C1:PSL-PMC_PMCTRANSPD
C1:PSL-PMC_PMCERR
C1:PSL-PPKTP_TEMP


4116 - DAC

C1:PSL-126MOPA_126CURADJ
C1:PSL-126MOPA_DCAMP
C1:PSL-126MOPA_DCAMP-
C1:PSL-FSS_INOFFSET
C1:PSL-FSS_MGAIN
C1:PSL-FSS_FASTGAIN
C1:PSL-FSS_PHCON
C1:PSL-FSS_RFADJ
C1:PSL-FSS_SLOWDC
C1:PSL-FSS_VCOMODLEVEL
C1:PSL-FSS_TIDAL
C1:PSL-FSS_TIDALSET
C1:PSL-PMC_GAIN
C1:PSL-PMC_INOFFSET
C1:PSL-PMC_PHCON
C1:PSL-PMC_RFADJ
C1:PSL-PMC_RAMP


XVME-210 - Binary Input

C1:PSL-126MOPA_FAULT
C1:PSL-126MOPA_INTERLOCK
C1:PSL-126MOPA_SHUTTER
C1:PSL-126MOPA_126LASE
C1:PSL-126MOPA_AMPON


XVME-220 - Binary Output

C1:PSL-126MOPA_126NE
C1:PSL-126MOPA_126STANDBY
C1:PSL-126MOPA_SHUTOPENEX
C1:PSL-126MOPA_STANDBY
C1:PSL-FSS_SW1
C1:PSL-FSS_SW2
C1:PSL-FSS_FASTSWEEP
C1:PSL-FSS_PHFLIP
C1:PSL-FSS_VCOTESTSW
C1:PSL-FSS_VCOWIDESW
C1:PSL-PMC_SW1
C1:PSL-PMC_SW2
C1:PSL-PMC_PHFLIP
C1:PSL-PMC_BLANK

  14141   Mon Aug 6 20:41:10 2018   aaron   Update   DAQ   New DAC for the OMC

Gautam and I tested out the DAC that he installed in the latter half of last week. We confirmed that at least one of the channels can successfully drive a sine wave (ch10, 1-indexed). We had to measure the output directly on the SCSI connector (breakout in the FE hard drive cabinet along the Y arm), since the SCSI breakout box (D080303) seems not to be working (wiring diagram in Gautam's elog from his SURF years).

I added some DAC channels to our c1omc model:
PZT1_PIT
PZT1_YAW
PZT2_PIT
PZT2_YAW
 
And determined that when we go to use the ADC, we will initially want the following channels (even these are probably unnecessary for the very first scans):
TRANS_PD1
TRANS_PD2
REFL_PD
DVMDC (drive voltage monitor, DC level)
DVMAC ("", AC level, only needed if we dither the length)
 
I attach a screenshot of the model, and a picture of where the whitening/dewhitening boards should go in the rack.
Attachment 1: OMCDACmdl.png
  14172   Tue Aug 21 03:09:59 2018   johannes   Omnistructure   DAQ   Panels for Acromag DAQ chassis

I expanded the previous panels to 6U height for the new DAQ chassis we're buying for the upgrade. I figure it's best if we stick to the modular design, so I'm showing a panel for 8 BNC connectors as an example. The front panel has 12 slots, the back has 10 plus power connectors, switches, and the ethernet plug.

I moved the power switch to the rear because it's a waste of space to put it in the front, and it's not like we're power cycling this thing all the time. Note that the unit only requires +24V (for general operation, +20V also does the trick, as is the situation for ETMX) and +15V (excitation field for the binary I/O modules). While these could fit into a single CONEC power connector, it's probably for the better if we don't make a version that supplies a large positive voltage where negative is expected, so I put in two CONEC plugs for +/- 15 and +/- 24.

I want to order 5-6 of these as soon as possible, so if anyone wants anything changed or sees a problem, please do tell!

Attachment 1: auxdaq_40m_6U_front.pdf
Attachment 2: auxdaq_40m_6U_rear.pdf
Attachment 3: auxdaq_40m_6U_BNC.pdf
  14295   Wed Nov 14 18:58:35 2018   aaron   Update   DAQ   New DAC for the OMC

I began moving the AA and AI chassis over to 1X1/1X2 as outlined in the elog.

The chassis were mostly filled with empty cables. There was one cable attached to the output of a QPD interface board, but there was nothing attached to the input so it was clearly not in use and I disconnected it.

I also attach a picture of some of the SMA connectors I had to rotate to accommodate the chassis in their new locations.

Update:

The chassis are installed, and the anti-imaging chassis can be seen second from the top; the anti-aliasing chassis can be seen 7th from the top.

I need to break out the SCSI on the back of the AA chassis, because the ADC breakout board only has a DB36 adapter available; the other cables are occupied by the signals from the WFS dewhitening outputs.

Attachment 1: 6D079592-1350-4099-864B-1F61539A623F.jpeg
Attachment 2: 5868D030-0B97-43A1-BF70-B6A7F4569DFA.jpeg
  15067   Tue Dec 3 20:32:37 2019   rana   Omnistructure   DAQ   NDS2 situation

Recently, according to Gautam, the NDS2 server has been dying on Megatron ~daily or weekly. The prescription is to restart the server.

  1. I could find no instructions (that work) in the elog or wiki. We must remove the misleading entries from the wiki and update it with whatever works as of today.
  2. There is a line (which has been commented out) in the Megatron crontab which is close to the right command, but it has the wrong path.
  3. Running the command from the CRON (/home/nds2mgr/nds2-megatron/test_restart) gives several errors.
  4. When I run the init.d command which is in the script, it seems to run fine.
  5. The server then takes several minutes to get itself together; i.e. just because it is running doesn't mean that you can get data. I recommend waiting 5-10 minutes (a quick check is sketched below).
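
A quick way to confirm the server is actually serving data again (a sketch, assuming the python nds2-client is installed; the port and channel are placeholders, and the GPS stamps are just an arbitrary 10 s stretch):

import nds2

conn = nds2.connection('megatron', 31200)            # port is an assumption
bufs = conn.fetch(1259798418, 1259798428, ['C1:PEM-SEIS_BS_X_OUT_DQ'])
print(bufs[0].channel.name, bufs[0].data.mean())     # only succeeds once data is really flowing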

Also, megatron is running Ubuntu 12!! Let's decide on a day to upgrade it to a Debian 18ish... word from Rolf is that Scientific Linux is fading out everywhere, so Debian is the new operating system for all conformists.

Attachment 1: getData.py
#!/usr/bin/env python
# this function gets some data (from the 40m) and saves it as
# a .mat file for the matlabs
# Ex. python -O getData.py


from scipy.io import savemat,loadmat
import scipy.signal as sig
from astropy.time import Time
import nds2
... 116 more lines ...
Attachment 2: chanlist.txt
PEM-SEIS_BS_X_OUT_DQ
PEM-SEIS_BS_Y_OUT_DQ
PEM-SEIS_BS_Z_OUT_DQ
PEM-SEIS_EX_X_OUT_DQ
PEM-SEIS_EX_Y_OUT_DQ
PEM-SEIS_EX_Z_OUT_DQ
PEM-SEIS_EY_X_OUT_DQ
PEM-SEIS_EY_Y_OUT_DQ
PEM-SEIS_EY_Z_OUT_DQ
  15302   Mon Apr 13 16:51:49 2020   rana   Summary   DAQ   NODUS: rsyncd daemon / service set up

I just now modified the /etc/rsyncd.conf file as per Dan Kozak's instructions. The old conf file is still there with the file name appended with today's date.

I then enabled the rsync daemon to run on boot using 'enable'. I'll ask Dan to start the file transfers again and see if this works.

controls@nodus|etc> sudo systemctl start rsyncd.service
controls@nodus|etc> sudo systemctl enable rsyncd.service
Created symlink from /etc/systemd/system/multi-user.target.wants/rsyncd.service to /usr/lib/systemd/system/rsyncd.service.
controls@nodus|etc> sudo systemctl status rsyncd.service
● rsyncd.service - fast remote file copy program daemon
   Loaded: loaded (/usr/lib/systemd/system/rsyncd.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2020-04-13 16:49:12 PDT; 1min 28s ago
 Main PID: 4950 (rsync)
   CGroup: /system.slice/rsyncd.service
           └─4950 /usr/bin/rsync --daemon --no-detach

Apr 13 16:49:12 nodus.martian.113.168.192.in-addr.arpa systemd[1]: Started fast remote file copy program daemon.
Apr 13 16:49:12 nodus.martian.113.168.192.in-addr.arpa systemd[1]: Starting fast remote file copy program daemon...

  15560   Sun Sep 6 13:15:44 2020   Jon   Update   DAQ   UPS for framebuilder

Now that the old APC Smart-UPS 2200 is no longer in use by the vacuum system, I looked into whether it can be repurposed for the framebuilder machine. Yes, it can. The max power consumption of the framebuilder (a SunFire X4600) is 1.137kW. With fresh batteries, I estimate this UPS can power the framebuilder for >10 min. and possibly as long as 30 min., depending on the exact load.

@Chub/Jordan, this UPS is ready to be moved to rack 1X6/1X7. It just has to be disconnected from the wall outlet. All of the equipment it was previously powering has been moved to the new UPS. I have ordered a replacement battery (APC #RBC43) which is scheduled to arrive 9/09-11.

  16853   Sat May 14 08:36:03 2022   Chris   Update   DAQ   DAQ troubleshooting

I heard a rumor about a DAQ problem at the 40m.

To investigate, I tried retrieving data from some channels under C1:SUS-AS1 on the c1sus2 front end. DQ channels worked fine, testpoint channels did not. This pointed to an issue involving the communication with awgtpman. However, AWG excitations did work. So the issue seemed to be specific to the communication between daqd and awgtpman.

daqd logs were complaining of an error in the tpRequest function: error code -3/couldn't create test point handle. (Confusingly, part of the error message was buffered somewhere, and would only print after a subsequent connection to daqd was made.) This message signifies some kind of failure in setting up the RPC connection to awgtpman. A further error string is available from the system to explain the cause of the failure, but daqd does not provide it. So we have to guess...

One of the reasons an RPC connection can fail is if the server name cannot be resolved. Indeed, address lookup for c1sus2 from fb1 was broken:

$ host c1sus2
Host c1sus2 not found: 3(NXDOMAIN)

In /etc/resolv.conf on fb1 there was the following line:

search martian.113.168.192.in-addr.arpa

Changing this to search martian got address lookup on fb1 working:

$ host c1sus2
c1sus2.martian has address 192.168.113.87

But testpoints still could not be retrieved from c1sus2, even after a daqd restart.

In /etc/hosts on fb1 I found the following:

192.168.113.92  c1sus2

Changing the hardcoded address to the value returned by the nameserver (192.168.113.87) fixed the problem.

It might be even better to remove the hardcoded addresses of front ends from the hosts file, letting DNS function as the sole source of truth. But a full system restart should be performed after such a change, to ensure nothing else is broken by it. I leave that for another time.
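
If we do go that way, a small sanity check before removing the hardcoded entries could compare the two sources (illustrative only; it assumes the hosts file lists bare front-end names and that appending .martian forces the lookup to go through DNS rather than /etc/hosts):

import socket

with open('/etc/hosts') as f:
    for line in f:
        fields = line.split('#')[0].split()
        if len(fields) < 2 or not fields[1].startswith('c1'):
            continue                                   # only look at front-end-style names
        addr, name = fields[0], fields[1]
        try:
            dns_addr = socket.gethostbyname(name + '.martian')   # bypasses the bare-name hosts entry
        except socket.gaierror:
            dns_addr = 'unresolved'
        print(name, addr, dns_addr, '' if dns_addr == addr else '<-- mismatch')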

  16854   Mon May 16 10:49:01 2022   Anchal   Update   DAQ   DAQ troubleshooting

[Anchal, Paco, JC]

Thanks Chris for the fix. We are able to access the testpoints now, but we started facing another issue this morning; we're not sure how it is related to what you did.

  • The C1:LSC-TRX_OUT and C1:LSC-TRY_OUT channels are stuck at zero.
  • These were the channels we used until last Friday to align the interferometer.
  • These channels are routed through the c1rfm FE model (Reflected Memory model is the name, I think). These channels carry the IR transmission photodiode monitors at the two ends of the interferometer, where they are first logged into the local FEs as C1:SUS-ETMX_TRX and C1:SUS-ETMY_TRY.
  • These channels are then fed to C1:SCX-RFM_TRX -> C1:RFM_TRX -> C1:RFM-LSC_TRX -> C1:LSC-TRX, and similarly for the Y side.
  • We are able to see the channels in the end FE filter module testpoints (C1:SUS-ETMX_TRX_OUT & C1:SUS-ETMY_TRY_OUT).
  • However, we are unable to see the same signal in the c1rfm filter module testpoints like C1:RFM_TRX_IN1, C1:RFM_TRY_IN1, etc.
  • There is an IPC error shown in the CDS FE status screen for c1rfm in c1sus. But we remember seeing this red for a long time and have been ignoring it so far, as everything was working regardless.

The steps we have tried to fix this are:

  • Restart all the FE models in c1lsc, c1sus, and c1ioo (without restarting the computers themselves), and then burt restore.
  • Restart all the FE models in c1iscex and c1iscey (only the c1iscey computer was restarted), and then burt restore.

These above steps did not fix the issue. Since we have the testpoints (C1:SUS-ETMX_TRX_OUT & C1:SUS-ETMY_TRY_OUT) for now to monitor the transmission levels, we are going ahead with our upgrade work without resolving this issue. Please let us know if you have any insights.
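
One way to localize where the chain breaks would be to average each point along it (a sketch only, assuming the cdsutils python package can reach these testpoints from a workstation; the channel names are the ones written above):

import cdsutils

chain = ['C1:SUS-ETMX_TRX_OUT', 'C1:RFM_TRX_IN1', 'C1:LSC-TRX_OUT']
for chan in chain:
    print(chan, cdsutils.avg(5, chan))    # 5 s average at each point along the chain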

  16855   Mon May 16 12:59:27 2022   Chris   Update   DAQ   DAQ troubleshooting

It looks like the RFM problem started a little after 2am on Saturday morning (attachment 1). It’s subsequent to what I did, but during a time of no apparent activity, either by me or others.

The pattern of errors on c1rfm (attachment 2) looks very much like this one previously reported by Gautam (errors on all IRFM0 ipcs). Maybe the fix described in Koji’s followup will work again (involving hard reboots).

Attachment 1: timeseries.png
Attachment 2: err.png
  280   Mon Jan 28 15:11:38 2008   rob   HowTo   DMF   running compiled matlab DMF tools

I compiled Rana's seisBLRMS monitor, and it's now running on mafalda. To start your own DMF tools, here is a procedure:

1) build your tool in mDV, get it working the way you'd like.
2) Make a new directory /cvs/cds/caltech/apps/DMF/compiled_matlab/{your_new_directory} and copy the *.m file there.
3) Make the *.m in your new directory into a function with no args (just add a function line at the top)
4) compile it (from within a fully mDV-configured matlab) with mcc -m -R -nojvm {yourfile}.m at the matlab command line.
5) add a line corresponding to your new tool to the script /cvs/cds/caltech/apps/DMF/scripts/start_all
6) Run the start_all script referenced in part (5).

NB: Steps (4) and (6) must be carried out on mafalda.
  282   Mon Jan 28 18:56:47 2008   rana   Update   DMF   seisBLRMS 1.0
I made all of the updates I alluded to before:

  • Expanded the dmf.db file on c1aux to include all accelerometers and the seismometer.
  • Added the channels to the C0EDCU.ini file and restarted the framebuilder daqd.
  • Modified seisBLRMS.m to use .conf files for the channels. The .conf files are now residing in the compiled matlab directory that Rob made.

I still have yet to compile and test the new version. It's running on linux3 right now, but feel free to kill it and compile it to run on Mafalda.

It should be making trends overnight and so we can finally see what the undergrads are really up to.
  284   Tue Jan 29 14:56:39 2008   rob   Update   DMF   seisBLRMS 1.0

The seisBLRMS 1.0 program crashed at ~7:20 pm last night, so we didn't get data from overnight. It crashed when framecaching failed. I added
a try-catch-end statement around the call to dttfft2 to let the program survive this, then compiled it and started it on mafalda. After ~45 minutes, the compiled version encountered the same error, and while it didn't crash per se, after 20 minutes it still wasn't able to read data. We may have to dig deeper into the guts of mDV to make this stuff run more robustly.
  287   Wed Jan 30 20:39:31 2008   rob   Update   DMF   seisBLRMS


In order to reduce the probability of seisBLRMS crashing due to unavailability of data, I edited seisBLRMS.m so that it displays data from 6 minutes in the past, rather than 3. After compiling, this version ran for ~8 hours without crashing. I've killed the process now because it seems to interfere with alignment scripts that use ezcademod, causing "DATA RECEIVING ERROR 4608" messages. These don't cause ezcademod to crash, but so many of them are spit out that the scripts don't work very well. I guess running DMF constantly is just making the framebuilder work too hard with disk access. In the near term, we can maybe work around this by having DMF programs check the AutoDithering bit of the IFO state vector, and just not try to get data when we're running these sorts of scripts.
  291   Fri Feb 1 12:37:39 2008   rob   Update   DMF   seisBLRMS trends

Here are DV trends of the output of seisBLRMS over the last ~36 hours (which is how long it's been running), and another of the last 2 hours (which shows the construction crew taking what appears to be a lunch break).
Attachment 1: seis36hours.png
Attachment 2: seis2hours.png
  312   Tue Feb 12 16:34:07 2008   rob   DAQ   DMF   seisBLRMS 1.1
The compiled version of seisBLRMS had been running ~2 weeks without crashing as of last night, when I killed it
so it wouldn't interfere with alignment scripts.  I added an EPICS channel C1:DMF-ENABLE, and updated the DMF
executables to check this channel while running.  So far it seems to work.  When you're running alignment scripts,
simply click the DISABLE button on the C1DMF_MASTER.adl screen, and then re-ENABLE when the scripts finish.
It's not clear why this is necessary though.  Theories include: the constant disk access keeping the
framebuilder busy, reducing its ability to deal with ezcademod commands; and DMF programs flooding the
network with so much traffic that ezcademod-related packets run late and get ignored.

Also, for reasons of aesthetics, I changed the data delay from 6 minutes to 5 minutes.  We'll see if that's enough.
  316   Thu Feb 14 15:04:53 2008   rob   DAQ   DMF   DMF delay

Some time ago I edited seisBLRMS to keep track of how long it was taking to write RMS data (that is, the delay between the accelerometer data and the write of the EPICS RMS data). Here's a plot of that info, showing how the delay increases over time. I think this indicates a logical flaw in the timing of the seisBLRMS program, which sort of relies on everything running well consistently; this should not be difficult to fix. I'll maybe try increasing the delay to ~10 minutes, and making it relatively inflexible.
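
The "relatively inflexible" delay could look something like the loop below (an illustrative python sketch only; seisBLRMS itself is a matlab tool, and the fixed numbers are just the ones mentioned above). Keying each data window off the wall clock, rather than off the end of the previous window, keeps the delay from creeping upward:

import time

LAG, STRIDE = 600, 60                 # fixed ~10 min lag, one minute of data per pass

def process(start, stop):
    pass                              # placeholder: fetch data, compute BLRMS, write EPICS

while True:
    stop = time.time() - LAG          # window is always keyed off "now", so the delay
    start = stop - STRIDE             # stays fixed even if one iteration runs long
    process(start, stop)
    time.sleep(STRIDE)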
Attachment 1: DMFdelay.png
  317   Thu Feb 14 15:05:18 2008   rob   DAQ   DMF   seisBLRMS 1.1
> 
> Also, for reasons of aesthetics, I changed the data delay from 6 minutes to 5 minutes.  We'll see if that's enough.

5 minutes didn't work.  
  390   Fri Mar 21 17:01:21 2008   rana   Configuration   DMF   Locale change on Mafalda & seisBLRMS restart
Ever since we moved the accelerometers to be around the MC and changed their names, the seisBLRMS
has not been working. I tried to restart it today after fixing the channel names in the par file
but I ran into a PERL / UBUNTU bug.
perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
        LANGUAGE = (unset),
        LC_ALL = (unset),
        LC_TIME = "en_US.ISO8859-1",
        LC_MONETARY = "en_US.ISO8859-1",
        LC_CTYPE = "en_US.ISO8859-1",
        LC_COLLATE = "en_US.ISO8859-1",
        LC_MESSAGES = "C",
        LC_NUMERIC = "en_US.ISO8859-1",
        LANG = "en_US.UTF-8"
    are supported and installed on your system.
perl: warning: Falling back to the standard locale ("C").

I don't know how this crept up or when it started. There were a bunch of fixes on the Ubuntu
forums which didn't work.

In the end I just set the 'unset' environment variables via our cshrc.40m file and this seemed
to make ligotools/perl happy. Let's hope this lasts... I love Linux.
  392   Fri Mar 21 23:17:47 2008   rana   Configuration   DMF   seisBLRMS restarted
I updated the seisBLRMS par file with the new channel names of the accelerometers and the seismometer and then recompiled the code and restarted it according to Rob's elog entry. It went fine and the seisBLRMS is now back in action.
  1230   Thu Jan 15 22:30:32 2009   rana   Configuration   DMF   DMF start script
I tried to restart the DMF using the start_all script: http://dziban.ligo.caltech.edu:40/40m/280

It didn't work.
  1232   Fri Jan 16 11:33:59 2009   rob   Configuration   DMF   DMF start script

Quote:
I tried to restart the DMF using the start_all script: http://dziban.ligo.caltech.edu:40/40m/280

It didn't work.


It should work soon. The PATH on mafalda does not include ".", so I added a line to the start_DMF subscript, which sets up the DMF ENV, to prepend this to the path before starting the tools. I didn't put it in the primary login path (such as in the .cshrc file) because Steve objects on philosophical grounds.

Also, the epics tools in general (such as tdsread) on mafalda were not working, due to PATH shenanigans and missing caRepeaters. Yoichi is harmonizing it.
  1359   Thu Mar 5 01:09:29 2009   rana, alberto   Update   DMF   still not working
We tried to run DMF on mafalda, but it didn't work. I tried to compile it using Rob's elog instructions.
On mafalda, I started matlab and ran the mdv_config.m to set up mDV. I tested that the seisBLRMS.m
script ran and correctly produced changes in the seisBLRMS strip tool, but when I tried to compile it I got:
>> mcc -v -m -R -nojvm seisBLRMS.m
Warning: Duplicate directory name: /cvs/cds/caltech/apps/linux/matlab/toolbox/local.
Compiler version: 4.6 (R2007a)
Processing /cvs/cds/caltech/apps/linux/matlab/toolbox/matlab/mcc.enc
Processing /cvs/cds/caltech/apps/linux/matlab/toolbox/signal/mcc.enc
Processing /cvs/cds/caltech/apps/linux/matlab/toolbox/control/mcc.enc
Processing /cvs/cds/caltech/apps/linux/matlab/toolbox/filterdesign/mcc.enc
Processing /cvs/cds/caltech/apps/linux/matlab/toolbox/shared/controllib/mcc.enc
Processing /cvs/cds/caltech/apps/linux/matlab/toolbox/ident/mcc.enc
Warning: an error occurred while parsing class FilterDesignDialog.AbstractEditor:
Undefined function or variable 'DAStudio.Object'.

> In /cvs/cds/caltech/apps/linux/matlab/toolbox/shared/filterdesignlib/@FilterDesignDialog/@CoeffEditor/schema.p>schema at 9
Warning: an error occurred while parsing class FilterDesignDialog.CoeffEditor:
Invalid superclass handle.

Processing /cvs/cds/caltech/apps/linux/matlab/toolbox/fixedpoint/mcc.enc
terminate called after throwing an instance of 'ApplicationRedefinedException*'
Abort (core dumped)
"/cvs/cds/caltech/apps/linux/matlab/bin/mcc" -E "/tmp/fileRnU5Qj_31324": Aborted
??? Error executing mcc, return status = 134.

In the meantime, I've started up a green terminal on allegra which is ssh'd into megatron.
On megatron there is a regular matlab session which is running seisBLRMS.m as a matlab script
and the seis DMF channels are getting updated.
  1379   Mon Mar 9 19:33:10 2009   rana   Update   DMF   seisBLRMS in temp condition

The seisBLRMS has been running on megatron via an open terminal ssh'd into there from allegra with matlab running. This is because I couldn't get the compiled matlab functionality to work.

Even so, this running script has been dying lately because of some bogus 'NDS' error. So for today I have set the NDS server for mDV on megatron to be fb40m:8088 instead of nodus.ligo.caltech.edu. If this seems to fix the problem I will make this permanent by putting in a case statement to check whether or not the mDV'ing machine is a 40m-martian or not.

Attachment 1: Untitled.png
  1392   Thu Mar 12 00:29:39 2009   Jenne   Omnistructure   DMF   DMF being whiny again

Quote:
The seisBLRMS has been running on megatron via an open terminal ssh'd into there from allegra with matlab running.


[Yoichi, Jenne]

seisBLRMS was down again. I assumed it was just because the DMF Master Enable was in the 'Disabled' state, but enabling it didn't do the trick. Rana's green terminal window was complaining about not being able to find nodus.ligo.caltech.edu. Yoichi and I stopped it, closed and restarted Matlab, ran mdv_config, then ran seisBLRMS again, and it seems happy now.

On the todo list still is making the DMF / seisBLRMS stuff happy all the time.
  1400   Fri Mar 13 19:26:09 2009   Yoichi   Update   DMF   seisBLRMS compiled

 I compiled seisBLRMS.

The tricks were the following:

(1) Don't add paths in a deployed command.
It does not make sense to add paths in a compiled command because it may be moved anywhere. Moreover, it can cause some weird side effects. Therefore, I enclosed the addpath part of mdv_config.m in an "if ~isdeployed ... end" clause to avoid adding paths when deployed. Instead of adding paths in the code, we have to add paths to necessary files with -I options at compilation time. This way, mcc will add all the necessary files into the CTF archive.

(2) Add mex files to the CTF archive by -a options.
For some reason, mcc does not add necessary mex files into the CTF archive even though those files are called in the m-file which is being compiled. We have to add those files by -a options.

(3) NDS_GetData() is slow for nodus when compiled.
NDS_GetData(), which is called by get_data(), stops for a few minutes when using nodus as the NDS server.
This problem does not happen when not compiled. I don't know the reason. To avoid this, I modified seisBLRMS.m so that when an environment variable $NDS is defined, it will use the NDS server defined in this variable.

I wrote a Makefile to compile seisBLRMS. You can read the file to see the details of the tricks.
I also wrote a script start_seisBLRMS, which can be found in /cvs/cds/caltech/apps/DMF/compiled_matlab/seisblrms/. To start seisBLRMS, you can just call this script.
At this moment, seisBLRMS is running on megatron. Let's see if it continues to run without crashing.

Quote:

The seisBLRMS has been running on megatron via an open terminal ssh'd into there from allegra with matlab running. This is because I couldn't get the compiled matlab functionality to work.

Even so, this running script has been dying lately because of some bogus 'NDS' error. So for today I have set the NDS server for mDV on megatron to be fb40m:8088 instead of nodus.ligo.caltech.edu. If this seems to fix the problem I will make this permanent by putting in a case statement to check whether or not the mDV'ing machine is a 40m-martian or not.

  1416   Sun Mar 22 22:47:58 2009   rana   Update   DMF   seisBLRMS compiled but still dying
Looks like seisBLRMS was restarted ~1 AM Friday morning but only lasted for 5 hours. I just restarted it on megatron;
let's see how it does. I'm not optimistic.
  1484   Wed Apr 15 02:20:46 2009   rana, yoichi   Update   DMF   DMF now working copy

We found that DMF/ was not an SVN working copy, so I wiped out the SVN version, imported the on-disk copy, moved it to DMFold/ and then checked out the SVN version.

We can delete DMFold/ whenever we are happy with the SVN copy.

  1485   Wed Apr 15 03:52:27 2009   rana   Update   DMF   NDS client32 updated for DMF
Since our seisBLRMS.m complains about 'can't find hostname' after a few hours, even though matlab is able to ping fb40m, I have recompiled the NDS mex client for 32-bit linux on mafalda and stuck it into the nds_mexs/ directory. This time I compiled using the 'gcc' compiler instead of the 'ANSI C' compiler that is recommended in the README (which, I notice, is now missing from Ben Johnson's web page!). Let's see how long this runs.

  1977   Tue Sep 8 19:36:52 2009   Jenne   Omnistructure   DMF   DMF restarted

I (think I) restarted DMF.  It's on Mafalda, running in matlab (not the compiled version which Rana was having trouble with back in the day).  To start Matlab, I did "nohup matlab", ran mdv_config, then started seisBLRMS.m running.  Since I used nohup, I then closed the terminal window, and am crossing my fingers in hopes that it continues to work.  I would have used Screen, but that doesn't seem to work on Mafalda.
