  14320   Mon Nov 26 21:58:08 2018   Jon | Omnistructure | Serial Vacuum Signals

All the serial vacuum signals are now interfaced to the new digital controls system. A set of persistent Python scripts will query each device at regular intervals (up to ~10 Hz) and push the readings to soft channels hosted by the modbus IOC. Similar scripts will push on/off state commands to the serial turbo pumps.
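As a rough illustration of how one of these poller scripts is structured (a sketch only: the device address, query string, parsing, and channel name below are hypothetical placeholders, not the actual clients, which live in the vacpython repo referenced in later entries), assuming pyepics and a gauge exposed as a TCP port on the IOLAN terminal server:

import socket
import time
from epics import caput  # pyepics

DEVICE_ADDR = ("192.168.114.11", 4001)    # hypothetical TCP port mapped by the terminal server
CHANNEL = "C1:Vac-P1a_pressure"           # hypothetical soft channel hosted by the modbus IOC
QUERY = b"PR1?\r"                         # hypothetical query; real syntax is device-specific

def read_device():
    # Open a TCP connection to the terminal server, send one query, return the reply string
    with socket.create_connection(DEVICE_ADDR, timeout=2) as sock:
        sock.sendall(QUERY)
        return sock.recv(1024).decode(errors="replace")

def parse_reading(reply):
    # Device-specific parsing; here we pretend the reply is just a numeric string
    return float(reply.strip())

if __name__ == "__main__":
    while True:
        try:
            caput(CHANNEL, parse_reading(read_device()))
        except (OSError, ValueError):
            pass  # a real client would log the failure and flag the channel as invalid
        time.sleep(0.1)  # ~10 Hz polling, matching the rate quoted above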

IP Addresses/Comm Settings

Each serial device is assigned an IP address on the local subnet as follows. Its serial communication parameters as configured in the terminal server are also listed.

Device | IP Address | Baud Rate | Data Bits | Stop Bits | Parity
MKS937a vacuum gauge controller | 192.168.114.11 | 9600 | 8 | 1 | even
MKS937b vacuum gauge controller | 192.168.114.12 | 9600 | 8 | 1 | even
GP307 vacuum gauge controller | 192.168.114.13 | 9600 | 8 | 1 | even
GP316a vacuum gauge controller | 192.168.114.14 | 9600 | 8 | 1 | even
GP316b vacuum gauge controller | 192.168.114.15 | 9600 | 8 | 1 | even
N2 pressure line gauge | 192.168.114.16 | 9600 | 7 | 1 | odd
TP2/TP3 | 192.168.114.17 / 192.168.114.18 | 9600 | 8 | 1 | none

Hardware Modifications

  • Each of the five vacuum gauge controllers has an RJ45 adapter installed directly on its DB9/DB25 output port. Because the RJ45 cable now plugs directly into the terminal server, instead of passing through some additional adapters as it formerly did, it was necessary to reverse the wiring of the controller TXD and RXD pins to Ethernet pins. The DB9/25-to-RJ45 adapters on the back of the controllers are now wired as follows.
    • For the MKS controllers: DB2 (RXD) --> Eth4;  DB3 (TXD) --> Eth5;  DB5 (RTN) --> Eth6
    • For the Granville-Phillips controllers: DB2 (TXD) --> Eth5;  DB3 (RXD) --> Eth4;  DB7 (RTN) --> Eth6
  • I traced a communications error with the GP307 gauge controller all the way back to what I would have suspected least, the controller itself. The comm card inside each controller has a set of mechanical relay switches which set the communications parameters (baud rate, parity, etc.). Knowing that this controller was not part of the original installation, but was swapped in to replace the original in 2009, I pulled the controller from the rack and checked the internal switch settings. Sure enough, the switch settings (pictured below) were wrong. In the background of the photo is the unit removed in 2009, which has the correct settings. After setting the correct communications parameters, the controller immediately began communicating with the server. Did these readouts (PRP, PTP1) never work since 2009? I don't see how they could.
Attachment 1: GP307_relays.jpeg
  14325   Fri Nov 30 15:53:52 2018   Jon | Omnistructure | General | N2 pressure gauge fix

I've made a repair to the N2 pressure monitor. I don't believe the polarity of the analog signal into the controller actually was reversed. I found the data sheet (attached) for the transducer model we have installed. Its voltage should read ~0 at 0 PSI and 100mV at 100 PSI. As wired, the input voltage reads +80 mV as it should.

The controller calibrates the sensor voltage to PSI (i.e., applies a scale and offset) based on two settable reference points which appeared to be incorrect. I changed them to:

  1. 0 mV = 0 PSI (neglecting the small dark bias)
  2. 100 mV = 100 PSI

After the change, the pressure reads 80 PSI. Let's see if the time history now shows a sensible trend. 

Quote:

[koji, gautam, jon, steve]

  • We suspect the analog voltage from the N2 pressure gauge is connected to the interfacing Omega controller with the 'wrong' polarity (i.e., the pressure rises over ~4 days and then rapidly falls instead of the other way around). This should be fixed.

 

Attachment 1: N2_transducer_datasheet.pdf
  14327   Sun Dec 2 16:08:44 2018   Jon | Omnistructure | Upgrade | Feedthroughs for Vacuum Acromag Chassis

Below is an inventory of the signal feedthroughs that need to be installed on the vacuum Acromag crate this week.

Type | Qty | Connects to | # Chs | Signals
DB-37 female | 1 | Main AC relay box | 18 | Valve/roughing pump control
DB-9 female | 5** | Satellite AC relay boxes | 3-4/box | Valve control
DB-25 male | 1 | Turbo pump 1 controller | 5 | Pump status readbacks
DB-9 male | 30 | Valve position indicators | 2/valve | Valve position readbacks
DB-9 male | 3 | Roughing pump controllers | 1/pump | Pump status readbacks
DB-9 male | 1 | Cryo pump controller | 2 | Pump status readbacks

**The original documentation lists five satellite boxes (one for each test mass chamber and one for the beamsplitter chamber), but Chub reports not all of them are in use. We may remove the ones not used.

  14330   Tue Dec 4 10:38:12 2018   Jon | Omnistructure | Upgrade | Updated Feedthrough List for Vacuum Acromag Chassis

Based on new input from Chub, attached is the revised list of signal cable feedthroughs needed on the vacuum system Acromag crate. I believe this list is now complete.

Attachment 1: acromag_chassis_feedthroughs.pdf
  14333   Thu Dec 6 17:33:33 2018   Jon | Omnistructure | General | N2 line disconnected

I believe I finally have the N2 gauge working correctly. The wiring is unchanged from its original state and the controller has been recalibrated.

After letting the line pressure drop to 0 PSI as indicated by the analog gauge in the drill-press room, I recorded the number of counts read by the Omega controller. Then I pressurized the line to 80 PSI, again as indicated by the analog gauge, and recorded the Omega counts again. I entered these two reference points into the controller (which automatically determines the gain and offset from them), then confirmed that its readings agree with the analog gauge as I varied the line pressure.

The two reference points are:

0 PSI  :  13 counts
80 PSI : 972 counts
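As a sanity check of the linear map these two points imply (assuming the controller simply interpolates between them):

# Two-point calibration implied by the reference points above
counts_lo, psi_lo = 13, 0
counts_hi, psi_hi = 972, 80

scale = (psi_hi - psi_lo) / (counts_hi - counts_lo)   # ~0.083 PSI per count
offset = psi_lo - scale * counts_lo                   # ~-1.1 PSI

def counts_to_psi(counts):
    return scale * counts + offset

print(counts_to_psi(972))   # 80.0 PSI by construction
print(counts_to_psi(500))   # ~40.6 PSI, an intermediate point to compare against the analog gauge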

 

Quote:

[jon, gautam]

In the latest installment in this puzzler: turns out that maybe the trend of the "N2 pressure" channel increasing over the ~3 day timescale it takes a cylinder of N2 to run out is real, and is a feature of the way our two N2 cylinder lines/regulators are setup (for the automatic switching between cylinders when one runs out). In order to test this hypothesis, we'd like to have the line pressure be 0 initially, and then just have 1 cylinder hooked up. 

 

  14348   Wed Dec 12 18:27:07 2018   Jon | Omnistructure | Upgrade | Analog signals, A/D Acromag added to vacuum system

There turned out to be a few analog signals for the vacuum system after all. The TP2/3 foreline pressure gauges were never part of the digital system, but we wanted to add them, as some of the interlock conditions should be predicated on their readings. Each gauge connects to an old Granville-Phillips 375 controller which only has an analog output. Interfacing these signals with the new system required installing an Acromag XT1221 8-channel A/D unit. Taking advantage of the extra channels, I also moved the N2 delivery line pressure transducer to the XT1221, eliminating the need for its separate Omega DPiS32 controller. When the new high-pressure transducers are added to the two N2 tanks, their signals can also be connected.

The XT1221 is mounted on the DIN rail inside the chassis and I have wired a DB-9 feedthrough for each of its three input signals. It is assigned the IP 192.168.114.27 on the vacuum subnet. Testing the channels in situ revealed a subtlety in calibrating them to physical units. It was first encountered by Johannes in a series of older posts, but I repeat it here in one place.

An analog-input EPICS channel can be calibrated from raw ADC counts to physical units (e.g., sensor voltage) in two ways:

  1. Via LINR="LINEAR" by setting the engineering-units fields EGUF="[V_max_adc]", EGUL="[V_min_adc]"
  2. Via LINR="NO CONVERSION" by manually setting the gain ASLO="[V/count]" and offset AOFF="[V_offset]"

From the documentation, under the engineering-units method EPICS internally computes:

    calibrated value = (measured A/D counts) x (EGUF - EGUL) / (full scale A/D counts) + EGUL
where EGUF="eng units full scale", EGUL="eng units low", and "full scale A/D counts" is the full range of ADC counts. EPICS automatically infers the range of ADC counts based on the data type returned by the ADC. For a 16-bit ADC like the XT1221, this number is 2^16 = 65,536.

The problem is that, for unknown reasons, the XT1221 rescales its values post-digitization to lie within the range +/-30,000 counts. This corresponds to an actual "full scale A/D counts" = 60,001. If a multiplicative correction factor of 65,536/60,000 is absorbed into the values of EGUF and EGUL, then the first term in the above summation can be corrected. However, the second term (the offset) has no dependence on "full scale A/D counts" and should NOT absorb a correction factor. Thus adjusting the EGUF and EGUL values from, e.g., 10V to 10.92V is only correct when EGUL=0V. Otherwise there is a bias introduced from the offset term also being rescaled.

The generally correct way to handle this correction is to use the manual "NO CONVERSION" method. It constructs calibrated values by simply applying a specified gain and offset to the raw ADC counts:

calibrated val = (measured A/D counts)  x ASLO + AOFF

The gain ASLO="[(V_max_adc - V_min_adc) / 60,001]" and the offset AOFF="0". I have tested this on the three vacuum channels and confirmed it works. Note that if the XT1221 input voltage range is restricted from its widest +/-10V setting, the number of counts is not necessarily 60,001. Page 42 of the manual gives the correct counts for each voltage setting.
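As a quick numerical check of the claim above (using the engineering-units formula quoted earlier, with made-up example numbers for a channel whose EGUL is nonzero):

# Rescaling EGUF and EGUL by k = 65,536/60,001 fixes the slope term but also
# (incorrectly) rescales the offset term, leaving a bias of (k - 1)*EGUL when EGUL != 0.
k = 65536 / 60001

def eng_units(counts, eguf, egul, full_scale):
    # The LINEAR conversion quoted above
    return counts * (eguf - egul) / full_scale + egul

eguf, egul = 10.0, -10.0      # example channel spanning -10 V to +10 V
counts = 20000                # some raw reading

correct = eng_units(counts, eguf, egul, full_scale=60001)           # uses the true count span
rescaled = eng_units(counts, k * eguf, k * egul, full_scale=65536)  # the EGUF/EGUL "fix"

print(correct)    # ~-3.33 V
print(rescaled)   # ~-4.26 V, i.e. biased by (k - 1)*EGUL ~ -0.92 V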

  14372   Thu Dec 20 08:38:27 2018   Jon | Update | VAC | Pumpdown tomorrow

Linked is the pumpdown procedure, contained in the old 40m documentation. The relevant procedure is "All Off --> Vacuum Normal" on page 11.

Quote:

I just spoke to Jon who asked me to make this elog - we will be ready to test one or more parts of the pumpdown procedure tomorrow (12/20), so we should proceed as planned to put the heavy doors back on the EY and OMC chambers at 9am tomorrow morning. Jon will circulate a more detailed procedure for the pumpdown steps later this evening.

 

  14375   Thu Dec 20 21:29:41 2018   Jon | Omnistructure | Upgrade | Vacuum Controls Switchover Completed

[Jon, Chub, Koji, Gautam]

Summary

Today we carried out the first pumpdown with the new vacuum controls system in place. It performed well. The only problem encountered was with software interlocks spuriously closing valves as the Pirani gauges crossed 1E-4 torr. At that point their readback changes from a number to "L OE-04, " which the system interpreted as a gauge failure instead of "<1E-4." This posed no danger and was fixed on the spot. The main volume was pumped to ~10 torr using roughing pumps 1 and 3. We were limited only by time, as we didn't get started pumping the main volume until after 1pm. The three turbo pumps were also run and tested in parallel, but were isolated to the pumpspool volume. At the end of the day, we closed every pneumatic valve and shut down all five pumps. The main volume is sealed off at ~10 torr, and the pumpspool volume is at ~1e-6 torr. We are leaving the system parked in this state for the holidays. 

Main Volume Pumpdown Procedure

In pumping down the main volume, we carried out the following procedure.

  1. Initially: All valves closed (including manual valves RV1 and VV1); all pumps OFF.
  2. Manually connected roughing pump line to pumpspool via KF joint.
  3. Turned ON RP1 and RP3.
  4. Waited until roughing pump line pressure (PRP) < 0.5 torr.
  5. Opened V3.
  6. Waited until roughing pump line pressure (PRP) < 0.5 torr.
  7. Manually opened RV1 throttling valve to main volume until pumpdown rate reached ~3 torr/min (~3 hours on roughing pumps).
  8. Waited until main volume pressure (P1a/P1b) < 0.5 torr.

We didn't quite reach the end of step 8 by the time we had to stop. The next step would be to valve out the roughing pumps and to valve in the turbo pumps.

Hardware & Channel Assignments

All of the new hardware is now permanently installed in the vacuum rack. This includes the SuperMicro rack server (c1vac), the IOLAN serial device server, a vacuum subnet switch, and the Acromag chassis. Every valve/pump signal cable that formerly connected to the VME bus through terminal blocks has been refitted with a D-sub connector and screwed directly onto feedthroughs on the Acromag chassis.

The attached pdf contains the master list of assigned Acromag channels and their wiring.

Attachment 1: 40m_vacuum_acromag_channels.pdf
  14383   Fri Jan 4 10:25:19 2019   Jon | Omnistructure | VAC | N2 line valved off

Yes, for TP2 and TP3. They both have a small vent valve that opens automatically on shutdown.

Quote:

Independent question: Are all the turbo forelines vented automatically? We manually did it for the main roughing line.

 

 

  14384   Fri Jan 4 11:06:16 2019   Jon | Omnistructure | Upgrade | Vac System Punchlist

The base Acromag vacuum system is running and performing nicely. Here is a list of remaining questions and to-do items we still need to address.

Safety Issues

  • Interlock for HV supplies. The vac system hosts a binary EPICS channel that is the interlock signal for the in-vacuum HV supplies. The channel value is OFF when the main volume pressure is in the arcing range, 3 mtorr - 500 torr, and ON otherwise. Is there something outside the vacuum system monitoring this channel and toggling the HV supplies?
  • Exposed 30-amp supply terminals. The 30-amp output terminals on the back of the Sorensen in the vac rack are exposed. We need a cover for those.
  • Interlock for AC power loss. The current vac system is protected only from transient power glitches, not an extended loss. The digital system should sense an outage and put the IFO into a safe state (pumps spun down and critical valves closed) before the UPS battery is fully drained. However, it presently has no way of sensing when power has been lost---the system just continues running normally on UPS power until the battery dies, at which point there is a sudden, uncontrolled shutdown. Is it possible for the digital system to communicate directly with the UPS to poll its activation state?

Infrastructure Improvements

  • Install the new N2 tank regulator and high-pressure transducers (we have the parts; on desk across from electronics bench). Run the transducer signal wires to the Acromag chassis in the vacuum rack.
  • Replace the kludged connectors to the Hornet and SuperBee serial outputs with permanent ones (we need to order the parts).
  • Wire the position indicator readback on the manual TP1 valve to the Acromag chassis.
  • Add cable tension relief to the back of the vac rack.
  • Add the TP1 analog readback signals (rotation speed and current) to the digital system.  Digital temperature, current, voltage, and rotation speed signals have already been added for TP2 and TP3.
  • Set up a local vacuum controls terminal on the desk by the vac rack.
  • Remove gauges from the EPICS database/MEDM screens that are no longer installed or functional. Potential candidates for removal: PAN, PTP1, IG1, CC2, CC3, CC4.
  • Although it appeared on the MEDM screen, the RGA was never interfaced to the old vac system. Should it be connected to c1vac now?
  14387   Mon Jan 7 11:54:12 2019   Jon | Configuration | Computer Scripts / Programs | Vac system shutdown

I'm making a controlled shutdown of the vac controls to add new ADC channels. Will advise when it's back up.

  14388   Mon Jan 7 19:21:45 2019   Jon | Configuration | Computer Scripts / Programs | Vac system shutdown

ADC work finished for the day. The vac controls are back up, with all valves CLOSED and all pumps OFF.

Quote:

I'm making a controlled shutdown of the vac controls to add new ADC channels. Will advise when it's back up.

 

  14390   Tue Jan 8 19:13:39 2019   Jon | Update | Upgrade | Ready for pumpdown tomorrow

Everything is set for a second pumpdown tomorrow. We'll plan to start pumping after the 1pm meeting. Since the main volume is already at 12 torr, the roughing phase won't take nearly as long this time.

I've added new channels for the TP1 analog readings (current and speed) and for the two N2 tank pressure readings. Chub finished installing the new regulator and has run the transducer signal cable to the vacuum rack. In the morning he will terminate the cable and make the final connection to the Acromag.

Gautam and I updated the framebuilder config file, adding the newly-added channels to the list of those to be logged. We also set up a git repo containing all of the Python interlock/interfacing code: https://git.ligo.org/40m/vacpython. The idea is to use the issue tracker to systematically document any changes to the interlock code.

  14393   Wed Jan 9 20:01:25 2019   Jon | Update | VAC | Second pumpdown completed

[Jon, Koji, Chub, Gautam]

Summary

The second pumpdown with the new vacuum system was completed successfully today. A time history is attached below.

We started with the main volume still at 12 torr from the Dec. pumpdown. Roughing from 12 to 0.5 torr took approximately two hours, at which point we valved out RP1 and RP3 and valved in TP1 backed by TP2 and TP3. We additionally used the AUX dry pump connected to the backing lines of TP2 and TP3, which we found to boost the overall pump rate by a factor of ~3. The manual hand-crank valve directly in front of TP1 was used to throttle the pump rate, to avoid tripping software interlocks. If the crank valve is opened too quickly, the pressure differential between the main volume (TP1 intake) and TP1 exhaust becomes >1 torr, tripping the V1 valve-close interlock. Once the main volume pressure reached 1e-2 torr, the crank valve could be opened fully.

We allowed the pumpdown to continue until reaching 9e-4 torr in the main volume. At this point we valved off the main volume, valved off TP2 and TP3, and then shut down all turbo pumps/dry pumps. We will continue pumping tomorrow under the supervision of an operator. If the system continues to perform problem-free, we will likely leave the turbos pumping on the main volume and annuli after tomorrow.

New Vac Control Station

We installed a local controls terminal for the vacuum system on the desk in front of the vacuum rack (pictured below). This console is connected directly to c1vac and can be used to monitor/control the system even during a network outage or power failure. The entire pumpdown was run from this station today.

To open a controls MEDM screen, open a terminal and execute the alias

$control

Similarly, to open a monitor-only MEDM screen, execute the alias

$monitor
Attachment 1: Screenshot_from_2019-01-09_20-00-39.png
Attachment 2: IMG_3088.jpg
  14396   Thu Jan 10 19:59:08 2019   Jon | Update | VAC | Vac System Running Normally on Turbo Pumps

[Jon, Gautam, Chub]

Summary

We continued the pumpdown of the IFO today. The main volume pressure has reached 1.9e-5 torr and is continuing to fall. The system has performed without issue all day, so we'll leave the turbos continuously running from here on in the normal pumping configuration. Both TP2 and TP3 are currently backing for TP1. Once the main volume reaches operating pressure, we can transition TP3 to pump the annuli. They have already been roughed to ~0.1 torr. At that point the speed of all three turbo pumps can also be reduced. I've finished final edits/cleanup of the interlock code and MEDM screens.

Python Code

All the python code running on c1vac is archived to the git repo: 

https://git.ligo.org/40m/vacpython

This includes both the interlock code and the serial device clients for interfacing with gauges and pumps.

MEDM Monitor/Control

We're still using the same base MEDM monitor/control screens, but they have been much improved:

  • Valves now light up in red when they are open. This makes it much easier to see at a glance what is valved in/out.
  • Every pump in the system (except CP1) is now digitally controlled from the MEDM control screen. No more need to physically push any buttons in the vacuum rack. 👍
  • The turbo pumps now show additional diagnostic readouts: speed (TP1/2/3), temperature (TP2/3), current draw (TP1/2/3), and voltage (TP2/3).
  • The foreline pressure gauge readouts for TP2/3 have been added to the digital system.
  • The two new main volume gauges, Hornet and SuperBee, have been added to the digital system as well.
  • New transducers have been added to read back the two N2 tank pressures.
  • The interlock code generates a log file of all its actions. A field in the MEDM screens specifies the location of the log file.
  • A tripped interlock (appearing as a message in the "Error message" field) must be manually cleared via the "Clear error message" button on the control screen before the system will accept any more manual valve input.

Note: The apparent glitches in the pressure and TP diagnostic channels are due to the interlock system being taken down to implement some of these changes.

Attachment 1: Screen_Shot_2019-01-10_at_7.58.24_PM.png
Attachment 2: CCs.png
Attachment 3: TPs.png
  14410   Sun Jan 20 23:41:00 2019   Jon | Omnistructure | VAC | Notes on vac serial comm, adapter wiring

I've attached my handwritten notes covering all the serial communications in the vac system, and the relevant wiring for all the adapters, etc. I'll work with Chub to produce final documentation, but in the meantime this may be a useful reference.

Attachment 1: Jon_wiring_notes.tar.gz
  14453   Thu Feb 14 18:16:24 2019   Jon | Update | VAC | Vacromag failure

I sent Gautam instructions to first try stopping the modbus service, power cycling the Acromag chassis, then restarting the service. I've seen the Acromags go into an unresponsive state after a strong electrical transient or shorted signal wires, and the unit has to be power cycled to be reset.

If this doesn't resolve it, I'll come in tomorrow to help with the Acromag replacement. We have plenty of spares.

Quote:

[chub, gautam]

Summary:

One of the XT1111 units (XT1111a) in the new vacuum system has malfunctioned. So all valves are closed, PSL shutter is also closed, until this is resolved.

 

  14456   Fri Feb 15 11:58:45 2019   Jon | Update | VAC | Vac system is back up

The problem encountered with the vac controls was indeed resolved via the recommendation I posted yesterday. The Acromags had gone into a protective state (likely caused by an electrical transient in one of the signals) that could only be cleared by power cycling the units. After resetting the system, the main volume pressure dropped quickly and is now < 2e-5 torr, so normal operations can resume. For future reference, below is the procedure to safely reset these units from a trouble state.

Vacromag Reset Procedure

  • TP2 and TP3 can be left running, but isolate them by closing valves V4 and V5.
  • TP1 can also be left running, but manually flip the operation mode on the front of the controller from REMOTE to LOCAL. This prevents the pump from receiving a "stop" command when its control Acromag shuts down.
  • Close all the pneumatic valves in the system (they'll otherwise close automatically when their control Acromags shut down).
  • On c1vac, stop the modbusIOC service. Sometimes this takes ~1 min to actually terminate.
  • Turn off the Acromags by flipping the "24 V" switch on the back of the chassis.
  • Wait ~10 sec, then turn them back on.
  • Start the modbusIOC service. It may take up to ~1 min for all the readings on the MEDM screen to initialize.
  • Ensure that the rotation speeds of TP1/2/3 are all still nominal.
  • If pumps are OK, open V4, V5, and V7, then open V1. This restores the system to the "Maximum pumping speed" state.
  • Flip the TP1 controller operation state back to REMOTE.
  14461   Fri Feb 15 20:07:02 2019   Jon | Update | VAC | Updated vacuum punch list

While working on the vac controls today, I also took care of some of the remaining to-do items. Below is a summary of what was done, and what still remains.

Completed today

  • TP2/3 overcurrent interlock raised from 1 to 1.2 A. This was tripping during normal operation as the pump accelerates from low-speed (standby) to normal-speed mode.
  • Interlock conditions on VABSSCO/VABSSCI removed. Per discussion with Steve, these are not vent valves, but rather isolation valves between the BS/IOO/OMC annuli. The interlocks were preventing the valves from opening, and hence the IOO and OMC annuli from being pumped.
  • Channel exposed for interlocking in-vacuum high-voltage drivers. The channel name is C1:Vac-interlock_high_voltage. The vac interlock service sets this channel's value to 0 when the main volume pressure is in the range 3 mtorr-500 torr, and to 1 otherwise.
  • Annuli pumping integrated into the set of recognized states. "Vacuum normal" now refers to TP1 and TP2 pumping on the main volume AND TP3 pumping on all the annuli. The system is currently running in this state.
  • TP1 lowered to the nominal speed setting recommended by Steve: 33.6 krpm (560 Hz).

Still remaining

  • Implement a "blinker" input-output signal loop between two Acromags to detect hardware failures like the one today.
  • Add an AC power monitor to sense extended power losses and automatically put the system into safe shutdown.
  • Migrate the RGA to c1vac. Still some issues getting the serial comm working.
  • Troubleshoot the SuperBee (backup) main volume Pirani gauge. It has not communicated with c1vac since a serial adapter was replaced two weeks ago. Chub thinks the gauge was possibly damaged by arcing during the replacement.
  • Scripting for more automated pumpdowns.
  • Generate a bootable backup hard drive for c1vac, which could be swapped in on a short time scale after a failure.
  14487   Wed Mar 20 12:31:30 2019   Jon | Update | VAC | Doing vac controls work

I'm rebooting the IOLAN server to load new serial ports. The interlocks might trip when the pressure gauge readbacks cut out.

  14488   Wed Mar 20 19:26:25 2019   Jon | Update | VAC | Protection against AC power loss

Today I implemented protection of the vac system against extended power losses. Previously, the vac controls system (both old and new) could not communicate with the APC Smart-UPS 2200 providing backup power. This was not an issue for short glitches, but for extended outages the system had no way of knowing it was running on dwindling reserve power. An intelligent system should sense the outage and put the IFO into a controlled shutdown, before the batteries are fully drained.

What enabled this was a workaround Gautam and I found for communicating with the UPS serially. Although the UPS has a serial port, neither the connector pinout nor the low-level command protocol are released by APC. The only official way to communicate with the UPS is through their high-level PowerChute software. However, we did find "unofficial" documentation of APC's protocol. Using this information, I was able to interface the UPS to the IOLAN serial device server. This allowed the UPS status to be queried using the same Python/TCP sockets model as all the other serial devices (gauges, pumps, etc.). I created a new service called "serial_UPS.service" to persistently run this Python process like the others. I added a new EPICS channel "C1:Vac-UPS_status" which is updated by this process.

With all this in place, I added new logic to the interlock.py code which closes all valves and stops all pumps in the event of a power failure. To be conservative, this interlock is also tripped when the communications link with the UPS is disconnected (i.e., when the power state becomes unknown). I tested the new conditions against both communication failure (by disconnecting the serial cable) and power failure (by pressing the "Test" button on the UPS front panel). This protects TP2 and TP3. However, I discovered that TP1---the pump that might be most damaged by a sudden power failure---is not on the UPS. It's plugged directly into a 240V outlet along the wall. This is because the current UPS doesn't have any 240V sockets. I'd recommend we get one that can handle all the turbo pumps.
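A minimal sketch of what the UPS polling process looks like, assuming pyepics and the same TCP-socket-to-IOLAN pattern as the other serial clients (the port number, command byte, and reply handling here are placeholders, not APC's actual protocol):

import socket
import time
from epics import caput

UPS_ADDR = ("192.168.114.20", 4010)    # hypothetical TCP port mapped to the UPS serial line
STATUS_CHANNEL = "C1:Vac-UPS_status"   # the EPICS channel named above

def query_ups(command=b"Q"):
    # Send one command to the UPS and return its reply string (placeholder protocol)
    with socket.create_connection(UPS_ADDR, timeout=2) as sock:
        sock.sendall(command)
        return sock.recv(256).decode(errors="replace").strip()

if __name__ == "__main__":
    while True:
        try:
            status = query_ups()
        except OSError:
            status = "NO COMM"   # the interlock treats an unknown power state as a fault too
        caput(STATUS_CHANNEL, status)
        time.sleep(1)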

For future reference, the UPS serial interface settings are:

  • Pin 1: RxD
  • Pin 2: TxD
  • Pin 5: GND
  • Standard: RS-232
  • Baud rate: 2400
  • Data bits: 8
  • Parity: none
  • Stop bits: 1
  • Handshaking: none

 

 

Attachment 1: IMG_3146.jpg
  14489   Wed Mar 20 20:07:22 2019   Jon | Update | VAC | Doing vac controls work

Work is completed and the vac system is back in its nominal state.

Quote:

I'm rebooting the IOLAN server to load new serial ports. The interlocks might trip when the pressure gauge readbacks cut out.

 

  14490   Thu Mar 21 12:46:22 2019   Jon | Update | VAC | More vac controls upgrades

The vac controls system is going down for migration from Python 2.7 to 3.4. Will advise when it is back up.

  14491   Thu Mar 21 17:22:52 2019   Jon | Update | VAC | More vac controls upgrades

I've converted all the vac control system code to run on Python 3.4, the latest version available through the Debian package manager. Note that these codes now REQUIRE Python 3.x. We decided there was no need to preserve Python 2.x compatibility. I'm leaving the vac system returned to its nominal state ("vacuum normal + RGA").

Quote:

The vac controls system is going down for migration from Python 2.7 to 3.4. Will advise when it is back up.

 

  14493   Thu Mar 21 18:36:59 2019   Jon | Omnistructure | Upgrade | Vacuum Controls Switchover Completed

Updated vac channel list is attached. There are several new ADC channels.

Quote:

Hardware & Channel Assignments

All of the new hardware is now permanently installed in the vacuum rack. This includes the SuperMicro rack server (c1vac), the IOLAN serial device server, a vacuum subnet switch, and the Acromag chassis. Every valve/pump signal cable that formerly connected to the VME bus through terminal blocks has been refitted with a D-sub connector and screwed directly onto feedthroughs on the Acromag chassis.

The attached pdf contains the master list of assigned Acromag channels and their wiring.

Attachment 1: 40m_Vacuum_Acromag_Channels_20190321.pdf
  14495   Mon Mar 25 10:21:05 2019   Jon | Update | Upgrade | c1susaux upgrade plan

Now that the Acromag upgrade of c1vac is complete, the next system to be upgraded will be c1susaux. We chose c1susaux because it is one of the highest-priority systems awaiting upgrade, and because Johannes has already partially assembled its Acromag replacement (see photos below). I've assessed the partially-assembled Acromag chassis and the mostly-set-up host computer and propose we do the following to complete the system.

Documentation

As I go, I'm writing step-by-step documentation here so that others can follow this procedure for future systems. The goal is to create a standard procedure that can be followed for all the remaining upgrades.

Acromag Chassis Status

The bulk of the remaining work is the wiring and testing of the rackmount chassis housing the Acromag units. This system consists of 17 units: 10 ADCs, 4 DACs, and 3 digital I/O modules. Johannes has already created a full list of channel wiring assignments. He has installed DB37-to-breakout board feedthroughs for all the signal cable connections. It looks like about 40% of the wiring from the breakout boards to Acromag terminals is already done.

The Acromag units have to be initially configured using the Windows laptop connected by USB. Last week I wasn't immediately able to check their configuration because I couldn't power on the units. Although the DC power wiring is complete, when I connected a 24V power supply to the chassis connector and flipped on the switch, the voltage dropped to ~10V irrespective of adjusting the current limit. The 24V indicator lights on the chassis front and back illuminated dimly, but the Acromag lights did not turn on. I suspect there is a short to ground somewhere, but I didn't have time to investigate further. I'll check again this week unless someone else looks at it first.

Host Computer Status

The host computer has already been mostly configured by Johannes. So far I've only set up IP forwarding rules between the martian-facing and Acromag-facing ethernet interfaces (the Acromags are on a subnet inaccessible from the outside). This is documented in the link above. I also plan to set up local installations of modbus and EPICS, as explained below. The new EPICS command file (launches the IOC) and database files (define the channels) have already been created by Johannes. I think all that remains is to set up the IOC as a persistent system service.

Host computer OS

Recommendation from Keith Thorne:

For CDS lab-wide, Jamie Rollins and Ryan Blair have been maintaining Debian 8 and 9 repos with some of these.  
They have somewhat older EPICS versions and may not include all the modules we have for SL7.
One worry is whether they will keep Debian 9 maintained, as Debian 10 is already out.

I would likely choose Debian 9 instead of Ubuntu 18.04.02, as not sure of Ubuntu repos for EPICS libraries.

Based on this, I propose we use Debian 9 for our Acromag systems. I don't see a strong reason to switch to SL7, especially since c1vac and c1susaux are already set-up using Debian 8. Although Debian 8 is one version out of date, I think it's better to get a well-documented and tested procedure in place before we upgrade the working c1vac and c1susaux computers. When we start building the next system, let's install Debian 9 (or 10, if it's available), get it working with EPICS/modbus, then loop back to c1vac and c1susaux for the OS upgrade.

Local vs. central modbus/EPICS installation

The current convention is for all machines to share a common installation which is hosted on the /cvs/cds network drive. This seems appealing because only a single central EPICS distribution needs to be maintained. However, from experience attempting this on c1vac, I'm convinced this is a bad design for the new Acromag systems.

The problem is that any network outage, even routine maintenance or brief glitches, wreaks havoc on Acromags set up this way. When the network is interrupted, the modbus executable disappears mid-execution, crashing the process and hanging the OS (I think related to the deadlocked NFS mount), so that the only way to recover is to manually power-cycle. Still worse, this can happen silently (channel values freeze), meaning that, e.g., watchdog protections might fail.

To avoid this, I'm planning to install a local EPICS distribution from source on c1susaux, just as I did for c1vac. This only takes a few minutes to do, and I will include the steps in the documented procedure. Building from source also better protects against OS-dependent buginess.

Main TODO items

  • Debug issue with Acromag DC power wiring
  • Complete wiring from chassis feedthroughs to Acromag terminals, following this wiring diagram
  • Check/set the configuration of each Acromag unit using the software on the Windows laptop
  • Set the analog channel calibrations in the EPICS database file
  • Test each channel ex situ. Chub and I discussed an idea to use two DB-37F breakout boards, with the wiring between the board terminals manually set. One DAC channel would be calibrated and driven to test other ADC channels. A similar approach could be used for the digital input/output channels.
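The loopback idea in the last item could look roughly like the following (a sketch only, assuming pyepics; the channel names and pass tolerance are hypothetical):

import time
from epics import caget, caput

DAC_CHANNEL = "C1:SUS2-TEST_DAC_OUT"     # hypothetical calibrated DAC channel driving the loopback
ADC_CHANNELS = ["C1:SUS2-PRM_ULVMon",    # hypothetical ADC channels wired back to the DAC
                "C1:SUS2-PRM_URVMon"]
TEST_VOLTAGES = [-5.0, 0.0, 5.0]
TOLERANCE = 0.05                         # volts, arbitrary pass criterion

for v in TEST_VOLTAGES:
    caput(DAC_CHANNEL, v)
    time.sleep(1.0)                      # let the slow channels update
    for ch in ADC_CHANNELS:
        readback = caget(ch)
        result = "PASS" if abs(readback - v) < TOLERANCE else "FAIL"
        print("{}: drove {:+.2f} V, read {:+.3f} V -> {}".format(ch, v, readback, result))

caput(DAC_CHANNEL, 0.0)                  # leave the test output at zero when done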
Attachment 1: IMG_3136.jpg
Attachment 2: IMG_3138.jpg
Attachment 3: IMG_3137.jpg
  14497   Tue Mar 26 18:35:06 2019   Jon | Update | Upgrade | Modbus IOC is running on c1susaux2

Thanks to new info from Johannes, I was able to finish setting up the modbus IOC on c1susaux2. It turns out the 17 Acromags draw ~1.9 A, which is far more than I had expected and explains why I had suspected a short. Adding a second DC supply in parallel solves the problem. There is no issue with the wiring.

With the Acromags powered on, I carried out the following:

  • Confirmed c1susaux2 can communicate with each Acromag at its assigned IP address
  • Modified the EPICS .cmd file to point to the local modbus installation (not the remote executable on /cvs/cds)
  • Debugged several IOC initialization errors. All were caused by minor typos in the database files.
  • Scripted the modbus IOC to launch as a systemd service (will add implementation details to the documentation page)

The modbusIOC is now running as a persistent system service, which is automatically launched on boot and relaunched after a crash. I'm able to access a random selection of channels using caget.
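For reference, the unit file for such a service looks roughly like the following (a sketch with assumed paths, port number, and file names, not the exact file installed on c1susaux2):

[Unit]
Description=Modbus IOC for c1susaux2 via procServ
After=network.target

[Service]
# procServ keeps the IOC attached to a console reachable by telnet on the given port (8009 is an assumed value)
ExecStart=/usr/bin/procServ -f -L /home/controls/modbusIOC.log 8009 /usr/local/epics/modules/modbus/bin/linux-x86_64/modbusApp /cvs/cds/caltech/target/c1susaux2/ioc.cmd
User=controls
Restart=always

[Install]
WantedBy=multi-user.target

It is then enabled to start on boot with systemctl enable and started/stopped with systemctl start/stop.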

What's left now is to finish the Acromag-to-feedthrough wiring, then test/calibrate each channel.

  14500   Fri Mar 29 11:43:15 2019   Jon | Update | Upgrade | Found c1susaux database bug

I found the current bias output channels, C1:SUS-<OPTIC>_<DOF>BiasAdj, were all pointed at C1:SUS-<OPTIC>_ULBiasSet for every degree of freedom. This same issue appeared in all eight database files (one per optic), so it looks like a copy-and-paste error. I fixed them to all reference the correct degree of freedom.

  14505   Mon Apr 1 12:01:52 2019   Jon | Update | CDS

I brought c1susaux back online this morning for suspension-channel test scripting. It had been dead for some time. I followed the procedure outlined in #12542. ITMY became stuck during this process, which Gautam tells me always happens since the last vacuum access, but ITMX is not stuck.

  14508   Tue Apr 2 15:02:53 2019   Jon | Update | CDS | ITMY freed

I renamed all channels on c1susaux2 from "C1:SUS-..." to "C1:SUS2-..." to avoid contention. When the new system is ready to install, those channel names can be reverted with a quick search-and-replace edit.

Quote:

While doing this work, I noticed several errors corresponding to EPICS channel conflicts. Turns out the c1susaux2 EPICS server was left running, and the MEDM screens (and possibly several scripts) were confused. There has to be some other way of testing the new crate, on an isolated network or something - please do not leave the modbus service running as it potentially interferes with normal IFO operation. For good measure, I stopped the process and shut down the machine since I saw nothing in the elog about any running tests.

  14514   Wed Apr 3 16:17:17 2019   Jon | Update | VAC | TP2 forepump replaced

I can't explain the mechanical switching sound Gautam reported. The relay controlling power to the TP2 forepump is housed in the main AC relay box under the arm tube, not in the Acromag chassis, so it can't be from that. I've cycled through the pumpdown sequence several times and can't reproduce the effect. The Acromag switches for TP2 still work fine.

In any case, I've made modifications to the vacuum interlocks that will help with two of the issues:

  1. For the "AC power loss" over-triggering: New logic added requiring the UPS to be out of the "on line power, battery OK" state for ~5 seconds before tripping the interlock. This will prevent electrical transients from triggering an emergency shutdown, as seems to have happened here (the UPS briefly isolates the load to battery during such events). A rough sketch of this debounce logic follows this list.
  2. PSL interlocking: New logic added which directly sets C1:AUX-PSL_ShutterRqst --> 0 (closes the PSL shutter) when the main volume pressure is 3 mtorr-500 torr. Previously there was a channel exposed for this interlock (C1:Vac-interlock_high_voltage), but c1aux was not actually monitoring it. Following the convention of every vac interlock, after the PSL shutter has been closed, it has to be manually reopened. Once the pressure is out of this range, the vac system will stop blocking the shutter from reopening, but it will not perform the reopen action itself. gautam: a separate interlock logic needs to be implemented on c1aux (the shutter machine) that only permits the shutter to be opened if the Vac pressure range is okay. The SUS watchdog style AND logic in the EPICS database file should work just fine.
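A rough sketch of the debounce logic from item 1 (assuming the UPS status string arrives on C1:Vac-UPS_status as set up previously; the nominal-status string and polling cadence here are placeholders):

import time
from epics import caget

UPS_CHANNEL = "C1:Vac-UPS_status"
NOMINAL = "on line power, battery OK"   # placeholder for the nominal status string
REQUIRED_SECONDS = 5                    # UPS must be off-nominal this long before tripping

def monitor_power(trip_callback, poll_period=1.0):
    # Trip the interlock only after a sustained off-nominal UPS state
    off_nominal_since = None
    while True:
        status = caget(UPS_CHANNEL)
        if status == NOMINAL:
            off_nominal_since = None            # any good reading resets the timer
        elif off_nominal_since is None:
            off_nominal_since = time.time()
        elif time.time() - off_nominal_since >= REQUIRED_SECONDS:
            trip_callback()                     # close valves, stop pumps, etc.
            return
        time.sleep(poll_period)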

After finishing this vac work, I began a new pumpdown at ~4:30pm. The pressure fell quickly and has already reached ~1e-5 torr. TP2 current and temp look fine.

Quote:

However, when opening V4 (the foreline of TP1 pumped by TP2), I heard a loud repeated click track (~5Hz) from the electronics rack. Shortly after, the interlocks shut down all the TPs again, citing "AC power loss". Something is not right, I leave it to Jon and Chub to investigate.

Attachment 1: IMG_3180.jpg
  14536   Thu Apr 11 12:04:43 2019   Jon | Update | SUS | Starting some scripted SUS tests on ITMY

Will advise when I'm finished, will be by 1 pm for ALS work to begin.

  14538   Thu Apr 11 12:57:48 2019   Jon | Update | SUS | Starting some scripted SUS tests on ITMY

Testing is finished.

Quote:

Will advise when I'm finished, will be by 1 pm for ALS work to begin.

  14539   Thu Apr 11 17:30:45 2019   Jon | Update | SUS | Automated suspension testing with susPython

Summary

In anticipation of needing to test hundreds of suspension signals after the c1susaux upgrade, I've started developing a Python package to automate these tests: susPython

https://git.ligo.org/40m/suspython

The core of this package is not any particular test, but a general framework within which any scripted test can be "nested." Built into this framework is extensive signal trapping and exception handling, allowing actuation tests to be performed safely. Namely it protects against crashes of the test scripts that would otherwise leave the suspensions in an arbitrary state (e.g., might destroy alignment).

Usage

The package is designed to be used as a standalone from the command line. From within the root directory, it is executed with a single positional argument specifying the suspension to test:

$ python -m suspython ITMY

Currently the package requires Python 2 due to its dependence on the cdsutils package, which does not yet exist for Python 3.

Scripted Tests

So far I've implemented a cross-consistency test between the DC-bias outputs to the coils and the shadow sensor readbacks. The suspension is actuated in pitch, then in yaw, and the changes in PDMon signals are measured. The expected sign of the change in each coil's PDMon is inferred from the output filter matrix coefficients. I believe this test is sensitive to two types of signal-routing errors: no change in a PDMon response (an actuator is not connected), and an incorrect sign in the pitch or yaw response, or in both (two actuators are cross-wired).

The next test I plan to implement is a test of the slow system using the fast system. My idea is to inject a 3-8 Hz excitation into the coil output filter modules (either bandlimited noise or a sine wave), with all coil outputs initially disabled. One coil at a time will be enabled and the change in all VMon signals monitored, to verify the correct coil readback senses the excitation. In this way, a signal injected from the independent and unchanged fast system provides an absolute reference for the slow system.
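Whether driven with a DC offset or a low-frequency excitation, the shape of such a test is roughly the following (a sketch only, assuming pyepics; the channel-name patterns and offset level are assumptions, and the real implementation lives in susPython with the signal trapping described above):

import time
from epics import caget, caput

OPTIC = "ITMY"
COILS = ["UL", "UR", "LL", "LR", "SD"]
OFFSET = 1000      # counts injected into one fast coil output at a time (assumed safe level)
SETTLE = 3.0       # seconds to let the slow VMon channels respond

def vmon(coil):
    return caget("C1:SUS-{}_{}VMon".format(OPTIC, coil))           # assumed slow readback channel name

def set_coil_offset(coil, value):
    caput("C1:SUS-{}_{}COIL_OFFSET".format(OPTIC, coil), value)    # assumed fast filter-module offset channel

baseline = {c: vmon(c) for c in COILS}
for coil in COILS:
    set_coil_offset(coil, OFFSET)
    time.sleep(SETTLE)
    deltas = {c: round(vmon(c) - baseline[c], 3) for c in COILS}
    set_coil_offset(coil, 0)        # always restore the offset before moving on
    print(coil, deltas)             # expect a large response for the driven coil and ~0 for the others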

I'm also aware of ideas for more advanced tests, which go beyond testing the basic signal routing. These too can be added over time within the susPython framework.

  14561   Mon Apr 22 21:33:17 2019   Jon | Update | SUS | Bench testing of c1susaux replacement

Today I bench-tested most of the Acromag channels in the replacement c1susaux. I connected a DB37 breakout board to each chassis feedthrough connector in turn and tested channels using a multimeter and calibrated voltage source. Today I got through all the digital output channels and analog input channels. Still remaining are the analog output channels, which I will finish tomorrow.

There have been a few wiring issues found so far, which are noted below.

Channel | Type | Issue
C1:SUS2-PRM_URVMon | Analog input | No response
C1:SUS2-PRM_LRVMon | Analog input | No response
C1:SUS2-BS_UL_ENABLE | Digital output | Crossed with LR
C1:SUS2-BS_LL_ENABLE | Digital output | Crossed with UR
C1:SUS2-BS_UR_ENABLE | Digital output | Crossed with LL
C1:SUS2-BS_LR_ENABLE | Digital output | Crossed with UL
C1:SUS2-ITMY_SideVMon | Analog input | Polarity reversed
C1:SUS2-MC2_UR_ENABLE | Digital output | Crossed with LR
C1:SUS2-MC2_LR_ENABLE | Digital output | Crossed with UR

 

  14563   Tue Apr 23 18:48:25 2019   Jon | Update | SUS | c1susaux bench testing completed

Today I tested the remaining Acromag channels and retested the non-functioning channels found yesterday, which Chub repaired this morning. We're still not quite ready for an in situ test. Here are the issues that remain.

Analog Input Channels

Channel Issue
C1:SUS-MC2_URPDMon No response
C1:SUS-MC2_LRPDMon No response

I further diagnosed these channels by connecting a calibrated DC voltage source directly to the ADC terminals. The EPICS channels do sense this voltage, so the problem is isolated to the wiring between the ADC and DB37 feedthrough.

Analog Output Channels

Channel Issue
C1:SUS-ITMX_ULBiasAdj No output signal
C1:SUS-ITMX_LLBiasAdj No output signal
C1:SUS-ITMX_URBiasAdj No output signal
C1:SUS-ITMX_LRBiasAdj No output signal
C1:SUS-ITMY_ULBiasAdj No output signal
C1:SUS-ITMY_LLBiasAdj No output signal
C1:SUS-ITMY_URBiasAdj No output signal
C1:SUS-ITMY_LRBiasAdj No output signal
C1:SUS-MC1_ULBiasAdj No output signal
C1:SUS-MC1_LLBiasAdj No output signal
C1:SUS-MC1_URBiasAdj No output signal
C1:SUS-MC1_LRBiasAdj No output signal

To further diagnose these channels, I connected a voltmeter directly to the DAC terminals and toggled each channel output. The DACs are outputting the correct voltage, so these problems are also isolated to the wiring between DAC and feedthrough.

In testing the DC bias channels, I did not check the sign of the output signal, but only that the output had the correct magnitude. As a result my bench test is insensitive to situations where either two degrees of freedom are crossed or there is a polarity reversal. However, my susPython scripts test for exactly this, fetching and applying all the relevant signal gains between the pitch/yaw inputs and the coil bias outputs. It would be very time consuming to propagate all these gains by hand, so I've elected to wait for the automated in situ test.

Digital Output Channels

Everything works.

  14564   Tue Apr 23 19:31:45 2019   Jon | Update | SUS | Watchdog channels separated from autoBurt.req

For the new c1susaux, Gautam and I moved the watchdog channels from autoBurt.req to a new file named autoBurt_watchdogs.req. When the new modbus service starts, it loads the state contained in autoBurt.snap. We thought it best for the watchdogs to not be automatically enabled at this stage, but for an operator to manually have to do this. By moving the watchdog channels to a separate snap file, the entire SUS state can be loaded while leaving just the watchdogs disabled.

This same modification should be made to the ETMX and ETMY machines.

  14574   Thu Apr 25 10:32:39 2019   Jon | Update | VAC | Vac interlocks updated

I slightly cleaned up Gautam's disabling of the UPS-predicated vac interlock and restarted the interlock service. This interlock is intended to protect the turbo pumps after a power outage, but it has proven disruptive to normal operations with too many false triggers. It will be reenabled once a new UPS has been installed. For now, as it has been since 2001, the vac pumps are unprotected against an extended power outage.

  14580   Fri Apr 26 12:32:35 2019   Jon | Update | PSL | modbusPSL service shut down

Gautam and I are removing the prototype Acromag chassis from the 1x4 rack to make room for the new c1susaux hardware. I shut down and disabled the modbusPSL service running on c1auxex, which serves the PSL diagnostic channels hosted by this chassis. The service will need to be restarted and reenabled once the chassis has been reinstalled elsewhere.

  14581   Fri Apr 26 19:35:16 2019   Jon | Update | SUS | New c1susaux installed, passed first round of scripted testing

[Jon, Gautam]

Today we installed the c1susaux Acromag chassis and controller computer in the 1X4 rack. As noted in 14580 the prototype Acromag chassis had to first be removed to make room in the rack. The signal feedthroughs were connected to the eurocrates by 10' DB-37 cables via adapters to 96-pin DIN.

Once installed, we ran a scripted set of suspension actuation tests using PyIFOTest. BS, PRM, SRM, MC1, MC2, and MC3 all passed these tests. We were unable to test ITMX and ITMY because both appear to be stuck. Gautam will shake them loose on Monday.

Although the new c1susaux is now mounted in the rack, there is more that needs to be done to make the installation permanent:

  • New 15V and 24V power cables with standard LIGO connectors need to be run from the Sorensen supplies in 1X5. The chassis is currently powered by bench supplies sitting on a cart behind the rack.
  • All 24 new DB-37 signal cables need to be labeled.
  • New 96-pin DIN connectors need to be put on two ribbon cables (1Y5_80 B, 1Y5_81) in the 1X4 rack. We had to break these connectors to remove them from the back of the eurocrates.
  • General cleanup of any cables, etc. left around the rack. We cleaned up most things this evening.
  • Rename the host computer c1susaux2 --> c1susaux, and update the DNS lookup tables on chiara.

On Monday we plan to continue with additional scripted tests of the suspensions.


gautam - some more notes:

  • Backplane connectors for the SUS PD whitening boards, which now only serve the purpose of carrying the fast BIO signals used for switching the whitening, were moved from the P1 connector to P2 connector for MC1, MC2, MC3, ITMX, ITMY, BS and PRM.
  • In the process, the connectors for BS and PRM were detached from the ribbon cable (there wasn't any good way to unseat the connector from the shell that I know of). These will have to be repaired by Chub, and the signal integrity will have to be checked (as it must be for the connectors that are allegedly intact).
  • While we were doing the wiring, I disconnected the outputs of the coil driver board going to the satellite box (front panel DB15 connector on D010001). These were restored after our work for the testing phase.
  • The backplane cables to the eurocrate housing the coil driver boards were also disconnected. They are currently just dangling, but we will have to clean it up if the new crate is performing alright.
  • In general the cable routing cleanliness has to be checked and approved by Chub or someone else qualified. In particular, the power leads to the eurocrate are in the way of the DIN96-DB37 adaptor board of Johannes' design, particularly on the SUS PD eurocrate.
  • Tapping new power rails for the Acromag chassis will have to be done carefully. Ideally we shouldn't have to turn off the Sorensens.
  • There are some software issues we encountered today connected with the networking that have to be understood and addressed in a permanent way.
  • Sooner rather than later, we want to reconnect the Acromag crate that was monitoring the PSL channels, particularly given the NPRO's recent flakiness.
  • The NPRO was turned back on (following the same procedure of slowly dialing up the injection current). Primary motivation to see if the mode cleaner cavity could be locked with the new SUS electronics. Looks like it could. I'm leaving it on over the weekend...
Attachment 1: IMG_3254.jpg
Attachment 2: IMG_3256.jpg
  14585   Mon Apr 29 19:23:49 2019   Jon | Update | Computer Scripts / Programs | Scripted tests of suspension VMons using fast system

I've added a scripted VMon/coil-enable test to PyIFOTest following the suggestion in #15542. Basically, a DC offset is added to one fast coil output at a time, and all the VMon responses are checked.

After resolving the swapped ITMX/ITMY eurocrate slots described in #14584, I ran the new scripted VMon test on all eight optics managed by c1susaux. All of them passed: SRM, BS, MC1, MC2, MC3, PRM, ITMX, ITMY. This is not the final suspension test we plan to do, but it gives me reasonably good confidence that all channels are connected correctly.

  14588   Thu May 2 10:59:58 2019   Jon | Update | SUS | c1susaux in situ wiring testing completed

Summary

Yesterday Gautam and I ran final tests of the eight suspensions controlled by c1susaux, using PyIFOTest. All of the optics pass a set of basic signal-routing tests, which are described in more detail below. The only issue found was with ITMX having an apparent DC bias polarity reversal (all four front coils) relative to the other seven susaux optics. However, further investigation found that ETMX and ETMY have the same reversal, and there is documentation pointing to the magnets being oppositely-oriented on these two optics. It seems likely that this is the case for ITMX as well. 

I conclude that all the new c1susaux wiring/EPICS interfacing works correctly. There are of course other tests that can still be scripted, but at this point I'm satisfied that the new Acromag machine itself is correctly installed. PyIFOTest has been morphed into a powerful general framework for automating IFO tests. Anything involving fast/slow IO can now be easily scripted. I highly encourage others to think of more applications this may have at the 40m.

Usage and Design

The code is currently located in /users/jon/pyifotest although we should find a permanent location for it. From the root level it is executed as

$ ./IFOTest <PARAMETER_FILE>

where PARAMETER_FILE is the filepath to a YAML config file containing the test parameters. I've created a config file for each of the suspended optics. They are located in the root-level directory and follow the naming convention SUS-<OPTIC>.yaml.

The code climbs a hierarchical "ladder" of actuation/readback-paired tests, with the test at each level depending on signals validated in the preceding level. At the base is the fast data system, which provides an independent reference against which the slow channels are tested. There are currently three scripted tests for the slow SUS channels, listed in order of execution:

  1. VMon test:  Validates the low-frequency sensing of SUS actuation (VMon channels). A DC offset is applied in the final filter module of the fast coil outputs, one coil at a time. The test confirms that the VMon of the actuated coil, and only this VMon, senses the displacement, and that the response has the correct polarity. The screen output is a matrix showing the change in VMon responses with actuation of each coil. A passing test, roughly, is diagonal values >> 0 and off-diagonal values << diagonal.

  2. Coil Enable test:  Validates the slow watchdog control of the fast coil outputs (Coil-Enable channels). Analogously to (1), this test also applies a DC offset via the fast system to one coil at a time and analyzes the VMon responses. However, in this case, the offset is applied to all five coils simultaneously and only one coil output is enabled at a time. The screen output is again a \Delta VMon matrix interpreted in the same way as above.

     

  3. PDMon/DC Bias test:  Validates slow alignment control and readback (BiasAdj and PDMon channels). A DC misalignment is introduced first in pitch, then in yaw, with the OSEM PDMon responses measured in both cases. Using the gains from the PIT/YAW---> COIL output coupling matrix, the script verifies that each coil moves in the correct direction and by a sufficiently large magnitude for the applied DC bias. The screen output shows the change in PDMon responses with a pure pitch actuation, and with a pure yaw actuation. The output filter matrix coefficients have already been divided out, so a passing test is a sufficiently large, positive change under both pitch and yaw actuations.
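For the matrix-style tests in (1) and (2), the pass criterion can be evaluated with a few lines like these (made-up example numbers; the thresholds are arbitrary):

import numpy as np

# Rows: actuated coil; columns: observed change in each VMon (made-up example data)
delta_vmon = np.array([[0.95, 0.02, 0.01, 0.03, 0.00],
                       [0.01, 1.01, 0.02, 0.00, 0.01],
                       [0.02, 0.01, 0.98, 0.02, 0.00],
                       [0.00, 0.03, 0.01, 0.97, 0.02],
                       [0.01, 0.00, 0.02, 0.01, 0.99]])

diag = np.diag(delta_vmon)
off_diag = delta_vmon - np.diag(diag)

# Passing, roughly: every diagonal element well above zero, and every off-diagonal
# element small compared to that row's diagonal element.
passed = np.all(diag > 0.5) and np.all(np.abs(off_diag) < 0.1 * diag[:, None])
print("PASS" if passed else "FAIL")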

     

  14589   Thu May 2 15:15:15 2019   Jon | Omnistructure | Computers | susaux machine renamed

Now that the replacement susaux machine is installed and fully tested, I renamed it from c1susaux2 to c1susaux and updated the DNS lookup tables on chiara accordingly.

  14590   Thu May 2 15:35:54 2019   Jon | Omnistructure | Upgrade | c1susaux upgrade documentation

For future reference:

  • The updated list of c1susaux channel wiring (includes the "coil enable" --> "coil test" digital outputs change)
  • Step-by-step instructions on how to set up an Acromag system from scratch
  14596   Mon May 6 11:05:23 2019   Jon | Update | SUS | All vertex SUS watchdogs were tripped

Yes, this was a consequence of the systemd scripting I was setting up. Unlike the old susaux system, we decided for safety NOT to allow the modbus IOC to automatically enable the coil outputs. Thus when the modbus service starts/restarts, it automatically restores all state except the watchdog channels, which are left in their default disabled state. They then have to be manually enabled by an operator, as I should have done after finishing testing.

Quote:

I found the 8 vertex watchdogs tripped this morning. The ETMs were fine, suggesting this was not an actual earthquake. I suspect it was connected to this remote work? Was there a reason why they were left tripped?

  14600   Thu May 9 22:26:39 2019   Jon | Omnistructure | PSL | Second ADC added to PSL Acromag crate

This evening I added a second ADC module to the prototype Acromag chassis. This chassis can now read out all the PSL diagnostic channels.

I configured the second ADC identically to the first ("ADC0"), and assigned it IP address 192.168.113.122. I confirmed it is visible on the martian network.

There was an existing but unused DB-15 feedthrough which I used for ADC1 channels 0-6. The eighth channel (7) I left unwired, but there are slots available in the neighboring DB-25 feedthrough if that channel is needed in the future. The channel wiring assignments are as follows.

ADC1 Channel | DB-15 Feedthrough Pin
0+ | 1
0- | 9
1+ | 2
1- | 10
2+ | 3
2- | 11
3+ | 4
3- | 12
4+ | 5
4- | 13
5+ | 6
5- | 14
6+ | 7
6- | 15
7+ | not connected
7- | not connected

I tested all seven of these channels by applying a calibrated voltage source and measuring the response via the Windows Acromag software. All work and are correctly calibrated to better than 0.1%.

Attachment 1: IMG_3291.jpg
  14604   Sat May 11 11:48:54 2019   Jon | Update | PSL | Some work on/around PSL table

I took a look at the error being encountered by the modbusPSL service. The problem is that the /run/modbusPSL.pid file is not being generated by procServ, even though the -p flag controlling this is correctly set. I don't know the reason for this, but it was also a problem on c1vac and c1susaux. The solution is to remove the custom kill command (ExecStop=...) and just allow systemd to stop it via its default internal kill method.

modbusPSL.service - ModbusIOC Service via procServ
   Loaded: loaded (/etc/systemd/system/modbusPSL.service; disabled)
   Active: active (running) since Fri 2019-05-10 13:17:54 PDT; 2h 13min ago
  Process: 8824 ExecStop=/bin/kill -9 ` cat /run/modbusPSL.pid` (code=exited, status=1/FAILURE)
 Main PID: 8841 (procServ)
   CGroup: /system.slice/modbusPSL.service
           ├─8841 /usr/bin/procServ -f -L /home/controls/modbusPSL.log -p /run/modbusPSL.pid 8009 /cvs/cds/rtapps/epics-3.14.12.2_long/module...
           ├─8842 /cvs/cds/rtapps/epics-3.14.12.2_long/modules/modbus/bin/linux-x86_64/modbusApp /cvs/cds/caltech/target/c1psl2/npro_config.c...
           └─8870 caRepeater

May 10 13:17:54 c1auxex systemd[1]: Started ModbusIOC Service via procServ.

  14801   Tue Jul 23 21:59:08 2019   Jon | Update | Cameras | Plan for GigE cameras

This afternoon Gautam and I assessed what to do about restoring the GigE camera software. Here's what I propose:

  • Set up one of the new rackmount Supermicros as a dedicated camera feed server
  • All GigE cameras on a local subnet connected to the second network interface (these Supermicros have two)
  • Put the SnapPy, pypylon, and pylon5 binaries on the shared network drive. These all have to be built from source.
  • All other dependencies can be gotten through the package managers, so create requirements files for yum and pip to automatically install these locally.

I've started resolving the many dependencies of this code on rossa. The idea is to get a working environment on one workstation, then generate requirements files that can be used to set up the rest of the machines. I believe the dependencies have all been installed. However, many of the packages are newer versions than before, and this seems to have broken SnapPy. I'll continue debugging this tomorrow.

  14806   Wed Jul 24 16:45:32 2019   Jon | Update | Cameras | Upgraded Pylon from 5.0.12 to 5.2.0

I upgraded Pylon, the C/C++ API for the GigE cameras, to the latest release, 5.2.0. It is installed in the same location as before, /opt/rtcds/caltech/c1/scripts/GigE/pylon5, so environment variables do not change. The old version, 5.0.12, still exists at /opt/rtcds/caltech/c1/scripts/GigE/backup_pylon5.

The package contains a GUI application (/bin/PylonViewerApp) for streaming video. The old version supports saving still images, but Milind discovered that the new version supports saving video as well. This required installing a supplementary package supporting MPEG-4 output.

Basler's GUI application is launched from the terminal using the alias pylon. I've tested it and confirm it can save both videos and still-image formats. I recommend also grabbing images using this program and checking the bit resolution. It would be a useful diagnostic to know whether it's a bug in Joe B.'s code or something deeper in the camera settings.

Attachment 1: IMG_3525.jpg
  14810   Thu Jul 25 09:19:32 2019   Jon | Update | Cameras | Upgraded Pylon from 5.0.12 to 5.2.0

I'll keep developing the camera server on a parallel track using the "new_..." directory naming convention. One thing I forgot to note is that the new pylon/pypylon packages require Python 3, so will not work with any of the 2.7 scripts. All of the environment I need to set up is exclusively Python 3. I won't change anything in the Python 2.7 environment in current use.

Also, I found the source of the bit resolution issue: Joe B's code loads a set of initialization parameters from a config file. One of them is "Frame Type = Mono8" which sets the dynamic range of the stream. I'll look into how this should be changed. 
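For checking (or changing) that setting directly from Python rather than through the GUI, here is a minimal pypylon sketch (assuming a single reachable camera whose model supports a 12-bit mono format; this is not a drop-in replacement for Joe B.'s code):

from pypylon import pylon

# Connect to the first camera the transport layer finds
camera = pylon.InstantCamera(pylon.TlFactory.GetInstance().CreateFirstDevice())
camera.Open()

print("Current pixel format:", camera.PixelFormat.GetValue())

# Switch to a 12-bit mono format if the camera offers one
if "Mono12" in camera.PixelFormat.Symbolics:
    camera.PixelFormat.SetValue("Mono12")

# Grab one frame and confirm the data actually spans more than 8 bits
camera.StartGrabbingMax(1)
result = camera.RetrieveResult(5000, pylon.TimeoutHandling_ThrowException)
if result.GrabSucceeded():
    img = result.Array
    print("dtype:", img.dtype, "max pixel value:", img.max())
result.Release()
camera.Close()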

Quote:

Since there are multiple SURF projects that rely on the cameras:

  1. I moved the new installs Jon made to "new_pylon5" and "new_pypylon". The old installs were moved back to be the default directories.
  2. The bashrc alias for pylon was updated to allow the recording of videos (i.e. it calls the PylonViewerApp from new_pypylon).
  3. There is a script that can grab images at multiple exposures and save 12-bit data as uint16 numpy arrays to an HDF5 file. Right now, it is located at /users/kruthi/scripts/grabHDR.py. We can move this to a better place later, and also improve the script for auto adjusting the exposure time to avoid saturations.