  40m Log, Page 314 of 355
ID   Date   Author   Type   Category   Subject
  14967   Mon Oct 14 16:25:03 2019   Koji   Update   CDS   CM servo board testing

The output stage (and AO GAIN stage) of the MC board was modelled. The transfer function was measured with the injection from EXC B. The denominator was TESTB2, and the numerator was SERVO OUT.

This stage is AC coupled by two 1st-order HPFs. First, the transfer function was measured with the AO GAIN set to 0 dB (Attachment 1).
This TF was used to characterize the cutoffs of the HPF stages, represented as the following ZPK:

zero 1m
zero 1m
pole 6.0502599855
pole 6.0624642854
factor -26.2725046079n
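
For reference, here is a minimal sketch (Python/scipy, not part of the original analysis) of how the fitted zeros and poles can be turned into a frequency response to check the AC-coupling shape. It assumes the zeros/poles listed above are frequencies in Hz; the overall "factor" normalization is omitted since it depends on the LISO fit convention, so the gain below is simply normalized to 1 at high frequency.

    import numpy as np
    import scipy.signal as sig

    # Fitted HPF corners from the ZPK listing above (Hz)
    zeros_hz = [1e-3, 1e-3]
    poles_hz = [6.0502599855, 6.0624642854]

    # Build an s-domain ZPK model with roots in rad/s
    z = [-2 * np.pi * f for f in zeros_hz]
    p = [-2 * np.pi * f for f in poles_hz]
    hpf = sig.ZerosPolesGain(z, p, 1.0)

    # Evaluate the magnitude response from 10 mHz to 1 kHz
    f = np.logspace(-2, 3, 400)
    w, mag_db, phase_deg = sig.bode(hpf, w=2 * np.pi * f)
    print("Magnitude at 0.1 Hz: %.1f dB" % mag_db[np.argmin(abs(f - 0.1))])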

The AO GAIN dependence had already been measured, as seen in [ELOG 14948]. The AO GAIN TF was then modeled with LISO using the above HPF as a preset. This allows us to characterize the time delay of the AO GAIN part.

Attachment 1: servo_out.pdf
servo_out.pdf
  14968   Mon Oct 14 16:34:42 2019   Koji   Update   CDS   CM servo board testing

Input-referred offsets on IN1/IN2 were tested with different gain settings. The two inputs were terminated with 50 ohm terminators. The output was monitored at OUT1 (SLOW Length Output). The fast path is AC coupled and has no sensitivity to the offset.

There is an EPICS monitor point for OUT1. With a multimeter it was confirmed that the EPICS monitor (C1:LSC-CM_REFL1_GAIN) reads the right value except for the opposite sign, because the output stage of OUT1 is inverting. The previous stages have no sign inversion. Therefore, the numbers below do not compensate for the sign inversion.

Attachment 1 shows the output offset observed at C1:LSC-CM_REFL1_GAIN. There is some variation with gain, but it stays around a constant offset of ~26 mV. This suggests that most of the offset comes not from the gain stages but from the later stages (like the boost stages). Note that the boost stages were turned off during the measurements.

Attachment 2 shows the input-referred offset naively calculated from the above output offset. Independent of which path was used, the offset at low gain was hugely enhanced.

Since the input-referred offset without subtracting this static offset seemed useless, a constant offset of -26 mV was subtracted in the calculation (Attachment 3). This shows that the input-referred offset can reach ~+/-20 mV for gains up to -16 dB. Above that, the offset is at the mV level.
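
As a cross-check of the arithmetic, here is a minimal sketch of the calculation described above (Python). The -26 mV static offset is the one quoted above, but the per-gain output offsets in the loop are hypothetical placeholders, not the measured values in the attachments.

    # Refer the measured output offset back to the input of the gain stage
    STATIC_OFFSET = -26e-3   # constant output offset attributed to the later (boost/output) stages [V]

    def input_referred_offset(v_out, gain_db):
        # Subtract the static offset, then divide by the linear gain of the stage
        return (v_out - STATIC_OFFSET) / 10**(gain_db / 20.0)

    # Hypothetical example points (V); the real data are in Attachments 1-3
    for gain_db, v_out in [(-32, -0.0258), (-16, -0.029), (0, -0.0255), (16, -0.032)]:
        off_mV = 1e3 * input_referred_offset(v_out, gain_db)
        print("gain %+3d dB -> input-referred offset %+7.2f mV" % (gain_db, off_mV))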

I don't think this level of offset, whether from the OP27 or the AD829, becomes an issue when the input error signal is of the order of a volt.
This suggests that it is more important to properly set the internal offset cancellation and to keep the gain setting high.

 

Attachment 1: in12_output_offset.pdf
in12_output_offset.pdf
Attachment 2: in12_input_offset.pdf
in12_input_offset.pdf
Attachment 3: in12_input_offset2.pdf
in12_input_offset2.pdf
  14970   Mon Oct 14 17:32:28 2019   Koji   Update   CDS   Portal Elog entry for the recent CM servo board tests

Updated Circuit Diagram and photos: https://dcc.ligo.org/D1500308-v2

- (1) and (6) of the diagram: TFs with various gain slider values for REFL1/REFL2/AO GAIN [ELOG 14948] (gain values and time delay modeling)
- Switching checks, latest photo of the board, Limiter check  [ELOG 14953]
- (2): Boost transfer functions [ELOG 14955]
- (3): Slow (aka Length) CM output path [ELOG 14965]
- (4): Pole-Zero filter TF [ELOG 14965]
- (5): TF from TESTA2 to TESTB2 [ELOG 14966]
- (6): AC coupling TF of the AO GAIN stage [ELOG 14967]
- (7): AC coupling TF of the IN2 stage on IMC servo board [ELOG 15044]

Slow path = (1)*(2 if necessary)*(3)*(4 if necessary)

Fast path = (1)*(2 if necessary)*(4 if necessary)*(5)*(6)
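
As a side note, the products above are just series connections of the individually measured stages. Here is a minimal sketch (Python/scipy) of how such ZPK stages can be cascaded numerically; the stage values below are placeholders for illustration, not the actual fits linked above.

    import numpy as np
    import scipy.signal as sig

    def cascade(*stages):
        # Series-connect ZPK stages: concatenate zeros/poles and multiply the gains
        z, p, k = [], [], 1.0
        for zi, pi, ki in stages:
            z += list(zi)
            p += list(pi)
            k *= ki
        return sig.ZerosPolesGain(z, p, k)

    # Placeholder stages (zeros, poles, gain), roots in rad/s -- substitute the measured fits here
    gain_stage     = ([], [], 10.0)
    pole_zero_filt = ([-2 * np.pi * 40.0], [-2 * np.pi * 4000.0], 1.0)
    ao_coupling    = ([0.0, 0.0], [-2 * np.pi * 6.05, -2 * np.pi * 6.06], 1.0)

    fast_path = cascade(gain_stage, pole_zero_filt, ao_coupling)
    w, mag_db, _ = sig.bode(fast_path, w=2 * np.pi * np.logspace(-1, 4, 300))
    print("Fast-path gain at 100 Hz: %.1f dB" % mag_db[np.argmin(abs(w - 2 * np.pi * 100))])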

gautam 20191122: Adding the measured AC coupling of the IN2 input of the IMC servo board for completeness.

Attachment 1: CM_Servo_Diagram.png
CM_Servo_Diagram.png
  14990   Wed Oct 23 18:40:58 2019   gautam   Update   CDS   another round of vertex FE reboots

I wanted to restart the c1oaf model. As usual, the first time the model was restarted, it came back online with a 0x2bad error. This isn't even listed in the diagnostics manual as one of the recognized error states (unless there is a typo and they mean 0x2bad when they say 0xbad). The fix that has worked for me is to stop and start the model again, but of course, there is some chance of taking all the vertex FEs down in the process. No permutation of mxstream and daqd process restarts has cleared this error. We need some CDS/RCG support to look into this issue and fix it; it is not reasonable to go through reboots of all the vertex FEs every time we want to make a model change.

  15035   Tue Nov 19 15:08:48 2019   gautam   Update   CDS   Vertex models rebooted

Jon and I were surveying the CDS situation so that he can prepare a report for discussion with Rolf/Rich about our upcoming BHD upgrade. In our poking around, we must have bumped something somewhere, because the c1ioo machine went offline and consequently took all the vertex models out. I rebooted everything with the reboot script, and everything seems to have come back smoothly. I took this opportunity to install some saturation counters for the arm servos, like the ones we have for the CARM/DARM loops, because I want to use them for a watch script that catches when the ALS loses lock and shuts things off before kicking the optics around needlessly. See Attachment #1 for my changes.

Attachment 1: armSat.png
armSat.png
  15061   Mon Dec 2 23:01:47 2019   gautam   Update   CDS   Frequent DTT crashes on pianosa

I have been experiencing frequent crashes of DTT on pianosa in the past few weeks. This is pretty annoying to deal with when trying to characterize the interferometer loops. I attach the error log dumped to console. The error has to do with some kind of memory corruption. Recall that we aren't using a GDS version packaged with the SL7 lscsoft packages; we are using a pretty ancient version (2.15) built from source. I have been unable to build a newer version from source (though I didn't spend much time trying). pianosa is the only usable workstation at the moment, but perhaps someone can make this work on donatella / rossa for a general improvement in quality of life.

Attachment 1: DTTerrorLog.tgz
  15071   Wed Dec 4 09:11:42 2019   Yehonathan   Update   CDS   Reboot script

After the CDS machines crashed, we ran the rebootC1LSC.sh script.

The script is a bit annoying in that it requires entering the CDS machines' passwords multiple times over its long runtime.

The resulting CDS screen is a bit different than what was reported before (attached). Also, not all watchdogs were restored.

We restore the remaining watchdogs and do XARM locking. Everything seems to be fine.

Attachment 1: medmScreen11.ps
medmScreen11.ps
  15072   Wed Dec 4 12:13:10 2019   gautam   Update   CDS   Reboot script

It was way more annoying without a script and took longer than the 4 minutes it does now.

You can fix the requirement to enter password by changing the sshd settings on the FEs like I did for pianosa.

After running the script, you should verify that there are no red flags in the output to console. Yesterday, some of the settings the script was supposed to reset weren't correctly reset, possibly due to python/EPICS problems on donatella, and this cost me an hour of searching last night because the locking wasn't working. Anyway, best practice is to not crash the FEs.

Quote:

The script is a bit annoying in that it requires entering the CDS machines' passwords multiple times over its long runtime.

  15078   Thu Dec 5 15:09:50 2019   gautam   Update   CDS   c1oaf crashed c1lsc

I tried starting the c1oaf model, but got a DQ error (I want the option of running feedforward during locking even if the filters aren't particularly well tuned yet). Note that this isn't "just a warning light" - some channels are initialized to +/- 1e20, so if you try turning some filters on, you will deliver a massive kick to the optics. Restarting it crashed c1lsc (this is not unexpected behavior - the only way to clear the DQ error is to restart the model, and empirically, the success rate is ~50%). The reboot script brought everything back online smoothly, and the second time, c1oaf started without any issues.

While looking at the CDS overview screen, I noticed that the c1scy model was reporting frequent RFM errors for the C1:SCY-RFM_ETMY_LSC channel (but none of the others). On the sender model (c1rfm), no errors were being reported. The diag reset button / mxstream restart didn't really work either. See Attachment #1. Just restarting the c1scy model didn't fix the error - I had to reboot the machine and restart the models, and now no errors are being reported.

Attachment #2 shows the current nominal CDS status - the red light on c1lsc is due to some missing c1dnn channels (I'll remove these at the next c1lsc model change because I don't want to unnecessarily reboot the vertex FEs), and the c1omc model is obsolete I guess. c1daf isn't running right now, but once I get the new fiber (ordered), I'm gonna restart this model as well.

P.S. The ALS temperature sliders are not SDF-ed. So when the model was restarted, I had to change the sliders back to their old values to get the beat back in the usable range.

Attachment 1: SCYerrors.png
SCYerrors.png
Attachment 2: CDSnormal.png
CDSnormal.png
  15122   Wed Jan 15 08:55:14 2020   gautam   Update   CDS   Yearly DAQD fix

Summary:

Every new year (on Dec 31 or Jan 1), all of the realtime models will report a "0x4000" error. This happens because an offset in the GPStime driver does not get updated. Here is how it can be fixed (a slightly modified version of what was done at LASTI).

Steps to fix the DC errors:

  1. ssh into FB machine. 
  2. Edit the file /opt/rtcds/rtscore/release/src/include/drv/spectracomGPS.c:
    • Look for the code block with a text string that reads something like
      /* 2019 had 365 days and no leap seconds */
                   pHardware->gpsOffset += 31536000;
    • Copy and paste the above block once for the appropriate number of years of offset you are adding, and edit the comment string accordingly (see the sketch after this list for the arithmetic).
  3. Navigate to /opt/rtcds/rtscore/release/src/drv/symmetricom. Run the following commands:
    sudo make
    sudo make install
  4. Stop all the daqd processes and reload symmetricom:
    sudo systemctl stop daqd_*
    sudo modprobe -r symmetricom
    sudo modprobe symmetricom
  5. Re-start the daqd processes:
    sudo service daqd_* start
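
For reference, the increment added in step 2 is just the number of seconds in the elapsed year(s), so the numbers can be sanity-checked with a trivial bit of standalone arithmetic (Python; this is not part of the RTS driver code, and the leap_seconds argument is only a reminder that the driver comments track leap seconds separately):

    # Quick sanity check of the gpsOffset increment used in step 2
    def seconds_in_year(days, leap_seconds=0):
        return days * 24 * 3600 + leap_seconds

    print(seconds_in_year(365))   # 31536000 -> matches the 2019 increment above
    print(seconds_in_year(366))   # 31622400 -> what a leap year (e.g. 2020) would need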

Independent of this, there is a 1 second offset between the gpstimes reported by /proc/gps and gpstime. However, this doesn't seem to drift. We had effected a static offset to correct for this in the daqd config files, and it looks like these do not need to be updated on a yearly basis. All the daqd indicators are now green, see Attachment #1.

Attachment 1: DCerrors_fixed.png
DCerrors_fixed.png
  15223   Tue Feb 25 16:17:57 2020   Gautam, Hang   Update   CDS

Seems that the GPS is out of sync on donatella. We could not get any data from diaggui...

  15237   Mon Mar 2 16:14:47 2020   gautam   Update   CDS   some target directory cleanup

$TARGET_DIR = /cvs/cds/caltech/target

  • $TARGET_DIR/c1psl and $TARGET_DIR/c1iool0 moved to $TARGET_DIR/preAcromag_oldVME/
  • $TARGET_DIR/c1psl1 moved to $TARGET_DIR/c1psl 
  • $TARGET_DIR/c1psl/*.service and $TARGET_DIR/C1_PSL.cmd modified - I executed :%s/c1psl1/c1psl/g in vim (a scripted equivalent is sketched after this list).
  • $TARGET_DIR/preAcromag_oldVME/c1psl/autoBurt.req and $TARGET_DIR/preAcromag_oldVME/c1iool0/autoBurt.req concatenated into $TARGET_DIR/c1psl/autoBurt.req. The first snapshot at 16:19 has been verified.
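
For reference, here is a minimal sketch (Python, hypothetical) of the c1psl1 -> c1psl substitution done with a script instead of vim. The file list is my guess based on the paths listed above (in particular, double-check where C1_PSL.cmd actually lives before running anything like this on the target directory).

    import pathlib

    TARGET = pathlib.Path("/cvs/cds/caltech/target/c1psl")

    # Files mentioned above that carry the old host name; adjust paths as needed
    candidates = list(TARGET.glob("*.service")) + [TARGET / "C1_PSL.cmd"]

    for f in candidates:
        if f.exists():
            f.write_text(f.read_text().replace("c1psl1", "c1psl"))
            print("updated", f)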

It remains to (Jon is taking care of these)

  • add a line to modbusIOC.service on the new c1psl machine that restores the latest burt snapshot on startup (this necessitated installation of a debian jessie libXp6 package on our debian buster machine because our shared EPICS is soooooooooooooo oooooooold)
  • change the hostname from c1psl1 to c1psl
  • update martian.hosts
  15239   Mon Mar 2 16:35:12 2020   gautam   Update   CDS   c1psl test status

Channel list with test status
== Test Status ==

[done] Lock PMC and IMC
[done] IMC Servo board test
[done] IMC LO Det Mon channel check
[0th order] WFS quadrant DC mon
[none] WFS I/F monitors
[0th order] WFS attenuators
[none] IOO QPD channels
[done] FSS readbacks 
[done] PMC readbacks


Some more detailed elogs about the individual tests will follow.

Basically, I have characterized the IMC Servo board in detail. The summary finding is that the IN2 (=AO gain) slider needs to be investigated. 

All other channels need to be verified more thoroughly; my basic checks were only meant to guarantee the core interferometer functionality, which is what is important to me.

  15240   Mon Mar 2 19:32:41 2020   gautam   Update   CDS   c1rfm errors

Had to reboot both end machines and the c1rfm model to get the TRX and TRY signals to the LSC models. Now both arms can be locked using POX/POY respectively.

Attachment 1: RFMerrors.png
RFMerrors.png
  15248   Wed Mar 4 12:25:11 2020   gautam   Update   CDS   BIO1 on c1psl is dead

There was some work done on the Acro crate this morning. It is unclear whether this is related, but I found that the IMC servo board IN1 slider doesn't respond anymore, even though I had tested it and verified it to be working. Patient debugging showed that BIO1 (and only that Acromag unit, with the static IP 192.168.114.61) doesn't show up on the c1psl subnet. Hopefully it's just a loose network cable; if not, we will switch out the unit in the afternoon.

Jon is going to make a python script which iteratively pings all devices on the subnet and we will put this info on an MEDM screen to catch this kind of silent failure.
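
Something along these lines should do it - this is just a hypothetical sketch (Python, calling the system ping), not the script Jon is writing, and the name/IP map only contains the BIO1 address quoted above:

    import subprocess

    # Hypothetical name -> IP map of the Acromag units on the c1psl subnet
    ACROMAGS = {
        "BIO1": "192.168.114.61",
        # "ADC0": "192.168.114.xx", ...  (fill in the rest of the crate)
    }

    def is_alive(ip, timeout_s=1):
        # Return True if the host answers a single ICMP ping
        result = subprocess.run(["ping", "-c", "1", "-W", str(timeout_s), ip],
                                stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
        return result.returncode == 0

    for name, ip in ACROMAGS.items():
        print("%s (%s): %s" % (name, ip, "alive" if is_alive(ip) else "NOT RESPONDING"))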

  15250   Wed Mar 4 16:54:43 2020   gautam   Update   CDS   c1auxex temporarily disconnected

To debug a problem with the new c1psl (later elog), we needed a Supermicro EPICS server that was using the shared EPICS/modbus/asyn binaries rather than a local install. Of those available in the lab (c1iscaux, c1vac, c1susaux being the others), this was the only one which uses the shared install. So I 

  • turned the slow bias voltages to 0
  • shutdown the watchdog
  • disconnected the Acromag crate in 1X9 from the 192.168.114.xxx subnet at the supermicro end
  • connected a test ADC to the local subnet using a different ethernet cable (leaving the original one dangling)
  • ran some software tests to see if we could open up a communication line to the test ADC using modbus without any errors being thrown
  • removed the test ADC and restored the ethernet connection.

At that point Jon reset the software end, and I restored the slow bias voltages and re-enabled the local damping. The optic seems to have damped okay. The Oplev spot is back at roughly the center of the QPD and the green beam can be locked to a TEM00 mode (so the alignment is okay - the IR beam is unavailable while the c1psl issues are being sorted out, but I judge that things are back to the nominal state now).

  15285   Thu Mar 26 22:31:34 2020   Yehonathan   Update   CDS   C1AUXEY wiring + channel list

I have made a wiring + channel list that needs to be included in the new C1AUXEY Acromag.

It was mostly copied from C1AUXEX.

I ignored the IPANG channels since they are going to be removed from the table.

  15287   Tue Mar 31 09:39:41 2020   gautam   Update   CDS   Foton for shaped noise injections

I'd like to re-measure the transfer function from driving MC2 position to the MC_L_DQ channel (for feedforward purposes). Swept sine would be one option, but I can't get the "Envelope" feature of DTT to work: the excitation amplitude isn't getting scaled as specified in the envelope, so I'm unable to make the measurement near 1 Hz (which is where the FF is effective). I see some scattered mentions of such an issue in past elogs, but no mention of a fix (I also feel like I have gotten the envelope function to work for some other loop measurement templates). So I thought I'd try a broadband noise injection, since that seems to have been the approach followed in the past. Again, the noise injection needs to be shaped around ~1 Hz to avoid knocking the IMC out of lock, but I can't get Foton to do shaped noise injections because it doesn't inherit the sample rate when launched from inside DTT/awggui. This is not a new issue - does anyone know the fix?

Note that we are using the gds2.15 install of foton, but the pre-packaged foton that comes with the SL7 installation doesn't work either.

Update:

The envelope feature for swept-sine wasn't working because I apparently specified the frequency grid in the wrong order. Eric von Reis has been notified to include a sorting algorithm in future DTT so that this can be in arbitrary order. Fixing that allows me to run a swept sine with an enveloped excitation amplitude and hence get the TF I want, but still no shaped noise injections via foton 😢
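
In the meantime, the intended shaping can at least be prototyped offline. Here is a minimal sketch (Python/scipy) of band-limiting white noise around ~1 Hz; the sample rate and corner frequencies are placeholders, and this only illustrates the target spectrum - it is not a replacement for the awg/foton injection path.

    import numpy as np
    import scipy.signal as sig

    fs = 2048                       # assumed model rate [Hz]
    t = np.arange(0, 60, 1 / fs)
    white = np.random.randn(t.size)

    # 2nd-order Butterworth bandpass, 0.3 - 3 Hz (placeholder corners around the ~1 Hz band)
    sos = sig.butter(2, [0.3, 3.0], btype="bandpass", fs=fs, output="sos")
    shaped = sig.sosfilt(sos, white)

    f, Pxx = sig.welch(shaped, fs=fs, nperseg=16 * fs)
    print("RMS of shaped noise: %.3f (arb. units)" % np.std(shaped))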

  15288   Tue Mar 31 23:35:50 2020   rana   Update   CDS   Foton for shaped noise injections

Do you really mean awggui cannot make shaped noise injections via its foton text box? That has always worked for me in the past.

If this is broken, I'm suspicious there have been some package installs to the shared dirs by someone.

  15289   Tue Mar 31 23:54:57 2020   gautam   Update   CDS   Foton for shaped noise injections

The problem is that foton does not inherit the model sample rate when launched from DTT/awggui. This is likely some shared/linked/dynamic library issue, the binaries we are running are precompiled presumably for some other OS. I've never gotten this to work since we changed to SL7 (but I did use it successfully in 2017 with the Ubuntu12 install).

Quote:

Do you really mean awggui cannot make shaped noise injections via its foton text box? That has always worked for me in the past.

If this is broken, I'm suspicious there have been some package installs to the shared dirs by someone.

  15292   Thu Apr 2 16:31:33 2020   Jon   Update   CDS   C1AUXEY wiring + channel list
Quote:

I have made a wiring + channel list that needs to be included in the new C1AUXEY Acromag.

I used Yehonathan's wiring assignments to lay the rest of groundwork for the final slow controls machine upgrade, c1auxey. Actions completed:

  • Created an internal wiring diagram for assembling the Acromag chassis (log in with LIGO.ORG credentials to view/edit)
  • Created a new target directory on the network drive:
/cvs/cds/caltech/target/c1auxey1

The "1" will be dropped after the new system is permanently installed.

  • Populated the target directory with files:
    • modbusIOC.service - wraps the EPICS IOC as a systemd service
    • ETMYaux.env - defines the EPICS environment variables
    • ETMYaux.cmd - command file to set up the EPICS IOC
    • ETMYaux.sh - enables DAC outputs to the suspension (executed lastly)
  • Created the EPICS channel databases:
    • ETMYaux.db - migration of the existing database
    • c1auxey_state.db - contains logic for loopback monitoring of the IOC "alive" state (visible from Sitemap > CDS > Slow Controls Status)

Hardware-wise, this system will require:

  • 2 Acromag XT-1221 units (ADC)
  • 1 Acromag XT-1541 unit (DAC)
  • 1 Acromag XT-1111 unit (sinking BIO)

I know that we do have these quantities left on hand. The next steps are to set up the Supermicro host and begin assembling the Acromag chassis. Both of these activities require an in-person presence, so I think this is as far as we can advance this project for now.

  15293   Thu Apr 2 22:19:18 2020   Koji   Update   CDS   C1AUXEY wiring + channel list

We want to migrate the end shutter controls from c1aux to the end Acromags. Could you include them in the list if they are not there yet?

This will let us remove c1aux from the rack, I believe.

 

  15294   Fri Apr 3 12:09:53 2020   Jon   Update   CDS   C1AUXEY wiring + channel list
Quote:

We want to migrate the end shutter controls from c1aux to the end Acromags. Could you include them in the list if they are not there yet?

This will let us remove c1aux from the rack, I believe.

Yehonathan's list does include C1:AUX-GREEN_Y_Shutter and I copied its definition from /cvs/cds/caltech/target/c1aux/ShutterInterlock.db into the new ETMYaux.db file.

I noticed ShutterInterlock.db still contains about a dozen channels. Some of them appear to be ghosts (like the C1:AUX-PSL_Shutter[...] set, which has since become C1:PSL-PSL_Shutter[...] hosted on c1psl) but others like C1:AUX-GREEN_X_Shutter appear to still be in active use.

  15383   Mon Jun 8 18:14:55 2020   gautam   Update   CDS   Vertex FEs crashed

Summary:

Around 5pm local time, the three vertex FEs crashed. AFAIK, no one was in the lab or working on anything CDS related, so this is worrying.

Details:

  • Reboot script was used to bring all FEs back - only soft reboots were required.
  • The IMC and arms can now be locked.
  • I think the combination of burt + SDF should have reverted all the settings to where they should be, but if something appears off, it could be that some EPICS value didn't get reset correctly.
Attachment 1: FEcrash_CDSoverview.png
FEcrash_CDSoverview.png
  15423   Mon Jun 22 17:51:50 2020   gautam   Update   CDS   c1iscaux was down

The machine needed a hard reboot as it was un-ssh-able. 

The exact time that the machine went down is unknown because the blinkys were not DQ-ed. I've now added these to the EDCU to make these channels actually useful, and we may look back on the reliability (or otherwise) of the Acromag system. To my memory, this is the ~5th time one of the new Acromag servers has needed a hard reboot. While this may be less frequent (?) than the VME machines, perhaps there is some other reason for these dropouts. Maybe something to do with the martian network?

Anyway the machine is back up and running now.

  15462   Thu Jul 9 16:02:33 2020   Jon   HowTo   CDS   Procedure for setting up BHD front-ends

Here is the procedure for setting up the three new BHD front-ends (c1bhd, c1sus2, c1ioo - replacement). This plan is based on technical advice from Rolf Bork and Keith Thorne.

The overall topology for each machine is shown here. As all our existing front-ends use (obsolete) Dolphin PCIe Gen1 cards for IPC, we have elected to re-use Dolphin Gen1 cards removed from the sites. Different PCIe generations of Dolphin cards cannot be mixed, so the only alternative would be to upgrade every 40m machine. However the drivers for these Gen1 Dolphin cards were last updated in 2016. Consequently, they do not support the latest Linux kernel (4.x) which forces us to install a near-obsolete OS for compatibility (Debian 8).

Hardware

Software

  • OS: Debian 8.11 (Linux kernel 3.16)
  • IPC card driver: Dolphin DX 4.4.5 [works only with Linux kernel 2.6 to 3.x]
  • I/O card driver: None required, per the manual

Install Procedure

  1. Follow Keith Thorne's procedure for setting up Debian 8 front-ends
  2. Apply the real-time kernel patches developed for Debian 9, but modified for kernel 3.16 [these are UNTESTED against Debian 8; Keith thinks they may work, but they weren't discovered until after the Debian 9 upgrade]
  3. Install the PCIe expansion cards and Dolphin DX driver (driver installation procedure)
  15515   Wed Aug 12 17:36:42 2020   gautam   Update   CDS   Timing distribution slot availability

See Attachment #1. J8 was connected to a "LASTI timing slave" sitting in the rack that Chiara lives in - we don't use this for anything, and I confirmed that there was no effect on the RTCDS when I pulled that fiber out. The LASTI timing slave also had a blinky that was blinking when the fiber was plugged in - which I take to mean that the slot works.

Can we get away with just using these two available slots, J8 and J13? Do we really need three new expansion chassis?

Attachment 1: IMG_8706.JPG
IMG_8706.JPG
  15518   Wed Aug 12 20:14:06 2020   Koji   Update   CDS   Timing distribution slot availability

I believe we will use two new chassis at most. We'll replace the c1ioo Sun machine with a Supermicro, but we will recycle the existing timing system.

  15521   Thu Aug 13 11:30:19 2020   gautam   Update   CDS   Timing distribution slot availability

That's great. I wonder if we can also get away with not adding new Dolphin infrastructure. I'd really like to avoid changing any IPC drivers.

Quote:

I believe we will use two new chassis at most. We'll replace the c1ioo Sun machine with a Supermicro, but we will recycle the existing timing system.

  15522   Thu Aug 13 13:35:13 2020   Koji   Update   CDS   Timing distribution slot availability

The new Dolphin will eventually help us. But the installation is an invasive change to the existing system and should be done at the installation stage of the 40m BHD.

  15524   Fri Aug 14 00:01:55 2020   gautam   Update   CDS   BHD / OMC model channels now added to autoburt

I added the EPICS channels for the c1omc model (gains, matrix elements, etc.) to the autoburt so that we have a record of these, since we expect these models to be running somewhat regularly now, and I also expect many CDS crashes.

  15525   Fri Aug 14 10:03:37 2020   Jon   Update   CDS   Timing distribution slot availability

That's great news that we won't have to worry about a new timing fanout for the two new machines, c1bhd and c1sus2. And there's no plan to change Dolphin IPC drivers: the plan is only to install the same (older) version of the driver on the two new machines and plug into free slots in the existing switch.

Quote:

The new Dolphin will eventually help us. But the installation is an invasive change to the existing system and should be done at the installation stage of the 40m BHD.

  15564   Tue Sep 8 11:49:04 2020   gautam   Update   CDS   Some path changes

I edited /diskless/root.jessie/home/controls/.bashrc so that I don't have to keep doing this every time I do a model recompile.

Quote:

Where is this variable set and how can I add the new paths to it? 

export RCG_LIB_PATH=/opt/rtcds/userapps/release/isc/c1/models/isc/:/opt/rtcds/userapps/release/isc/c1/models/cds/:/opt/rtcds/userapps/release/isc/c1/models/sus/:$RCG_LIB_PATH
  15578   Wed Sep 16 17:44:27 2020   gautam   Update   CDS   All vertex FE models restarted

I had to make a CDS change to the c1lsc model in an effort to get a few more signals into the models. Rather than risk requiring hard reboots (typically my experience if I try to restart a model), I opted for the more deterministic scripted reboot, at the expense of spending ~20 mins to get everything back up and running.


Update 2230: this was more complicated than expected - a nuclear reboot was necessary but now everything is back online and functioning as expected. While all the CDS indicators were green when I wrote this up at ~1800, the c1sus model was having frequent CPU overflows (execution time > 60 us). Not sure why this happened, or why a hard power reboot of everything fixed it, but I'm not delving into this.

The point of all this was that I can now simultaneously digitize 4 channels - 2 DCPDs, and 2 demodulated quadratures of an RF signal.

Attachment 1: CDS.png
CDS.png
  15609   Sat Oct 3 16:51:27 2020   gautam   Update   CDS   RFM errors

Attachment #1 shows that the c1rfm model isn't able to receive any signals from the front end machines at EX and EY. Attachment #2 shows that the problem appears to have started at ~4:30 am this morning - I certainly wasn't doing anything with the IFO at that time.

I don't know what kind of error this is - what does it mean that the receiving model shows errors but the sender shows no errors? It is not a new kind of error, and the solution in the past has been a series of model reboots, but it'd be nice if we could fix such issues because it eats up a lot of time to reboot all the vertex machines. There is no diagnostic information available in any of the places I looked. I'll ask the CDS group for help, but I'm not sure if they'll have anything useful since this RFM technology has been retired at the sites (?).

In the meantime, arm cavity locking in the usual way isn't possible since we don't have the trigger signals from the arm cavity transmission. 


Update 1500 4 Oct: soft reboots of models didn't do the trick so I had to resort to hard reboots of all FEs/expansion chassis. Now the signals seem to be okay.

Attachment 1: RFMstat.png
RFMstat.png
Attachment 2: RFMerrs.png
RFMerrs.png
  15646   Wed Oct 28 09:35:00 2020   Koji   Update   CDS   RFM errors

I'm starting the model restarts from remote. Then later I'll show up in the lab to do more hard resets.
==> It seems that the RFM errors are gone. Here are the steps.

  1. Shutdown all the watchdogs
  2. login to c1iscex. Shutdown all the realtime models: rtcds kill --all
  3. login to c1iscey. Shutdown all the realtime models: rtcds kill --all
  4. run scripts/cds/rebootC1LSC.sh on pianosa
  5. reboot c1iscex
  6. reboot c1iscey
  7. Wait until the script has brought all the machines/models back up
  8. restart c1iscex models
  9. restart c1iscey models
  10. some IPC errors are still visible on the CDS status screen. Launch c1daf and c1oaf

 

Attachment 1: Screen_Shot_2020-10-28_at_10.06.00.png
Screen_Shot_2020-10-28_at_10.06.00.png
  15659   Wed Nov 4 17:14:49 2020   gautam   Update   CDS   c1bhd setup

I am working on the setup of a CDS FE, so please do not attempt any remote login to the IPMI interface of c1bhd until I'm done.

  15663   Fri Nov 6 14:27:16 2020   gautam   Update   CDS   c1bhd setup - diskless boot

I was able to boot one of the 3 new Supermicro machines, which I christened c1bhd, in a diskless way (with the boot image hosted on fb, as is the case for all the other realtime FEs in the lab). This is just a first test, but it is reassuring that we can get this custom linux kernel to boot on the new hardware. Some errors about dolphin drivers are thrown at startup but this is to be expected since the server isn't connected to the dolphin network yet. We have the Dolphin adaptor card in hand, but since we have to get another PCIe card (supposedly from LLO according to the BHD spreadsheet), I defer installing this in the server chassis until we have all the necessary hardware on hand.

I also have to figure out the correct BIOS settings for this to really run effectively as an FE (we have to disable all the "unnecessary" system level services) - these machines have BIOS v3.2, as opposed to the older vintages for which there are instructions from K.T. et al.

There may yet be issues with drivers, but this is all the testing that can be done without getting an expansion chassis. After the vent and recovering the IFO, I may try experimenting with the c1ioo chassis, but I'd much prefer if we can do the testing offline on a subnet that doesn't mess with the regular IFO operation (until we need to test the IPC).

Quote:

I am working on the setup of a CDS FE, so please do not attempt any remote login to the IPMI interface of c1bhd until I'm done.

Attachment 1: Screenshot_2020-11-06_14-26-54.png
Screenshot_2020-11-06_14-26-54.png
  15695   Wed Dec 2 17:54:03 2020   gautam   Update   CDS   FE reboot

As discussed at the meeting, I commenced the recovery of the CDS status at 1750 local time.

  • Started by attempting to just soft-restart the c1rfm model and see if that fixed the issue. It didn't, and what's more, it took down the c1sus machine.
  • So hard reboots of the vertex machines were required. c1iscey also crashed. I was able to keep the EX machine up, but I soft-stopped all the RT models on it.
  • All systems were recovered by 1815. For anyone checking, the DC light on the c1oaf model is red - this is a "known" issue and requires a model restart, but I don't want to get into that now and it doesn't disrupt normal operation.

Single arm POX/POY locking was checked, but not much more. Our IMC WFS are still out of service, so I hand-aligned the IMC a bit; IMC REFL DC went from ~0.3 to ~0.12, which is the usual nominal level.

  15719   Wed Dec 9 15:37:48 2020   gautam   Update   CDS   RFM switch IP addr reset

I suspect what happened here is that the IP didn't get updated when we went from the 131.215.113.xxx system to 192.168.113.xxx system. I fixed it now and can access the web interface. This system is now ready for remote debugging (from inside the martian network obviously). The IP is 192.168.113.90.

Managed to pull this operation off without crashing the RFM network, phew.

BTW, a windows laptop that used to be in the VEA (I last remember it being on the table near MC2 which was cleared sometime to hold the spare suspensions) is missing. Anyone know where this is ?

Attachment 1: Screenshot_2020-12-09_15-39-20.png
Screenshot_2020-12-09_15-39-20.png
Attachment 2: Screenshot_2020-12-09_15-46-46.png
Screenshot_2020-12-09_15-46-46.png
  15737   Fri Dec 18 10:52:17 2020   gautam   Update   CDS   RFM errors

As I was working on the IFO re-alignment just now, the rfm errors popped up again. I don't see any useful diagnostics on the web interface.

Do we want to take this opportunity to configure jumpers and set up the rogue master as Rolf suggested? Of course there's no guarantee that will fix anything, and may possibly make it impossible to recover the current state...

Attachment 1: RFMdiag.png
RFMdiag.png
  15738   Fri Dec 18 22:59:12 2020   Jon   Configuration   CDS   Updated CDS upgrade plan

Attached is the layout for the "intermediate" CDS upgrade option, as was discussed on Wednesday. Under this plan:

  • Existing FEs stay where they are (they are not moved to a single rack)

  • Dolphin IPC remains PCIe Gen 1

  • RFM network is entirely replaced with Dolphin IPC

Please send me any omissions or corrections to the layout.

Attachment 1: CDS_2020_Dec.pdf
CDS_2020_Dec.pdf
Attachment 2: CDS_2020_Dec.graffle
  15742   Mon Dec 21 09:28:50 2020   Jamie   Configuration   CDS   Updated CDS upgrade plan
Quote:

Attached is the layout for the "intermediate" CDS upgrade option, as was discussed on Wednesday. Under this plan:

  • Existing FEs stay where they are (they are not moved to a single rack)

  • Dolphin IPC remains PCIe Gen 1

  • RFM network is entirely replaced with Dolphin IPC

Please send me any omissions or corrections to the layout.

I just want to point out that if you move all the FEs to the same rack, they can all be connected to the Dolphin switch via copper, and you would only have to string a single fiber to every IO rack, rather than the multiple ones now (for network, dolphin, timing, etc.).

  15743   Mon Dec 21 18:18:03 2020   gautam   Update   CDS   Many model changes

The CDS model changes required to get the AS WFS signals into the RTCDS system are rather invasive.

  • We use VCS for these models. Linus Torvalds may question my taste, but I also made local backups of the models, just in case...
  • Particularly, the ADC1 card on c1ioo is completely reconfigured.
  • I also think it's more natural to do all the ASC computations on this machine rather than the c1lsc machine (it is currently done in the c1ass model). So there are many IPC changes as well.
  • I have documented everything carefully, and the compile/install went smoothly.
  • Taking down all the FE servers at 1830 local time
    1. To propagate the model changes
    2. To make a hardware change in the c1rfm card in the c1ioo machine to configure it as "ROGUE MASTER 0"
    3. To clear the RFM errors we are currently suffering from (this requires a model reboot anyway).
  • Recovery was completed by 1930 - the RFM errors are also cleared, and now we have a "ROGUE MASTER 👾" on the network. Pretty smooth, no major issues with the CDS part of the procedure to report.
  • The main issue is that in the AA chassis I built, Ch14 (with the first channel as Ch1) has the output saturated to 28V (differential). I'm not sure what kind of overvoltage protection the ADC has - we frequently have the inputs exceed the spec'd +/-20 V (e.g. when the whitening filters are engaged and the cavity is fringing), but pending further investigation, I am removing the SCSI connection from the rear of the AA chassis.

In terms of computational load, the c1ioo model seems to be able to handle the extra load with no issues - ~35 us out of the 60 us cycle. The RFM model shows no extra computational time.

After this work, the IMC locking and POX/POY locking, and dither alignment servos are working okay. So I have some confidence that my invasive work hasn't completely destroyed everything. There is some hardware around the rear of 1X2 that I will clear tomorrow.

Attachment 1: CDSoverview.png
CDSoverview.png
  15744   Tue Dec 22 22:11:37 2020   gautam   Update   CDS   AA filt repaired and reinstalled

Koji fixed the problematic channel - the issue was a bad solder joint on the input resistors to the THS4131. The board was re-installed. I also made a custom 2x4-pin LEMO-->DB9 cable, so we are now recording the PMC and FSS ERR/CTRL channel diagnostics again (spectra tomorrow). Note that Ch32 is recording some sort of DuoTone signal and so is not usable. This is due to a misconfiguration - ADC0 CH31 is the one which is supposed to be reserved for this timing signal, and not ADC1 as we currently have. When we swap the c1ioo hosts, we should fix this issue.

I also did most of the work to make the MEDM screens for the revised ASC topology, trying to mirror the site screens where possible. The overview screen remains to be done. I also loaded the anti-whitening filters (z:p 150:15) at the demodulated WFS input signal entry points. We don't have remote whitening switching capability at this time, so I'll test the switching manually at some point.

Quote:

The main issue is that in the AA chassis I built, Ch14 (with the first channel as Ch1) has the output saturated to 28V (differential). I'm not sure what kind of overvoltage protection the ADC has - we frequently have the inputs exceed the spec'd +/-20 V (e.g. when the whitening filters are engaged and the cavity is fringing), but pending further investigation, I am removing the SCSI connection from the rear of the AA chassis.

  15745   Wed Dec 23 10:13:08 2020   gautam   Update   CDS   Near term upgrades

Summary:

  1. There appears to be an insufficient number of PCIe slots in the new Supermicro servers that were bought for the BHD upgrade.
  2. Modulo a "riser card", we have all the parts in hand to put one of the end machines on the Dolphin network. If the Rogue Master doesn't improve the situation, we should consider installing a Dolphin card in the c1iscex server and connecting it to the Dolphin network at the next opportunity. 

Details:

Last night, I briefly spoke with Koji about the CDS upgrade plan. This is a follow up.

Each server needs a minimum of two peripheral devices added to the PCIe bus:

  • A PCIe interface card that connects the server to the Expansion Chassis (copper or optical fiber depending on distance between the two).
  • A Dolphin or RFM card that makes the IPC interconnects. 
  • I'm pretty certain the expansion chassis cannot support the Dolphin / RFM card (it's only meant to be for ADCs/DACs/BIO cards). At least, all the existing servers in the 40m have at least 2 PCIe cards installed, and I think we have enough to worry about without trying to engineer a hack.
  • I attach some photos of new and old Supermicro servers to illustrate my point, see Attachment #1

As for the second issue, the main question is whether we have an open PCIe slot on the c1iscex machine to install a Dolphin card. It looks like the 2 standard slots are taken (see Attachment #1), but a "low profile" slot is available. I can't find what the exact models of the Supermicro servers installed back in 2010 are, but maybe it's this one? It's a good match visually anyways. The manual says a "riser card" is required. I don't know if such a riser is already installed.

Questions I have, Rolf is probably the best person to answer:

  1. Can we even use the specified host adaptor, HIB25-X4, which is "PCIe Gen2", with the "PCIe Gen3" slots on the new Supermicro servers we bought? Anyway, the fact that the new servers have only 1 PCIe slot probably makes this a moot point.
  2. Can we overcome this slot limitation by installing the Dolphin / RFM card in the expansion chassis?
  3. In the short run (i.e. if it is much faster than the full CDS shipment we are going to receive), can we get (from the CDS test stand in Downs or the site) 
    • A riser card so that we may install the Gen 1 Dolphin card (which we have in hand) in the c1iscex server?
    • A compatible (not sure what PCIe generation we need) PCIe host to ECA kit so we can test out the replacement for the Sun Microsystems c1ioo server?
    • A spare RFM card (VMIC 5565, also for the above purpose). 
  4. What sort of a test should we run to test the new Dolphin connection? Make a "null channel" differencing the same signal (e.g. TRX) sent via RFM and Dolphin? Or is there some better checksum diagnostic?
Attachment 1: IMG_0020.pdf
IMG_0020.pdf
  15746   Wed Dec 23 23:06:45 2020   gautam   Configuration   CDS   Updated CDS upgrade plan
  1. The diagram should clearly show the host machines and the expansion chassis and the interconnects between them.
  2. We no longer have any Gentoo bootserver or diskless FEs.
  3. The "c1lsc" host is in 1X4 not 1Y3.
  4. The connection between c1lsc and Dolphin switch is copper not fiber. I don't know how many Gbps it is. But if the switch is 10 Gbps, are they really selling interface cables that have lower speed? The datasheet says 10 Gbps.
  5. The control room workstations - Debian 10 (rossa) is the way forward, I believe. It is true that pianosa remains SL7 (and we should continue to keep it so until all the other machines have been upgraded and tested on Debian 10).
  6. There is no "IOO/OAF". The host is called "c1ioo".
  7. The interconnect between Dolphin switch and c1ioo host is via fiber not copper.
  8. It'd be good to have an accurate diagram of the current situation as well (with the RFM network).
  9. I'm not sure if the 1Y1 rack can accommodate 2 FEs and 2 expansion chassis. Maybe if we clear everything else there out...
  10. There are 2 "2GB/s" Copper traces. I think the legend should make clear what's going on - i.e. which cables are ethernet (Cat 6? Cat 5? What's the speed limitation? The cable? Or the switch?), which are PCIe cables etc etc. 

I don't have omnigraffle - what about uploading the source doc in a format that the excellent (and free) draw.io can handle? I think we can do a much better job of making this diagram reflect reality. There should also be a corresponding diagram for the Acromag system (but that doesn't have to be tied to this task). Megatron (scripts machine) and nodus should be added to that diagram as well.

Please send me any omissions or corrections to the layout.

  15760   Tue Jan 12 08:21:47 2021   anchal   HowTo   CDS   Acromag wiring investigation

I used an Acromag XT1221 in CTN to play around with different wiring and see what works.  Following are my findings:


Referenced Single Ended Source (Attachment 1):

  • If the source signal is referenced single ended, i.e. the signal is only on the positive output and the negative side is tied to GND on the source side AND this GND is also shared by the power supply powering the Acromag, then no additional wiring is required.
  • The GND common to the power supply and the source is not required to be Earth GND, but if it is, it should be tied to Earth at one point only.
  • RTN terminal on Acromag can be left floating or tied to IN- terminal.

Floating Single Ended Source (Attachment 2):

  • If the source signal is floating single-ended, i.e. the signal is only on the positive output and the negative output is a floating GND on the source, then IN- should be connected to RTN.
  • This is the case for handheld calibrators or battery powered devices.
  • Note that there is no need to connect GND of power supply to the floating GND on the source.

Differential Source (Attachment 3):

  • If the source is differential output i.e. the signal is on both the positive output and the negative output, then connect one of the RTN terminals on Acromag to Earth GND. It has to be Earth GND for this to work.
  • Note that you can no longer tie the IN- of different signals to RTN as they are all carrying different negative output from the source.
  • Earth GND at RTN gives the Acromag a stable voltage reference to measure the signals coming in on IN+ and IN- against. And the most stable voltage reference is of course Earth GND.

Conclusion:

  • We might have a mix of these three types of signals coming to a single Acromag box. In that case, we have to make sure we are not connecting the different IN- to each other (maybe through RTN) as the differential negative inputs carry signal, not a constant voltage value.
  • In this case, I think it would be fine to always use the differential signal wiring diagram with RTN connected to Earth GND.
  • There's no difference in software configuration for the two types of inputs, differential or single-ended.
  • For cases in which we install the Acromag box inside a rack-mount chassis with an associated board (example: CTN/2248), it is OK, and maybe best, to use the Attachment 1 wiring diagram.

Comments and suggestions are welcome.


Related elog posts:

40m/14841    40m/15134


Edit Tue Jan 26 12:44:19 2021 :

Note that the third wiring diagram mentioned actually does not work. It is an error in judgement. See 40m/15762 for seeing what happens during this.

Attachment 1: SingleEndedNonFloatingWiring.pdf
SingleEndedNonFloatingWiring.pdf
Attachment 2: SingleEndedFloatingWiring.pdf
SingleEndedFloatingWiring.pdf
Attachment 3: DifferentialSignalWiring.pdf
DifferentialSignalWiring.pdf
  15761   Tue Jan 12 11:42:38 2021   gautam   HowTo   CDS   Acromag wiring investigation

Thanks for the systematic effort.

  1. Can you please post some time domain plots (ndscope preferably, or StripTool) to clearly show the different failure modes?
  2. The majority of our AI channels are "Referenced Single Ended Source" in your terminology. At least on the c1psl Acromag crate, there are no channels that are truly differential drive (case #3 in your terminology). I think the point is that we saw noisy inputs when the IN- wasn't connected to RTN. e.g. the thorlabs photodiode has a BNC output with the shield connected to GND and the central conductor carrying a signal, and that channel was noisy when the RTN was unconnected. Is that consistent with your findings?
  3. What is the prescription when we have multiple power supplies (mixture of Sorensens in multiple racks, Thorlabs photodiodes and other devices powered by an AC/DC converter) involved?
  4. I'm still not entirely convinced of what the solution is, or that this is the whole picture. On 8 Jan, I disconnected (and then re-connected) the FSS RMTEMP sensor from the Acromag box, to check if the sensor output was noisy or if it was the Acromag. The problem surfaced on Dec 15, when I installed some new electronics in the rack (though none of them were connected to the Acromag directly; the only common point was the Sorensen DCPS). And between 8 Jan and today, the noise RMS has decreased back to the nominal level, without me doing anything to the grounding. How to reconcile this?
  15762   Wed Jan 13 16:09:29 2021   Anchal   HowTo   CDS   Acromag wiring investigation

I'm working on a better wiring diagram that takes into account multiple power supplies, how their GND is passed forward to the circuits or sensors using those power supplies and what possible wiring configurations on Acromag would give low noise. I think I have two configurations in mind which I will test and update here with data and better diagrams.


I took some striptool images earlier yesterday. So I'm dumping them here for further comments or inferences.

Attachment 1: SimpleTestsStriptoolImages.pdf
SimpleTestsStriptoolImages.pdf