  14187   Tue Aug 28 18:39:41 2018   Jon   Update   CDS   C1LSC, C1AUX reboots

I found c1lsc unresponsive again today. Following the procedure in elog #13935, I ran the rebootC1LSC.sh script to perform a soft reboot of c1lsc and restart the epics processes on c1lsc, c1sus, and c1ioo. It worked. I also manually restarted one unresponsive slow machine, c1aux.

After the restarts, the CDS overview page shows that the first three models on c1lsc are online (image attached). The above elog notes that c1oaf had to be restarted manually, so I attempted to do that: I connected via ssh to c1lsc and ran the startc1oaf script. This failed as well, however.

In this state I was able to lock the MICH configuration, which is sufficient for my purposes for now, but I was not able to lock either of the arm cavities. Are some of the still-dead models necessary to lock in resonant configurations?

  14192   Tue Sep 4 10:14:11 2018   gautam   Update   CDS   CDS status update

c1lsc crashed again. I've contacted Rolf/JHanks for help since I'm out of ideas on what can be done to fix this problem.

Quote:

Starting c1cal now, let's see if the other c1lsc FE models are affected at all... Moreover, since MC1 seems to be well-behaved, I'm going to restore the nominal eurocrate configuration (sans extender board) tomorrow.

  14193   Wed Sep 5 10:59:23 2018   gautam   Update   CDS   CDS status update

Rolf came by today morning. For now, we've restarted the FE machine and the expansion chassis (note that the correct order in which to do this is: turn off computer--->turn off expansion chassis--->turn on expansion chassis--->turn on computer). The debugging measures Rolf suggested are (i) to replace the old generation ADC card in the expansion chassis which has a red indicator light always on and (ii) to replace the PCIe fiber (2010 make) running from the c1lsc front-end machine in 1X6 to the expansion chassis in 1Y3, as the manufacturer has suggested that pre-2012 versions of the fiber are prone to failure. We will do these opportunistically and see if there is any improvement in the situation.

Another tip from Rolf: if the c1lsc FE is responsive but the models have crashed, then doing sudo reboot by ssh-ing into c1lsc should suffice* (i.e. it shouldn't take down the models on the other vertex FEs, although if the FE is unresponsive and you hard reboot it, this may still be a problem). I've modified the c1lsc reboot script accordingly.

* Seems like this can still lead to the other vertex FEs crashing, so I'm leaving the reboot script as is (so all vertex machines are softly rebooted when c1lsc models crash).

Quote:

c1lsc crashed again. I've contacted Rolf/JHanks for help since I'm out of ideas on what can be done to fix this problem.

  14194   Thu Sep 6 14:21:26 2018   gautam   Update   CDS   ADC replacement in c1lsc expansion chassis

Todd E. came by this morning and gave us (i) 1x new ADC card and (ii) 1x roll of 100m (2017 vintage) PCIe fiber. This afternoon, I replaced the old ADC card in the c1lsc expansion chassis, and have returned the old card to Todd. The PCIe fiber replacement is a more involved project (Steve is acquiring some protective tubing to route it from the FE in 1X6 to the expansion chassis in 1Y3), but hopefully the problem was the ADC card with the red indicator light, and replacing it has solved the issue. CDS is back to what is now the nominal state (Attachment #1) and Yarm is locked for Jon to work on his IFO coupling study. We will monitor the stability in the coming days.

Quote:

(i) to replace the old generation ADC card in the expansion chassis which has a red indicator light always on and (ii) to replace the PCIe fiber (2010 make) running from the c1lsc front-end machine in 1X6 to the expansion chassis in 1Y3, as the manufacturer has suggested that pre-2012 versions of the fiber are prone to failure. We will do these opportunistically and see if there is any improvement in the situation.

  14195   Fri Sep 7 12:35:14 2018   gautam   Update   CDS   ADC replacement in c1lsc expansion chassis

Looks like the ADC was not to blame, same symptoms persist.

Quote:

The PCIe fiber replacement is a more involved project (Steve is acquiring some protective tubing to route it from the FE in 1X6 to the expansion chassis in 1Y3), but hopefully the problem was the ADC card with red indicator light, and replacing it has solved the issue.

  14196   Mon Sep 10 12:44:48 2018   Jon   Update   CDS   ADC replacement in c1lsc expansion chassis

Gautam and I restarted the models on c1lsc, c1ioo, and c1sus. The LSC system is functioning again. We found that restarting only c1lsc, as Rolf had recommended, did in fact kill the models running on the other two machines. We simply reverted the rebootC1LSC.sh script to its previous form, since that does work. I'll keep using it as required until the ongoing investigations find the source of the problem.

Quote:

Looks like the ADC was not to blame, same symptoms persist.

Quote:

The PCIe fiber replacement is a more involved project (Steve is acquiring some protective tubing to route it from the FE in 1X6 to the expansion chassis in 1Y3), but hopefully the problem was the ADC card with red indicator light, and replacing it has solved the issue.

 

  14202   Thu Sep 20 11:29:04 2018   gautam   Update   CDS   New PCIe fiber housed

[steve, yuki, gautam]

The plastic tubing/housing for the fiber arrived a couple of days ago. We routed ~40m of fiber through roughly that length of the tubing this morning, using some custom implements Steve sourced. To make sure we didn't damage the fiber during this process, I'm now testing the vertex models with the plastic tubing just routed casually (= illegally) along the floor from 1X4 to 1Y3 (NOTE THAT THE WIKI PAGE DIAGRAM IS OUT OF DATE AND NEEDS TO BE UPDATED), and have plugged in the new fiber to the expansion chassis and the c1lsc front end machine. But I'm seeing a DC error (0x4000), which is indicative of some sort of timing error (Attachment #1) **. Needs more investigation...

Pictures + more procedural details + proper routing of the protected fiber along cable trays after lunch. If this doesn't help the stability problem, we are out of ideas again, so fingers crossed...

** In the past, I have been able to fix the 0x4000 error by manually rebooting fb (simply restarting the daqd processes on fb using sudo systemctl restart daqd_* doesn't seem to fix the problem). Sure enough, it seems to have done the job this time as well (Attachment #2). So my initial impression is that the new fiber is functioning alright.

Quote:

The PCIe fiber replacement is a more involved project (Steve is acquiring some protective tubing to route it from the FE in 1X6 to the expansion chassis in 1Y3)

  14203   Thu Sep 20 16:19:04 2018   gautam   Update   CDS   New PCIe fiber install postponed to tomorrow

[steve, gautam]

This didn't go as smoothly as planned. While there were no issues with the new fiber over the ~3 hours that I left it plugged in, I didn't realize the fiber has distinct ends for the "HOST" and "TARGET" (-5 points to me I guess). So while we had plugged in the ends correctly (by accident) for the pre-lunch test, we switched them while routing the fiber on the overhead cable tray (because the "HOST" end of the cable is close to the reel, and we felt it would be easier to do the routing the other way).

Anyway, we will fix this tomorrow. For now, the old fiber was re-connected, and the models are running. IMC is locked.

Quote:

Pictures + more procedural details + proper routing of the protected fiber along cable trays after lunch. If this doesn't help the stability problem, we are out of ideas again, so fingers crossed...

  14206   Fri Sep 21 16:46:38 2018   gautam   Update   CDS   New PCIe fiber installed and routed

[steve, koji, gautam]

We took another pass at this today, and it seems to have worked - see Attachment #1. I'm leaving CDS in this configuration so that we can investigate stability. IMC could be locked. However, due to the vacuum slow machine having failed, we are going to leave the PSL shutter closed over the weekend.

  14208   Fri Sep 21 19:50:17 2018   Koji   Update   CDS   Frequent time out

Multiple realtime processes on c1sus are suffering from frequent time outs. It eventually knocks out c1sus (process).

Obviously this has started since the fiber swap this afternoon.

gautam 10pm: there are no clues as to the origin of this problem on the c1sus frontend dmesg logs. The only clue (see Attachment #3) is that the "ADC" error bit in the CDS status word is red - but opening up the individual ADC error log MEDM screens show no errors or overflows. Not sure what to make of this. The IOP model on this machine (c1x02) reports an error in the "Timing" bit of the CDS status word, but from the previous exchange with Rolf / J Hanks, this is down to a misuse of ADC0 Ch31 which is supposed to be reserved for a DuoTone diagnostic signal, but which we use for some other signal (one of the MC suspension shadow sensors iirc). The response is also not consistent with this CDS manual - which suggests that an "ADC" error should just kill the models. There are no obvious red indicator lights in the c1sus expansion chassis either.
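For this kind of debugging it can help to dump the raw status word rather than squinting at the MEDM bit displays. A minimal sketch, assuming pyepics is available on a workstation; the channel name below is a hypothetical placeholder, and the bit-to-error mapping still has to come from the CDS documentation:

import epics

STATUS_CHANNEL = "C1:FEC-21_STATE_WORD"   # hypothetical placeholder -- use the real IOP status channel

word = int(epics.caget(STATUS_CHANNEL))
# Print hex and binary so the individual error bits (ADC, DAC, Timing, ...)
# can be compared against the CDS documentation by eye.
print(f"{STATUS_CHANNEL} = {word:#06x} = {word:016b}")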

  14210   Sat Sep 22 00:21:07 2018   Koji   Update   CDS   Frequent time out

[Gautam, Koji]

We had another crash of c1sus, and Gautam did a full power cycle of it. It was a struggle to recover all the front ends, but this solved the timing issue.

We went through full reset of c1sus, and rebooting all the other RT hosts, as well as daqd and fb1.

  14253   Sun Oct 14 16:55:15 2018   not gautam   Update   CDS   pianosa upgrade

DASWG is not what we want to use for config; we should use the K. Thorne LLO instructions, like I did for ROSSA.

Quote:

pianosa has been upgraded to SL7. I've made a controls user account, added it to sudoers, did the network config, and mounted /cvs/cds using /etc/fstab. Other capabilities are being slowly added, but it may be a while before this workstation has all the kinks ironed out. For now, I'm going to follow the instructions on this wiki to try and get the usual LSC stuff working.

  14267   Fri Nov 2 12:07:16 2018   rana   Update   CDS   NDScope

https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=44971

Let's install Jamie's new Data Viewer

  14293   Tue Nov 13 21:53:19 2018   gautam   Update   CDS   RFM errors

This problem resurfaced, which I noticed when I couldn't get the single arm locks going.

The fix was NOT restarting the c1rfm model, which just brought the misery of all vertex FEs crashing and the usual dance to get everything back.

Restarting the sender models (i.e. c1scx and c1scy) seems to have done the trick though.

  14344   Tue Dec 11 14:33:29 2018   gautam   Update   CDS   NDScope

NDscope is now running on pianosa. To be really useful, we need templates, so I've made /users/Templates/NDScope_templates where these will be stored. Perhaps someone can write a parser to convert dataviewer .xml templates into something ndscope can understand (a rough starting point is sketched at the end of this entry). To get it installed, I had to run:

sudo yum install ndscope
sudo yum install python34-gpstime
sudo yum install python34-dateutil
sudo yum install python34-requests

I also changed the PYTHONPATH variable in .bashrc to include the python3.4 site-packages directory.
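Nobody has written that parser yet. As a very rough starting point (not a converter - it makes no assumptions about the dataviewer XML schema, it just pulls out anything that looks like a channel name so it can be added to an ndscope template by hand; the default path is only an example):

import re
import sys

# Pull channel-like strings (C1:SYS-NAME...) out of a dataviewer .xml template.
CHANNEL_RE = re.compile(r"C1:[A-Z0-9]+[-_][A-Za-z0-9_\-]+")

def channels_in_template(path):
    with open(path) as f:
        text = f.read()
    return list(dict.fromkeys(CHANNEL_RE.findall(text)))   # de-duplicate, keep order

if __name__ == "__main__":
    template = sys.argv[1] if len(sys.argv) > 1 else "/users/Templates/example.xml"
    for chan in channels_in_template(template):
        print(chan)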

Quote:

https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=44971

Let's install Jamie's new Data Viewer

  14356   Thu Dec 13 22:56:28 2018   gautam   Update   CDS   Frames

[koji, gautam]

We looked into the /frames situation a bit tonight. Here is a summary:

  1. We have already lost some second trend data since the new FB has been running from ~August 2017.
  2. The minute trend data is still safe from that period, we believe.
  3. The Jetstor has ~2TB of trend data in the /frames/trend folder.
    • This is a combination of "second", "minute_raw" and "minute".
    • It is not clear to us what the distinction is between "minute_raw" and "minute", except that the latter seems to go back farther in time than the former.
    • Even so, the minute trend folder from October 2011 is empty - how did we manage to get the long term trend data?? From the folder volumes, it appears that the oldest available trend data is from ~July 24 2015.

Plan of action:

  1. The wiper script needs to be tweaked a bit to allow more storage for the minute trends (which we presumably want to keep for long term).
  2. We need to clear up some space on FB1 to transfer the old trend data from Jetstor to FB1.
  3. We need to revive the data backup via LDAS. Also summary pages.

BTW - the last chiara (shared drive) backup was at 6 am on October 16. dmesg showed a bunch of errors; Koji is now running fsck in a tmux session on chiara - let's see if that repairs the errors. We missed the opportunity to swap in the 4TB backup disk, so we will do this at the next opportunity.

  14359   Fri Dec 14 14:25:36 2018   Koji   Update   CDS   chiara backup

fsck of the chiara backup disk (UUID="90a5c98a-22fb-4685-9c17-77ed07a5e000") is done, but it required many files to be fixed, so the backed-up files are no longer reliable.
On top of that, the disk is no longer recognized by the machine.

I went to the disk and disconnected the USB cable and then the power supply (which was, and still is, connected to the UPS).
Then I reconnected them. This brought the disk back as /media/90a5c98a-22fb-4685-9c17-77ed07a5e000. (*)
After unmounting this disk, I ran "sudo mount -a" so that it gets mounted the way fstab specifies.
Now I am running the backup script manually so that we at least maintain a snapshot of the day.

(*) This is the same situation we found during the recovery from the power shutdown. My hypothesis is that on Oct 16 at 7 AM, during the backup, there was a USB or disk failure that unmounted the disk. This caused some files to be damaged, and also caused the disk to be mounted as /media/90a5c98a-22fb-4685-9c17-77ed07a5e000. So we have not had a backup since then.
Update (20:00): The disk connection failed again. I think this disk is no longer reliable.

 

  14374   Thu Dec 20 17:17:41 2018   gautam   Update   CDS   Logging of new Vacuum channels

Added the following channels to C0EDCU.ini:

[C1:Vac-P1b_pressure]
units=torr
[C1:Vac-PRP_pressure]
units=torr
[C1:Vac-PTP2_pressure]
units=torr
[C1:Vac-PTP3_pressure]
units=torr
[C1:Vac-TP2_rot]
units=kRPM
[C1:Vac-TP3_rot]
units=kRPM

Also modified the old P1 channel to

[C1:Vac-P1a_pressure]
units=torr

Unfortunately, we realized too late that we don't have these channels in the frames, so the data from this test pumpdown was not logged, but future data will be. I say we should also log diagnostics from the pumps, such as temperature, current etc. After making the changes, I restarted the daqd processes.
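A quick way to confirm the new records are actually being served (this says nothing about whether they make it into frames) is a few lines of pyepics, using the channel names added above - a sketch, assuming pyepics is installed on a martian machine:

import epics

channels = [
    "C1:Vac-P1a_pressure",
    "C1:Vac-P1b_pressure",
    "C1:Vac-PRP_pressure",
    "C1:Vac-PTP2_pressure",
    "C1:Vac-PTP3_pressure",
    "C1:Vac-TP2_rot",
    "C1:Vac-TP3_rot",
]

# Print the current value of each newly added slow channel; None means the
# record is not being served.
for name in channels:
    print(f"{name:26s} {epics.caget(name, timeout=2.0)}")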


Things to add to ASA wiki page once the wiki comes back online:

  1. What is the safe way to clean the cryo pump if we want to use it again?
  2. What are safe conditions to turn the RGA on?
  14376   Fri Dec 21 11:11:51 2018   gautam   Update   CDS   Logging of new Vacuum channels

The N2 pressure channel name was also wrong in C0EDCU.ini, so I updated it this morning to the correct name and units:

[C1:Vac-N2_pressure]
units=psi

Now it too is being recorded to frames.

  14386   Fri Jan 4 17:43:24 2019   gautam   Update   CDS   Timing issues

[J Hanks (remote), koji, gautam]

Summary:

The problem stems from the way GPS timing signals are handled by the FEs and FB. We effected a partial fix:

  • Now, old frame data is no longer being overwritten
  • For the channels that are indeed being recorded now, the correct time stamp is being applied so they can be found in /frames by looking for the appropriate gpstime.

Details:

  • The usual FE/FB power cycling did not fix the problem.
  • The gps time used by FB and associated RT processes may be found by using  cat /proc/gps (i.e. this is different from the system time found by using date, or gpstime).
  • This was off by 2 years.
  • The way this worked up till now was by adding a fixed offset to this time.
    • This offset can be found as a line saying set symm_gps_offset=31622382 in daqdrc.fw (for example)
    • There were similar lines in daqdrc.rcv and daqdrc.dc - however, they were not all the same offset! We couldn't figure out why.
    • All these files live in /opt/rtcds/caltech/c1/target/daqd/.

Changes effected:

  1. First, we tried changing the offset in the daqdrc.fw file only.
    • Incremented it by 24*60*60*365 = 31536000, the number of seconds in a year with no leap seconds/days (a quick check of these numbers is at the end of this entry).
    • This did not fix the problem.
  2. So J Hanks decided to rebuild the Spectracom driver (these steps may not be comprehensive, but I think I got everything).
    • The relevant file is spectracomGPS.c (a copy of /usr/src/symmetricom-3.3~rc1 was made, called symmetricom-3.3~rc1-patched; the file lives in /usr/src/symmetricom-3.3~rc1-patched/include/drv).
    • Added the following lines:
      /* 2018 had no leap seconds or leap days, so adjust for that */
             pHardware->gpsOffset += 31536000;
    • re-built and installed the modified symmetricom driver.
    • Checked that cat /proc/gps now yields the correct time.
    • Reset the gps time offsets in daqdrc.fw, daqdrc.rcv and daqdrc.dc to 0
    • With these steps, the frames were being written to /frames with the correct timestamp.
  3. Next, we checked the timing on the FEs
    • Basically, J Hanks rebuilt the version of the symmetricom driver that is used by the rtcds models to mimic the changes made for FB.
    • This did the trick for c1lsc and c1ioo - cat /proc/gps now returns the correct time on those FEs.
    • However, c1sus remains problematic (it initially reported a GPS time from 2 years ago, and even after the re-installed driver, is 4 days behind) - he suspects that this is because c1sus is the only FE with a Symmetricom/Spectracom card installed in the I/O chassis. So c1sus reports a gpstime that is ~4 days behind the "correct" gpstime.
    • He is going to check with Rolf/other CDS experts to figure out if it's okay for us to simply remove the card and run the models, or if we need to take other steps.
    • As part of this work, the c1x02 IOP model was recompiled, re-installed and re-started.

The realtime models were not restarted (although all the vertex FEs are running) - we can regroup next week and decide what is the correct course of action.
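For reference, a quick check of the numbers involved (the 18 s remainder happens to equal the current GPS-UTC leap-second count, though that connection is only my guess):

# Arithmetic check of the offsets discussed above.
year_365 = 24 * 60 * 60 * 365      # 31536000 s -- the increment patched into spectracomGPS.c
year_366 = 24 * 60 * 60 * 366      # 31622400 s
old_offset = 31622382              # the symm_gps_offset that was in daqdrc.fw
print(year_365, year_366 - old_offset)   # 31536000 18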

Quote:
 
  • Attachment #2 shows the minute trend of the pressure gauges for a 12 day period - it looks like there is some issue with the frame builder clock, perhaps this issue resurfaced? But checking the system time on FB doesn't suggest anything is wrong.. I double checked with dataviewer as well that the trends don't exist... But checking the status of the individual daqd processes indeed showed that the dates were off by 1 year, so I just restarted all of them and now the time seems correct. How can we fix this problem more permanently? Also, the P1b readout looks suspicious - why are there periods where it seems like we are reading values better than the LSB of the device?
  14392   Wed Jan 9 11:33:35 2019   gautam   Update   CDS   Timing issues still persist

Summary:

The gps time mismatch between /proc/gps and gpstime seems to be resolved. However, the 0x4000 DC errors still persist. It is not clear to me why.

Details:

On the phone with J Hanks on Friday, he reminded me that c1sus seems to be the only machine with an IRIG-B timing card installed. I can't find the elog but I remembered that Jamie, ericq and I had done this work in 2016 (?), and I also remembered Jamie saying it wasn't working exactly as expected. Since the DAQ was working fine before this card was installed, and since there are no problems with the recording of channels from the other four FE machines without this card installed, I decided to simply pull out the card from the expansion chassis. The card has been stored in the CDS/FE cabinet along the Y arm for now. There was also a cable that interfaces to the card which brings over the 1pps from the GPS unit, which has also been stored in the CDS/FE cabinet.

This seems to have resolved the mismatch between the gpstime reported by cat /proc/gps and the gpstime commands - Attachment #1 (the <1 second mismatch is presumably due to the deadtime between commands). However, the 0x4000 DC errors still persist. I'll try the full power cycle of FEs and FB which has fixed this kind of error in the past, but apart from that, I'm out of ideas.
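To check that agreement without eyeballing two terminals, something like the following can be run on the FE. This is only a sketch: it assumes /proc/gps contains just the GPS seconds and that the gpstime tool prints the GPS seconds somewhere in its output (the parsing is deliberately loose and may need adjusting):

import re
import subprocess

# GPS time as seen by the realtime code
with open("/proc/gps") as f:
    proc_gps = float(f.read().split()[0])

# GPS time from the gpstime command-line tool, found with a loose regex
out = subprocess.run(["gpstime"], stdout=subprocess.PIPE, universal_newlines=True).stdout
match = re.search(r"\b1[0-9]{9}(?:\.[0-9]+)?\b", out)
cli_gps = float(match.group(0)) if match else float("nan")

print(f"/proc/gps : {proc_gps:.1f}")
print(f"gpstime   : {cli_gps:.1f}")
print(f"difference: {cli_gps - proc_gps:+.1f} s")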

Update 1215:

Following the instructions in this elog did not fix the problem. The problem seems to be with the daqd_fw service, which reports the following:

controls@fb1:~ 0$ sudo systemctl status daqd_fw.service 
● daqd_fw.service - Advanced LIGO RTS daqd frame writer
   Loaded: loaded (/etc/systemd/system/daqd_fw.service; enabled)
   Active: failed (Result: start-limit) since Wed 2019-01-09 12:17:12 PST; 2min 0s ago
  Process: 2120 ExecStart=/usr/bin/daqd_fw -c /opt/rtcds/caltech/c1/target/daqd/daqdrc.fw (code=killed, signal=ABRT)
 Main PID: 2120 (code=killed, signal=ABRT)

Jan 09 12:17:12 fb1 systemd[1]: Unit daqd_fw.service entered failed state.
Jan 09 12:17:12 fb1 systemd[1]: daqd_fw.service holdoff time over, scheduling restart.
Jan 09 12:17:12 fb1 systemd[1]: Stopping Advanced LIGO RTS daqd frame writer...
Jan 09 12:17:12 fb1 systemd[1]: Starting Advanced LIGO RTS daqd frame writer...
Jan 09 12:17:12 fb1 systemd[1]: daqd_fw.service start request repeated too quickly, refusing to start.
Jan 09 12:17:12 fb1 systemd[1]: Failed to start Advanced LIGO RTS daqd frame writer.
Jan 09 12:17:12 fb1 systemd[1]: Unit daqd_fw.service entered failed state.
                                                     

Update 1530:

The frame-writer error was tracked down to a C0EDCU issue. Jon told me that the Hornet CC1 pressure gauge channel was renamed to C1:Vac-CC1_pressure, and I made the change in the C0EDCU file. However, it returns a value of 9990000000.0, which the frame writer is not happy about... Keeping the old channel name makes the frame-writer run again (although the actual data is bunk).

Update 1755:

J Hanks suggested adding a 1 second offset to the daqdrc config files. This has now fixed the 0x4000 errors, and we are back to the "nominal" RTCDS status screen now - Attachment #2.

  14455   Thu Feb 14 23:14:12 2019   gautam   Update   CDS   c1rfm errors

The pressure is still 2e-4 torr according to CC1 so I thought I'd give ASS debugging a go tonight. But the arm transmission signal isn't coming through to the LSC model from the end PDs - so a resurfacing of this problem. Rebooting the sender model, c1scy, did not fix the problem. Moreover, c1susaux is dead. The last time I rebooted it, ITMY got stuck so I'm not going to attempt a revival tonight.

  14457   Fri Feb 15 15:22:08 2019   gautam   Update   CDS   c1rfm errors persist

I restarted c1scy and c1rfm (so both the sender and receiver models were cycled) and power-cycled the c1iscey and c1sus machines. The TRY PD is certainly seeing light - it is just not getting piped over to c1rfm. dmesg doesn't give any clues. I'm out of ideas.

P.S. The new reality seems to be that getting ITMY stuck in the event of a c1susaux reboot is inevitable. As is the practice for ITMX, I tried slowly ramping the PIT and YAW biases to 0 - but in the process of ramping YAW to 0, the optic got stuck. I am ramping in steps of 0.1 (in units of the PIT/YAW sliders), waiting ~3 seconds between steps; I guess I can try ramping even more slowly.

Update: I power cycled the physical RFM switch. This necessitated reboot of all vertex FEs. But seems like things are back to normal now...

Note: to unstick ITMY, seems like the best approach is:

  1. Jiggle the bias until the SIDE shadow sensor is, on average, above its half-light level. This is the critical step. A bias of +20000 cts on the fast SIDE output seems to help.
  2. Set the YAW bias to -10, then ramp down the bias in steps of 0.1, watching the shadow sensor levels to ensure the optic doesn't get stuck again (a scripted version of this ramp is sketched after this list).
  3. Hope for the best. Iterate if necessary.
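A minimal sketch of the slow ramp in step 2, assuming pyepics and using a hypothetical bias channel name (check the actual ITMY slider channel on the MEDM screen, and watch the shadow sensors while anything like this runs):

import time
import numpy as np
import epics

YAW_BIAS = "C1:SUS-ITMY_YAW_COMM"   # hypothetical placeholder for the slow YAW bias channel
TARGET = 0.0
STEP = 0.1                          # slider units per step, as above
WAIT = 3.0                          # seconds between steps, as above

current = float(epics.caget(YAW_BIAS))
step = -STEP if current > TARGET else STEP
for value in np.arange(current, TARGET, step):
    epics.caput(YAW_BIAS, round(value, 3), wait=True)
    time.sleep(WAIT)
epics.caput(YAW_BIAS, TARGET, wait=True)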
Quote:

The pressure is still 2e-4 torr according to CC1 so I thought I'd give ASS debugging a go tonight. But the arm transmission signal isn't coming through to the LSC model from the end PDs - so a resurfacing of this problem. Rebooting the sender model, c1scy, did not fix the problem. Moreover, c1susaux is dead. The last time I rebooted it, ITMY got stuck so I'm not going to attempt a revival tonight.

  14472   Sat Mar 2 14:19:35 2019   gautam   Update   CDS   FSS Slow servo gains not burt-ed

PSL NPRO PZT voltage showed large low frequency (hour timescale) excursions on the control room StripTool trace, leading me to suspect the slow servo wasn't working as expected. Yesterday evening, I keyed the unresponsive c1psl crate at ~9 PM PST, and had to run the burtrestore to get the PMC locking working. I must have pressed the wrong button on burtgooey or something, because all the FSS_SLOW channels were reset to 0. What's more, their values were not being saved by the hourly burt-snap script, so I don't have any lookback on what these values were. There isn't any detailed record on the elog about what the optimal values for these are, and the most recent reference I could find was Ki=0.1, Kp=Kd=0, which is what I've set it now to. The servo isn't running away, so I'm leaving things in this state, PID tuning can be done later.

I also added the FSS Slow servo channels to the burt snapshot requirement file at /cvs/cds/caltech/target/c1psl/autoBurt.req, and confirmed that the snapshots are getting the channels from now onwards.

While looking at the req file, I saw a bunch of *_MOPA* channels and also several other currently unused channels. Probably would benefit from going through these and commenting out all the legacy channels, to minimize disk space wastage (though we compress the snapshot files every few years anyways I guess).

Reminder that this (unrelated) issue still needs to be looked into... Note also that the new vacuum system does not have burt snapshots set up (i.e. it is still trying to get the old channels from the c1vac1 and c1vac2 databases, which, while they overlap significantly with the new system, should probably be set up correctly).

  14492   Thu Mar 21 18:09:36 2019   Koji   Update   CDS   db file preparation for acromag c1susaux

I have updated the google doc spreadsheet to indicate the required action for the new dbfile generation.

There are three types of actions:

1. COPY - Just duplicate the old EPICS db entry. This is for soft channels, calc channels.
2. DELETE - Delete the entry for some physical channels that will not be implemented on Acromag (oplev, dewhitening mon, AI monitor, etc)
3. REPLACE - For the physical channels, we want to replace the port names.

The blue part of the spreadsheet indicates the action for each channel. If it is a physical channel, the assigned module and channel are indicated there. What we still need to do is use this information to generate the port names, which look like "@asynMask(C1VAC_XT1221A_ADC 1 -16)MODBUS_DATA" (a sketch of this string generation is below).
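As a sketch of what that generation could look like (the module names and channel numbers below are made-up examples; the field format simply copies the example string quoted above):

def asyn_port_field(module, channel, mask=-16):
    # Build the INP/OUT field string in the form quoted above.
    return f"@asynMask({module} {channel} {mask})MODBUS_DATA"

# Example rows, as they might be read out of the spreadsheet
assignments = [
    ("C1SUSAUX_XT1221A_ADC", 1),
    ("C1SUSAUX_XT1221A_ADC", 2),
]
for module, channel in assignments:
    print(asyn_port_field(module, channel))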

The links to the spreadsheets can be found on 40m wiki: https://wiki-40m.ligo.caltech.edu/CDS/SlowControls/c1susaux

  14505   Mon Apr 1 12:01:52 2019   Jon   Update   CDS

I brought c1susaux back online this morning for suspension-channel test scripting. It had been dead for some time. I followed the procedure outlined in #12542. ITMY became stuck during this process, which Gautam tells me has happened every time since the last vacuum access, but ITMX is not stuck.

  14506   Mon Apr 1 22:33:00 2019   gautam   Update   CDS   ITMY freed

While Anjali is working on the 1um MZ setup, the pesky ITMY was liberated from the OSEMs. The "algorithm" :

  • Apply a large (-30000 cts) offset to the side coil using the fast system.
  • Approach the zero of the YAW DoF from -2.00V, PIT from +10V (you'll have to jiggle the offsets until the optic is free swinging, and then step the bias down by 0.1). At this point I had the damping off.
  • Once the PIT bias slider reaches -4V, I engaged all damping loops, and brought the optic to its nominal bias position under damping. 

While doing this work, I noticed several errors corresponding to EPICS channel conflicts. Turns out the c1susaux2 EPICS server was left running, and the MEDM screens (and possibly several scripts) were confused. There has to be some other way of testing the new crate, on an isolated network or something - please do not leave the modbus service running as it potentially interferes with normal IFO operation. For good measure, I stopped the process and shut down the machine since I saw nothing in the elog about any running tests.

Quote:

ITMY became stuck during this process

  14507   Tue Apr 2 14:53:57 2019   gautam   Update   CDS   c1vac added to burt

I deleted references to c1vac1 and c1vac2 (which no longer exist) and added c1vac to the autoburt request file list at /opt/rtcds/caltech/c1/burt/autoburt/requestfilelist

  14508   Tue Apr 2 15:02:53 2019   Jon   Update   CDS   ITMY freed

I renamed all channels on c1susaux2 from "C1:SUS-..." to "C1:SUS2-..." to avoid contention. When the new system is ready to install, those channel names can be reverted with a quick search-and-replace edit.

Quote:

While doing this work, I noticed several errors corresponding to EPICS channel conflicts. Turns out the c1susaux2 EPICS server was left running, and the MEDM screens (and possibly several scripts) were confused. There has to be some other way of testing the new crate, on an isolated network or something - please do not leave the modbus service running as it potentially interferes with normal IFO operation. For good measure, I stopped the process and shut down the machine since I saw nothing in the elog about any running tests.

  14522   Mon Apr 8 11:53:17 2019   gautam   Update   CDS   c1oaf needs debugging

I tried restarting c1oaf this weekend to see if turning on the MC length FF would affect the ALS noise performance. I burtrestored the filter settings from March 2016. However, I noticed several possible anomalies, which need debugging. I am not turning the model off because of the possibility of having to reboot all the vertex FEs, but this model is totally unusable right now.

  1. Attachment #1 - the vertex seismometer input produces 1e+20 cts at the output of the feedforward filter. Attachment #2 shows the shape of the feedforward filters - doesn't explain the saturation. Since this is a feedforward loop, a runaway loop can't be the explanation either.
  2. The MC length feedforward control signal is supposed to only go to MC2 - but MC1 and MC3 coil outputs were saturated when I enabled the feedforward.
  14672   Thu Jun 13 22:21:44 2019   Koji   Configuration   CDS   Paola wireless connected to martian

SURFs had trouble connecting paola to martian via wireless.
Of course, it requires a fixed IP, but it did not have one yet. So I went to chiara and assigned 192.168.113.110 as "paolawl". Note that the wired connection has .111 and is "paola".

Followed the instruction on http://nodus.ligo.caltech.edu:8080/40m/14121

  14692   Mon Jun 24 13:48:36 2019   Kruthi   Configuration   CDS   Giada wireless connection

[Gautam, Kruthi]

This afternoon, Gautam helped me set up Giada to access the GigE installed for MC2. Unlike Paola, which was being used earlier, Giada has a better battery life and doesn't shut down when the charger is unplugged. Gautam configured Giada to enable its wireless connection to Martian, just like Koji had configured Paola (https://nodus.ligo.caltech.edu:8081/40m/14672). We also rerouted the ethernet cable we were using with the PoE adaptor from the Netgear switch in 1X2 to 1X6.

  14719   Tue Jul 2 16:57:09 2019   gautam   Update   CDS   c1sus is flaky

Since the work earlier this morning, the fast c1sus model has crashed ~5 times. Tried rebooting vertex FEs using the reboot script a few times, but the problem is persisting. I'm opting to do the full hard reboot of the 3 vertex FEs to resolve this problem.

Judging by Attachment #1, the processes have been stable overnight.

  14744   Wed Jul 10 14:57:01 2019   Koji   Summary   CDS   Channel recipe for iscaux upgrade

The list of iscaux channels and pin assignments was posted to Google Drive.
The spreadsheet can be viewed via the link sent to the 40m ML. It was shared with foteee@gmail for full access.

Summary

  • We need
    4 ADC modules
    5 DAC modules
    5 Binary I/O modules
  • Be aware that there are bundled multi-bit digital I/O channels, such as "mbboDirect" and "mbbi".
  • The full db records for the new channels need to be inferred from the existing channels.

Necessary electronics modification

1. D990694 whitening filter modification (4 modules)

This module shares the fast and slow channels on the top DIN96-pin (P1) connector. Also, the whitening selector (an analog signal per channel) is assigned over 17 pins of the P1 connector, resulting in the need for a second DSUB cable. By migrating the fast channels, we can move the cable from P1 to P2. The whitening selectors are also concentrated on the first Dsub. (See Attachment 1, P1.)

2. D040180 / D1500308 Common Mode Board

The CM servo board itself doesn't need any modification, but it uses both P1 and P2, so we need to manufacture a special connector for the CM board P2 (cf. the adapter board for P1, T1800260). See also D1700058.

3. D990543A1 LSC Photodiode Interface

The PD I/F board has the DC mon channels spread beyond the 16-pin limit. P1 21A can be connected to 6A so that we can accommodate it in the first Dsub.
Also, the board uses AD797s, which are not necessary; we can replace them with OP27s. I actually don't know what is happening with the bias control, temp mon, enable, and status lines - these features should be disabled at the I/F and the PDs. (See Attachment 2, P1.)

  14747   Thu Jul 11 12:42:35 2019   gautam   Summary   CDS   P2 interface board

I looked into the design of the P2 interface board. The main difficulty here is geometric - we have to somehow accommodate sufficient number of D-sub connectors in the tight space between the two P-type connectors. 

I think the least painful option is to stick with Johannes' design for the P1 connector. For the CM board, the P2 connector only uses 6 pairs of conductors for signals. So we can use a D-15 connector instead of 2 D-37 connectors. Then we can change the PCB shape such that the P1 connector can be accommodated (see Attachment #1). The other alternative would be to have 2 P-type connectors and 3 D-subs on the same PCB, but then we have to be extra careful about the relative positioning of the P-type connectors (otherwise they wont fit onto the Eurocrate). So I opted to still have two separate PCBs.

I took a first pass at the design, the files may be found here. I just auto-routed the connections, this is just an electrical feedthrough so I don't think we need to be too concerned about the PCB trace routing? If this looks okay, we should send out the piece for fab ASAP.

I will work on putting together the EPICS server machine (SuperMicro) this afternoon.

Quote:

2. D040180 / D1500308 Common Mode Board

CM servo board itself doesn't need any modification. The CM board uses P1 and P2. So we need to manufacture a special connector for CM Board P2. (cf The adapter board for P1 T1800260). See also D1700058.

  14749   Thu Jul 11 13:08:36 2019   Chub   Summary   CDS   P2 interface board

It's nice and compact, and the cost of new 15-pin DSUB cables shouldn't be a factor here.  What does the 15p cable connect to?

  14750   Thu Jul 11 13:09:22 2019   gautam   Summary   CDS   P2 interface board

It will connect to a 15-pin breakout board in the Acromag chassis.

Quote:

It's nice and compact, and the cost of new 15-pin DSUB cables shouldn't be a factor here.  What does the 15p cable connect to?

  14764   Tue Jul 16 15:17:57 2019   Koji   HowTo   CDS   Final bit bug of the BIO CDS module

Yutaro talked about the BIO bug in KAGRA elog. http://klog.icrr.u-tokyo.ac.jp/osl/?r=9536

I think I made a similar change to the 40m models somewhere (I don't remember where), but be aware of the presence of this bug.

  14765   Tue Jul 16 16:00:01 2019   gautam   Update   CDS   c1iscaux Supermicro setup

I worked on preparing for the c1iscaux upgrade a bit today.

  1. Attachment #1: This shows where the 120 GB solid-state hard-drive and the 2 RAM cards (2GB each) are installed.
    • I found that it required considerable application of force to get the RAM cards into their slots.
    • Note: the 4GB RAM is broken up into two separate physical cards, each 2GB. The labeling is a bit confusing, as each card suggests it is by itself 4GB.
  2. OS install for c1iscaux:
    • I followed Jon's instructions (and added some of mine to the wiki page to hopefully make this process even less thinking-intensive).
    • To be able to use the IP address 192.168.113.83, I removed "bscteststand" from martian.hosts and rev.113.168.192.in-addr.arpa on chiara, as the last mention I could find of this machine was from 2009 (and I'm pretty sure it isn't an active unit anymore). I then restarted the bind9 process.
    • The hostname for this machine is currently "c1iscaux3" for testing purposes; I will change it once we do the actual install.
    • There was an error in the installation instructions for allowing incoming ssh connections - it is openssh-server that is required, not openssh-client. This has now been fixed in the wiki page instructions.
  3. Acromag static IP assignment:
    • Assigned 2 ADCs (XT1221), 5 DACs (XT1541) and 5 sinking BIO units (XT1111) static IP addresses (and labelled them for easy reference) using the windows laptop and the Acromag IP config utility.
    • I saw no reason not to use the 192.168.114.yyy scheme for the Acromag subnet on this machine, even though c1auxex and c1vac both have subnets with this addressing prefix. For reasons unknown to me, Jon opted to use 192.168.115.yyy for the c1susaux Acromag subnet.
  4. Followed the excellent step-by-step to install EPICS, Modbus and Asyn.
    • This took a while, ~1 hour, dominated by the building of EPICS. The other two took only a couple of minutes each.
    • The same combination suggested on Jon's wiki - Modbus R2-11, EPICS base-7.0.1 and asyn4-33 - is the most current at the time of installation.
    • A couple of typos that prevented straight-up copy-pasting were fixed on the wiki.
  5. Playground for testing new database files:
    • made a directory /cvs/cds/caltech/target/c1iscaux3 and copied over the .db files from /cvs/cds/caltech/target/c1iscaux and /cvs/cds/caltech/target/c1iscaux2.
    • Johannes said he did not develop any code to automate the process of translating the old .db files into the new ones for the Acromag - I won't invest the time in developing any either as I think just manually editing the files will be faster. 
    • I think I will follow the c1susaux convention of grouping .db files by the physical electronics system where possible (e.g. REFL11 channels in one file, CM channels in one file etc), as I think this makes for easier debugging.
    • There is an old "PZT_AI.db" file which I think consists completely of obsolete channels.
  6. Next steps:
    • Wire up the crate [Chub]
    • Make the database files and modbus files for talking to the Acromags on the internal subnet [Gautam], check the .db files [Koji]
    • Wiring of whitening switching from P1 to P2 connector, Issue #1 in this elog (this will also require the installation of the DIN shrouds) [Koji]
    • Soldering of P2 interface boards [Gautam]
    • Bench testing [Gautam, Koji, Chub]
    • Installation and in-situ testing [Gautam, Koji, Chub]

All the required additional parts should be here by the end of the week - I'd like to aim for Wednesday 7/24 for the installation in 1Y3 and in-situ testing. While talking to Rana, I realized that we should also fold the c1aux slow channels into this Acromag crate - there is no need for a separate machine to handle the shutters and illuminators. But let's not worry about that for now; those channels can simply be added later.

  14769   Wed Jul 17 21:22:41 2019   gautam   Update   CDS   CM board Latch Enable subtlety

[koji, gautam]

Koji pointed out an important subtlety pertaining to the "LATCH ENABLE" signal line on the CM board. The purpose of this line is to smoothly facilitate the transition of a change in the "multi-bit-binary-outputs", a.k.a. "mbbo", that are controlled by MEDM gain sliders, to the analog electronics on the CM board. Why is this necessary? Imagine changing the gain from 7dB (=0111 in mbbo representation) to 8dB (=1000 in mbbo representation). In order to realize this change, all 4 bits have to change their state. But this almost certainly doesn't happen synchronously, because our EPICS interface isn't synchronous. So at some intermediate times, the mbbo representation could be 0100 (=4dB), or 1111 (=15dB), or many other possible values, which are all significantly different from either the initial value or the desired final state. This is clearly undesirable.

In order to protect against this kind of error, a latched output part, the 74ALS573, is used to buffer the physical digital logic levels from the switches in the analog gain stages. So in the default state, the "LATCH ENABLE" signal line is held "LOW". When a change happens in the EPICS value corresponding to a gain slider, the "LATCH ENABLE" line is quickly toggled "HIGH", so as to allow the appropriate analog gain stages to be switched, and then back to "LOW", at which point the latch holds its output state. This logic is currently implemented by a piece of code called "latch.o", the compiled version of "latch.st", which may be found in /cvs/cds/caltech/target/c1iool0, where it was presumably written for the IMC servo board, but not in the target directory where the CM board database files reside. The only elog reference I can find pertaining to this particular piece of code is from Alan, and it doesn't say anything about the actual logic.

For the new c1iscaux, we need to implement this logic somehow. After discussion between Koji and me, we feel that a piece of python code is sufficient. This would continuously run in the background on the supermicro server machine. The channel hierarchy for each gain channel is as follows (taking C1:LSC-CM_REFL1_GAIN as the example):

  • C1:LSC-CM_REFL1_GAIN ------ this is the channel tied to an MEDM slider, and so is a "soft" channel
  • C1:LSC-CM_REFL1_SET ------- this is a "soft" channel that gets converted to an mbbo
  • C1:LSC-CM_REFL1_BITS ------ this is a channel that actually controls (multiple) physical binary outputs on the Acromag

So the logic will be that it continuously scans the EPICS channel C1:LSC-CM_REFL1_GAIN for a change in set value. When a change is detected, it has to update the C1:LSC-CM_REFL1_SET channel. In the next EPICS refresh cycle, this results in the mbbo bits, C1:LSC-CM_REFL1_BITS, all changing to the appropriate values. After these changes have happened, we need to toggle the LATCH ENABLE in order to allow the changes to propagate to the analog gain stage switches. We need to think about the best way to do this.
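An illustrative sketch of that logic with pyepics - not the actual implementation (that eventually became latch.py, described later in this log), and the LATCH ENABLE channel name below is a hypothetical placeholder:

import time
import epics

GAIN = epics.PV("C1:LSC-CM_REFL1_GAIN")
SET = epics.PV("C1:LSC-CM_REFL1_SET")
LATCH_ENABLE = epics.PV("C1:LSC-CM_LATCH_ENABLE")   # hypothetical channel name

POLL = 0.1     # seconds between polls of the GAIN channel
SETTLE = 0.3   # wait for the mbbo BITS to update before latching

last = GAIN.get()
while True:
    value = GAIN.get()
    if value != last:
        SET.put(value, wait=True)        # the BITS update on the next EPICS cycle
        time.sleep(SETTLE)
        LATCH_ENABLE.put(1, wait=True)   # let the new bits through the 74ALS573
        LATCH_ENABLE.put(0, wait=True)   # hold the latched state again
        last = value
    time.sleep(POLL)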

  14770   Thu Jul 18 00:51:52 2019   Koji   Summary   CDS   iscaux electronics modifications

Following the plan in ELOG 14744, the ISC PD interface and whitening filter boards have been modified. The ISC PD I/Fs were restored to the crate and the cables were connected. The whitening filters are still on the electronics bench for some more tests before being returned to the crate.

The updated schematics were uploaded as https://dcc.ligo.org/D1900318 and https://dcc.ligo.org/D1900319

- Modification of the ISC PD interface: jumpers between DIN96 P1 and P2; replaced all AD797s with OP27s. In fact, only I/F #1 (the leftmost) had AD797s (12 in total); the other units already had OP27s.

- Modification of the whitening filter: Jumpers between DIN96 P1 and P2.

  14771   Thu Jul 18 10:46:04 2019   gautam   Update   CDS   Database files made

I completed the translation of the .db files for the EPICS database records from the VME notation to the Acromag/Modbus/Asyn notation. The channels are now organized into 5 database files, located in /cvs/cds/caltech/target/c1iscaux3/,  for convenience:

  1. C1_ISC-AUX_LSCPDs.db -------- This handles whitening gain, AA enable/bypass, Demodulator FE, and PD Interface Board channels for REFL11, REFL55, REFL33, REFL165, POP22, POP110, POX11, POY11, AS55 and AS110 photodiodes.
  2. C1_ISC-AUX_CM.db -------------- This handles all channels for the CM board. The mbbo addressing notation needs to be checked.
  3. C1_ISC-AUX_QPDs.db ----------- This handles all channels for the IPPOS QPD.
  4. C1_ISC-AUX_ALS.db ------------- This handles all channels for the IR ALS DFD LO and RF power monitoring.
  5. C1_ISC-AUX_SPARE.db ---------- This handles the unused channels for the various whitening, AA and PD interface boards.

For reasons unknown to me, the database files in the other Acromag system target directories (e.g. c1susaux, c1auxex) all had 755 level access permission - maybe this is required for systemctl to handle the EPICS serving? Anyways, I upgraded the permission level of the above 5 files using chmod.

There are almost certainly typos / other errors, and I may have missed copying over some soft/calibrated channels, but I hope that this way of grouping by subsystem will make the debugging less painful. Once Chub connects up the power lines to the Acromags, I will run the soft tests. For this purpose, I've also made a C1_ISC-AUX.cmd file and a C1_ISC-AUX.env file in the above target directory, and also made the modbusIOC.service file in /etc/systemd/system on the supermicro.

  14773   Thu Jul 18 19:58:56 2019   gautam   Update   CDS   Work on Acromag chassis

Now that the .db files were prepared, I wanted to test for errors. So I did the following:

  1. Acromags were mounted on the DIN rails. Attachment #1 shows the grouping of ADC, DAC and BIO units. They are labelled with their IP addresses.
  2. Wiring of power:
    • Chub had already prepared the backplane with the power connectors, switches and indicator LEDs.
    • So I just had to daisy chain the +24 V (RED) and GND (BLACK) terminals for all the acromags together, which I did using 24 AWG wire (we may want to use heavier gauge given the current draw).
  3. Ethernet cables were used to daisy chain the network connectivity between the various units.  Attachment #1 shows the current state of the chassis box.
  4. Front panel pieces were attached and labelled, see Attachment #2
    • I found it was sufficient to use the front - we may use the rear panel slots when we want to add connections for controlling the c1aux machine channels.
    • The D15 P2 connector panel for the CM board will arrive tomorrow and will be installed then.
  5. Entire setup was connected to power and ethernet, see Attachment #3
    • As usual, the current draw of the collection of Acromags is significant; I got around this by setting the bench supply to "Parallel" mode to increase the current-driving capacity.
    • For the ethernet connection, I used the office space port #6, which I connected at the network rack end to the eth1 port of the Supermicro.

All the Acromags are seen on the 192.168.114 subnet on c1iscaux3 - however, when I run the modbusIOC process, I see various errors in the logfile, so more debugging is required. Nevertheless, progress.
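Before blaming the EPICS configuration in cases like this, a quick reachability check of each unit is useful. A sketch, assuming the Acromags answer TCP connections on the standard Modbus/TCP port 502; the IP list is an example and should be filled in with the addresses actually assigned to this chassis:

import socket

ACROMAG_IPS = ["192.168.114.%d" % n for n in range(11, 23)]   # example addresses only
PORT = 502   # standard Modbus/TCP port

for ip in ACROMAG_IPS:
    try:
        with socket.create_connection((ip, PORT), timeout=1.0):
            print(f"{ip}: reachable")
    except OSError:
        print(f"{ip}: no response on port {PORT}")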

Update 2245: Turns out the errors were indeed due to a copy/paste error - I had changed the IP addresses for the ADCs from the .115 subnet c1susaux was using, but forgot to do so for the DACs and BIOs. Now, if I turn off the existing c1iscaux so that there aren't any EPICS clashes, the EPICS server initializes correctly. There are still some errors in the log file - these pertain to (i) the mbbo notation, which I have to figure out, and (ii) the fact that this version of EPICS, 7.0.1, does not support channel descriptions longer than 28 characters (we have several that exceed this threshold). I think the latter isn't a serious problem.

Getting closer... Note that I turned off the c1iscaux VME crate to prevent any EPICS server clashes. I will turn it back on tomorrow.

  14775   Thu Jul 18 22:34:40 2019   Koji   Summary   CDS   iscaux electronics modifications

The whitening filter modules have been restored to the crates. The SMA cables have been restored and fastened by a spanner. The ribbon cable to the antialiasing board was also connected. The backplane cables have not been moved from the upper DIN96 connector to the lower one.

Everything is expected to be good, but keep an eye on the LSC signals, as the boards have not been quantitatively tested yet. If you find something suspicious, report it on the elog.

  14781   Fri Jul 19 19:44:03 2019   gautam   Update   CDS   Database file test

Summary:

The database files for C1ISCAUX seem to work fine - the exception being the mbbo channels for the CM board.

Details:

This was just a software test - the actual functionality of the channels will have to be tested once the Acromag crate has been installed in the rack. One change I had to make on the MEDM screen for the LSC PD whitening gains was to get rid of the "NMS" suffix on the EPICS channel names for the whitening gain sliders/drop-down menus; I suspect this has to do with the EPICS version we are using, 7.0.1. Furthermore, AS165 and POP55 no longer exist - I'll hold off on removing them from the MEDM screen for the moment.

Next steps:

From the software point of view, the major steps are:

  1. Fix the mbbo channel notation in the database files
  2. Write and test the latch enabling code
  3. Figure out what scripted tests can be done to test the functionality of the new Acromag box.

I am stopping the EPICS server on the new machine and restarting the old VME crate over the weekend.

  14785   Sat Jul 20 11:57:39 2019   gautam   Summary   CDS   P2 interface board

The boards arrived. I soldered on a DIN96 connector and tested that the geometry will work - it does. The only constraint is that the P2 interface board has to be installed before the P1 interface. The next step was to confirm that the pin mapping is correct; the mapping from the DIN96 connector to the DB15 was verified.

*Maybe it isn't obvious from the picture, but there shouldn't be any space constraint even with the DB37/DB15 cables connected to the respective adapter boards.

  14790   Sun Jul 21 12:55:38 2019   gautam   Update   CDS   CM board Latch Enable test script

DATED, SEE ELOG14941 for the most up-to-date info on latch.py.

I wrote a script (/cvs/cds/caltech/target/c1iscaux3/latch.py) and tested the logic illustrated in Attachment #1. Results of a test are shown in Attachment #2; the various channels change as expected. Note that for negative values of the gain channel, the corresponding "BITS" channel will take on large values like 65536 - this is because mbboDirect is a 16-bit data type, and presumably the MSB is the sign bit. A bit mask is applied to this channel before the actual BIO unit bits are set - we should verify that the correct behavior happens, but I don't immediately see any problems.
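The masking behavior can be seen with a couple of lines of Python (a toy check, not part of latch.py):

# A negative value written to the 16-bit mbboDirect record shows up as a large
# unsigned number; masking with 0xFFFF gives the two's-complement bit pattern
# that actually drives the binary outputs.
gain = -1
print(gain & 0xFFFF, format(gain & 0xFFFF, "016b"))   # 65535 1111111111111111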

To me, the latch logic is robust, but it will benefit from more sets of eyes giving it a look over. The idea is to run this continuously on the Supermicro machine.

Apart from this, I also fixed some errors in the mbboDirect record syntax - so now I am able to start up the EPICS server without it throwing any error messages. It remains to verify that changing an EPICS gain slider results in the appropriate gain bits being flipped in the correct way on the hardware side; I think the correct behavior is happening on the software end. For this testing, I turned off the old c1iscaux crate at ~10am, and started up the server on c1iscaux3. I am reverting to the nominal config now (~1pm).

Further testing will require the wiring inside the Acromag chassis to be completed. This should be the priority task for next week.

*Update 1130 22 July 2019: I've now installed the required dependencies on c1iscaux3 and setup the latch.py script to run as a systemctl process dependent on modbusIOC.service.

  14795   Mon Jul 22 07:21:13 2019   gautam   Update   CDS   pianosa messed with

Somebody changed the settings on pianosa without elogging anything about it. Why does this keep happening? I thought the point of the elog was to communicate. I think there are a sufficient number of problems in the lab without me having to manually reset the control room workstation settings every week. Please make an elog if you change something.

  14832   Tue Aug 6 14:55:23 2019   gautam   Update   CDS   Making Matlab R2015b the default

ML2013 is unable to open Simulink on any of the workstations. We decided to make Matlab R2015b the default version (it is the default for the version of RCG we are using).

I commenced the procedure of the migration, starting with making a tagged commit of the current running simulink models. A local backup was also made, plus we have the usual chiara-based backup so I think we're in good hands.

Currently the branch and tag are protected - once we verify that everything works as expected post migration, I will open it up. I changed the directory structure of the models, need to confirm that the rtcds compilers don't have any hardcoded paths which may break due to my change.

The symlink to Matlab R2013 was deleted and a new symlink to R2015b was made. I activated the license using the Caltech campus license. Now running matlab from the shell starts up R2015b. Simulink even works 😲.

  14837   Fri Aug 9 08:59:04 2019   gautam   Update   CDS   Prep for install of c1iscaux

[chub, gautam]

We scoped out the 1Y3 rack this morning to figure out what needs to be done hardware-wise. We had not thought about how to power the Acromag crate - the LSC rack electronics are all powered by linear supplies rather than Sorensens, and the linear supplies are operating pretty close to their maximum current drive. The Acromag box draws ~3 A from the 20 V supply; I'm not sure what the current draw will be from the 15 V supply. Options:

  1. Since there are sorensens in 1Y2 and 1Y1, do we really care about installing another pair of switching supplies (+20 V DC and +15 V DC) in 1Y3?
    • Contingent on us having two spare Sorensens available in the lab. Chub has already located one.
  2. Use the Sorensens installed already in 1Y1. 
    • Probably the easiest and fastest option.
    • +15 V already available, we'd have to install a +20 V one (or if the +/-5 V or +12 V is unused, reconfigure for +20 V DC).
    • Can argue that "this doesn't make the situation any worse than it already is"
    • Will require running some long (~3 m) cabling to bring the DC power to 1Y3 where it is needed.
  3. Get new linear supplies, and hook them up in parallel with the existing.
    • Need to wait for new linear supply to arrive
    • Probably expensive
    • Questionable benefit to electronics noise given the uncharacterized RF pickup situation at 1Y2

I'm going with option #2 unless anyone has strong objections.
