Chub & Steve,
We swapped in our factory-clean replacement for the Varian V70D "bear-can" turbo.
The new Agilent TwisTorr 84 FS turbo pump [ model x3502-64002, SN IT17346059 ] with intake screen, fan, and vent valve was installed, along with its controller [ model 3508-64001, SN IT1737C383 ] and a larger IDP-7 dry pump [ model x3807-64010, SN MY17170019 ].
Next things to do:
All the serial vacuum signals are now interfaced to the new digital controls system. A set of persistent Python scripts will query each device at regular intervals (up to ~10 Hz) and push the readings to soft channels hosted by the modbus IOC. Similar scripts will push on/off state commands to the serial turbo pumps.
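As a concrete sketch of what one of these pollers might look like (a minimal example; the terminal-server address, gauge command syntax, and soft channel name are all illustrative assumptions, not the actual configuration):

import socket
import time

from epics import caput  # pyepics

GAUGE_ADDR = ("192.168.114.21", 4001)  # hypothetical terminal-server port
SOFT_CHANNEL = "C1:Vac-P1_PRESSURE"    # hypothetical soft channel on the modbus IOC
POLL_PERIOD = 0.1                      # ~10 Hz upper bound

def read_pressure(sock):
    """Query the gauge and parse its ASCII reply into a float (Torr)."""
    sock.sendall(b"PR1\r\n")           # command syntax depends on the gauge
    reply = sock.recv(64).decode().strip()
    return float(reply.split(",")[-1])

with socket.create_connection(GAUGE_ADDR, timeout=2) as sock:
    while True:
        try:
            caput(SOFT_CHANNEL, read_pressure(sock))
        except (ValueError, OSError):
            pass                       # skip a bad reading; keep polling
        time.sleep(POLL_PERIOD)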
Each serial device is assigned an IP address on the local subnet as follows. Its serial communication parameters as configured in the terminal server are also listed.
[steve, rana, gautam]
Rana pointed out that the OSEM cabling, because it lacks plastic shielding, is grounded directly to the table on which it is resting. A glass baking dish at the base of the seismic stack prevents electrical shorting to the chamber. However, there are also some LEMO/BNC cables on the east side of the stack, whose BNC ends are just lying on the base of the stack. We should use this opportunity to think about whether anything needs to be done / what the influence of this kind of grounding is (if any) on actuator noise.
Steve also pointed out that we should replace the rubber pads which the vacuum chamber is resting on (Attachment #1, not from this vent, but just to indicate what's what). These serve the purpose of relieving small amounts of strain the chamber may experience relative to the beam tube, thus helping preserve the vacuum joints b/w chamber and tube. But after (~20?) years of being under compression, Steve thinks that the rubber no longer has any elasticity, and so should be replaced.
Gautam, Aaron, Chub & Steve,
ETMY heavy door replaced by light one.
We did the following: measured 950 particles/cf·min of 0.5 micron at the SP table, wiped the crane and its cable, wiped the chamber,
placed the heavy door on a clean merostat-covered stand, dry-wiped the o-rings, and isopropanol-wiped the aluminum light cover.
Gautam, Aaron, Chub and Steve,
Vent 80 is nearly complete; the instrument is almost at atmosphere. All four ion pump gate valves have been disconnected, though the position sensors are still connected, and all annulus valves are open. The controllers of TP1 and TP3 have been disconnected from AC power. VC1 and VC2 have been disconnected and must remain closed. Currently, the RGA is being vented through the needle valve; the RGA itself had been shut off at the beginning of the vent preparations. VM1 and VM3 could not be actuated. The condition status is still listed as Unidentified because of the disconnected valves.
Vent 81 is complete.
The 4 ion pumps and the cryo pump are at ~1-4 Torr (estimated, as we have no gauges there); all other parts of the vacuum envelope are at atmosphere. The P2 & P3 gauges are out of order.
V1 and VM1 are in a locked state. We suspect this is because of some interlock logic.
TP1 and TP3 controllers are turned off.
Valve conditions as shown: ready to be opened, closed, moved, or rewired. To reiterate: VC1, VC2, and the ion pump valves shouldn't be reconnected during the vac upgrade.
Thanks for all of your help.
I've started testing the OMC channels I'll use.
I needed to update the model because I was getting "Unable to setup testpoint" errors for the DAC channels I had created earlier, and I didn't yet have any ADC channels defined. I attach a screenshot of the new model. I ran
I replaced the projector bulb. Previous bulb was shattered.
New hardware has been installed in the vacuum controls rack. It is shown in the post-install photo below.
Below is a high-level summary of where things stand, and what remains to be done.
✔ Set up of replacement controls server (c1vac).
✔ Set up of Acromag terminals.
✔ EPICS database migration.
✔ Set up of 16-port IOLAN terminal server (for multiplexing/Ethernetizing the serial devices).
With Chub's help, I've set up a mini cleanroom at EY - Attachment #1. The HEPA unit is running on high now. All surfaces were wiped with isopropanol; we can wipe everything down again on Monday and replace the foil.
Attachment #1 is a block diagram depicting the pathway by which the vertex DOF control signals can couple into DARM (adapted from a similar diagram in Gabriele's Virgo note on the subject). I've also indicated some points where noise can couple into either loop. In general, there are sensing noises that couple in at the error point of the loop, and actuation noises that couple in at the control point. In this linear picture, each block represents a (possibly time varying) transfer function. So we can write out the node-to-node transfer functions and evaluate the various couplings.
The motivation is to see if we can first simulate with some realistic noise and time-varying couplings (and then possibly test on the realtime system) the effectiveness of the filter denoted by "FF" in canceling out the shot noise from the auxiliary loop being re-injected into the DARM loop via the DARM sensor. Does this look correct?
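To make the simulation idea concrete, here's a minimal time-domain sketch of the cancellation (all loop shapes, couplings, and numbers are illustrative stand-ins, not measured 40m quantities):

import numpy as np
import scipy.signal as sig

fs = 2048                      # Hz, sample rate for the time-domain test
t = np.arange(0, 64, 1.0 / fs)

# Auxiliary-loop (e.g. SRCL) sensing noise: white "shot noise" stand-in
n_aux = np.random.randn(len(t))

# Closed-loop suppression -G/(1+G) of the aux loop, crudely modeled as a
# unity-gain low-pass with an assumed 100 Hz loop bandwidth
b, a = sig.butter(1, 100, fs=fs)
u_aux = -sig.lfilter(b, a, n_aux)   # aux control signal

# Cross-coupling K from the aux control point into the DARM error point,
# taken flat here (this is where a time-varying coupling would go)
K = 1e-2
darm_err = K * u_aux

# Feedforward: subtract an estimate K_hat of the coupling at the DARM error
# point; deliberately mismatched by 10% to mimic an imperfect FF filter
K_hat = 0.9e-2
residual = darm_err - K_hat * u_aux

print("residual/uncorrected RMS ~", np.std(residual) / np.std(darm_err))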
I finished running the cabling for the OMC, which involved running 7x 50ft DB9 cables from the OMC_NORTH rack to the 1X2 rack, laying them over the others in the tray. I tried not to move other cables to the extent possible, and I didn't run the new cables under any old ones. I attach a sketch of where these cables are going; it does not include the entire DAC/ADC signal path.
I also had to open up the AA board (D050387, D050374), because it had an IPC connector rather than the DB37 I needed. The DAC sends signals to a breakout board already in use (D080302), which had a free DB37 output (though note this carries only 4 DAC channels). Inside, the AA board had two IPC40s connected through an adapter to the final IPC70 output. I replaced the IPC40 connectors with DB37 breakouts and made a new slot in the front panel for one of them (I couldn't find a DB37 punch, so this is not great...), so I can attach it to the breakout board.
I noticed there were many unused wires, so I had to confirm that I had the wiring correct (still haven't confirmed by driving the channels, but will do). There was no DCC entry for D080302, but I grabbed the diagrams for the whitening boards it was connected to (D020432) and for the AA board I was opening up, checked elog 8814, and I think I got it. I'll confirm this manually and make a diagram if it's not fake news.
Rana, Aaron, Gautam
The old Zojirushi has died. We have received and commissioned our new Technivorm Moccamaster today. It is good.
I checked the IMC alignment following the vent, for which the manual beam block placed on the PSL table was removed. The alignment is okay; after minor touchup, the MC trans was ~1200 cts, which is roughly what it was pre-vent. I've closed the PSL shutter again.
I've completed bench testing of all seven vacuum Acromags installed in a custom rackmount chassis. The system contains five XT1111 modules (sinking digital I/O) used for readbacks of the state of the valves, TP1, CP1, and the RPs. It also contains two XT1121 modules (sourcing digital I/O) used to pass 24V DC control signals to the AC relays actuating the valves and RPs. The list of Acromag channel assignments is attached.
I tested each input channel using a manual flip-switch wired between signal pin and return, verifying the EPICS channel readout to change appropriately when the switch is flipped open vs. closed. I tested each output channel using a voltmeter placed between signal pin and return, toggling the EPICS channel on/off state and verifying the output voltage to change appropriately. These tests confirm the Acromag units all work, and that all the EPICS channels are correctly addressed.
I've set up a closed subnetwork for interfacing the vacuum hardware (Acromags and serial devices) with the new controls machine (c1vac; 192.168.113.72). The controls machine has two Ethernet interfaces, one which faces outward into the martian network and another which faces the internal subnetwork, 192.168.114.xxx. The second network interface was configured via the following procedure.
1. Add the following lines to /etc/network/interfaces:
iface eth1 inet static
2. Restart the networking services:
$ sudo /etc/init.d/networking restart
3. Enable DNS lookup on the martian network by adding the following lines to /etc/resolv.conf:
4. Enable IP forwarding from eth1 to eth0:
$ echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
5. Configure IP tables to allow outgoing connections, while keeping the LAN invisible from outside the gateway (c1vac):
$ sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
$ sudo iptables -A FORWARD -i eth0 -o eth1 -m state --state RELATED,ESTABLISHED -j ACCEPT
$ sudo iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
6. Finally, because the EPICS 3.14 server binds to all network interfaces, client applications running on c1vac now see two instances of the EPICS server: one at the outward-facing address and one at the LAN address. To resolve this ambiguity, two additional environment variables must be set that specify to local clients which server address to use. Add the following lines to /home/controls/.bashrc:
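Presumably these are the standard EPICS client variables; a sketch (the address value is my assumption that local clients should use the martian-facing address quoted above):

export EPICS_CA_AUTO_ADDR_LIST=NO
export EPICS_CA_ADDR_LIST=192.168.113.72   # assumed: the server address local clients should use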
A list of IP addresses so far assigned on the subnetwork follows.
As I was turning off the lights in the VEA, I heard a rattling sound from near the PSL enclosure. I followed it to a valve - I couldn't see a label on this valve in my brief effort to find one, but it is on the south-west corner of the IMC table, so maybe VABSSCI or VABSSCO? The power cable is somehow spliced with an attachment that looks to be bringing gas in/out of the valve (see Attachment #1), and the nut on the bottom was loose; the whole power cable + metal attachment was responsible for the rattling. I finger-tightened the nut and the sound went away.
Following the checklist, I did these:
@Steve & Chub, we are ready to vent tomorrow (Monday Nov 19).
I've begun prepping the IFO for the vent, and completed most of the IFO related items on the checklist. The power into the MC has been cut, but the low-power autolocker has not been checked. I will finish up tomorrow and post the go ahead. PSL shutter is closed for tonight.
I made additional measurements on the x and y arms, at 5 offset positions for each arm (along with 6 measurements at the "zeroed" position).
Our 4 ion pumps were closed off for a long time. I estimated their pressure to be around ~1 Torr. After talking with Koji, we decided not to vent them.
It'd still be useful to wire their position sensors. But make sure we do not actuate the valves.
The cryo pump was regenerated to 1e-4 Torr about 2 years ago. Its pressure could be ~2 Torr with the charcoal powder. It is a dirty system at room temperature.
Do not actuate VC1 and VC2, and keep the manual valve closed.
If someone feels we should vent them for some reason, let us know here in the elog before Monday morning.
Wiring of the power, Ethernet, and indicator lights for the vacuum Acromag chassis is complete. Even though this crate will only use +24V DC, I wired the +/-15V connector and indicator lights as well, to conform to the LIGO standard. There was no wiring diagram available, so I had to reverse-engineer the wiring from the partially complete c1susaux crate. Attached is a diagram for future use. The crate is ready for software development to begin on Monday.
Back to loss measurements.
I replaced the PD I've been using for the AS beam.
I misaligned the x arm.
I tried to lock the y arm, but the PRC was locked, so I was unable to. Gautam reminded me where the config scripts are.
The armloss measurement script needed two additional modifications:
I successfully ran the loss measurement script for the x and y arms. First estimates give losses of ~100 ppm.
I made the following changes to the lossmap script:
When the optic doesn't align itself to the ideal position, I'm noticing that it often locks on a 01 mode. When the cavity is then misaligned and restored, it can no longer obtain lock. To fix this, I've moved my 'save' commands to just before the loop begins. This means the script may take longer to run, but as long as the cavity is initially locked and well aligned, this should make it more robust against wandering off and never reacquiring lock (see the sketch below).
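A minimal sketch of the reordering (hypothetical channel names and a stubbed measurement, not the actual lossmap script):

from epics import caget, caput

PIT = "C1:SUS-ETMY_PIT_OFFSET"   # hypothetical alignment channels
YAW = "C1:SUS-ETMY_YAW_OFFSET"

def measure():
    pass                          # placeholder for the actual PD readout

offsets = [(0, 0), (10, 0), (-10, 0), (0, 10), (0, -10)]  # example grid

# Save the well-aligned position ONCE, before the loop begins
saved = {ch: caget(ch) for ch in (PIT, YAW)}

for dp, dy in offsets:
    caput(PIT, saved[PIT] + dp)
    caput(YAW, saved[YAW] + dy)
    measure()
    # restore to the saved alignment, not to wherever the last step ended
    for ch, val in saved.items():
        caput(ch, val)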
I left the lossmap script running for the x arm. Next would be to run it for the y arm, but I see that after stepping to a few positions the lock is again lost. It's still trying to run, but if you want to stop it, no data already taken will be lost; to stop it, go to the terminal still open on rossa and Ctrl+C.
The analysis needs:
The 40m vacuum envelope has one large single O-ring, on the OOC west side. All other doors have double O-rings with annuli.
There are 3 spacers to protect the O-ring. They should not be removed!
The cryo-pump static seal to VC1 is also Viton. All gate valves and right-angle valve plates have single Viton O-ring seals.
There are small single Viton O-rings on all optical-quality viewports.
Helium will permeate through these quickly, so leak-checking time is limited to 5-10 minutes.
All other seals are copper gaskets. We have 2 manual right-angle valves with a metal dynamic seal [ VATRING ]: VV1 & RV1.
Prep for this work:
I was trying to get some pics of the optics as a zeroth-level reference for the pre-vent loss with the single arms locked, but since our SL7 upgrade, the Sensoray won't work anymore. I'll try fixing this during the daytime.
I ran a BNC from the PD on the AS table along the cable rack to a free ADC channel on the LSC whitening board. I laid the BNC on top of the other cables in the rack, so as not to disturb anything. I was also careful not to touch the other cables on the LSC whitening board when I plugged in my BNC. The PD now reads out to... a mystery channel. The mystery channel goes to c1lsc ADC0 channels 9-16 (since the BNC goes to input 8, it should be #16). To find the channel, I opened the c1lsc model and found that ADC0 channel 15 (0-indexed in the model) goes to a terminator.
Rather than mess with the LSC model, Gautam freed up C1:ALS-BEATY_FINE_I, and I'm reading out the AS signal there.
I misaligned the x arm, then re-installed the AS PO PD, using the scope to center the beam before connecting it to the BNC (first to the mystery channel, then to BEATY). I turned off all the lights.
I went to misalign the x arm, but some of the control channels are white-boxed. The only working screen is on pianosa.
The noise on the AS signal is much larger than that on the MC trans signal, and the DC difference between the misaligned and locked states is much less than the RMS (spectrum attached); the coherence between MC trans and AS is low. However, these observations started to make more sense after estimating that for ~30 ppm of loss the locked vs misaligned states should differ by only ~0.3-0.4%, and double-checking that we are well above ADC and dark noise (blocked the beam, took another spectrum) and not saturating the PD.
To make the measurement in CDS, I also made the following changes to a copy of Johannes' assess_armloss_refl.py that I placed in /opt/rtcds/caltech/c1/scripts/lossmap_scripts/armloss_cds/ :
I started taking a measurement, but quickly realized that the mode cleaner had been locked to a higher-order mode for about an hour, so I spent some time moving the MC. It would repeatedly lock on the 00 mode, but the alignment must be bad, because the transmission fluctuates between 300 and 1400 and the lock only lasts about 5 minutes.
All 7 Acromag units are now installed in the vacuum chassis. They are connected to 24V DC power and Ethernet.
I have merged and migrated the two EPICS databases from c1vac1 and c1vac2 onto the new machine, with appropriate modifications to address the Acromags rather than VME crate.
I have tested all the digital output channels with a voltmeter, and some of the inputs. Still more channels to be tested.
I’ll follow up with a wiring diagram for channel assignments.
I began moving the AA and AI chassis over to 1X1/1X2 as outlined in the elog.
The chassis were mostly filled with unused cables. There was one cable attached to the output of a QPD interface board, but nothing was attached to the input, so it was clearly not in use and I disconnected it.
I also attach a picture of some of the SMA connectors I had to rotate to accommodate the chassis in their new locations.
The chassis are installed, and the anti-imaging chassis can be seen second from the top; the anti-aliasing chassis can be seen 7th from the top.
I need to break out the SCSI on the back of the AA chassis, because the ADC breakout board only has a DB36 adapter available; the other cables are occupied by the signals from the WFS dewhitening outputs.
It is posted on the 40m wiki, with Gautam's help. Printed copies are also posted around the doors.
This problem resurfaced, which I noticed when I couldn't get the single arm locks going.
The fix was NOT restarting the c1rfm model, which just brought the misery of all vertex FEs crashing and the usual dance to get everything back.
Restarting the sender models (i.e. c1scx and c1scy) seems to have done the trick though.
I've been looking into the cross-coupling from the SRCL loop control point to the Michelson error point.
[Attachment #1] - Swept sine measurement of transfer function from SRCL_OUT_DQ to MICH_IN1_DQ. Details below.
[Attachment #2] - Attempt to measure time variation of coupling from SRCL control point to MICH error point. Details below.
[Attachment #3] - Histogram of the data in Attachment #2.
[Attachment #4] - Spectrogram of the duration in which the data in #2 and #3 were collected, to investigate the occurrence of fast glitches.
Hypothesis: (so that people can correct me where I'm wrong - 40m tests are on DRMI so "MICH" in this discussion would be "DARM" when considering the sites)
Measurement details and next steps:
Attachments #2 and #3
Stop using the scope, and just put the signal into the DAQ with some whitening. You'll get 16 bits.
I increased the resolution on the scope by selecting Average (512) mode. I was a bit confused by this, since Yuki was correct that I had only 4 digits recorded over Ethernet, which made me think this was an I/O setting; however, the sample acquisition setting was the only thing I could find on the Tektronix scope or in its manual about improving vertical resolution. This didn't change the saved file, but I found the more extensive programming manual for the scope, which confirms that using average mode does increase the resolution... from 9 to 14 bits! I'm not even getting that many.
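As a sanity check (my own back-of-envelope, assuming the noise averages down as sqrt(N)): averaging N = 512 acquisitions buys about

    Δb = log2(√512) = (1/2)·log2(512) = 4.5 bits,

and 9 + 4.5 ≈ 14 bits, consistent with the programming manual's figure.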
Gautam was doing some DRMI locking, so I replaced the photodiode at the AS port to begin loss measurements again.
There's another setting, DATa:WIDth, which is the number of bytes per data point transferred from the scope.
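For what it's worth, checking/setting this remotely would look something like the sketch below (ACQuire:MODe and DATa:WIDth are standard Tektronix programming commands, but the raw-socket transport, address, and port are my assumptions; tds3014b.py may do this differently):

import socket

# Assumed raw-TCP command access at an illustrative address/port
scope = socket.create_connection(("192.168.113.25", 4000), timeout=2)

def query(cmd):
    """Send a command to the scope and return its reply string."""
    scope.sendall((cmd + "\n").encode())
    return scope.recv(256).decode().strip()

print(query("ACQuire:MODe?"))      # expect AVErage when average mode is on
print(query("DATa:WIDth?"))        # bytes per transferred sample point
scope.sendall(b"DATa:WIDth 2\n")   # request 2 bytes/point (room for >8 bits)
scope.close()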
I tried using the *.25 scope instead, with no better results. Changing the vertical resolution directly doesn't change this either. I've also tried changing most of the Ethernet settings. I don't think it's something on the scripts side, because I'm using the same scripts that apparently generated the most recent of Johannes' and Yuki's files; I did look through e.g. tds3014b.py, and didn't see the resolution explicitly set. Indeed, I get the 7 bits of resolution that function specifies, but most of them aren't filled by the scope. This makes me think the problem is in the scope settings.
With the DRMI locked, I drove a line in MICH using the sensing matrix infrastructure. Then I looked at the error points of MICH, PRCL, and SRCL. Initially, the sensing line oscillator output matrix for MICH was set to drive only the BS. Subsequently, I changed the --> PRM and --> SRM matrix elements until the line height in the PRCL and SRCL error signals was minimized (i.e. the change to PRCL and SRCL due to the BS moving, which is a geometric effect, is cancelled by applying the opposite actuation to the PRM/SRM respectively). Then I transferred these to the LSC output matrix (old numbers in brackets):
MICH--> PRM = -0.335 (-0.2655)
MICH--> SRM = -0.35 (+0.25)
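Schematically (my own notation, just to make the nulling condition explicit): the BS drive appears in the PRCL error signal through the geometric coupling A_BS-->PRCL, so the MICH-->PRM element α is tuned until

    A_BS-->PRCL + α · A_PRM-->PRCL = 0,  i.e.  α = -A_BS-->PRCL / A_PRM-->PRCL,

and likewise for the MICH-->SRM element with the SRCL couplings.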
I then measured the loop TFs - all 3 loops had UGFs around 100 Hz, coinciding with the peaks of the phase bubbles. I also ran some sensing lines and did a sensing matrix measurement, Attachment #1 - it looks similar to what I have obtained in the past, although the relative angles between the DoFs make no sense to me. I guess the AS55 demod phase can be tuned up a bit.
The demodulation was done offline - I mixed the time series of the actuator and sensor signals with a "local oscillator" cosine wave - but instead of using the entire 5-minute time series and low-passing the mixer output, I divvied up the data into 5-second chunks, windowed with a Tukey window, and have plotted the mean value of the resulting mixer output (sketched below).
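For the record, the procedure is roughly this (a sketch with illustrative variable names, not the actual analysis code):

import numpy as np
from scipy.signal import windows

def chunked_demod(x, fs, f0, t_chunk=5.0, alpha=0.25):
    """Mix x against a cosine LO at f0; average in Tukey-windowed chunks."""
    n = int(t_chunk * fs)
    t = np.arange(n) / fs
    lo = np.cos(2 * np.pi * f0 * t)
    w = windows.tukey(n, alpha)
    return np.array([np.mean(w * x[k*n:(k+1)*n] * lo)
                     for k in range(len(x) // n)])

# Each element is one 5 s estimate of the (cosine-quadrature) coupling;
# repeating with a sine LO would give the other quadrature.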
Unrelated to this work: I re-aligned the PMC on the PSL table, mostly in Pitch.
This problem resurfaced. I'm doing the debugging.
6:30pm - "Solved" using the same procedure of stepping through the whitening gains with a small (10 DAC cts pk) signal applied. Simply stepping through the gains with input grounded doesn't seem to do the trick.
I had some success today. I hope that the tweaks I made will allow working with the DRMI during the day as well, though it looks like the main limiting factor in lock duty cycle is angular stability of the PRC.
[Attachment #1]: Repeatable and reliable DRMI locks tonight, stability is mainly limited by angular glitches - I'm not sure yet if these are due to a suspect Oplev servo on the PRM, or if they're because of the tip-tilt PR2/PR3/SR2/SR3.
[Attachment #2]: A pass at measuring the TF from SRCL error point to MICH error point via control noise re-injection. I was trying to measure down to 40 Hz, but lost the lock, and am calling it for the night.
[Attachment #3]: Coherence between PRM oplev error point and beam spot motion on POP QPD.
Note that the MICH actuation is not necessarily optimally de-coupled by actuating on the PRM and SRM yet (i.e. the latter two elements of the LSC output matrix are not precisely tuned yet).
What is the correct way to make feedforward filters for this application? Swept-sine transfer function measurement? Or drive broadband noise at the SRCL error point and then do time-domain Wiener filter construction using SRCL error as the witness and MICH error as the target? Or some other technique? Does this even count as "feedforward" since the sensor is not truly "outside" the loop?
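For reference, a minimal sketch of the time-domain Wiener option mentioned above (the textbook FIR normal-equation solution with illustrative names, not any production code):

import numpy as np
from scipy.linalg import solve_toeplitz

def wiener_fir(witness, target, n_taps=256):
    """FIR Wiener filter minimizing |target - w * witness|^2."""
    N = len(witness)
    # Autocorrelation of the witness and cross-correlation with the target
    r = np.array([np.dot(witness[:N-k], witness[k:]) for k in range(n_taps)]) / N
    p = np.array([np.dot(target[k:], witness[:N-k]) for k in range(n_taps)]) / N
    # Solve the (symmetric) Toeplitz normal equations R w = p
    return solve_toeplitz(r, p)

# Usage sketch: predict the coupling from the witness and subtract it
# w = wiener_fir(srcl_err, mich_err)
# mich_cleaned = mich_err - np.convolve(srcl_err, w, mode="same")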
Earlier today, I rebooted a few unresponsive VME crates (susaux, auxey).
The IMC has been unhappy for a couple of days - the glitches in the MC suspensions are more frequent. I reset the dark offsets, minimized MCREFL by hand, and then re-centered the beam on the MC2 Trans QPD. In this config, the IMC has been relatively stable today, although judging by the control room StripTool WFS control signal traces, the suspension glitches are still happening. Since we have to fix the attenuator issue anyways soon, we can do a touch-up on IMC WFS.
I removed the DC PD used for loss measurements. I found that the AS beam path had been disturbed - the alignment needs to be redone, which just makes it more work to get back to IFO locking, as I have to check the alignment onto the AS55 and AS110 PDs.
Single arm locking worked with minimal effort - although the X arm dither alignment doesn't do the intended job of maximizing the transmission. Needs a checkup.
PRMI locking (carrier resonant) was also pretty easy. Stability of the lock is good, locks hold for ~20 minutes at a time and only broke because I was mucking around. However, when the carrier is resonant, I notice a smeared scatter pattern on the ITMX camera that I don't remember from before. I wonder if the FF idea can be tested in the simpler PRMI config.
After recovering these two simpler IFO configurations, I improved the cavity alignment by hand and with the ASS servos that work. Then I re-centered all the Oplev beams onto their respective QPDs and saved the alignment offsets. I briefly attempted DRMI locking but had little success; I'm going to try again later in the evening, so I'm leaving the IFO with the DRMI flashing about, LSC mode off.
The VEA vertex laptop, paola, has a flashing orange indicator which I take to mean some kind of battery issue. When the laptop is disconnected from its AC power adaptor, it immediately shuts down. So this machine is kind of useless for its intended purpose of being a portable computer we can work at optical tables with. The actual battery diagnostics (using upower) don't report any errors.
Today I finished setting up the server that will replace the c1vac1/2 machines. I put it on the martian network at the unassigned IP 192.168.113.72. I assigned it the hostname c1vac and added it to the DNS lookup tables on chiara.
I created a new targets directory on the network drive for the new machine: /cvs/cds/caltech/target/c1vac. After setting EPICS environment variables according to 13681 and copying over (and modifying) the files from /cvs/cds/caltech/target/c1auxex as templates, I was able to start a modbusIOC server on the new machine. I was able to read and write (soft) channel values to the EPICS IOC from other machines on the martian network.
I scripted it as a systemd-managed process which automatically starts on boot and restarts after failure, just as it is set up on c1auxex; a sketch of such a unit file is below.
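A minimal sketch of such a unit file (the unit name and launcher path are hypothetical; Restart=always gives the restart-after-failure behavior, and enabling the unit gives start-on-boot):

[Unit]
Description=modbusIOC for c1vac vacuum channels
After=network.target

[Service]
User=controls
# Hypothetical path; the actual launcher script name was not recorded here
ExecStart=/cvs/cds/caltech/target/c1vac/startModbusIOC.sh
Restart=always

[Install]
WantedBy=multi-user.target

It would then be enabled with sudo systemctl enable so it starts on boot.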
The vacuum and MC are OK
Jon and I stuck an extender card into the eurocrate at 1X8 earlier today (~5pm PT), to see if the box was getting +24V DC from the Sorensen or not. Upon sticking the card in, the FAIL LEDs on all the VME cards came on. We immediately removed the extender card. Without any intervention from us, after ~1 minute, the FAIL LEDs went off again. Judging by the main volume pressure (Attachment #1) and the Vacuum MEDM screen (Attachment #2), this did not create any issues and the c1vac1 computer is still responsive.
But Steve can perhaps run a check in the AM to confirm that this activity didn't break anything.
Is there a reason why extender cards shouldn't be stuck into eurocrates?
Please check your data file and compare with those Johannes made last year. I think the power values in your data file may have only three digits and fluctuate by about 2%, which introduces a huge error. (see elog: 40m/14254)
On running the script again, I'm getting negative values for the loss.
This afternoon I started setting up the Supermicro 5017A-EP that will replace c1vac1/2. Following Johannes's procedure in 13681, I installed Debian 8.11 (jessie). A more recent stable release, 9.5, has become available since the first Acromag machine was assembled, but I stuck with version 8 for consistency; we already know that version works. The setup is sitting on the left side of the electronics bench for now.
That was likely me. I had recentered the beam on the PD I'm using for the armloss measurements, and I probably moved the wrong steering mirror. The transmission from MC2 is sent to a steering mirror that directs it to the MC2 transmission QPD; the transmission through this steering mirror is directed to the armloss MC QPD (the latter is what I was trying to adjust).
Note: The MC2 trans QPD goes out to a cable that is labelled MC2 op lev. This confusion should be fixed.
I realigned the MC and recentered the beam on the QPD. Indeed the beam on the MC2 QPD was up and to the left, and the lock was lost pretty quickly, possibly because the beam wasn't centered. Lock was unstable for a while, and I rebooted c1psl once during this process because the slow machine was unresponsive.
When tweaking the alignment near MC2, take care not to bump the table, as this also changes the MC2 alignment.
Once the MC was stably locked, I was able to maximize MC transmission at ~15,400 counts. I then centered the spot on the MC2 trans QPD, and transmission dropped to ~14,800 counts. After tweaking the alignment again, it recovered to ~15,000 counts. Gautam then engaged the WFS servo; with the beam centered on the MC2 trans QPD, the transmission level dropped to ~14,900.
I tried to plot a long trend of MC transmission today. I could not get data farther back than 2017 Aug 4.
The mode cleaner was misaligned, probably due to the earthquake (seen as the drop in the MC transmitted value slightly after 7:38:52 UTC in the second plot). The plots show the PMC transmitted and MC sum signals from 10 June 07:10:08 UTC over a duration of 17 hrs. The PMC was realigned at about 4-4:15 pm today by Rana; this can be seen in the first plot.
The IMC has been misbehaving for the last 5 hours. Why? I turned the WFS servos off. AFAIK, Aaron was the last person to work on the IFO, so I'm not taking any further debugging steps, so as not to disturb his setup.
I'm checking out the data this morning, running armloss_AS_calc.py using the parameters Yuki used here.
I made the following changes to scripts (measurement script and calculator script)
I repeated the 'dark' measurements, because I need 20 files to run the script, and the earlier measurements had the window on the scope set larger than the integration time in the script, so they were padded with bad values that were influencing the calculation.
On running the script again, I'm getting negative values for the loss. I removed the beamstops from the PDs, and re-centered the beams on the PDs to repeat the YARM measurements.
The Contec test board with Dsub37Fs was on the top shelf of E7