ID Date Author Type Category Subject
  14496   Tue Mar 26 04:25:13 2019   Johannes   Update   Upgrade   c1susaux upgrade plan
Quote:

Main TODO items

  • Debug issue with Acromag DC power wiring
  • Complete wiring from chassis feedthroughs to Acromag terminals, following this wiring diagram
  • Check/set the configuration of each Acromag unit using the software on the Windows laptop
  • Set the analog channel calibrations in the EPICS database file
  • Test each channel ex situ. Chub and I discussed an idea to use two DB-37F breakout boards, with the wiring between the board terminals manually set. One DAC channel would be calibrated and driven to test other ADC channels. A similar approach could be used for the digital input/output channels.

Just a few remarks, since I heard from Gautam that c1susaux is next in line for upgrade.

All units have already been configured with IP addresses and settings following the scheme explained on the slow controls wiki page. I did this while powering the units in the chassis, so I'm not sure where the short is coming from. Is the power supply maybe not sourcing enough current? Powering all units at the same time takes significant current, something like >1.5 Amps if I remember correctly. These are the IPs I assigned before I left:

Acromag Unit IP Address
C1SUSAUX_ADC00 192.168.115.20
C1SUSAUX_ADC01 192.168.115.21
C1SUSAUX_ADC02 192.168.115.22
C1SUSAUX_ADC03 192.168.115.23
C1SUSAUX_ADC04 192.168.115.24
C1SUSAUX_ADC05 192.168.115.25
C1SUSAUX_ADC06 192.168.115.26
C1SUSAUX_ADC07 192.168.115.27
C1SUSAUX_ADC08 192.168.115.28
C1SUSAUX_ADC09 192.168.115.29
C1SUSAUX_DAC00 192.168.115.40
C1SUSAUX_DAC01 192.168.115.41
C1SUSAUX_DAC02 192.168.115.42
C1SUSAUX_DAC03 192.168.115.43
C1SUSAUX_BIO00 192.168.115.60
C1SUSAUX_BIO01 192.168.115.61
C1SUSAUX_BIO02 192.168.115.62
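As a quick sanity check of the network configuration (not a substitute for the Windows configuration utility), something like the sketch below can confirm that each unit answers on the standard Modbus/TCP port 502. The IPs come from the table above; only two units are written out here.

import socket

units = {
    'C1SUSAUX_ADC00': '192.168.115.20',
    # ... fill in the remaining ADC/DAC/BIO units from the table above ...
    'C1SUSAUX_BIO02': '192.168.115.62',
}

for name, ip in sorted(units.items()):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(2.0)
    try:
        s.connect((ip, 502))    # 502 = standard Modbus/TCP port used by the Acromag XT units
        print('{:16s} {:15s} reachable'.format(name, ip))
    except OSError as e:
        print('{:16s} {:15s} NOT reachable ({})'.format(name, ip, e))
    finally:
        s.close()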

I used black/white twisted-pair wires for A/D, red/white for D/A, and green/white for BIO channels. I found it easiest to remove the blue terminal blocks from the Acromag units for doing the majority of the wiring, but wasn't able to finish it. I had also done the analog channel calibrations with the Windows utility, using multimeters and one of the precision voltage sources I had brought over from the Bridge labs, but it's probably a good idea to check them and correct if necessary. I also recommend checking that the existing wiring, particularly for MC1 and MC2, is correct, as I had swapped their order in the channel assignment in the past.

While looking through the database files I noticed two glaring mistakes which I fixed:

  1. The definition of C1SUSAUX_BIO2 was missing in /cvs/cds/caltech/target/c1susaux2/C1SUSAUX.cmd. I added it after the assignments for C1SUSAUX_BIO1.
  2. Due to copy/paste, the /cvs/cds/caltech/target/c1susaux2/C1_SUS-AUX_<OPTIC>.db database files were still pointing to C1AUXEX. I overwrote all instances of this in all database files with C1SUSAUX (a sketch of doing this in bulk follows below).
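For reference, the bulk substitution in item 2 amounts to a few lines of Python; this is only a sketch of the pattern, and it should be run against a backup copy of the target directory first.

import glob
import os

target_dir = '/cvs/cds/caltech/target/c1susaux2'

for path in glob.glob(os.path.join(target_dir, 'C1_SUS-AUX_*.db')):
    with open(path) as f:
        text = f.read()
    if 'C1AUXEX' in text:
        with open(path, 'w') as f:
            f.write(text.replace('C1AUXEX', 'C1SUSAUX'))
        print('updated ' + path)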

 

  14495   Mon Mar 25 10:21:05 2019   Jon   Update   Upgrade   c1susaux upgrade plan

Now that the Acromag upgrade of c1vac is complete, the next system to be upgraded will be c1susaux. We chose c1susaux because it is one of the highest-priority systems awaiting upgrade, and because Johannes has already partially assembled its Acromag replacement (see photos below). I've assessed the partially-assembled Acromag chassis and the mostly-set-up host computer and propose we do the following to complete the system.

Documentation

As I go, I'm writing step-by-step documentation here so that others can follow this procedure for future systems. The goal is to create a standard procedure that can be followed for all the remaining upgrades.

Acromag Chassis Status

The bulk of the remaining work is the wiring and testing of the rackmount chassis housing the Acromag units. This system consists of 17 units: 10 ADCs, 4 DACs, and 3 digital I/O modules. Johannes has already created a full list of channel wiring assignments. He has installed DB37-to-breakout-board feedthroughs for all the signal cable connections. It looks like about 40% of the wiring from the breakout boards to the Acromag terminals is already done.

The Acromag units have to be initially configured using the Windows laptop connected by USB. Last week I wasn't immediately able to check their configuration because I couldn't power on the units. Although the DC power wiring is complete, when I connected a 24V power supply to the chassis connector and flipped on the switch, the voltage dropped to ~10V irrespective of adjusting the current limit. The 24V indicator lights on the chassis front and back illuminated dimly, but the Acromag lights did not turn on. I suspect there is a short to ground somewhere, but I didn't have time to investigate further. I'll check again this week unless someone else looks at it first.

Host Computer Status

The host computer has already been mostly configured by Johannes. So far I've only set up IP forwarding rules between the martian-facing and Acromag-facing ethernet interfaces (the Acromags are on a subnet inaccessible from the outside). This is documented in the link above. I also plan to set up local installations of modbus and EPICS, as explained below. The new EPICS command file (launches the IOC) and database files (define the channels) have already been created by Johannes. I think all that remains is to set up the IOC as a persistent system service.

Host computer OS

Recommendation from Keith Thorne:

For CDS lab-wide, Jamie Rollins and Ryan Blair have been maintaining Debian 8 and 9 repos with some of these.  
They have somewhat older EPICS versions and may not include all the modules we have for SL7.
One worry is whether they will keep Debian 9 maintained, as Debian 10 is already out.

I would likely choose Debian 9 instead of Ubuntu 18.04.02, as I'm not sure of the Ubuntu repos for EPICS libraries.

Based on this, I propose we use Debian 9 for our Acromag systems. I don't see a strong reason to switch to SL7, especially since c1vac and c1susaux are already set up using Debian 8. Although Debian 8 is one version out of date, I think it's better to get a well-documented and tested procedure in place before we upgrade the working c1vac and c1susaux computers. When we start building the next system, let's install Debian 9 (or 10, if it's available), get it working with EPICS/modbus, then loop back to c1vac and c1susaux for the OS upgrade.

Local vs. central modbus/EPICS installation

The current convention is for all machines to share a common installation which is hosted on the /cvs/cds network drive. This seems appealing because only a single central EPICS distribution needs to be maintained. However, from experience attempting this on c1vac, I'm convinced this is a bad design for the new Acromag systems.

The problem is that any network outage, even routine maintenance or a brief glitch, wreaks havoc on Acromag hosts set up this way. When the network is interrupted, the modbus executable disappears mid-execution, crashing the process and hanging the OS (I think due to the deadlocked NFS mount), so the only way to recover is to manually power-cycle. Still worse, this can happen silently (channel values freeze), meaning that, e.g., watchdog protections might fail.

To avoid this, I'm planning to install a local EPICS distribution from source on c1susaux, just as I did for c1vac. This only takes a few minutes to do, and I will include the steps in the documented procedure. Building from source also better protects against OS-dependent bugginess.

Main TODO items

  • Debug issue with Acromag DC power wiring
  • Complete wiring from chassis feedthroughs to Acromag terminals, following this wiring diagram
  • Check/set the configuration of each Acromag unit using the software on the Windows laptop
  • Set the analog channel calibrations in the EPICS database file
  • Test each channel ex situ. Chub and I discussed an idea to use two DB-37F breakout boards, with the wiring between the board terminals manually set. One DAC channel would be calibrated and driven to test other ADC channels. A similar approach could be used for the digital input/output channels. (A sketch of such a test is below.)
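In practice the last item could look something like the following; this is only a sketch using pyepics, and the channel names are placeholders rather than the final C1SUSAUX channel list.

import time
import numpy as np
from epics import caget, caput

DAC_CHAN = 'C1:SUS-TEST_DACOUT'   # hypothetical calibrated DAC channel
ADC_CHAN = 'C1:SUS-TEST_ADCIN'    # hypothetical ADC channel jumpered to it on the DB-37 breakout

test_voltages = np.linspace(-5, 5, 11)
readback = []
for v in test_voltages:
    caput(DAC_CHAN, v, wait=True)
    time.sleep(0.5)               # let the slow (EPICS-rate) readback settle
    readback.append(caget(ADC_CHAN))

# A linear fit gives the end-to-end gain and offset of the ADC channel under test
gain, offset = np.polyfit(test_voltages, readback, 1)
print('gain = {:.4f}, offset = {:.4f} V'.format(gain, offset))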
Attachment 1: IMG_3136.jpg
Attachment 2: IMG_3138.jpg
Attachment 3: IMG_3137.jpg
  14494   Thu Mar 21 21:50:31 2019   rana   Update   VAC   Protection against AC power loss

agreed - we need all pumps on UPS for their safety and also so that we can spin them down safely. Can you and Chub please find a suitable UPS?

Quote:

However, I discovered that TP1---the pump that might be most damaged by a sudden power failure---is not on the UPS. It's plugged directly into a 240V outlet along the wall. This is because the current UPS doesn't have any 240V sockets. I'd recommend we get one that can handle all the turbo pumps.

  14493   Thu Mar 21 18:36:59 2019   Jon   Omnistructure   Upgrade   Vacuum Controls Switchover Completed

Updated vac channel list is attached. There are several new ADC channels.

Quote:

Hardware & Channel Assignments

All of the new hardware is now permanently installed in the vacuum rack. This includes the SuperMicro rack server (c1vac), the IOLAN serial device server, a vacuum subnet switch, and the Acromag chassis. Every valve/pump signal cable that formerly connected to the VME bus through terminal blocks has been refitted with a D-sub connector and screwed directly onto feedthroughs on the Acromag chassis.

The attached pdf contains the master list of assigned Acromag channels and their wiring.

Attachment 1: 40m_Vacuum_Acromag_Channels_20190321.pdf
  14492   Thu Mar 21 18:09:36 2019   Koji   Update   CDS   db file preparation for acromag c1susaux

I have updated the google doc spreadsheet to indicate the required action for the new dbfile generation.

There are three types of actions:

1. COPY - Just duplicate the old EPICS db entry. This is for soft channels, calc channels.
2. DELETE - Delete the entry for some physical channels that will not be implemented on Acromag (oplev, dewhitening mon, AI monitor, etc)
3. REPLACE - For the physical channels, we want to replace the port names.

The blue part of the spreadsheet indicates the action for each channel. If it is a physical channel, the assigned module and the channel are indicated there. What we still want to do is to use this information to generate the port name, which looks like "@asynMask(C1VAC_XT1221A_ADC 1 -16)MODBUS_DATA" (a sketch of such a generator is below).
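As a rough illustration of the REPLACE step, a generator could build the INP string from the module/channel assignment in the spreadsheet. The example record name and the DTYP/SCAN fields below are assumptions for illustration, not an agreed db format.

def inp_field(module, channel, nbits=16):
    """e.g. inp_field('C1VAC_XT1221A_ADC', 1) -> '@asynMask(C1VAC_XT1221A_ADC 1 -16)MODBUS_DATA'"""
    return '@asynMask({} {} -{})MODBUS_DATA'.format(module, channel, nbits)

def make_ai_record(name, module, channel):
    """Build the text of one EPICS ai record pointing at a physical Acromag channel."""
    return '\n'.join([
        'record(ai, "{}")'.format(name),
        '{',
        '    field(DTYP, "asynInt32")',
        '    field(INP,  "{}")'.format(inp_field(module, channel)),
        '    field(SCAN, "1 second")',
        '}',
    ])

# Hypothetical example: first channel of the first ADC unit
print(make_ai_record('C1:SUS-XXX_EXAMPLEMon', 'C1SUSAUX_ADC00', 0))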

The links to the spreadsheets can be found on 40m wiki: https://wiki-40m.ligo.caltech.edu/CDS/SlowControls/c1susaux

Attachment 1: Screen_Shot_2019-03-21_at_18.06.53.png
  14491   Thu Mar 21 17:22:52 2019   Jon   Update   VAC   More vac controls upgrades

I've converted all the vac control system code to run on Python 3.4, the latest version available through the Debian package manager. Note that these scripts now REQUIRE Python 3.x; we decided there was no need to preserve Python 2.x compatibility. I'm leaving the vac system in its nominal state ("vacuum normal + RGA").

Quote:

The vac controls system is going down for migration from Python 2.7 to 3.4. Will advise when it is back up.

 

  14490   Thu Mar 21 12:46:22 2019   Jon   Update   VAC   More vac controls upgrades

The vac controls system is going down for migration from Python 2.7 to 3.4. Will advise when it is back up.

  14489   Wed Mar 20 20:07:22 2019   Jon   Update   VAC   Doing vac controls work

Work is completed and the vac system is back in its nominal state.

Quote:

I'm rebooting the IOLAN server to load new serial ports. The interlocks might trip when the pressure gauge readbacks cut out.

 

  14488   Wed Mar 20 19:26:25 2019   Jon   Update   VAC   Protection against AC power loss

Today I implemented protection of the vac system against extended power losses. Previously, the vac controls system (both old and new) could not communicate with the APC Smart-UPS 2200 providing backup power. This was not an issue for short glitches, but for extended outages the system had no way of knowing it was running on dwindling reserve power. An intelligent system should sense the outage and put the IFO into a controlled shutdown, before the batteries are fully drained.

What enabled this was a workaround Gautam and I found for communicating with the UPS serially. Although the UPS has a serial port, neither the connector pinout nor the low-level command protocol are released by APC. The only official way to communicate with the UPS is through their high-level PowerChute software. However, we did find "unofficial" documentation of APC's protocol. Using this information, I was able to interface the UPS to the IOLAN serial device server. This allowed the UPS status to be queried using the same Python/TCP sockets model as all the other serial devices (gauges, pumps, etc.). I created a new service called "serial_UPS.service" to persistently run this Python process like the others. I added a new EPICS channel "C1:Vac-UPS_status" which is updated by this process.
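The query pattern is roughly the following. This is only a sketch: the IOLAN host/port and the command byte are placeholders, and the real values come from the IOLAN port mapping and the unofficial APC protocol notes.

import socket

IOLAN_HOST = '192.168.114.22'   # placeholder IOLAN address on the vac subnet
UPS_TCP_PORT = 10001            # placeholder TCP port mapped to the UPS serial line

def query_ups(cmd=b'Q'):
    """Send a single command byte over the IOLAN TCP socket and return the raw response."""
    with socket.create_connection((IOLAN_HOST, UPS_TCP_PORT), timeout=2.0) as s:
        s.sendall(cmd)
        return s.recv(64).decode('ascii', errors='replace').strip()

if __name__ == '__main__':
    print('UPS status readback:', query_ups())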

With all this in place, I added new logic to the interlock.py code which closes all valves and stops all pumps in the event of a power failure. To be conservative, this interlock is also tripped when the communications link with the UPS is disconnected (i.e., when the power state becomes unknown). I tested the new conditions against both communication failure (by disconnecting the serial cable) and power failure (by pressing the "Test" button on the UPS front panel). This protects TP2 and TP3. However, I discovered that TP1---the pump that might be most damaged by a sudden power failure---is not on the UPS. It's plugged directly into a 240V outlet along the wall. This is because the current UPS doesn't have any 240V sockets. I'd recommend we get one that can handle all the turbo pumps.

For future reference, the UPS serial interface settings are:

Pin 1: RxD
Pin 2: TxD
Pin 5: GND
Standard: RS-232
Baud rate: 2400
Data bits: 8
Parity: none
Stop bits: 1
Handshaking: none

 

 

Attachment 1: IMG_3146.jpg
  14487   Wed Mar 20 12:31:30 2019   Jon   Update   VAC   Doing vac controls work

I'm rebooting the IOLAN server to load new serial ports. The interlocks might trip when the pressure gauge readbacks cut out.

  14486   Mon Mar 18 20:22:28 2019   gautam   Update   ALS   ALS stability test

I'm running a test to see how stable the EX green lock is. For this purpose, I've left the slow temperature tuning servo on (there is a 100 count limiter enabled, so nothing crazy should happen).

  14485   Mon Mar 18 18:10:14 2019   Koji   Summary   General   Task items and priority

[Gautam/Chub/Koji] ~ Mini discussion

Maintenance / Upgrade Items

(Priority high to low)

  • TT/IO suspension upgrade (solidworks work) -> order components -> TT characterization
  • Acromag upgrade c1susaux
    • Produce spreadsheet for DB files. Learn the new format of the DB file with Acromag. Develop a Python code for the DB file generation (Jon->Koji)
  • Satellite Box upgrade
    • Rack mount? Front panel DB connectors. New circuits (PD-LED)
       
  • Acromag iscaux1/2 & isc whitening upgrade
     
  • new RC mirror characterization -> installation
  14484   Mon Mar 18 17:06:12 2019   gautam   Update   Optical Levers   ITMY HeNe replaced

Oplev HeNe was replaced this afternoon. We did some HeNe shuffling:

  1. A new HeNe was being used for the fiber illumination demo at EX. We took that out and decided to use it as the new ITMX HeNe. It had 2.6mW output at 632nm (measured with the Ophir power meter)
  2. Old ETMY HeNe was used for fiber illumination demo.
  3. Old ITMX HeNe was putting out no light - it will be disposed of.

Attachment #1 shows the RIN and Attachment #2 and #3 show the PIT and YAW TFs with the new HeNe.

The ITMX Oplev path is still not great - the ingoing beam is within 2mm of clipping on a 2" lens used in the POX path, and there is a bunch of scattered red light everywhere. We should take the opportunity when the chamber is open to try and have a better layout (it may be tricky to optimize without touching the two in-vacuum steering optics).

Quote:

I'll ask Chub to replace it this afternoon.

Attachment 1: OLRIN.pdf
Attachment 2: OL_PIT.pdf
Attachment 3: OL_YAW.pdf
  14483   Mon Mar 18 12:27:42 2019   gautam   Update   General   IFO status
  1. c1iscaux2 VME crate is damaged - see Attachment #1. 
    • It is not generating the 12V supply voltage, and so nothing in the crate works.
    • Tried resetting via front panel button, power cycling by removing power cable on rear, all to no effect.
    • Tried pulling out all cards and checking if there was an internal short that was causing the failure - looks like the problem is with the crate itself.
    • Not sure how long this machine has been unresponsive as we don't have any readback of the status of the eurocrate machines.
    • Not a showstopper, mainly we can't control the whitening settings for AS55, REFL55, REFL165 and ALSY. 
    • Acromag installation schedule should be accelerated.
    • * Koji reminded me that \text{VME crate} \ \neq \ \text{eurocrate}. The former is what is used for the slow machines, the latter is what is used for holding the iLIGO style electronics boards.
  2. ITMX oplev is dead - see Attachment #2.
    • Lasted ~3 years (installed March 2016).
    • I confirmed that no light is coming out of the laser head on the optical table.
    • I'll ask Chub to replace it this afternoon.
  3. c1susaux is unresponsive
    • I didn't reboot it as I didn't want to spend some hours freeing ITMY. 
    • At some point we will have to bite the bullet and do it.
  4. Input pointing is still not stable
    • I aligned the input pointing using TT1/TT2 to maximize TRX/TRY before lunch, but in 1 hour, the pointing has already drifted.
  5. POX/POY locking is working okay. TRX has large low-frequency fluctuations because of ITMX not having an Oplev servo, should be rectified once we swap out the HeNe.

The goal for this week is to test out the ALS system, so this is kind of a workable state since POX/POY locking is working. But the number of broken things is accumulating fast.

Attachment 1: IMG_7343.JPG
Attachment 2: ITMXOL.png
  14482   Sun Mar 17 21:06:17 2019   Anjali   Update   ALS   Amplifier characterisation

The goal was to characterise the new amplifier (AP1053). As practice, I did the characterisation of the old amplifier. This test is similar to that reported in Elog ID 13602.

  • Attachment #1 shows the schematic of the setup for gain characterisation and Attachment #2 shows the results of gain characterisation. 
  • The gain measurement is comparable with the previous results. From the data sheet, 10 dB gain is guaranteed in the frequency range 10-450 MHz. From our observation, the gain is not flat over this region: we measured a maximum gain of 10.7 dB at 6 MHz, decreasing to 8.5 dB at 500 MHz.
  • Attachment #3 shows the schematic of the setup for the noise characterisation and Attachment #4 shows the results of the noise measurement.
  • The noise measurement doesn't look right. We probably have to repeat this measurement.
Attachment 1: Gain_measurement.pdf
Attachment 2: Amplifier_gain.pdf
Attachment 3: noise_measurement.pdf
Attachment 4: noise_characterisation.pdf
  14481   Sun Mar 17 13:35:39 2019   Anjali   Update   ALS   Power splitter characterization

We characterized the power splitter (Minicircuits ZAPD-2-252-S+). The schematic of the measurement setup is shown in Attachment #1. The network/spectrum/impedance analyzer (Agilent 4395A) was used in network analyzer mode for the characterisation, with its RF output enabled. We used another splitter (power splitter #1) to split the RF power such that one part goes to the network analyzer and the other part goes to the splitter under test (power splitter #2). The characterisation results and comparison with the data sheet values are shown in Attachments #2-4.

Attachment #2 : Comparison of total loss in port 1 and 2

Attachment #3 : Comparison of amplitude unbalance

Attachment #4 : Comparison of phase unbalance

  • From the data sheet: the splitter is wideband, 5 to 2500 MHz, usable from 0.5 to 3000 MHz. We performed the measurement from 1 MHz to 500 MHz (limited by the bandwidth of the network analyzer).
  • It can be seen from Attachments #2 and #4 that there is a sudden increase below ~11 MHz. The reason for this is not clear to me.
  • The measured total loss values for port 1 and port 2 are slightly higher than those specified in the data sheet. From the data sheet, the maximum losses in port 1 and port 2 at 450 MHz are 3.51 dB and 3.49 dB respectively; the measured values are 3.61 dB and 3.59 dB. It can also be seen from Attachment #1 (b) that the expected trend is for the total loss to decrease with increasing frequency, whereas we observe the opposite trend in the frequency range 11-500 MHz.
  • From the data sheet, the maximum amplitude unbalance in the 5 MHz-500 MHz range is 0.02 dB, and the measured maximum value is 0.03 dB.
  • Similarly for the phase unbalance, the maximum value specified by the data sheet in the 5 MHz-500 MHz range is 0.12 degrees, while the measurement shows a phase unbalance of up to 0.7 degrees in this frequency range.
  • So the observations show that the measured values are slightly higher than those specified in the data sheet.
Attachment 1: Measurement_setup.pdf
Attachment 2: Total_loss.pdf
Attachment 3: Amplitude_unbalance.pdf
Attachment 4: Phase_unbalance.pdf
  14480   Sun Mar 17 00:42:20 2019   gautam   Update   ALS   NF1611 cannot be shot-noise limited?

Summary:

Per the manual (pg12) of the NF 1611 photodiode, the "Input Noise Current" is 16 pA/rtHz. It also specifies that for "Linear Operation", the max input power is 1 mW, which at 1um corresponds to a current shot noise of ~14 pA/rtHz. Therefore,

  1. This photodiode cannot be shot-noise limited if we also want to stay in the spec-ed linear regime.
  2. We don't need to worry so much about the noise figure of the RF amplifier that follows the photodiode. In fact, I think we can use a higher gain RF amplifier with a slightly worse noise figure (e.g. ZHL-3A) as we will benefit from having a larger frequency discriminant with more RF power reaching the delay line.
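For reference, the ~14 pA/rtHz figure quoted in the summary follows from the photocurrent shot-noise formula (the responsivity range here is an assumption for illustration):

i_{\mathrm{shot}} = \sqrt{2 e \mathcal{R} P} = \sqrt{2 \times 1.6\times 10^{-19}\,\mathrm{C} \times \mathcal{R} \times 1\,\mathrm{mW}} \approx 14\text{--}16 \ \mathrm{pA}/\sqrt{\mathrm{Hz}} \quad \text{for} \ \mathcal{R} \approx 0.6\text{--}0.75\ \mathrm{A/W}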

Details:

Attachment #1: Here, I plot the expected voltage noise due to shot noise of the incident light, assuming 0.75 A/W for InGaAs and 700V/A transimpedance gain. 

  • For convenience, I've calibrated on the twin axes the current shot noise (X) and equivalent amplifier noise figure at a given voltage noise, assuming a 50 ohm system (Y).
  • The 16 pA/rtHz input current noise exceeds the shot noise contribution for powers as high as 1 mW.
  • Even at 0.5 mW power on the PD, we can use the ZHL-3A rather than the Teledyne:
    • This calculation was motivated by some suspicious features in the Teledyne amplifier gain, I will write a separate elog about that. 
    • For the light levels we have, I expect ~3dBm RF signal from the photodiode. With the 24dB of gain from the ZHL-3A, the signal becomes 27dBm, which is smaller than (but close to) the spec-ed max output of the ZHL-3A, which is 29.5 dBm. Is this too close to the edge?
    • I will measure the gain/noise of the ZHL-3A to get a better answer to these questions.
  • If in the future we get a better photodiode setup that reaches sub-1nV/rtHz (dark/electronics) voltage noise, we may have to re-evaluate what is an appropriate RF amplifier.
Attachment 1: PDnoise.pdf
  14479   Thu Mar 14 23:26:47 2019   Anjali   Update   ALS   ALS delay line electronics

Attachment #1 shows the schematic of the test setup. Signal generator (Marconi) was used to supply the RF input. We observed the IF output in the following three test conditions.

  1. Observed the spectrum with FM modulation (fcarrier of 40 MHz and fmod of 221 Hz )- a peak at 221 Hz was observed.
  2. Observed the noise spectrum without FM modulation.
  3. Observed the noise spectrum after disconnecting the delayed output of the delay line. 
  • It is observed that the broadband noise level is higher without FM modulation (2) compared to that observed after disconnecting the delayed output of the delay line (3).
  • It is also observed that the noise level increases with increasing RF input power.
  • We need to find the reason for the increase in broadband noise.
Attachment 1: test_setup_ALS_delay_line_electronics.pdf
  14478   Wed Mar 13 01:27:30 2019   gautam   Update   ALS   ALS delay line electronics

This test was done, and I determine the frequency discriminant to be \approx 5 \mu \mathrm{V}/\mathrm{Hz} (for an RF signal level of ~2 dBm). 

Attachment #1: Measured and predicted value of the DFD discriminant for a few RF signal levels.

  • Methodology was to drive an FM signal (deviation = 25 Hz, fMod = 221 Hz, fCarrier ~ 40 MHz) with the Marconi, and look at the IF spectrum peak height on an SR785 (the peak-height-to-discriminant conversion is written out below this list).
  • The "Design" curve is calculated using the circuit parameters, assuming 4dB conversion loss in the mixer itself, and 3dB insertion loss due to various impedance matching transformers and couplers in the RF signal chain. I fudged the insertion/conversion loss numbers to get this curve to line up with the measurements (by eye).
  • For the measurement, I assume the value for FM deviation displayed on the Marconi is an RMS value (this is the best I can gather from the manual). I'll double-check by looking at the RFmon spectrum directly on the Agilent NA.
  • X axis calibrated by reading off from the RF power monitor using a DMM and using the calibration data from the bench.
  • I could never get the ratio of peak heights in Ichan/Qchan (or the other way around) to better than ~ 1/8 (by moving the carrier frequency around). Not sure I can explain that - small non-orthogonality between I and Q channels cannot explain this level of leakage.
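For completeness, the conversion referred to in the first bullet is (assuming the deviation stays within the linear range of the discriminator):

V_{\mathrm{IF}}(t) = K_d \, \Delta f \, \cos(2\pi f_{\mathrm{mod}} t) \ \ \Rightarrow \ \ K_d = V_{\mathrm{peak}}(f_{\mathrm{mod}}) / \Delta f

So if the Marconi's quoted deviation is an RMS figure, the RMS peak height from the SR785 should be used, and the two conventions cancel.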

Attachment #2: Measured noise spectrum in the 1Y2 (LSC) electronics rack, calibrated to Hz/rtHz using the discriminant from Attachment #1.

  • Something funky with the I channel for X, I'll re-take that spectrum.

I'm still waiting on some parts for the new BeatMouth before giving the whole system a whirl. In the meantime, I'll work on the EX and EY green setups, to try and improve the mode-matching and better characterize the expected suppressed frequency noise of the end NPROs - the goal here is to rule out the excess low-frequency noise that was seen in the ALS signals coming from unsuppressed frequency noise.

Bottom lines: 

  1. The DFD noise is at the level of ~ 10mHz/rtHz above 10 Hz. This justifies the need for whitening before ADC-ing.
  2. The measured signal/noise levels in the DFD chain are in good agreement with the "expected" levels from circuit component values and typical insertion/conversion loss values.
  3. Why are there so many 60 Hz harmonics???
Attachment 1: DFDcal.pdf
Attachment 2: DFDnoise.pdf
  14477   Tue Mar 12 22:51:25 2019   gautam   Update   ALS   ALS delay line electronics

This Hanford alog may be of relevance as we are using the aLIGO AA chassis for the IR ALS channels. We aren't expecting any large amplitude high frequency signals for this application, but putting this here in case it's useful someday.

  14476   Fri Mar 8 08:40:26 2019   Anjali   Configuration   Frequency stabilization of 1 micron source

The schematic of the homodyne configuration is shown below.

Following are the list of components

Item | Quantity | Availability | Part number | Remarks
Laser (NPRO) | 1 | Yes | |
Couplers (50/50) | 5 | 3 No's | FOSC-2-64-50-L-1-H64F-2 | Fiber type: Hi1060 Flex fiber
Delay fiber | two loops of 80 m | Yes | PM 980 | One set of fiber is now kept along the arm of the interferometer
InGaAs PD (BW > 100 MHz) | 4 | Yes | NF1611 | Fiber coupled (3 No's), free space (2 No's)
SR560 | 3 | Yes | |
  • The fiber mismatch between the couplers and the delay fiber could affect the coupling efficiency
Attachment 1: Homodyne_setup.png
  14475   Thu Mar 7 01:06:38 2019   gautam   Update   ALS   ALS delay line electronics

Summary:

The restoration of the delay-line electronics is complete. The chassis has not been re-installed yet, I will put it back in tomorrow. I think the calculations and measurements are in good agreement.

Details:

Apart from restoring the transimpedance of the I/F amplifier, I also had to replace the two differential-sending AD8672s in the RF Log detector circuit for both LO and RF paths in the ALS-X board. I performed the same tests as I did the last time on the electronics bench; results will be uploaded to the DCC page for the 40m version of the board. I think the board is performing as advertised, although there is some variation in the noise of the two pairs of I/Q readouts. Sticking with the notation of the HP Application Note for delay line frequency discriminators, here are some numbers for our delay line system:

  • K_{\phi} = 3.7 \ \mathrm{V/rad}  - measured by driving the LO/RF inputs with Fluke/Marconi at 7dBm/0dBm (which are the expected signal levels accounting for losses between the BeatMouth and the demodulator) and looking at the Vpp of the resulting I/F beat signal on a scope. This is assuming we use the differential output of the demodulator (divide by 2 if we use the single-ended output instead).
  • \tau_d = \frac{45 \ \mathrm{m}}{0.75c} \approx 0.2 \mu s [see measurement]
  • K_{d} = K_{\phi}2 \pi \tau_{d} \approx 4 \mu \mathrm{V/Hz} (to be confirmed by measurement by driving a known FM signal with the Marconi)
  • Assuming 1mW of light on our beat PDs and perfect contrast, the phase noise due to shot noise is \pi \sqrt{2\bar{P}\frac{hc}{\lambda}} / 1 \ \mathrm{mW} \approx 60 \ \mathrm{nrad /}\sqrt{\mathrm{Hz}}, which is ~5 orders of magnitude lower than the electronics noise in equivalent frequency noise at 100 Hz.
  • The noise due to the FET mixer seems quite complicated to calculate - but as a lower bound, the Johnson current noise due to the 182 ohms at each RF input is ~ 10 pA/rtHz. With a transimpedance gain of 1 kohm, this corresponds to ~10 nV/rtHz. 

In conclusion: the ALS noise is very likely limited by ADC noise (~1 Hz/rtHz frequency noise for 5uV/rtHz ADC noise). We need some whitening. Why whiten the demodulated signal instead of directly incorporating the whitening into the I/F amplifier input stage? Because I couldn't find a design that satisfies all of the following criteria (this was why my previous design was flawed):

  1. The commutating part of the FET mixer must be close to ground potential always.
  2. The loading of the FET mixer is mostly capacitive.
  3. The DC gain of the I/F amplifier is low, with 20-30dB gain at 100 Hz, and then rolled off again at high frequencies for stability and sum-frequency rejection. In fact, it's not even obvious to me that we want a low DC gain - the quantity K_{\phi} is directly proportional to the DC transimpedance gain, and we want that to be large for more sensitive frequency discriminating.

So Rich suggested separating the transimpedance and whitening operations. The output noise of the differential outputs of the demodulator unit is <100 nV/rtHz at 100 Hz, so we should be able to saturate that noise level with a whitening unit whose input referred noise level is < 100 nV/rtHz. I'm going to see if there are any aLIGO whitening board spares - the existing whitening boards are not a good candidate I think because of the large DC signal level.

  14474   Tue Mar 5 15:56:27 2019   gautam   Summary   Tip-Tilt   Discussion points about TT re-design

Chub, Koji and I have been talking about Udit's re-design. Here are a few points that were raised. Chub/Koji can add to/correct where necessary. Summary is that this needs considerable work before we can order the parts for a prototype and characterize it. I think the requirements may be stated as:

  1. The overall pendulum length should be similar to that of the SOS, i.e. ~0.3m (current length is more like 0.1m) such that the eigenfrequencies are lowered to more like ~1 Hz. Mainly we want to avoid any overlap with the stack eigenmodes. This may require an additional stiffening piece near the top of the tower as we have for the SOS. What is a numerical way to spec this?
  2. The center of the 2" optic should be 6" from the table.
  3. The mass of the optic + holder should be similar to the current design so we may use the same suspension wires (I believe they are a different thickness than that used for the SOS).
  4. Ensure we can extract any transmitted beams without clipping.
  5. Fine pitch adjustment capability should be yyy mrad (20mrad?).
  6. We should preserve the footprint of the existing TTs, given the space constraints in vacuum. Moreover, we should be able to use dog-clamps to fix the tower in place, so the base plate should be designed accordingly.
  7. Keep the machining requirements as simple as possible while achieving the above requirements - i.e. do we really need a rounded optic holder? Why not just rectangular? Similarly for other complicated features in the current design.

Some problems with Udit's design as it stands:

  1. I noticed that the base of the TT and the center of the 2" optic are 4" separated. The SOS cage base and center of 3" optic are separated by 6". Currently, there is an adaptor piece that raises the TT height to match that of the SOS. If we are doing a re-design, shouldn't we just aim for the correct height in the first place?
  2. Udit doesn't seem to have taken into account the torque due to the optic+holder in the pitch balancing calculations he did. Since this is expected to be >> that of any rod/screw we use for fine pitch balancing, we need to factor that into the calculation.
  3. For the coarse pitch adjustment, we'd need to slide the wire clamping piece relative to the optic holding piece. Rather than do this stochastically and hope for the best, the idea was to use a threaded screw to realize this operation in a controlled way. However, Udit's design doesn't include the threaded hole.
  4. There are many complicated machining features which are un-necessary.
  14473   Sun Mar 3 14:16:31 2019   gautam   Update   IOO   Megatron hard-rebooted

IMC was not locked for the past several hours. Turned out MC autolocker was stuck, and I could not ssh into megatron because it was in some unresponsive state. I had to hard-reboot megatron, and once it came back up, I restarted the MCautolocker, FSS slow servo and nds2 processes. IMC re-locked immediately.

I was pulling long stretches of OSEM data from the NDS2 server (megatron) last night; I wonder if this flakiness is connected. Megatron is still running Ubuntu 12.

  14472   Sat Mar 2 14:19:35 2019   gautam   Update   CDS   FSS Slow servo gains not burt-ed

PSL NPRO PZT voltage showed large low frequency (hour timescale) excursions on the control room StripTool trace, leading me to suspect the slow servo wasn't working as expected. Yesterday evening, I keyed the unresponsive c1psl crate at ~9 PM PST, and had to run the burtrestore to get the PMC locking working. I must have pressed the wrong button on burtgooey or something, because all the FSS_SLOW channels were reset to 0. What's more, their values were not being saved by the hourly burt-snap script, so I don't have any lookback on what these values were. There isn't any detailed record on the elog about what the optimal values for these are, and the most recent reference I could find was Ki=0.1, Kp=Kd=0, which is what I've now set them to. The servo isn't running away, so I'm leaving things in this state; PID tuning can be done later.
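For context, with Kp = Kd = 0 the slow loop reduces to a pure integrator that offloads the PZT (FAST) voltage onto the laser temperature. A minimal sketch of one iteration is below; this is not the actual slow-servo script, and the channel names and sign convention are from memory and should be checked against the real code.

from epics import caget, caput

SETPOINT = 0.0    # target FAST (PZT) monitor value, in whatever units the channel reports
KI = 0.1

def slow_servo_step(dt=1.0):
    """One iteration of an integrator-only (Ki-only) slow servo, called at ~1 Hz."""
    error = caget('C1:PSL-FSS_FAST') - SETPOINT     # PZT voltage monitor (name from memory)
    slow = caget('C1:PSL-FSS_SLOWDC')               # slow (temperature) actuator offset
    caput('C1:PSL-FSS_SLOWDC', slow + KI * error * dt)   # sign convention to be verified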

I also added the FSS Slow servo channels to the burt snapshot requirement file at /cvs/cds/caltech/target/c1psl/autoBurt.req, and confirmed that the snapshots are getting the channels from now onwards.

While looking at the req file, I saw a bunch of *_MOPA* channels and also several other currently unused channels. Probably would benefit from going through these and commenting out all the legacy channels, to minimize disk space wastage (though we compress the snapshot files every few years anyways I guess).

Reminder that this (unrelated) issue still needs to be looked into... Note also that the new vacuum system does not have burt snapshots set up (i.e. it is still trying to get the old channels from the c1vac1 and c1vac2 databases, which, while having significant overlap with the new system, should probably be set up correctly).

  14471   Wed Feb 27 21:34:21 2019   gautam   Update   General   Suspension diagnosis

In my effort to understand what's going on with the suspensions, I've kicked all the suspensions and shutdown the watchdogs at 1235366912. PSL shutter is closed to avoid trying to lock to the swinging cavity. The primary aims are

  1. To see how much the resonant peaks have shifted w.r.t. the database, if at all - I claim that the ETMY resonances have shifted by a large amount and that ETMY has also lost one of its resonant peaks.
  2. To check the status of the existing diagonalization.

All the tests I have done so far (looking at free swinging data, resonant frequencies in the Oplev error signals etc) seem to suggest that the problem is mechanical rather than electrical. I'll do a quick check of the OSEM PD whitening unit in 1Y4 to be sure. But the fact that the same three peaks appear in the OSEM and Oplev spectra suggests to me that the problem is not electrical.

Watchdogs restored at 10 AM PST

  14470   Mon Feb 25 20:20:07 2019   Koji   Update   SUS   DIN 41612 (96pin) shrouds installed to vertex SUS coil drivers

The forthcoming Acromag c1susaux is supposed to use the backplane connectors of the sus euro card modules.

However, the backplane connectors of the vertex sus coil drivers were already used by the fast switches (dewhitening) of c1sus.

Our plan is to connect the Acromag cables to the upper connectors, while the switch channels are wired to the lower connector by soldering jumper wires between the upper and lower connectors on board.

To make the lower 96pin DIN connector available for this, we needed DIN 41612 (96pin) shrouds. Tyco Electronics 535074-2 is the correct component for this purpose. The shrouds have been installed on the backplane pins of the coil driver circuit D010001. The shroud can be installed in either of two orientations (180 deg rotation); its direction was matched with the ones on the upper connectors.

Attachment 1: P_20190222_175058.jpg
  14469   Fri Feb 22 12:19:46 2019   gautam   Update   IOO   TT coil driver Vmon

To debug the issue of the suspected drifting TTs further, I temporarily hijacked CH0-CH8 of ADC1 in the c1lsc expansion chassis, and connected the "MON" outputs of the coil drivers (D010001) to them via some DB9 breakouts. The idea is to see if the problem is electrical: we should see some slow drift in the voltage to the TTs correlated with the spot walking off the IPPOS QPD. From the wiring diagram, it doesn't look like there is any monitoring (slow or fast) of the control voltages to the TT coils; this should be factored into the Acromag upgrade of c1iscaux/c1iscaux2. EPICS monitoring should be sufficient for this purpose so I didn't set up any new DQ channels, I'll just look at the EPICS from the IOP model.

Quote:
Already, in the last ~1 hour, there has been considerable drift - see Attachment #2. The spot, which started at the center of the CCD monitor, has now nearly drifted off the top end. The ITMX and BS Oplev spots have been pretty constant over the same timescale, so it has to be the TTs?
  14468   Wed Feb 20 23:55:51 2019   gautam   Update   ALS   ALS delay line electronics

Summary:

Last year, I worked on the ALS delay line electronics, thinking that we were in danger of saturation. The analysis was incorrect. I find that for RF signal levels between -10 dBm and +15 dBm, assuming 3dB insertion loss due to components and 5 dB conversion loss in the mixer, there is no danger of saturation in the I/F part of the circuit.

Details:

The key is that the MOSFET mixer used in the demodulation circuit drives an I/F current and not voltage. The I-to-V conversion is done by a transimpedance amplifier and not a voltage amplifier. The confusion arose from interpreting the gain of the first stage of the I/F amplifier as 1 kohm/10 ohm = 100. The real figures of merit we have to look at are the current through, and voltage across, the transimpedance resistor.  So I think we should revert to the old setup. This analysis is consistent with an actual test I did on the board, details of which may be found here.

We may still benefit from some whitening of the signal before digitization between 10-100 Hz, need to check what is an appropriate place in the signal chain to put in some whitening, there are some constraints to the circuit topology because of the MOSFET mixer.

One part of the circuit topology I'm still confused by is the choice of impedance-matching transformer at the RF-input of this demod board - why is a 75 ohm part used instead of a 50 ohm part? Isn't this going to actually result in an impedance mismatch given our RG405 cabling?

Update: Having pulled out the board, it looks like the input transformer is an ADT-1-1, and NOT an ADT1-1WT as labelled on the schematic. The former is indeed a 50ohm part. So it makes sense to me now.

Since we have the NF1611 fiber coupled PDs, I'm going to try reviving the X arm ALS to check out what the noise is after bypassing the suspect Menlo PDs we were using thus far. My re-analysis can be found in the attached zip of my ipynb (in PDF form).

Attachment 1: delayLineDemod.pdf.zip
  14467   Wed Feb 20 18:26:05 2019   gautam   Update   IOO   IPPOS recommissioned

I've suspected that the TTs are drifting significantly over the course of the last couple of days, because despite repeated alignment efforts, the AS beam spot has drifted off the center of the camera view. I tried looking at IPPOS, but found that there was no data. Looking at the table, the QPD was turned backwards, and the DAQ cable wasn't connected (neither at the PD end, nor at 1Y2, where instead, a cable labelled "Spare QPD" was plugged in). Fortunately, the beam was making it out of the vacuum. So as to have a quantitative diagnostic, I reconnected the QPD, turned it the right way round, and adjusted the steering onto it such that with the AS spot on the center of the CCD monitor, the beam is also centered on the QPD. The calibration is uncertain, but at least we will be able to see how much the spot drifts on the QPD over some days. Also, we only have 16 Hz readback of this stuff.

I leave it to Chub to take the high-res photo and update the wiki, which was last done in 2012.


Already, in the last ~1 hour, there has been considerable drift - see Attachment #2. The spot, which started at the center of the CCD monitor, has now nearly drifted off the top end. The ITMX and BS Oplev spots have been pretty constant over the same timescale, so it has to be the TTs?

Attachment 1: IMG_7330.JPG
Attachment 2: Screenshot_from_2019-02-20_19-43-27.png
  14466   Tue Feb 19 22:52:17 2019   gautam   Update   ASS   Y arm clipping doubtful

In an earlier elog, I had claimed that the suspected clipping of the cavity axis in the Y arm was not solved even after shifting the heater. I now think that it is extremely unlikely that there is still clipping due to the heater. Nevertheless, the ASS system is not working well. Some notes:

  1. The heater has been shifted nearly 1-inch relative to the cavity axis compared to its old position - see Attachment #1 which compares the overhead shot of the suspension cage before and after the Jan 2019 vent.
  2. On Sunday, I was able to recover TRY ~ 1.0 (but not as high as I was able to get by intentionally setting a yaw offset to the ASS) by hand alignment with the spot on ETMY much closer to the center of the optic, judging by the camera. There are offsets on the dither alignment error signals which depend on the dither frequency, so the A2L signals are not good judges of how well centered we are on the optic.
  3. By calculating the power lost by clipping a Gaussian beam cross-section with a rectangular block from one side (an admittedly naive model of clipping - see the sketch following this list), I find that we'd have to be within 15 mm of the line connecting the centers of ITMY and ETMY to even see ~10 ppm loss, see Attachment #2. So it is hard to believe that this is still a problem. Also, see Attachment #3 which compares side-by-side the view of ETMY as seen through the EY optical table viewport before and after the Jan 2019 vent.
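For the record, the knife-edge estimate in item 3 amounts to the following. The beam radius used here is an assumed number for illustration (the distance it spits out depends directly on the assumed arm-mode radius at ETMY, so only the order of magnitude matters).

import numpy as np
from scipy.special import erfc, erfcinv

def clipping_loss(d, w):
    """Fractional power of a TEM00 beam (1/e^2 radius w) blocked by a straight edge at distance d."""
    return 0.5 * erfc(np.sqrt(2) * d / w)

w_etmy = 5.4e-3     # assumed 1/e^2 beam radius at ETMY [m] -- not a measured value
target = 10e-6      # 10 ppm of clipping loss

# Edge distance at which the clipping loss reaches the target
d_edge = w_etmy / np.sqrt(2) * erfcinv(2 * target)
print('Clipping loss reaches 10 ppm when the edge is {:.1f} mm from the beam axis'.format(1e3 * d_edge))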

We have to systematically re-commission the ASS system to get to the bottom of this.

Attachment 1: overheadComparison.pdf
Attachment 2: clipping.pdf
Attachment 3: rearComparison.pdf
  14465   Tue Feb 19 19:03:18 2019   rana   Update   Computers   Martian router -> WPA2

I have swapped our martian router's WiFi security over to WPA2 (AES) from the previous, less-secure, system. Creds are in the secrets-40-red.

  14464   Mon Feb 18 19:16:55 2019   rana   Summary   Computers   new laptop setup: ASIA

The old IBM laptop (Asia) has died from a fan error after 7 years. We have a new Lenovo 330 IdeaPad to replace it:

  1. to enter bios, the usual FN keys don't work. Power off laptop. Insert paperclip into small hole on laptop side with upside-down U symbol. Laptop powers up into BIOS setup.
  2. Insert SL 7.6 DVD into drive
  3. Change all settings from modern UEFI into Legacy support. Change Boot order to put CDROM first.
  4. Boot.
  5. Touchpad is not detected. Hookup mouse for setup.
  6. Delete windows partition.
  7. Setup wireless network according to (https://wiki-40m.ligo.caltech.edu/Network). Computer name = asia.martian. 
  8. Set root password. Do not create user (we want to make the controls acct later using the command line so that we can set userID and groupID both to 1001).
  9. Begin install...lots of disk access noises for awhile...

Install done. Touchpad not recognized by linux - lots of forum posts about kernel patching...Arrgh!

  14463   Sun Feb 17 17:35:04 2019   gautam   Summary   Loss Measurement   Inferred X arm loss

Summary:

To complete the story before moving on to ALS, I decided to measure the X arm loss. It is estimated to be 20 +/- 5 ppm. This is surprising to say the least, so I'm skeptical - the camera image of the ETMX spot when locked almost certainly looks brighter than in Oct 2016, but I don't have numerical proof. But I don't see any obvious red flags in the data quality/analysis yet. If true, this suggests that the "cleaning" of the Yarm optics actually did more harm than good, and if that's true, we should attempt to identify where in the procedure the problem lies - was it in my usage of non-optical grade solvents?

Details:

  1. Unlike the Y arm, the ratio \kappa = 1.006 \pm 0.002 is quite unambiguously greater than 1, which is already indicative of the loss being lower than for the Y arm. This is reliably repeatable over 15 datapoints at least.
  2. Attachment #1 shows the spectrum of the single-bounce off ITMX beam and compares it to ITMY - there is clearly a difference, and my intuition is to suspect some scatter / clipping, but I confirmed that on the AS table, in air, there is no clipping. So maybe it's something in vacuum? But I'm not sure how to explain its absence for the ITMX reflection. I didn't check the Michelson alignment since I misaligned ITMY before locking the XARM - so maybe there's a small shift in the axis of the X arm reflection relative to the Yarm because of the BS alignment. The other possibility is clipping at the BS?
  3. Attachment #2 shows the filtered time series for a short segment of the measurement. The X arm ASS is mostly well behaved, but the main thing preventing me from getting more statistics in is the familiar ETMX glitching problem, which while doesn't directly break the lock causes large swings in TRX. Given the recent experience with ETMY satellite box, I'm leaning towards blaming flaky electronics for this. If this weren't a problem, I'd run a spatial scan of ETMX, but I'm not going to attack this problem today.
  4. Attachments #3 and #4 show the posterior distributions for model parameters and loss respectively. 
  5. Data quality checks done so far (suggestions welcome):
    • Confirmed that there is no fringing from other ITM (in this case ITMY) / PRM / SRM / ETM in the single-bounce off ITMX config, by first macroscopically misaligning all these optics (the spots could be seen to move on the AS port PD, until they vanished, at some point presumably getting clipped in-vac), and then moving the optics around in PIT/YAW and looking for any effect in the fast time-series using NDScope.
    • Checked for slow drifts in locked / misaligned states - looks okay.
    • Checked centering on PDA520 using both o'scope plateau method and IR viewer - I believe the beam to be well centered.

Provisional conclusions:

  1. The actual act of venting / pumping down doesn't have nearly as large an effect on the round-trip loss as does working in chamber - the IX and EX chambers have not been opened since the 2016 vent.
  2. The solvent marks visible with the green flashlight on ETMY possibly signals the larger loss for the Y arm. 
Attachment 1: DQcheck_XARM.pdf
Attachment 2: consolidated.pdf
Attachment 3: posterior_modelParams_XARM.pdf
Attachment 4: posterior_Loss_XARM.pdf
  14462   Fri Feb 15 21:15:42 2019   gautam   Update   VAC   dd backup of c1vac made
  1. Connected one of the solid-state drives to c1vac. It was /dev/sdb.
  2. Formatted the drive using sudo mkfs -t ext4 /dev/sdb
  3.  Mounted it as /mnt/backup using sudo mount /dev/sdb /mnt/backup
  4. Started a tmux session for the dd, called DDbackup
  5. Started the dd backup using  sudo dd if=/dev/sda of=/dev/sdb bs=64K conv=noerror,sync
  6. Backup completed in 719 seconds: need to test if it works...
controls@c1vac:~$ sudo dd if=/dev/sda of=/dev/sdb bs=64K conv=noerror,sync
[sudo] password for controls: 
^C283422+0 records in
283422+0 records out
18574344192 bytes (19 GB) copied, 719.699 s, 25.8 MB/s
Quote:
 
  • Generate a bootable backup hard drive for c1vac, which could be swapped in on a short time scale after a failure.
  14461   Fri Feb 15 20:07:02 2019   Jon   Update   VAC   Updated vacuum punch list

While working on the vac controls today, I also took care of some of the remaining to-do items. Below is a summary of what was done, and what still remains.

Completed today

  • TP2/3 overcurrent interlock raised from 1 to 1.2 A. This was tripping during normal operation as the pump accelerates from low-speed (standby) to normal-speed mode.
  • Interlock conditions on VABSSCO/VABSSCI removed. Per discussion with Steve, these are not vent valves, but rather isolation valves between the BS/IOO/OMC annuli. The interlocks were preventing the valves from opening, and hence the IOO and OMC annuli from being pumped.
  • Channel exposed for interlocking in-vacuum high-voltage drivers. The channel name is C1:Vac-interlock_high_voltage. The vac interlock service sets this channel's value to 0 when the main volume pressure is in the range 3 mtorr-500 torr, and to 1 otherwise.
  • Annuli pumping integrated into the set of recognized states. "Vacuum normal" now refers to TP1 and TP2 pumping on the main volume AND TP3 pumping on all the annuli. The system is currently running in this state.
  • TP1 lowered to the nominal speed setting recommended by Steve: 33.6 krpm (560 Hz).

Still remaining

  • Implement a "blinker" input-output signal loop between two Acromags to detect hardware failures like the one today.
  • Add an AC power monitor to sense extended power losses and automatically put the system into safe shutdown.
  • Migrate the RGA to c1vac. Still some issues getting the serial comm working.
  • Troubleshoot the SuperBee (backup) main volume Parani gauge. It has not communicated with c1vac since a serial adapter was replaced two weeks ago. Chub thinks the gauge was possibly damaged by arcing during the replacement.
  • Scripting for more automated pumpdowns.
  • Generate a bootable backup hard drive for c1vac, which could be swapped in on a short time scale after a failure.
  14460   Fri Feb 15 19:50:09 2019   rana   Update   VAC   Vac system is back up

The acromags are on the UPS. I suspect the transient came in on one of the signal lines. Chub tells me he unplugged one of the signal cables from the chassis around the time things died on Monday, although we couldn't reproduce the problem doing that again today.

In this situation it wasn't the software that died, but the Acromag units themselves. I have an idea to detect future occurrences using a "blinker" signal: one Acromag outputs a periodic signal which is directly sensed by another Acromag. This can be implemented as another polling condition enforced by the interlock code (a sketch follows below).
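A minimal sketch of the blinker check, assuming the IOC toggles a binary output on one Acromag and the interlock code verifies that the hard-wired loopback read by a second Acromag follows it. The channel names are placeholders, not existing c1vac channels.

import time
from epics import caget, caput

BLINK_OUT = 'C1:Vac-blinker_out'   # hypothetical binary output on Acromag A
BLINK_IN = 'C1:Vac-blinker_in'     # hypothetical binary input on Acromag B, hard-wired to it
TIMEOUT = 5.0                      # seconds without a detected toggle -> declare failure

def blinker_ok():
    """Toggle the output and confirm the hard-wired loopback input follows it."""
    initial = caget(BLINK_IN)
    t0 = time.time()
    while time.time() - t0 < TIMEOUT:
        caput(BLINK_OUT, int(not caget(BLINK_OUT)), wait=True)
        time.sleep(1.0)
        current = caget(BLINK_IN)
        if current is not None and current != initial:
            return True            # loopback is alive
    return False                   # Acromags likely in their unresponsive protective state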

Quote:

If the acromags lock up whenever there is an electrical spike, shouldn't we have them on UPS to smooth out these ripples? And wasn't the idea to have some handshake/watchdog system to avoid silently dying computers?

Quote:

The problem encountered with the vac controls was indeed resolved via the recommendation I posted yesterday. The Acromags had gone into a protective state (likely caused by an electrical transient in one of the signals)

 

  14459   Fri Feb 15 18:42:57 2019   gautam   Update   LSC   TRY 60 Hz solved

A more permanent fix than a crocodile clip was implemented. Should probably look to do this for the X end unit as well.

Attachment 1: IMG_7323.JPG
  14458   Fri Feb 15 18:41:18 2019   rana   Update   VAC   Vac system is back up

If the acromags lock up whenever there is an electrical spike, shouldn't we have them on UPS to smooth out these ripples? And wasn't the idea to have some handshake/watchdog system to avoid silently dying computers?

Quote:

The problem encountered with the vac controls was indeed resolved via the recommendation I posted yesterday. The Acromags had gone into a protective state (likely caused by an electrical transient in one of the signals)

  14457   Fri Feb 15 15:22:08 2019   gautam   Update   CDS   c1rfm errors persist

I restarted c1scy and c1rfm (so both sender and receiver models were cycled) and power-cycled the c1iscey and c1sus machines. The TRY PD is certainly seeing light - it is just not getting piped over to c1rfm. dmesg doesn't give any clues. I'm out of ideas.

P.S. The new reality seems to be that getting ITMY stuck in the event of a c1susaux reboot is inevitable. As is the practice for ITMX, I tried slowly ramping the PIT and YAW biases to 0 - but in the process of ramping YAW to 0, the optic got stuck. I am ramping in steps of 0.1 (in units of the PIT/YAW sliders, waiting ~3 seconds between steps); I guess I can try ramping even more slowly.

Update: I power cycled the physical RFM switch. This necessitated reboot of all vertex FEs. But seems like things are back to normal now...

Note: to unstick ITMY, seems like the best approach is:

  1. Jiggle the bias until the SIDE shadow sensor is on average above its half-light level. This is the critical step. A bias of +20000 cts on the fast SIDE output seems to help.
  2. Set YAW bias to -10, ramp down the BIAS in steps of 0.1, watching shadow sensor levels to ensure optic doesn't get stuck again.
  3. Hope for the best. Iterate if necessary.
Quote:

The pressure is still 2e-4 torr according to CC1 so I thought I'd give ASS debugging a go tonight. But the arm transmission signal isn't coming through to the LSC model from the end PDs - so a resurfacing of this problem. Rebooting the sender model, c1scy, did not fix the problem. Moreover, c1susaux is dead. The last time I rebooted it, ITMY got stuck so I'm not going to attempt a revival tonight.

Attachment 1: Screenshot_from_2019-02-15_15-21-47.png
  14456   Fri Feb 15 11:58:45 2019   Jon   Update   VAC   Vac system is back up

The problem encountered with the vac controls was indeed resolved via the recommendation I posted yesterday. The Acromags had gone into a protective state (likely caused by an electrical transient in one of the signals) that could only be cleared by power cycling the units. After resetting the system, the main volume pressure dropped quickly and is now < 2e-5 torr, so normal operations can resume. For future reference, below is the procedure to safely reset these units from a trouble state.

Vacromag Reset Procedure

  • TP2 and TP3 can be left running, but isolate them by closing valves V4 and V5.
  • TP1 can also be left running, but manually flip the operation mode on the front of the controller from REMOTE to LOCAL. This prevents the pump from receiving a "stop" command when its control Acromag shuts down.
  • Close all the pneumatic valves in the system (they'll otherwise close automatically when their control Acromags shut down).
  • On c1vac, stop the modbusIOC service. Sometimes this takes ~1 min to actually terminate.
  • Turn off the Acromags by flipping the "24 V" switch on the back of the chassis.
  • Wait ~10 sec, then turn them back on.
  • Start the modbusIOC service. It may take up to ~1 min for all the readings on the MEDM screen to initialize.
  • Ensure that the rotation speeds of TP1, TP2, and TP3 are all still nominal.
  • If pumps are OK, open V4, V5, and V7, then open V1. This restores the system to the "Maximum pumping speed" state.
  • Flip the TP1 controller operation state back to REMOTE.
  14455   Thu Feb 14 23:14:12 2019 gautamUpdateCDSc1rfm errors

The pressure is still 2e-4 torr according to CC1 so I thought I'd give ASS debugging a go tonight. But the arm transmission signal isn't coming through to the LSC model from the end PDs - so a resurfacing of this problem. Rebooting the sender model, c1scy, did not fix the problem. Moreover, c1susaux is dead. The last time I rebooted it, ITMY got stuck so I'm not going to attempt a revival tonight.

  14454   Thu Feb 14 21:29:24 2019 gautamSummaryLoss MeasurementInferred Y arm loss

Summary:

From the measurements I have, the Y arm loss is estimated to be 58 +/- 12 ppm. The quoted values are the median (50th percentile) and the distance to the 25th and 75th quantiles. This is significantly worse than the ~25 ppm number Johannes had determined. The data quality is questionable, so I would want to get some better data and run it through this machinery and see what number that yields. I'll try and systematically fix the ASS tomorrow and give it another shot.

Model and analysis framework:

Johannes and I have cleaned up the equations used for this calculation - while we may make more edits, the v1 of the document lives here. The crux of it is that we would like to measure the quantity \kappa = \frac{P_L}{P_M}, where P_{L} (P_{M}) is the power reflected from the locked cavity (from just the ITM, with the ETM misaligned). This quantity can then be used to back out the round-trip loss in the resonant cavity, given further model parameters (a rough form of the relation is sketched after this list), which are:

  1. ITM and ETM power transmissivities
  2. Modulation depths and mode-matching efficiency into the cavity
  3. The statistical uncertainty on the measurement of the quantity \kappa, call it \sigma_{\kappa}
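
For orientation, a rough carrier-only form of this relation (round-trip power loss \mathcal{L} lumped at the ETM, and non-mode-matched light assumed to reflect promptly off the ITM; the full treatment in the document also folds in the modulation depths) is

\kappa \simeq 1 - \eta\left(1 - \frac{|r_{\mathrm{cav}}|^2}{r_1^2}\right), \qquad r_{\mathrm{cav}} = -r_1 + \frac{t_1^2\, r_2\sqrt{1-\mathcal{L}}}{1 - r_1 r_2\sqrt{1-\mathcal{L}}},

where r_{1,2}, t_{1,2} are the ITM/ETM amplitude reflectivities/transmissivities and \eta is the mode-matching efficiency. With the nominal transmissions and \eta = 0.92 this crosses \kappa = 1 near \mathcal{L} \approx 35 ppm, consistent with the lower bound quoted in the previous elog.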

If we ignore the 3rd for a start, we can calculate the "expected" value of \kappa as a function of the round-trip loss, for some assumed uncertainties on the above-mentioned model parameters. This is shown in the top plot in Attachment #1, and while this was generated using emcee, it is consistent with the first-order uncertainty-propagation result I posted in my previous elog on this subject. The actual samples of the model parameters used to generate these curves are shown in the bottom. What this is telling us is that even if we have no measurement uncertainty on \kappa, the systematic uncertainties are of the order of 5 ppm for the assumed variation in model parameters.

The same machinery can be run backwards - assuming we have multiple measurements of \kappa, we then also have a sample variance, \sigma_{\kappa}. The uncertainty on the sample variance estimator is also known, and serves to quantify the prior distribution on the parameter \sigma_{\kappa} for our Monte-Carlo sampling. The parameter \sigma_{\kappa} itself is required to quantify the likelihood of a given set of model parameters, given our measurement. For the measurements I did this week, my best estimate of \kappa \pm \sigma_{\kappa} = 0.995 \pm 0.005. Plugging this in, and assuming uncorrelated gaussian uncertainties on the model parameters, I can back out the posterior distributions.

For convenience, I separate the parameters into two groups - (i) All the model parameters excluding the RT loss, and (ii) the RT loss. Attachment #2 and Attachment #3 show the priors (orange) and posteriors (black) of these quantities. 

Interpretations:

  1. This particular technique only gives us information about the RT loss - much less so about the other model parameters. This can be seen by the fact that the posteriors for the loss is significantly different from the prior for the loss, but not for the other parameters. Potentially, the power of the technique is improved if we throw other measurements at it, like ringdowns.
  2. If we want to reach the 5 ppm uncertainty target, we need to do better both on the measurement of the DC reflection signals, and also narrow down the uncertainties on the other model parameters.

Some assumptions:

So that the experts on MC analysis can correct me where I'm wrong (a stripped-down sketch of the sampling setup follows the list).

  1. The prior distributions are truncated independent Gaussians - truncated to avoid sampling from unphysical regions (e.g. negative ITM transmission). I've not enforced the truncation analytically - I just assign a log-probability of -infinity to samples drawn from the unphysical regions, and to be completely sure, the actual cavity equations enforce physicality independently (i.e. the MC generates a set of parameters which is input to another function, which checks for feasibility before making an evaluation). One could argue that the priors on some of these should be different - e.g. a uniform PDF for the loss between some bounds? A Jeffreys prior for \sigma_{\kappa}?
  2. How reasonable is it to assume the model parameter uncertainties are uncorrelated? For example, \eta, \beta_1, \beta_2 are all determined from the ALS-controlled cavity scan.
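
A stripped-down sketch of the sampling setup described above (emcee, truncated Gaussian priors, -infinity log-probability for unphysical draws); model_kappa() is a placeholder for the real cavity equations, and the prior widths are made up for illustration:

    import numpy as np
    import emcee

    # theta = [T_ITM, T_ETM, eta, L_rt, sigma_kappa]  -- illustrative ordering
    mu  = np.array([1.384e-2, 13.7e-6, 0.92, 50e-6, 5e-3])   # nominal values
    sig = np.array([1.0e-4,   3.0e-6,  0.05, 1e-4,  2e-3])   # assumed prior widths
    kappa_meas = 0.995

    def model_kappa(theta):
        """Placeholder for the actual cavity equations: returns the predicted
        kappa, or None if the parameters are unphysical."""
        T1, T2, eta, loss, sigk = theta
        if min(T1, T2, eta, loss, sigk) <= 0 or eta > 1:
            return None
        return 1.0 - eta * loss / T1   # stand-in functional form only

    def log_prob(theta):
        kappa_pred = model_kappa(theta)
        if kappa_pred is None:
            return -np.inf                                     # reject unphysical samples
        log_prior = -0.5 * np.sum(((theta - mu) / sig) ** 2)   # truncated Gaussians
        s2 = theta[-1] ** 2                                    # sampled sigma_kappa
        log_like = -0.5 * ((kappa_meas - kappa_pred) ** 2 / s2 + np.log(s2))
        return log_prior + log_like

    ndim, nwalkers = len(mu), 32
    p0 = mu + 1e-3 * sig * np.random.randn(nwalkers, ndim)
    sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
    sampler.run_mcmc(p0, 5000, progress=True)
    samples = sampler.get_chain(discard=1000, flat=True)       # posterior draws
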
Attachment 1: modelPerturb.pdf
modelPerturb.pdf
Attachment 2: posterior_modelParams.pdf
posterior_modelParams.pdf
Attachment 3: posterior_Loss.pdf
posterior_Loss.pdf
  14453   Thu Feb 14 18:16:24 2019 JonUpdateVACVacromag failure

I sent Gautam instructions to first try stopping the modbus service, power cycling the Acromag chassis, then restarting the service. I've seen the Acromags go into an unresponsive state after a strong electrical transient or shorted signal wires, and the unit has to be power cycled to be reset.

If this doesn't resolve it, I'll come in tomorrow to help with the Acromag replacement. We have plenty of spares.

Quote:

[chub, gautam]

Summary:

One of the XT1111 units (XT1111a) in the new vacuum system has malfunctioned. So all valves are closed, PSL shutter is also closed, until this is resolved.

 

  14452   Thu Feb 14 15:37:35 2019 gautamUpdateVACVacromag failure

[chub, gautam]

Summary:

One of the XT1111 units (XT1111a) in the new vacuum system has malfunctioned. So all valves are closed, PSL shutter is also closed, until this is resolved.

Details:

  1. Chub alerted me he had changed the main N2 line pressure, but this did not show up in the trend data. In fact, the trend data suggested that all 3 N2 gauges had stopped logging data (they just held the previous value) since sometime on Monday, see Attachment #1.
  2. We verified that the gauges were being powered, and that the analog voltage output of the gauges made sense in the drill press room ---> So this suggested something was wrong at the Vacuum rack electronics rack.
  3. Went to the vacuum rack, saw no obvious indicator lights signalling a fault.
  4. So I restarted the modbus process on c1vac using sudo systemctl restart modbusIOC.service. The way Jon has this set up, this service controls all the sub-processes talking to gauges and TPs, so restarting this master process should have brought everything back.
  5. This tripped the interlock, and all valves got closed.
  6. Once the modbus service restarted, most things came back normally. However, V1, V3, V4 and V5 readbacks were listed as "UNDEF".
  7. The way the interlock code works, it checks a valve state change request against the monitor channel, so all these valves could not be opened.
  8. We confirmed that the valves themselves were operational by bypassing the interlock logic and directly actuating the valves - but this is not a safe way of running overnight, so we decided to shut everything down.
  9. We also confirmed that the problem is with one particular Acromag unit - switching the readback Dsub connector to another channel (e.g. V1 --> VM2) showed the expected readback.
  10. As a further check - I connected a Windows laptop with the Acromag software installed to the suspected XT1111 - it reported an error message saying "USB device may be damaged". Plugging into another XT1111 in the crate, I was able to access the unit in the normal way.
  11. The phoenix connector architecture of the Acromags makes it possible to replace this single unit (we have spare XT1111 units) without disturbing the whole system - so barring objections, we plan to do this at 9am tomorrow. The replacement plan is summarized in Attachment #2.

Pressure of the main volume seems to have stabilized - see Attachment #3, so it should be fine to leave the IFO in this state overnight.

Questions:

  1. What caused the original failure of writing to the ADC channels hooked up to the N2 gauges? There isn't any logging set up from the modbus processes afaik.
  2. What caused the failure of the XT1111? What is the failure mode even? Because some other channels on the same XT1111 are working...
  3. Was it user error? The only operation carried out by me was restarting the modbus services - how did this damage the readback channels for just four valves? I think Chub also re-arranged some wires at the end, but unplugging/re-connecting some cables shouldn't produce this kind of response...

The whole point of the upgrade was to move to a more reliable system - but it seems quite flaky already.

Attachment 1: Screenshot_from_2019-02-14_15-40-36.png
Screenshot_from_2019-02-14_15-40-36.png
Attachment 2: IMG_7320.JPG
IMG_7320.JPG
Attachment 3: Screenshot_from_2019-02-14_20-43-15.png
Screenshot_from_2019-02-14_20-43-15.png
  14451   Wed Feb 13 02:28:58 2019 gautamSummaryLoss MeasurementY arm loss

Attachment #1 shows estimated systematic uncertainty contributions due to 

  1. ITM transmission by +/- 0.01 % about the nominal value of 1.384 %
  2. ETM transmission of +/- 3 ppm about the nominal value of 13.7 ppm
  3. Mode matching efficiency into the cavity by +/- 5% about the nominal value of 92%.

In all the measurements so far, the ratio seems to be < 1, so this would seem to set a lower bound on the loss of ~35 ppm. The dominant source of systematic uncertainty is the 5% assumed fudge in the mode-matching.
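
If these individual contributions are combined to first order (which is presumably what the quoted band amounts to), the systematic uncertainty takes the standard quadrature form

\sigma_{\kappa}^{\mathrm{syst}} \simeq \sqrt{\sum_i \left( \frac{\partial \kappa}{\partial \theta_i} \right)^2 \sigma_{\theta_i}^2}, \qquad \theta_i \in \{T_{\mathrm{ITM}},\, T_{\mathrm{ETM}},\, \eta\},

with the partial derivatives evaluated at the nominal values listed above.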

To do: 

  1. Account for uncertainties on modulation depths
  2. To assess whether the amount of fluctuation we are seeing in the reflected signal, even after normalizing by the MC transmission, is consistent with expectations, get an estimate of the statistical uncertainty in the reflected power due to:
    • Pointing jitter - is there some spec for the damped angular displacement of the TT1/TT2?
    • Cavity length in-loop residual

Bottom line: I think we need to have other measurements and simultaneously analyse the data to get a more precise estimate of the loss.

Attachment 1: systUnc.pdf
systUnc.pdf
  14450   Tue Feb 12 22:59:17 2019 gautamSummaryLoss MeasurementY arm loss

Summary:

There are still several data quality issues that can be improved. I think there is little point in reading too much into this until some of the problems outlined below are fixed and we get a better measurement.

Details:

  1. Mainly, we are plagued by the inability of the ASS system to get back to the good transmission levels - I haven't done a careful diagnosis of the servo, but the ITM PIT output always seems to run away. As a result, the later measurements are poor, as can be seen in Attachment #2.
  2. For this reason, we can't easily sample different spot positions on the ETM.
  3. Data processing:
    • Download AS reflection and MC transmission DQ channels
    • Take their ratio
    • Downsample to 4 Hz by applying scipy.signal.decimate three times, by a factor of 8 each time, with the zero_phase (filtfilt) option enabled (see the sketch after this list)
  4. Attachment #1 and #2 are basically showing the same data - the former collects all locked (top left) and misaligned (top right) data segments and plots them with the corresponding TRY values in the bottom row. The second plot shows a pseudo-continuous time series (pseudo because the segments transitioning from locked to misaligned states have been excised).
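
A minimal sketch of the processing in step 3, assuming the two DQ channels have already been fetched as numpy arrays sampled at 2048 Hz:

    from scipy import signal

    def process_ratio(as_refl, mc_trans, factor=8, stages=3):
        """Form the AS-reflection / MC-transmission ratio and decimate it
        from 2048 Hz to 4 Hz in three zero-phase (filtfilt) stages of 8."""
        ratio = as_refl / mc_trans
        for _ in range(stages):
            ratio = signal.decimate(ratio, factor, zero_phase=True)
        return ratio   # 2048 / 8**3 = 4 Hz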

As an interim fix, I'm going to try and use the Oplevs as a DC reference, and run the dither alignment from zero each time, as this prevents the runaway problem at least. Data run started at 11:20 pm.

Attachment 1: segmented.pdf
segmented.pdf
Attachment 2: consolidated.pdf
consolidated.pdf
  14449   Tue Feb 12 18:00:32 2019 gautamSummaryLoss MeasurementLoss measurement setup

Another arm loss measurement started at 6pm.

  14448   Mon Feb 11 19:53:59 2019 gautamSummaryLoss MeasurementLoss measurement setup

To measure the Y-arm loss, I decided to start with the classic reflectivity method. To prepare for this measurement, I did the following:

  1. Placed a PDA 520 in the AS beam path on the AS table.
  2. Centered AS beam on above PDA 520.
  3. Monitored the signal from the PDA520 and the MC transmission simultaneously in the single-bounce-from-ITMY config (i.e. all other optics were misaligned). Convinced myself that variations in the two signals were correlated, thus ruling out (at least in this rough test) any interference from ghost beams from ITMX / PRM etc.
  4. For the DAQ, I decided to use the two ALS Y arm channels in 1Y4, mainly because we have some whitening electronics available there - the OMC model would've been ideal but we don't have free whitening channels available there. So I ran long BNCs to the rack, labelled them.
  5. It'd be nice to have these signals logged to frames, so I added DQ-channels for the IN1 points of the BEATY_FINE filters, recording at 2048 Hz for now. Of course this necessitated restart of the c1lsc model, which caused all the vertex FEs to crash, but the reboot script brought everything back smoothly.
  6. Not sure what to make of the shape of the spectrum of the AS photodiode, see Attachment #1 - looks like some kind of scattering shelf but I checked the centering on the PD itself, looks good. In any case, with the whitening gains I'm using, seems like both channels are measuring above ADC noise.
  7. Found that the existing misalignment to the ETMY does not eliminate signatures of cavity flash in the AS photodiode. So I increased the amount of misalignment until I saw no evidence of flashes in the reflected photodiode.
  8. Johannes' old scripts didn't work out of the box - so I massaged them into a form that works.
  9. Re-centered Oplevs to try and keep them as well centered in the linear range as possible, maybe the DC position info from the Oplevs is useful in the analysis.

I'm running a measurement tonight, starting now (~1130PM), should be done in ~1hour, may need to do more data-quality improvements to get a realistic loss number, but I figured I'd give this a whirl.

I'm rather pleased with an initial look at the first align/misalign cycle, at least there is discernible contrast between the two states - Attachment #2. The data is normalized by the MC transmission and then decimated by a factor of 512 (three stages of 8) using scipy.signal.decimate.
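
A sketch of how such a segment could be pulled for offline analysis with the nds2 client; the server name, channel names, and GPS times below are placeholders for the actual 40m NDS server and the DQ channels added above, not the real values:

    import nds2

    # placeholders -- substitute the real 40m NDS server and DQ channel names
    conn = nds2.connection('nds40.ligo.caltech.edu', 31200)
    chans = ['C1:ALS-BEATY_FINE_I_IN1_DQ', 'C1:ALS-BEATY_FINE_Q_IN1_DQ']
    gps_start, gps_stop = 1233878418, 1233882018   # example GPS span only

    bufs = conn.fetch(gps_start, gps_stop, chans)
    as_refl, mc_trans = bufs[0].data, bufs[1].data  # numpy arrays at 2048 Hz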

Attachment 1: DQcheck.pdf
DQcheck.pdf
Attachment 2: initialData.pdf
initialData.pdf
  14447   Mon Feb 11 16:38:34 2019 gautamUpdateLSCETMY OL calibration updated

Since we changed the HeNe, I updated the calibration factors, and accepted the changes in the SDF.

DOF OLD [urad/ct] NEW [urad/ct]
PITCH 140 176
YAW 143 193

Attachment 1: OL_calib_ETMY_PERROR.pdf
OL_calib_ETMY_PERROR.pdf
Attachment 2: OL_calib_ETMY_YERROR.pdf
OL_calib_ETMY_YERROR.pdf
ELOG V3.1.3-