40m Log, Page 298 of 339
ID   Date   Author   Type   Category   Subject
  14460   Fri Feb 15 19:50:09 2019   rana   Update   VAC   Vac system is back up

The acromags are on the UPS. I suspect the transient came in on one of the signal lines. Chub tells me he unplugged one of the signal cables from the chassis around the time things died on Monday, although we couldn't reproduce the problem doing that again today.

In this situation it wasn't the software that died, but the acromag units themselves. I have an idea to detect future occurrences using a "blinker" signal. One acromag outputs a periodic signal which is directly sensed by another acromag. This can be implemented as another polling condition enforced by the interlock code.
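A minimal sketch of what such a polling condition could look like, assuming a pyepics-style client; the channel names, period, and shutdown hook are placeholders, not the real system's:

# Sketch of a "blinker" heartbeat check for the interlock polling loop.
# Channel names, the period, and the shutdown hook are illustrative only.
import time
from epics import caget, caput  # pyepics, assumed available on c1vac

BLINK_OUT = "C1:Vac-blinker_out"   # hypothetical channel driven by one Acromag
BLINK_IN  = "C1:Vac-blinker_in"    # hypothetical channel sensed by another Acromag
PERIOD = 1.0                       # seconds allowed for the readback to follow

def acromags_alive():
    """Toggle the output and confirm the readback follows within one period."""
    state = int(caget(BLINK_OUT))
    caput(BLINK_OUT, 1 - state)
    time.sleep(PERIOD)
    return int(caget(BLINK_IN)) == 1 - state

# The main interlock loop would treat a failed check like any other tripped
# condition, e.g.:
# if not acromags_alive():
#     trigger_safe_shutdown()   # hypothetical: close valves, stop pumps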

Quote:

If the acromags lock up whenever there is an electrical spike, shouldn't we have them on UPS to smooth out these ripples? And wasn't the idea to have some handshake/watchdog system to avoid silently dying computers?

Quote:

The problem encountered with the vac controls was indeed resolved via the recommendation I posted yesterday. The Acromags had gone into a protective state (likely caused by an electrical transient in one of the signals)

 

  14461   Fri Feb 15 20:07:02 2019   Jon   Update   VAC   Updated vacuum punch list

While working on the vac controls today, I also took care of some of the remaining to-do items. Below is a summary of what was done, and what still remains.

Completed today

  • TP2/3 overcurrent interlock raised from 1 to 1.2 A. This was tripping during normal operation as the pump accelerates from low-speed (standby) to normal-speed mode.
  • Interlock conditions on VABSSCO/VABSSCI removed. Per discussion with Steve, these are not vent valves, but rather isolation valves between the BS/IOO/OMC annuli. The interlocks were preventing the valves from opening, and hence the IOO and OMC annuli from being pumped.
  • Channel exposed for interlocking in-vacuum high-voltage drivers. The channel name is C1:Vac-interlock_high_voltage. The vac interlock service sets this channel's value to 0 when the main volume pressure is in the range 3 mtorr-500 torr, and to 1 otherwise. (A sketch of this condition follows this list.)
  • Annuli pumping integrated into the set of recognized states. "Vacuum normal" now refers to TP1 and TP2 pumping on the main volume AND TP3 pumping on all the annuli. The system is currently running in this state.
  • TP1 lowered to the nominal speed setting recommended by Steve: 33.6 krpm (560 Hz).
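For concreteness, a sketch of the high-voltage interlock condition described in the list above; the main-volume gauge channel name and the polling style are assumptions, and only C1:Vac-interlock_high_voltage is taken from this entry:

# Sketch of the high-voltage interlock condition (see list above).
# The pressure channel name and polling pattern are assumed, not verified.
from epics import caget, caput

P_MAIN = "C1:Vac-P1a_pressure"               # main volume gauge (assumed name)
HV_OK  = "C1:Vac-interlock_high_voltage"     # channel named in the entry

def update_hv_interlock():
    p_torr = float(caget(P_MAIN))
    # 0 = HV drivers must stay off (3 mtorr to 500 torr), 1 = OK otherwise
    in_unsafe_band = 3e-3 <= p_torr <= 500.0
    caput(HV_OK, 0 if in_unsafe_band else 1)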

Still remaining

  • Implement a "blinker" input-output signal loop between two Acromags to detect hardware failures like the one today.
  • Add an AC power monitor to sense extended power losses and automatically put the system into safe shutdown.
  • Migrate the RGA to c1vac. Still some issues getting the serial comm working.
  • Troubleshoot the SuperBee (backup) main volume Parani gauge. It has not communicated with c1vac since a serial adapter was replaced two weeks ago. Chub thinks the gauge was possibly damaged by arcing during the replacement.
  • Scripting for more automated pumpdowns.
  • Generate a bootable backup hard drive for c1vac, which could be swapped in on a short time scale after a failure.
  14462   Fri Feb 15 21:15:42 2019   gautam   Update   VAC   dd backup of c1vac made
  1. Connected one of the solid-state drives to c1vac. It was /dev/sdb.
  2. Formatted the drive using sudo mkfs -t ext4 /dev/sdb
  3.  Mounted it as /mnt/backup using sudo mount /dev/sdb /mnt/backup
  4. Started a tmux session for the dd, called DDbackup
  5. Started the dd backup using  sudo dd if=/dev/sda of=/dev/sdb bs=64K conv=noerror,sync
  6. Backup completed in 719 seconds: need to test if it works... (a quick integrity check is sketched after the console output below)
controls@c1vac:~$ sudo dd if=/dev/sda of=/dev/sdb bs=64K conv=noerror,sync
[sudo] password for controls: 
^C283422+0 records in
283422+0 records out
18574344192 bytes (19 GB) copied, 719.699 s, 25.8 MB/s
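Until the clone is actually boot-tested, one quick sanity check is to compare digests of the two devices over the byte count reported by dd; a sketch (run as root, device names as above; on a running system the source keeps changing, so an exact match is only expected right after the copy):

# Spot-check that /dev/sdb is a byte-for-byte copy of /dev/sda.
# NBYTES is the byte count reported by dd above; needs root privileges.
import hashlib

def device_digest(path, nbytes, chunk=1 << 20):
    h = hashlib.sha256()
    remaining = nbytes
    with open(path, "rb") as f:
        while remaining > 0:
            data = f.read(min(chunk, remaining))
            if not data:
                break
            h.update(data)
            remaining -= len(data)
    return h.hexdigest()

NBYTES = 18574344192
print(device_digest("/dev/sda", NBYTES) == device_digest("/dev/sdb", NBYTES))

This only checks the copy; confirming the drive actually boots still requires swapping it in.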
Quote:
 
  • Generate a bootable backup hard drive for c1vac, which could be swapped in on a short time scale after a failure.
  14465   Tue Feb 19 19:03:18 2019   rana   Update   Computers   Martian router -> WPA2

I have swapped our martian router's WiFi security over to WPA2 (AES) from the previous, less-secure, system. Creds are in the secrets-40-red.

  14466   Tue Feb 19 22:52:17 2019   gautam   Update   ASS   Y arm clipping doubtful

In an earlier elog, I had claimed that the suspected clipping of the cavity axis in the Y arm was not solved even after shifting the heater. I now think that it is extremely unlikely that there is still clipping due to the heater. Nevertheless, the ASS system is not working well. Some notes:

  1. The heater has been shifted nearly 1-inch relative to the cavity axis compared to its old position - see Attachment #1 which compares the overhead shot of the suspension cage before and after the Jan 2019 vent.
  2. On Sunday, I was able to recover TRY ~ 1.0 (but not as high as I was able to get by intentionally setting a yaw offset to the ASS) by hand alignment with the spot on ETMY much closer to the center of the optic, judging by the camera. There are offsets on the dither alignment error signals which depend on the dither frequency, so the A2L signals are not good judges of how well centered we are on the optic.
  3. By calculating the power lost by clipping a Gaussian beam cross-section with a rectangular block from one side (an admittedly naive model of clipping), I find that we'd have to be within 15 mm of the line connecting the centers of ITMY and ETMY to even see ~10 ppm loss, see Attachment #2. So it is hard to believe that this is still a problem. Also, see  Attachment #3 which compares side-by-side the view of ETMY as seen through the EY optical table viewport before and after the Jan 2019 vent.

We have to systematically re-commission the ASS system to get to the bottom of this.
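For reference, the knife-edge model used in item 3 above has a closed form: the fraction of power lost when a straight edge sits a distance d from the axis of a Gaussian beam of 1/e^2 radius w is 0.5*erfc(sqrt(2)*d/w). A short sketch (the beam radius value here is a placeholder, not taken from this entry):

# Knife-edge clipping loss for a Gaussian beam (naive one-sided model).
#   P_loss / P_total = 0.5 * erfc(sqrt(2) * d / w),  w = 1/e^2 intensity radius
from math import erfc, sqrt

def clip_loss(d_mm, w_mm):
    return 0.5 * erfc(sqrt(2.0) * d_mm / w_mm)

w = 5.0  # mm, placeholder beam radius near the heater; not a measured value
for d in (5, 10, 15, 20):
    print(f"edge at d = {d:2d} mm -> fractional loss = {clip_loss(d, w):.2e}")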

Attachment 1: overheadComparison.pdf
Attachment 2: clipping.pdf
Attachment 3: rearComparison.pdf
  14467   Wed Feb 20 18:26:05 2019   gautam   Update   IOO   IPPOS recommissioned

I've suspected that the TTs are drifting significantly over the course of the last couple of days, because despite repeated alignment efforts, the AS beam spot has drifted off the center of the camera view. I tried looking at IPPOS, but found that there was no data. Looking at the table, the QPD was turned backwards, and the DAQ cable wasn't connected (neither at the PD end, nor at 1Y2, where instead, a cable labelled "Spare QPD" was plugged in). Fortunately, the beam was making it out of the vacuum. So as to have a quantitative diagnostic, I reconnected the QPD, turned it the right way round, and adjusted the steering onto it such that with the AS spot on the center of the CCD monitor, the beam is also centered on the QPD. The calibration is uncertain, but at least we will be able to see how much the spot drifts on the QPD over some days. Also, we only have 16 Hz readback of this stuff.

I leave it to Chub to take the high-res photo and update the wiki, which was last done in 2012.


Already, in the last ~1 hour, there has been considerable drift - see Attachment #2. The spot, which started at the center of the CCD monitor, has now nearly drifted off the top end. The ITMX and BS Oplev spots have been pretty constant over the same timescale, so it has to be the TTs?

Attachment 1: IMG_7330.JPG
Attachment 2: Screenshot_from_2019-02-20_19-43-27.png
  14468   Wed Feb 20 23:55:51 2019   gautam   Update   ALS   ALS delay line electronics

Summary:

Last year, I worked on the ALS delay line electronics, thinking that we were in danger of saturation. The analysis was incorrect. I find that for RF signal levels between -10 dBm and +15 dBm, assuming 3dB insertion loss due to components and 5 dB conversion loss in the mixer, there is no danger of saturation in the I/F part of the circuit.

Details:

The key is that the MOSFET mixer used in the demodulation circuit drives an I/F current and not voltage. The I-to-V conversion is done by a transimpedance amplifier and not a voltage amplifier. The confusion arose from interpreting the gain of the first stage of the I/F amplifier as 1 kohm/10 ohm = 100. The real figures of merit we have to look at are the current through, and voltage across, the transimpedance resistor.  So I think we should revert to the old setup. This analysis is consistent with an actual test I did on the board, details of which may be found here.

We may still benefit from some whitening of the signal before digitization between 10-100 Hz. I need to check what is an appropriate place in the signal chain to put in some whitening; there are some constraints on the circuit topology because of the MOSFET mixer.

One part of the circuit topology I'm still confused by is the choice of impedance-matching transformer at the RF-input of this demod board - why is a 75 ohm part used instead of a 50 ohm part? Isn't this going to actually result in an impedance mismatch given our RG405 cabling?

Update: Having pulled out the board, it looks like the input transformer is an ADT-1-1, and NOT an ADT1-1WT as labelled on the schematic. The former is indeed a 50ohm part. So it makes sense to me now.

Since we have the NF1611 fiber coupled PDs, I'm going to try reviving the X arm ALS to check out what the noise is after bypassing the suspect Menlo PDs we were using thus far. My re-analysis can be found in the attached zip of my ipynb (in PDF form).

Attachment 1: delayLineDemod.pdf.zip
  14469   Fri Feb 22 12:19:46 2019   gautam   Update   IOO   TT coil driver Vmon

To debug the issue of the suspected drifting TTs further, I temporarily hijacked CH0-CH8 of ADC1 in the c1lsc expansion chassis, and connected the "MON" outputs of the coil drivers (D010001) to them via some DB9 breakouts. The idea is to see if the problem is electrical: we should see some slow drift in the voltage to the TTs correlated with the spot walking off the IPPOS QPD. From the wiring diagram, it doesn't look like there is any monitoring (slow or fast) of the control voltages to the TT coils; this should be factored into the Acromag upgrade of c1iscaux/c1iscaux2. EPICS monitoring should be sufficient for this purpose, so I didn't set up any new DQ channels; I'll just look at the EPICS records from the IOP model.
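A sketch of how these slow readbacks could be trended against IPPOS to look for a correlation; all channel names below are placeholders for illustration, not the actual EPICS records:

# Log TT coil-driver Vmon EPICS channels alongside the IPPOS spot position.
# Every channel name here is a placeholder; substitute the real records.
import csv, time
from epics import caget

CHANNELS = [f"C1:IOO-TT{tt}_{coil}_VMON" for tt in (1, 2)
            for coil in ("UL", "UR", "LL", "LR")]
CHANNELS += ["C1:ASC-IPPOS_X", "C1:ASC-IPPOS_Y"]

with open("tt_vmon_trend.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["unix_time"] + CHANNELS)
    for _ in range(3600):                       # one hour at 1 Hz
        writer.writerow([time.time()] + [caget(ch) for ch in CHANNELS])
        time.sleep(1)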

Quote:
Already, in the last ~1 hour, there has been considerable drift - see Attachment #2. The spot, which started at the center of the CCD monitor, has now nearly drifted off the top end. The ITMX and BS Oplev spots have been pretty constant over the same timescale, so it has to be the TTs?
  14470   Mon Feb 25 20:20:07 2019   Koji   Update   SUS   DIN 41612 (96pin) shrouds installed to vertex SUS coil drivers

The forthcoming Acromag c1susaux is supposed to use the backplane connectors of the sus euro card modules.

However, the backplane connectors of the vertex sus coil drivers were already used by the fast switches (dewhitening) of c1sus.

Our plan is to connect the Acromag cables to the upper connectors, while the switch channels are wired to the lower connector by soldering jumper wires between the upper and lower connectors on board.

To make the lower 96pin DIN connector available for this, we needed DIN 41612 (96pin) shrouds. Tyco Electronics 535074-2 is the correct component for this purpose. The shrouds have been installed on the backplane pins of the coil driver circuit D010001. The shroud can be mounted in either of two orientations (a 180 deg rotational degree of freedom); the direction of the shrouds was matched with the ones on the upper connectors.

Attachment 1: P_20190222_175058.jpg
  14471   Wed Feb 27 21:34:21 2019   gautam   Update   General   Suspension diagnosis

In my effort to understand what's going on with the suspensions, I've kicked all the suspensions and shut down the watchdogs at 1235366912. The PSL shutter is closed to avoid trying to lock to the swinging cavity. The primary aims are

  1. To see how much the resonant peaks have shifted w.r.t. the database, if at all - I claim that the ETMY resonances have shifted by a large amount and that one of the resonant peaks has been lost.
  2. To check the status of the existing diagonalization.

All the tests I have done so far (looking at free swinging data, resonant frequencies in the Oplev error signals etc.) seem to suggest that the problem is mechanical rather than electrical. I'll do a quick check of the OSEM PD whitening unit in 1Y4 to be sure. But the fact that the same three peaks appear in the OSEM and Oplev spectra suggests to me that the problem is not electrical.
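A sketch of the kind of peak identification being done on the free-swing data; the sample rate, file name, and search band are illustrative, and the data array is assumed to have been fetched separately (e.g. from the NDS2 server):

# Find suspension resonances in a stretch of free-swinging OSEM data.
# fs, the file name, and the search band are assumptions for illustration.
import numpy as np
from scipy.signal import welch, find_peaks

fs = 2048.0                                   # Hz, assumed OSEM sample rate
data = np.load("etmy_ul_freeswing.npy")       # placeholder: pre-fetched data

f, pxx = welch(data, fs=fs, nperseg=int(512 * fs))     # ~2 mHz resolution
band = (f > 0.4) & (f < 1.2)                           # pendulum-mode band
peaks, _ = find_peaks(pxx[band], prominence=10 * np.median(pxx[band]))
print("candidate resonances [Hz]:", np.round(f[band][peaks], 4))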

Watchdogs restored at 10 AM PST

  14472   Sat Mar 2 14:19:35 2019   gautam   Update   CDS   FSS Slow servo gains not burt-ed

PSL NPRO PZT voltage showed large low frequency (hour timescale) excursions on the control room StripTool trace, leading me to suspect the slow servo wasn't working as expected. Yesterday evening, I keyed the unresponsive c1psl crate at ~9 PM PST, and had to run the burtrestore to get the PMC locking working. I must have pressed the wrong button on burtgooey or something, because all the FSS_SLOW channels were reset to 0. What's more, their values were not being saved by the hourly burt-snap script, so I don't have any lookback on what these values were. There isn't any detailed record on the elog about what the optimal values for these are, and the most recent reference I could find was Ki=0.1, Kp=Kd=0, which is what I've set it now to. The servo isn't running away, so I'm leaving things in this state, PID tuning can be done later.
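For reference, the slow loop is just a digital PID pushing on the NPRO temperature to keep the fast (PZT) control signal centered; a minimal integrator-only sketch with the Ki=0.1 setting above (channel names, cadence, and sign convention are assumptions):

# Integrator-only slow servo (Kp = Kd = 0, Ki = 0.1 as set above).
# Channel names, the 10 s cadence, and the sign are illustrative assumptions.
import time
from epics import caget, caput

ERR_CH  = "C1:PSL-FSS_FAST"     # fast/PZT control voltage used as error signal
CTRL_CH = "C1:PSL-FSS_SLOWDC"   # slow temperature offset being actuated
SETPOINT, KI, DT = 0.0, 0.1, 10.0

while True:
    err = float(caget(ERR_CH)) - SETPOINT
    caput(CTRL_CH, float(caget(CTRL_CH)) - KI * err)   # sign depends on actuator
    time.sleep(DT)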

I also added the FSS Slow servo channels to the burt snapshot requirement file at /cvs/cds/caltech/target/c1psl/autoBurt.req, and confirmed that the snapshots are getting the channels from now onwards.

While looking at the req file, I saw a bunch of *_MOPA* channels and also several other currently unused channels. We would probably benefit from going through these and commenting out all the legacy channels, to minimize disk space wastage (though we compress the snapshot files every few years anyway, I guess).

Reminder that this (unrelated) issue still needs to be looked into... Note also that the new vacuum system does not have burt snapshots set up (i.e. it is still trying to get the old channels from the c1vac1 and c1vac2 databases, which, while they have significant overlap with the new system, should probably be set up correctly).

  14473   Sun Mar 3 14:16:31 2019   gautam   Update   IOO   Megatron hard-rebooted

IMC was not locked for the past several hours. Turned out MC autolocker was stuck, and I could not ssh into megatron because it was in some unresponsive state. I had to hard-reboot megatron, and once it came back up, I restarted the MCautolocker, FSS slow servo and nds2 processes. IMC re-locked immediately.

I was pulling long stretches of OSEM data from the NDS2 server (megatron) last night, I wonder if this flakiness is connected. Megatron is still running Ubuntu12.

  14475   Thu Mar 7 01:06:38 2019   gautam   Update   ALS   ALS delay line electronics

Summary:

The restoration of the delay-line electronics is complete. The chassis has not been re-installed yet; I will put it back in tomorrow. I think the calculations and measurements are in good agreement.

Details:

Apart from restoring the transimpedance of the I/F amplifier, I also had to replace the two differential-sending AD8672s in the RF Log detector circuit for both LO and RF paths in the ALS-X board. I performed the same tests as I did the last time on the electronics bench; results will be uploaded to the DCC page for the 40m version of the board. I think the board is performing as advertised, although there is some variation in the noise of the two pairs of I/Q readouts. Sticking with the notation of the HP Application Note for delay line frequency discriminators, here are some numbers for our delay line system:

  • K_φ = 3.7 V/rad - measured by driving the LO/RF inputs with the Fluke/Marconi at 7 dBm/0 dBm (the expected signal levels accounting for losses between the BeatMouth and the demodulator) and looking at the Vpp of the resulting I/F beat signal on a scope. This assumes we use the differential output of the demodulator (divide by 2 if we use the single-ended output instead).
  • τ_d = 45 m / (0.75c) ≈ 0.2 μs [see measurement]
  • K_d = 2π K_φ τ_d ≈ 4 μV/Hz (to be confirmed by measurement by driving a known FM signal with the Marconi). These numbers are reproduced in the sketch after this list.
  • Assuming 1 mW of light on our beat PDs and perfect contrast, the phase noise due to shot noise is π √(2 P hc/λ) / 1 mW ≈ 60 nrad/√Hz (with P = 1 mW), which is ~5 orders of magnitude lower than the electronics noise in equivalent frequency noise at 100 Hz.
  • The noise due to the FET mixer seems quite complicated to calculate - but as a lower bound, the Johnson current noise due to the 182 ohms at each RF input is ~10 pA/rtHz. With a transimpedance gain of 1 kohm, this corresponds to ~10 nV/rtHz.
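These figures are easy to reproduce; nothing is assumed beyond the values quoted above (the K_d line evaluates to ~4.6 μV/Hz, consistent with the ≈4 μV/Hz quoted):

# Reproduce the delay-line figures of merit quoted in the list above.
from math import pi, sqrt

c     = 299_792_458.0
K_phi = 3.7                        # V/rad, measured
tau_d = 45.0 / (0.75 * c)          # s, 45 m of cable at 0.75c -> ~0.2 us
K_d   = 2 * pi * K_phi * tau_d     # V/Hz
print(f"tau_d = {tau_d*1e6:.2f} us, K_d = {K_d*1e6:.1f} uV/Hz")

# Shot-noise-limited phase noise for 1 mW on the beat PD, perfect contrast
h, lam, P = 6.626e-34, 1.064e-6, 1e-3
phi_shot = pi * sqrt(2 * P * h * c / lam) / P          # rad/rtHz
print(f"shot-noise phase noise ~ {phi_shot*1e9:.0f} nrad/rtHz")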

In conclusion: the ALS noise is very likely limited by ADC noise (~1 Hz/rtHz frequency noise for 5uV/rtHz ADC noise). We need some whitening. Why whiten the demodulated signal instead of directly incorporating the whitening into the I/F amplifier input stage? Because I couldn't find a design that satisfies all the following criteria (this was why my previous design was flawed):

  1. The commutating part of the FET mixer must be close to ground potential always.
  2. The loading of the FET mixer is mostly capacitive.
  3. The DC gain of the I/F amplifier is low, with 20-30dB gain at 100 Hz, and then rolled off again at high frequencies for stability and sum-frequency rejection. In fact, it's not even obvious to me that we want a low DC gain - the quantity K_φ is directly proportional to the DC transimpedance gain, and we want that to be large for more sensitive frequency discrimination.

So Rich suggested separating the transimpedance and whitening operations. The output noise of the differential outputs of the demodulator unit is <100 nV/rtHz at 100 Hz, so we should be able to saturate that noise level with a whitening unit whose input referred noise level is < 100 nV/rtHz. I'm going to see if there are any aLIGO whitening board spares - the existing whitening boards are not a good candidate I think because of the large DC signal level.

  14477   Tue Mar 12 22:51:25 2019   gautam   Update   ALS   ALS delay line electronics

This Hanford alog may be of relevance as we are using the aLIGO AA chassis for the IR ALS channels. We aren't expecting any large amplitude high frequency signals for this application, but putting this here in case it's useful someday.

  14478   Wed Mar 13 01:27:30 2019   gautam   Update   ALS   ALS delay line electronics

This test was done, and I determined the frequency discriminant to be ≈ 5 μV/Hz (for an RF signal level of ~2 dBm).

Attachment #1: Measured and predicted value of the DFD discriminant for a few RF signal levels.

  • Methodology was to drive an FM signal (deviation = 25 Hz, fMod = 221 Hz, fCarrier ~ 40 MHz) with the Marconi, and look at the IF spectrum peak height on an SR785.
  • The "Design" curve is calculated using the circuit parameters, assuming 4dB conversion loss in the mixer itself, and 3dB insertion loss due to various impedance matching transformers and couplers in the RF signal chain. I fudged the insertion/conversion loss numbers to get this curve to line up with the measurements (by eye).
  • For the measurement, I assume the value for FM deviation displayed on the Marconi is an RMS value (this is the best I can gather from the manual). I'll double-check by looking at the RFmon spectrum directly on the Agilent NA. (The conversion from peak height to discriminant is sketched after this list.)
  • X axis calibrated by reading off from the RF power monitor using a DMM and using the calibration data from the bench.
  • I could never get the ratio of peak heights in Ichan/Qchan (or the other way around) to better than ~ 1/8 (by moving the carrier frequency around). Not sure I can explain that - small non-orthogonality between I and Q channels cannot explain this level of leakage.
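The conversion from a single SR785 reading to a discriminant value is one line; sketched here with a made-up peak height, under the RMS-deviation assumption noted above:

# Convert one measured IF peak height into a discriminant value [V/Hz].
# The peak height below is invented for illustration; the 25 Hz deviation
# is assumed to be an RMS figure, as discussed above.
f_dev_rms  = 25.0        # Hz
v_peak_rms = 1.2e-4      # Vrms, illustrative SR785 peak height at 221 Hz

K_d = v_peak_rms / f_dev_rms
print(f"K_d = {K_d*1e6:.1f} uV/Hz")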

Attachment #2: Measured noise spectrum in the 1Y2 (LSC) electronics rack, calibrated to Hz/rtHz using the discriminant from Attachment #1.

  • Something funky with the I channel for X, I'll re-take that spectrum.

I'm still waiting on some parts for the new BeatMouth before giving the whole system a whirl. In the meantime, I'll work on the EX and EY green setups, to try and improve the mode-matching and better characterize the expected suppressed frequency noise of the end NPROs - the goal here is to rule out the excess low-frequency noise that was seen in the ALS signals coming from unsuppressed frequency noise.

Bottom lines: 

  1. The DFD noise is at the level of ~ 10mHz/rtHz above 10 Hz. This justifies the need for whitening before ADC-ing.
  2. The measured signal/noise levels in the DFD chain are in good agreement with the "expected" levels from circuit component values and typical insertion/conversion loss values.
  3. Why are there so many 60 Hz harmonics???
Attachment 1: DFDcal.pdf
Attachment 2: DFDnoise.pdf
  14479   Thu Mar 14 23:26:47 2019   Anjali   Update   ALS   ALS delay line electronics

Attachment #1 shows the schematic of the test setup. Signal generator (Marconi) was used to supply the RF input. We observed the IF output in the following three test conditions.

  1. Observed the spectrum with FM modulation (fcarrier of 40 MHz and fmod of 221 Hz) - a peak at 221 Hz was observed.
  2. Observed the noise spectrum without FM modulation.
  3. Observed the noise spectrum after disconnecting the delayed output of the delay line.
  • The broadband noise level is higher without FM modulation (2) than after disconnecting the delayed output of the delay line (3).
  • The noise level also increases with increasing RF input power.
  • We need to find the reason for the increase in broadband noise.
Attachment 1: test_setup_ALS_delay_line_electronics.pdf
  14480   Sun Mar 17 00:42:20 2019   gautam   Update   ALS   NF1611 cannot be shot-noise limited?

Summary:

Per the manual (pg12) of the NF 1611 photodiode, the "Input Noise Current" is 16 pA/rtHz. It also specifies that for "Linear Operation", the max input power is 1 mW, which at 1um corresponds to a current shot noise of ~14 pA/rtHz. Therefore,

  1. This photodiode cannot be shot-noise limited if we also want to stay in the spec-ed linear regime.
  2. We don't need to worry so much about the noise figure of the RF amplifier that follows the photodiode. In fact, I think we can use a higher gain RF amplifier with a slightly worse noise figure (e.g. ZHL-3A) as we will benefit from having a larger frequency discriminant with more RF power reaching the delay line.

Details:

Attachment #1: Here, I plot the expected voltage noise due to shot noise of the incident light, assuming 0.75 A/W responsivity for InGaAs and 700 V/A transimpedance gain. (A numerical spot-check follows the list below.)

  • For convenience, I've calibrated on the twin axes the current shot noise (X) and equivalent amplifier noise figure at a given voltage noise, assuming a 50 ohm system (Y).
  • The 16 pA/rtHz input current noise exceeds the shot noise contribution for powers as high as 1 mW.
  • Even at 0.5 mW power on the PD, we can use the ZHL-3A rather than the Teledyne:
    • This calculation was motivated by some suspicious features in the Teledyne amplifier gain, I will write a separate elog about that. 
    • For the light levels we have, I expect ~3dBm RF signal from the photodiode. With the 24dB of gain from the ZHL-3A, the signal becomes 27dBm, which is smaller than (but close to) the spec-ed max output of the ZHL-3A, which is 29.5 dBm. Is this too close to the edge?
    • I will measure the gain/noise of the ZHL-3A to get a better answer to these questions.
  • If in the future we get a better photodiode setup that reaches sub-1nV/rtHz (dark/electronics) voltage noise, we may have to re-evaluate what is an appropriate RF amplifier.
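A numerical spot-check of the comparison above, using the 0.75 A/W and 700 V/A figures stated in this entry (with these values the 1 mW shot-noise current comes out near 15 pA/rtHz, close to the ~14 pA/rtHz quoted):

# Shot noise vs. the NF1611's spec'd input current noise.
from math import sqrt

e, R, Z = 1.602e-19, 0.75, 700.0    # electron charge, A/W, V/A transimpedance
i_dark  = 16e-12                    # A/rtHz, spec'd input-referred current noise

for P_mW in (0.1, 0.5, 1.0):
    i_shot = sqrt(2 * e * R * P_mW * 1e-3)                 # A/rtHz
    print(f"P = {P_mW:3.1f} mW: shot {i_shot*1e12:4.1f} pA/rtHz"
          f" -> {i_shot*Z*1e9:4.1f} nV/rtHz (dark {i_dark*1e12:.0f} pA/rtHz)")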
Attachment 1: PDnoise.pdf
  14481   Sun Mar 17 13:35:39 2019   Anjali   Update   ALS   Power splitter characterization

We characterized the power splitter (Mini-Circuits ZAPD-2-252-S+). The schematic of the measurement setup is shown in Attachment #1. The network/spectrum/impedance analyzer (Agilent 4395A) was used in network analyzer mode for the characterisation. The RF output is enabled in network analyzer mode. We used another splitter (power splitter #1) to split the RF power such that one part goes to the network analyzer and the other part goes to the splitter under test (power splitter #2). The characterisation results and the comparison with the data sheet values are shown in Attachments #2-4.

Attachment #2 : Comparison of total loss in port 1 and 2

Attachment #3 : Comparison of amplitude unbalance

Attachment #4 : Comparison of phase unbalance

  • From the data sheet: the splitter is wideband, 5 to 2500 MHz, usable from 0.5 to 3000 MHz. We performed the measurement from 1 MHz to 500 MHz (limited by the bandwidth of the network analyzer).
  • It can be seen from Attachments #2 and #4 that there is a sudden increase below ~11 MHz. The reason for this is not clear to me.
  • The measured total loss values for port 1 and port 2 are slightly higher than those specified in the data sheet. From the data sheet, the maximum losses in port 1 and port 2 at 450 MHz are 3.51 dB and 3.49 dB respectively; the measured values are 3.61 dB and 3.59 dB. It can also be seen from Attachment #1 (b) that the expected trend is for the total loss to decrease with increasing frequency, whereas we observe the opposite trend in the 11-500 MHz range.
  • From the data sheet, the maximum amplitude unbalance in the 5 MHz-500 MHz range is 0.02 dB; the measured maximum value is 0.03 dB.
  • Similarly for the phase unbalance, the maximum value specified by the data sheet in the 5 MHz-500 MHz range is 0.12 degrees, while the measurement shows a phase unbalance of up to 0.7 degrees in this frequency range.
  • So the observations show that the measured values are slightly higher than the data sheet values.
Attachment 1: Measurement_setup.pdf
Attachment 2: Total_loss.pdf
Attachment 3: Amplitude_unbalance.pdf
Attachment 4: Phase_unbalance.pdf
  14482   Sun Mar 17 21:06:17 2019   Anjali   Update   ALS   Amplifier characterisation

The goal was to characterise the new amplifier (AP1053). For practice, I did the characterisation of the old amplifier. This test is similar to that reported in elog 13602.

  • Attachment #1 shows the schematic of the setup for gain characterisation and Attachment #2 shows the results of gain characterisation. 
  • The gain measurement is comparable with the previous results. From the data sheet, 10 dB gain is guaranteed in the frequency range 10-450 MHz. From our observation, the gain is not flat over this region: we measured a maximum gain of 10.7 dB at 6 MHz, decreasing to 8.5 dB at 500 MHz.
  • Attachment #3 shows the schematic of the setup for the noise characterisation and Attachment #4 shows the results of the noise measurement.
  • The noise measurement doesn't look right. We probably have to repeat this measurement.
Attachment 1: Gain_measurement.pdf
Attachment 2: Amplifier_gain.pdf
Attachment 3: noise_measurement.pdf
Attachment 4: noise_characterisation.pdf
  14483   Mon Mar 18 12:27:42 2019   gautam   Update   General   IFO status
  1. c1iscaux2 VME crate is damaged - see Attachment #1. 
    • It is not generating the 12V supply voltage, and so nothing in the crate works.
    • Tried resetting via front panel button, power cycling by removing power cable on rear, all to no effect.
    • Tried pulling out all cards and checking if there was an internal short that was causing the failure - looks like the problem is with the crate itself.
    • Not sure how long this machine has been unresponsive as we don't have any readback of the status of the eurocrate machines.
    • Not a showstopper, mainly we can't control the whitening settings for AS55, REFL55, REFL165 and ALSY. 
    • Acromag installation schedule should be accelerated.
    • Koji reminded me that VME crate ≠ eurocrate. The former is used for the slow machines; the latter holds the iLIGO-style electronics boards.
  2. ITMX oplev is dead - see Attachment #2.
    • Lasted ~3 years (installed March 2016).
    • I confirmed that no light is coming out of the laser head on the optical table.
    • I'll ask Chub to replace it this afternoon.
  3. c1susaux is unresponsive
    • I didn't reboot it as I didn't want to spend some hours freeing ITMY. 
    • At some point we will have to bite the bullet and do it.
  4. Input pointing is still not stable
    • I aligned the input pointing using TT1/TT2 to maximize TRX/TRY before lunch, but in 1 hour, the pointing has already drifted.
  5. POX/POY locking is working okay. TRX has large low-frequency fluctuations because of ITMX not having an Oplev servo, should be rectified once we swap out the HeNe.

The goal for this week is to test out the ALS system, so this is kind of a workable state since POX/POY locking is working. But the number of broken things is accumulating fast.

Attachment 1: IMG_7343.JPG
Attachment 2: ITMXOL.png
  14484   Mon Mar 18 17:06:12 2019   gautam   Update   Optical Levers   ITMX HeNe replaced

Oplev HeNe was replaced this afternoon. We did some HeNe shuffling:

  1. A new HeNe was being used for the fiber illumination demo at EX. We took that out and decided to use it as the new ITMX HeNe. It had 2.6mW output at 632nm (measured with the Ophir power meter)
  2. Old ETMY HeNe was used for fiber illumination demo.
  3. Old ITMX HeNe was putting out no light - it will be disposed.

Attachment #1 shows the RIN and Attachment #2 and #3 show the PIT and YAW TFs with the new HeNe.

The ITMX Oplev path is still not great - the ingoing beam is within 2mm of clipping on a 2" lens used in the POX path, and there is a bunch of scattered red light everywhere. We should take the opportunity when the chamber is open to try and get a better layout (it may be tricky to optimize without touching the two in-vacuum steering optics).

Quote:

I'll ask Chub to replace it this afternoon.

Attachment 1: OLRIN.pdf
Attachment 2: OL_PIT.pdf
Attachment 3: OL_YAW.pdf
  14486   Mon Mar 18 20:22:28 2019   gautam   Update   ALS   ALS stability test

I'm running a test to see how stable the EX green lock is. For this purpose, I've left the slow temperature tuning servo on (there is a 100 count limiter enabled, so nothing crazy should happen).

  14487   Wed Mar 20 12:31:30 2019   Jon   Update   VAC   Doing vac controls work

I'm rebooting the IOLAN server to load new serial ports. The interlocks might trip when the pressure gauge readbacks cut out.

  14488   Wed Mar 20 19:26:25 2019   Jon   Update   VAC   Protection against AC power loss

Today I implemented protection of the vac system against extended power losses. Previously, the vac controls system (both old and new) could not communicate with the APC Smart-UPS 2200 providing backup power. This was not an issue for short glitches, but for extended outages the system had no way of knowing it was running on dwindling reserve power. An intelligent system should sense the outage and put the IFO into a controlled shutdown, before the batteries are fully drained.

What enabled this was a workaround Gautam and I found for communicating with the UPS serially. Although the UPS has a serial port, neither the connector pinout nor the low-level command protocol is released by APC. The only official way to communicate with the UPS is through their high-level PowerChute software. However, we did find "unofficial" documentation of APC's protocol. Using this information, I was able to interface the UPS to the IOLAN serial device server. This allowed the UPS status to be queried using the same Python/TCP sockets model as all the other serial devices (gauges, pumps, etc.). I created a new service called "serial_UPS.service" to persistently run this Python process like the others. I added a new EPICS channel "C1:Vac-UPS_status" which is updated by this process.

With all this in place, I added new logic to the interlock.py code which closes all valves and stops all pumps in the event of a power failure. To be conservative, this interlock is also tripped when the communications link with the UPS is disconnected (i.e., when the power state becomes unknown). I tested the new conditions against both communication failure (by disconnecting the serial cable) and power failure (by pressing the "Test" button on the UPS front panel). This protects TP2 and TP3. However, I discovered that TP1---the pump that might be most damaged by a sudden power failure---is not on the UPS. It's plugged directly into a 240V outlet along the wall. This is because the current UPS doesn't have any 240V sockets. I'd recommend we get one that can handle all the turbo pumps.
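A sketch of how such a query might look on the c1vac side; the IOLAN host/port, the command byte, and the reply parsing are all assumptions (APC's protocol is unofficial), and only the channel name C1:Vac-UPS_status is taken from this entry:

# Query the UPS through the IOLAN serial device server and flag power loss.
# Host, port, command byte, and reply parsing are placeholders, not verified.
import socket
from epics import caput

IOLAN_HOST, UPS_PORT = "c1vac-iolan", 4011   # hypothetical TCP-mapped serial port
STATUS_CMD = b"Q"                            # placeholder status-query byte

def ups_on_battery():
    try:
        with socket.create_connection((IOLAN_HOST, UPS_PORT), timeout=2) as s:
            s.sendall(STATUS_CMD)
            reply = s.recv(64).decode(errors="ignore").strip()
    except OSError:
        return True     # comms lost: treat as unknown/unsafe, as described above
    caput("C1:Vac-UPS_status", reply)
    return "battery" in reply.lower()        # placeholder parsing

# interlock.py would then do something like:
# if ups_on_battery():
#     close_all_valves(); stop_all_pumps()   # hypothetical helpers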

For future reference, the UPS serial settings are:

  • Pin 1: RxD
  • Pin 2: TxD
  • Pin 5: GND
  • Standard: RS-232
  • Baud rate: 2400
  • Data bits: 8
  • Parity: none
  • Stop bits: 1
  • Handshaking: none

Attachment 1: IMG_3146.jpg
  14489   Wed Mar 20 20:07:22 2019   Jon   Update   VAC   Doing vac controls work

Work is completed and the vac system is back in its nominal state.

Quote:

I'm rebooting the IOLAN server to load new serial ports. The interlocks might trip when the pressure gauge readbacks cut out.

 

  14490   Thu Mar 21 12:46:22 2019   Jon   Update   VAC   More vac controls upgrades

The vac controls system is going down for migration from Python 2.7 to 3.4. Will advise when it is back up.

  14491   Thu Mar 21 17:22:52 2019   Jon   Update   VAC   More vac controls upgrades

I've converted all the vac control system code to run on Python 3.4, the latest version available through the Debian package manager. Note that these codes now REQUIRE Python 3.x. We decided there was no need to preserve Python 2.x compatibility. I'm leaving the vac system returned to its nominal state ("vacuum normal + RGA").

Quote:

The vac controls system is going down for migration from Python 2.7 to 3.4. Will advise when it is back up.

 

  14492   Thu Mar 21 18:09:36 2019   Koji   Update   CDS   db file preparation for acromag c1susaux

I have updated the google doc spreadsheet to indicate the required action for the new dbfile generation.

There are three types of actions:

1. COPY - Just duplicate the old EPICS db entry. This is for soft channels, calc channels.
2. DELETE - Delete the entry for some physical channels that will not be implemented on Acromag (oplev, dewhitening mon, AI monitor, etc)
3. REPLACE - For the physical channels, we want to replace the port names.

The blue part of the spreadsheet indicates the action for each channel. If it is a physical channel, the assigned module and the channel are indicated there. What we still want to do is to use this information to generate the port name, which looks like "@asynMask(C1VAC_XT1221A_ADC 1 -16)MODBUS_DATA".

The links to the spreadsheets can be found on 40m wiki: https://wiki-40m.ligo.caltech.edu/CDS/SlowControls/c1susaux

Attachment 1: Screen_Shot_2019-03-21_at_18.06.53.png
  14494   Thu Mar 21 21:50:31 2019   rana   Update   VAC   Protection against AC power loss

agreed - we need all pumps on UPS for their safety and also so that we can spin them down safely. Can you and Chub please find a suitable UPS?

Quote:

However, I discovered that TP1---the pump that might be most damaged by a sudden power failure---is not on the UPS. It's plugged directly into a 240V outlet along the wall. This is because the current UPS doesn't have any 240V sockets. I'd recommend we get one that can handle all the turbo pumps.

  14495   Mon Mar 25 10:21:05 2019   Jon   Update   Upgrade   c1susaux upgrade plan

Now that the Acromag upgrade of c1vac is complete, the next system to be upgraded will be c1susaux. We chose c1susaux because it is one of the highest-priority systems awaiting upgrade, and because Johannes has already partially assembled its Acromag replacement (see photos below). I've assessed the partially-assembled Acromag chassis and the mostly-set-up host computer and propose we do the following to complete the system.

Documentation

As I go, I'm writing step-by-step documentation here so that others can follow this procedure for future systems. The goal is to create a standard procedure that can be followed for all the remaining upgrades.

Acromag Chassis Status

The bulk of the remaining work is the wiring and testing of the rackmount chassis housing the Acromag units. This system consists of 17 units: 10 ADCs, 4 DACs, and 3 digital I/O modules. Johannes has already created a full list of channel wiring assignments. He has installed DB37-to-breakout board feedthroughs for all the signal cable connections. It looks like about 40% of the wiring from the breakout boards to the Acromag terminals is already done.

The Acromag units have to be initially configured using the Windows laptop connected by USB. Last week I wasn't immediately able to check their configuration because I couldn't power on the units. Although the DC power wiring is complete, when I connected a 24V power supply to the chassis connector and flipped on the switch, the voltage dropped to ~10V irrespective of adjusting the current limit. The 24V indicator lights on the chassis front and back illuminated dimly, but the Acromag lights did not turn on. I suspect there is a short to ground somewhere, but I didn't have time to investigate further. I'll check again this week unless someone else looks at it first.

Host Computer Status

The host computer has already been mostly configured by Johannes. So far I've only set up IP forwarding rules between the martian-facing and Acromag-facing ethernet interfaces (the Acromags are on a subnet inaccessible from the outside). This is documented in the link above. I also plan to set up local installations of modbus and EPICS, as explained below. The new EPICS command file (launches the IOC) and database files (define the channels) have already been created by Johannes. I think all that remains is to set up the IOC as a persistent system service.

Host computer OS

Recommendation from Keith Thorne:

For CDS lab-wide, Jamie Rollins and Ryan Blair have been maintaining Debian 8 and 9 repos with some of these.  
They have somewhat older EPICS versions and may not include all the modules we have for SL7.
One worry is whether they will keep up Debian 9 maintained, as Debian 10 is already out.

I would likely choose Debian 9 instead of Ubuntu 18.04.02, as not sure of Ubuntu repos for EPICS libraries.

Based on this, I propose we use Debian 9 for our Acromag systems. I don't see a strong reason to switch to SL7, especially since c1vac and c1susaux are already set-up using Debian 8. Although Debian 8 is one version out of date, I think it's better to get a well-documented and tested procedure in place before we upgrade the working c1vac and c1susaux computers. When we start building the next system, let's install Debian 9 (or 10, if it's available), get it working with EPICS/modbus, then loop back to c1vac and c1susaux for the OS upgrade.

Local vs. central modbus/EPICS installation

The current convention is for all machines to share a common installation which is hosted on the /cvs/cds network drive. This seems appealing because only a single central EPICS distribution needs to be maintained. However, from experience attempting this on c1vac, I'm convinced this is a bad design for the new Acromag systems.

The problem is that any network outage, even routine maintenance or brief glitches, wreaks havoc on Acromags set up this way. When the network is interrupted, the modbus executable disappears mid-execution, crashing the process and hanging the OS (I think related to the deadlocked NFS mount), so that the only way to recover is to manually power-cycle. Still worse, this can happen silently (channel values freeze), meaning that, e.g., watchdog protections might fail.

To avoid this, I'm planning to install a local EPICS distribution from source on c1susaux, just as I did for c1vac. This only takes a few minutes to do, and I will include the steps in the documented procedure. Building from source also better protects against OS-dependent buginess.

Main TODO items

  • Debug issue with Acromag DC power wiring
  • Complete wiring from chassis feedthroughs to Acromag terminals, following this wiring diagram
  • Check/set the configuration of each Acromag unit using the software on the Windows laptop
  • Set the analog channel calibrations in the EPICS database file
  • Test each channel ex situ. Chub and I discussed an idea to use two DB-37F breakout boards, with the wiring between the board terminals manually set. One DAC channel would be calibrated and driven to test other ADC channels. A similar approach could be used for the digital input/output channels.
Attachment 1: IMG_3136.jpg
Attachment 2: IMG_3138.jpg
Attachment 3: IMG_3137.jpg
  14496   Tue Mar 26 04:25:13 2019   Johannes   Update   Upgrade   c1susaux upgrade plan
Quote:

Main TODO items

  • Debug issue with Acromag DC power wiring
  • Complete wiring from chassis feedthroughs to Acromag terminals, following this wiring diagram
  • Check/set the configuration of each Acromag unit using the software on the Windows laptop
  • Set the analog channel calibrations in the EPICS database file
  • Test each channel ex situ. Chub and I discussed an idea to use two DB-37F breakout boards, with the wiring between the board terminals manually set. One DAC channel would be calibrated and driven to test other ADC channels. A similar approach could be used for the digital input/output channels.

Just a few remarks, since I heard from Gautam that c1susaux is next in line for upgrade.

All units have already been configured with IP addresses and settings following the scheme explained on the slow controls wiki page. I did this while powering the units in the chassis, so I'm not sure where the short is coming from. Is the power supply maybe not sourcing enough current? Powering all units at the same time takes significant current, something like >1.5 Amps if I remember correctly. These are the IPs I assigned before I left:

Acromag Unit IP Address
C1SUSAUX_ADC00 192.168.115.20
C1SUSAUX_ADC01 192.168.115.21
C1SUSAUX_ADC02 192.168.115.22
C1SUSAUX_ADC03 192.168.115.23
C1SUSAUX_ADC04 192.168.115.24
C1SUSAUX_ADC05 192.168.115.25
C1SUSAUX_ADC06 192.168.115.26
C1SUSAUX_ADC07 192.168.115.27
C1SUSAUX_ADC08 192.168.115.28
C1SUSAUX_ADC09 192.168.115.29
C1SUSAUX_DAC00 192.168.115.40
C1SUSAUX_DAC01 192.168.115.41
C1SUSAUX_DAC02 192.168.115.42
C1SUSAUX_DAC03 192.168.115.43
C1SUSAUX_BIO00 192.168.115.60
C1SUSAUX_BIO01 192.168.115.61
C1SUSAUX_BIO02 192.168.115.62

I used black/white twisted-pair wires for A/D, red/white for D/A, and green/white for BIO channels. I found it easiest to remove the blue terminal blocks from the Acromag units for doing the majority of the wiring, but wasn't able to finish it. I had also done the analog channel calibrations using the Windows utility, with multimeters and one of the precision voltage sources I had brought over from the Bridge labs, but it's probably a good idea to check them and correct if necessary. I also recommend checking that the existing wiring, particularly for MC1 and MC2, is correct, as I had swapped their order in the channel assignment in the past.

While looking through the database files I noticed two glaring mistakes which I fixed:

  1. The definition of C1SUSAUX_BIO2 was missing in /cvs/cds/caltech/target/c1susaux2/C1SUSAUX.cmd. I added it after the assignments for C1SUSAUX_BIO1
  2. Due to a copy/paste error, the database files /cvs/cds/caltech/target/c1susaux2/C1_SUS-AUX_<OPTIC>.db were still pointing to C1AUXEX. I overwrote all instances of this in all database files with C1SUSAUX.

 

  14497   Tue Mar 26 18:35:06 2019   Jon   Update   Upgrade   Modbus IOC is running on c1susaux2

Thanks to new info from Johannes, I was able to finish setting up the modbus IOC on c1susaux2. It turns out the 17 Acromags draw ~1.9 A, which is way more than I had expected. Hence the reason I had suspected a short. Adding a second DC supply in parallel solves the problem. There is no issue with the wiring.

With the Acromags powered on, I carried out the following:

  • Confirmed c1susaux2 can communicate with each Acromag at its assigned IP address
  • Modified the EPICS .cmd file to point to the local modbus installation (not the remote executable on /cvs/cds)
  • Debugged several IOC initialization errors. All were caused by minor typos in the database files.
  • Scripted the modbus IOC to launch as a systemd service (will add implementation details to the documentation page)

The modbusIOC is now running as a persistent system service, which is automatically launched on boot and relaunched after a crash. I'm able to access a random selection of channels using caget.

What's left now is to finish the Acromag-to-feedthrough wiring, then test/calibrate each channel.

  14498   Thu Mar 28 19:40:02 2019   gautam   Update   ALS   BeatMouth with NF1611s assembled

Summary:

The parts I was waiting for arrived. I finished the beat mouth assembly, and did some characterization. Everything looks to be working as expected.

Details:

Attachment #1: Photo of the front panel. I am short of two fiber mating sleeves that are compatible with PM fibers, but those are just for monitoring, so not critical to the assembly at this stage. I'll ask Chub to procure these.

Attachment #2: Photo of the inside of the BeatMouth. I opted to use the flexible RG-316 cables for all the RF interconnects. Rana said these aren't the best option; it remains to be seen whether interference between cables is an issue. If so, we can replace them with RG-58. I took the opportunity to give each fiber beam splitter its own spool, and cleaned all the fiber tips.

Attachment #3: Transfer function measurement. The PDFR setup behind 1X5/1X6 was used. I set the DC current to the laser to 30.0 mA (as read off the display of the current source), which produced ~400uW of light at the fiber coupled output of the diode laser. This was injected into the "PSL" input coupler of the BeatMouth, and so gets divided down to ~100 uW by the time it reaches the PDs. From the DC monitor values (~430mV), the light hitting the PDs is actually more consistent with 60uW, which is in agreement with the insertion loss of the fiber beamsplitters, and the mating sleeves.

The two responses seem reasonably well balanced (to within 20% - do we expect this to be better?). Even though judging by the DC monitor, there was more light incident on the Y PD than on the X PD, the X response was actually stronger than the Y. 

I also took the chance to do some other tests:

  • Inject light into the "X(Y)-ARM" input coupler of the Beat Mouth - confirmed that only the X(Y) NF1611's DC monitor output showed any change. The DC light level was ~1V in this condition, which again is consistent with expected insertion losses as compared to the "PSL" input case, there is 1 less fiber beamsplitter and mating sleeve.
  • Injected light into each of the input couplers, looked at the interior of the BeatMouth with an IR viewer for evidence of fiber damage, and saw none. Note that we are not doing anything special to dump the light at the unused leg of the fiber beamsplitter (which will eventually be a monitor port). Perhaps, nominally, this port should be dumped in some appropriate way.

Attachment #4: Dark Noise analysis. I used a ZHL-500-HLN+ to boost the PD's dark noise above the AG4395's measurement noise floor. The measured noise level seems to suggest either (i) the input-referred current noise of the PD circuitry is a little lower than the spec of 16 pA/rtHz (more like 13 pA/rtHz) or (ii) the transimpedance is lower than the spec of 700 V/A (more like 600 V/A). Probably some combination of the two. Seems reasonable to me.

Next steps:

The optical part of the ALS detection setup is now complete. The next step is to measure the ALS noise with this system. I will use the X arm for this purpose (I'd like to make the minor change of switching the existing resistive power splitter at the delay line to the newly acquired splitters, which have 3dB lower insertion loss).

Attachment 1: IMG_7381.JPG
Attachment 2: IMG_7382.JPG
Attachment 3: relTF_schem.pdf
Attachment 4: darkNoise.pdf
  14499   Thu Mar 28 23:29:00 2019   Koji   Update   SUS   Suspension PD whitening and I/F boards modified for susaux replacement

Now the sus PD whitening boards are ready to have the existing backplane connections moved to the lower row and the Acromag interface board plugged into the upper row.


Sus PD whitening boards on the 1X5 rack (D000210-A1) had slow and fast channels mixed in a single DIN96 connector. As we are going to use the rear-side backplane connector for Acromag access, we wanted to migrate the fast channels elsewhere. For this purpose, the boards were modified to duplicate the fast signals to the lower DIN96 connector.

The modification was done on the back layer of the board (Attachment 1).
The 28A~32A and 28C~32C pins of P1 are connected to the corresponding pins of P2 (Attachment 2). The connections were thoroughly checked with a multimeter.

After the modification the boards were returned to the same place of the crate. The cables, which had been identified and noted before disconnection, were returned to the connectors.

The functionality of the 40 (8 sus * 5 ch) whitening switches was confirmed using DTT one by one, by looking at the transfer functions from SUS LSC EXC to the PD input filter IN1. All the switches showed the proper whitening in the measurements.

The PD slow mon (like C1:SUS-XXX_xxPDMon) channels were also checked and they returned to the values before the modification, except for the BS UL PD. As the fast version of the signal returned to the previous value, the monitor circuit was suspect. Therefore the op-amp of the monitor channel (LT1125) was replaced, and the value came back to the previous value (Attachment 3).

 

Attachment 1: IMG_7474.JPG
Attachment 2: D000210_backplane.pdf
Attachment 3: Screenshot_from_2019-03-28_23-28-23.png
  14500   Fri Mar 29 11:43:15 2019   Jon   Update   Upgrade   Found c1susaux database bug

I found the current bias output channels, C1:SUS-<OPTIC>_<DOF>BiasAdj, were all pointed at C1:SUS-<OPTIC>_ULBiasSet for every degree of freedom. This same issue appeared in all eight database files (one per optic), so it looks like a copy-and-paste error. I fixed them to all reference the correct degree of freedom.

  14501   Fri Mar 29 15:47:58 2019   gautam   Update   AUX   AUX laser fiber moved from AS table to PSL table

[anjali, gautam]

To facilitate the 1um MZ frequency stabilization project, I decided that the AUX laser was a better candidate than any of the other 3 active NPROs in the lab as (i) it is already coupled into a ~60m long fiber, (ii) the PSL table has the most room available to set up the readout optics for the delayed/non-delayed beams and (iii) this way I can keep working on the IR ALS system in parallel. So we moved the end of the fiber from the AS table to the SE corner of the PSL table. None of the optics mode-matching the AUX beam to the interferometer were touched, and we do not anticipate disturbing the input coupling into the fiber either, so it should be possible to recover the AUX beam injection into the IFO relatively easily.

Anjali is going to post detailed photos, beam layout, and her proposed layout/MM solutions later today. The plan is to use free space components for everything except the fiber delay line, as we have these available readily. It is not necessarily the most low-noise option, but for a first pass, maybe this is sufficient and we can start building up a noise budget and identify possible improvements.

The AUX laser remains in STANDBY mode for now. The HEPA was turned up while working at the PSL table, and remains on high while Anjali works on the layout.

  14502   Fri Mar 29 21:00:06 2019   gautam   Update   ALS   BeatMouth with NF1611s installed
  • Newfocus 15V current limited supply was taken from bottom NE corner of the ITMY Oplev table to power the BeatMouth on the PSL table
  • BeatMouth was installed on top shelf on PSL table [Attachment #1].
  • Light levels in fibers were checked:
    • PSL: initially, only ~200uW / 4mW was coupled in. This was improved to 2.6mW/4mW (~65% MM, which was deemed sufficient for a first test) by tweaking the alignment of, and into, the collimator.
    • EX: ~900uW measured at the PSL table. I remember the incident power being ~1mW. So this is pretty good.
  • Fibers hooked up to BeatMouth:
    • EX light only, DC mon of X PD reads -2.1V.
    • With PSL light, I get -4.6 V.
    • For these numbers, with the DC transimpedance of 10kohm and the RF transimpedance of 700 ohm, I expect a beat of ~0dBm (a back-of-the-envelope check of this is sketched after this list).
  • DC light level stability is being monitored by a temporarily hijacked PSL NPRO diagnostic Acromag channel. The main motivation is to confirm that the alignment to the special axes of the PM fibers is still good and we aren't seeing large temperature-driven waveplate effects.
  • RF part of the circuit is terminated into 50ohms for now -
    • there is still a question as to what is the correct RF amplifier to use in sending the signal to the 1Y3 rack.
    • An initial RF beat power level measurement yielded -5dBm, which is inconsistent with the DC monitor voltages, but I'm not sure what frequency the beat was at, will make a more careful measurement with a scope or the network analyzer.
    • We want the RF pre-amp to be:
      • Low noise, keeping this in mind
      • High enough gain to boost the V/Hz discriminant of the electronic delay line
      • Not too high gain that we run into compression / saturate some of the delay line electronics - specifically, the LO input of the LSC demod board has a Teledyne amp in the signal chain, and so we need to ensure the signal level there is <16dBm (nominal level is 10dBm).
      • I'm evaluating options...
  • At 1Y3:
    • I pulled out the delay-line enclosure, and removed the (superglued) resistive power splitters with the help of some acetone
    • The newly acquired power splitters (ZAPD-2-252-S+) were affixed to the front panel, in which I made some mounting holes.
    • The new look setup, re-installed at 1Y3, is shown in Attachment #2.
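The ~0 dBm expectation quoted in the list above can be checked from the DC monitor readings, assuming perfect contrast and a 50-ohm load on the RF output:

# Back-of-the-envelope beat power from the DC monitor voltages above.
from math import sqrt, log10

Z_DC, Z_RF, R_LOAD = 10e3, 700.0, 50.0   # DC and RF transimpedances, load
I_ex  = 2.1 / Z_DC                        # A of photocurrent, EX light alone
I_psl = (4.6 - 2.1) / Z_DC                # A of photocurrent, PSL contribution

i_beat_pk = 2 * sqrt(I_ex * I_psl)        # A, assumes perfect contrast
v_rms     = Z_RF * i_beat_pk / sqrt(2)    # Vrms at the RF output
p_dbm     = 10 * log10((v_rms**2 / R_LOAD) / 1e-3)
print(f"expected beat ~ {p_dbm:.1f} dBm")   # comes out near 0 dBm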
Attachment 1: IMG_7384.JPG
Attachment 2: IMG_7385.JPG
  14503   Sun Mar 31 15:05:53 2019   gautam   Update   ALS   Fiber beam-splitters not PM

I looked into this a little more today.

  1. Looking at the beat signal between the PSL and EX beams from the NF1611 on a scope (50-ohm input), the signal Vpp was ~200 mV.
  2. In the time that I was poking about, the level dropped to ~150mVpp, which seemed suspicious.
  3. Thinking that this has to be related to the polarization mismatch between the interfering beams, I moved the input fibers (blue in Attachment #1) around, and saw the signal amplitude went up to 300mVpp, supporting my initial hypothesis.
  4. The question remains as to where the bulk of the polarization drift is happening. I had spent some effort making sure the input coupled beam to the fiber was well-aligned to one of the special axes of the fiber, and I don't think this will have changed since (i.e. the rotational orientation of the fiber axes relative to the input beam was fixed, since we are using the K6XS mounts with a locking screw for the input couplers). So I flexed the patch cables of the fiber beam splitters inside the BeatMouth, and saw the signal go as high as 700mVpp (the expected level given the values reported by the DC monitor).

This is a problem - such large shifts in the signal level means we have to leave sufficient headroom in the choice of RF amplifier gain to prevent saturation, whereas we want to boost the signal as much as possible. Moreover, this kind of operation of tweaking the fiber seating to increase the RF signal level is not repeatable/reliable. Options as I see it:

  1. Get a fiber BS that is capable of maintaining the beam polarization all the way through to the beat photodiode. I've asked AFW technologies (the company that made our existing fiber BS parts) if they supply such a device, and Andrew is looking into a similar component from Thorlabs.
    • These parts could be costly.
  2. Mix the beams in free space. We have the beam coming from EX to the PSL table, so once we mix the two beams, we can use either a fiber or free-space PD to read out the beatnote. 
    • This approach means we lose some of the advantages of the fiber based setup (e.g. frequent alignment of the free-space MM of the two interfering beams may be required).
    • Potentially increases sensitivity to jitter noise at the free-space/fiber coupling points
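For reference, a rough sketch of how the beat amplitude scales with the DC light levels and the polarization overlap. The DC monitor voltages below are placeholders (not the actual readings), chosen so that perfect overlap corresponds to the ~700 mVpp expected level:

    import numpy as np

    R_dc = 10e3          # DC transimpedance [ohm]
    Z_rf = 700.0         # RF transimpedance [ohm]
    V1, V2 = 2.5, 2.5    # placeholder DC monitor voltages for the two beams [V]

    def beat_vpp(V1, V2, overlap):
        """Peak-to-peak RF beat voltage for a given polarization overlap (0-1)."""
        I1, I2 = V1 / R_dc, V2 / R_dc              # inferred DC photocurrents
        i_beat = 2 * np.sqrt(I1 * I2) * overlap    # beat photocurrent amplitude
        return 2 * i_beat * Z_rf                   # Vpp at the RF output

    for eta in (1.0, 0.4, 0.2):
        print("overlap %.1f -> %.0f mVpp" % (eta, 1e3 * beat_vpp(V1, V2, eta)))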
Quote:
    • An initial RF beat power level measurement yielded -5dBm, which is inconsistent with the DC monitor voltages, but I'm not sure what frequency the beat was at, will make a more careful measurement with a scope or the network analyzer.
Attachment 1: IMG_7384.JPG
IMG_7384.JPG
  14504   Sun Mar 31 18:39:45 2019 AnjaliUpdateAUXAUX laser fiber moved from AS table to PSL table
  • Attachment #1 shows the schematic of the experimental setup for the frequency noise measurement of 1 um laser source.

  • The AUX laser will be used as the seed source and it is already coupled to a 60 m fiber (PM980), which serves as the delay line - a back-of-envelope sketch of the resulting delay and frequency discriminant is given after this list. The other end of the fiber was at the AS table and has now been moved to the PSL table.

  • Attachment #2 shows a photograph of the experimental setup. The orange line shows the beam that is coupled to the delayed arm of the MZI and the red dotted line shows the undelayed path.

  • As mentioned, the AUX laser is already coupled into the 60 m fiber and the other end of the fiber has now been moved to the PSL table. This end needs to be collimated. We plan to reuse the collimator from the AS table, where the fiber was previously coupled. The position where the collimator is to be installed is shown in Attachment #2. We also need to rotate the mirror indicated in Attachment #2 to bring the delayed beam alongside the undelayed beam and combine them. As indicated in Attachment #2, we can install one more photodiode to perform balanced detection.

  • We need to decide which photodetector to use. It could be the NF1801 or the PDA255.

  • We also measured the power at different locations in the beam path. The locations at which the power was measured are shown in Attachment #3.

  • There is an AOM in the beam path that feeds the delayed arm of the MZI. During this measurement the input voltage to the AOM driver was 0 V, so the beam was not deflected and exited through the zero-order port. The power levels measured at the different locations in this condition are: A) 282 mW, B) 276 mW, C) 274 mW, D) 274 mW, E) 273 mW, F) 278 mW, G) 278 mW, H) 261 mW, I) 263 mW, J) 260 mW, K) 131 mW, L) 128 mW, M) 127 mW, N) 130 mW.

  • The power is halved from J to K; this is because of a neutral density filter in the beam path.

  • In this condition, we measured 55 mW at the output of the delayed fiber. We then set the input voltage to the AOM driver to 1 V so that the AOM output is diverted to the first-order port. This reduced the power in the zero-order beam that is coupled to the delayed arm of the MZI, and we then measured 0.8 mW at the output of the delayed fiber.

  • We must be careful that the power reaching the photodetector does not exceed its damage threshold.

  • The power measured at the output of the undelayed path is 0.8 mW.

  • We also need to place a QWP and HWP in the beam path to align the polarization.
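For orientation, a back-of-envelope sketch of the delay and frequency discriminant provided by the 60 m fiber (the group index used is an assumed value):

    import math

    c = 299792458.0    # speed of light [m/s]
    n = 1.47           # assumed group index of the PM980 fiber
    L = 60.0           # delay-arm fiber length [m]

    tau  = n * L / c            # differential delay of the MZI, ~290 ns
    disc = 2 * math.pi * tau    # phase per unit laser frequency [rad/Hz]

    print("delay tau    = %.0f ns" % (tau * 1e9))
    print("discriminant = %.2e rad/Hz" % disc)
    print("1 kHz of frequency noise -> %.1e rad of phase" % (disc * 1e3))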

Quote:

[anjali, gautam]

To facilitate the 1um MZ frequency stabilization project, I decided that the AUX laser was a better candidate than any of the other 3 active NPROs in the lab as (i) it is already coupled into a ~60m long fiber, (ii) the PSL table has the most room available to set up the readout optics for the delayed/non-delayed beams and (iii) this way I can keep working on the IR ALS system in parallel. So we moved the end of the fiber from the AS table to the SE corner of the PSL table. None of the optics mode-matching the AUX beam to the interferometer were touched, and we do not anticipate disturbing the input coupling into the fiber either, so it should be possible to recover the AUX beam injection into the IFO relatively easily.

Anjali is going to post detailed photos, beam layout, and her proposed layout/MM solutions later today. The plan is to use free space components for everything except the fiber delay line, as we have these available readily. It is not necessarily the most low-noise option, but for a first pass, maybe this is sufficient and we can start building up a noise budget and identify possible improvements.

The AUX laser remains in STANDBY mode for now. HEPA was turned up while working at the PSL table, and remains on high while Anjali works on the layout.

 

Attachment 1: Schematic_of_experimental_setup_for_frequency_stabilisation_of_1_micron_source.png
Schematic_of_experimental_setup_for_frequency_stabilisation_of_1_micron_source.png
Attachment 2: 1_micron_setup_for_frequency_noise_measurement.JPG
1_micron_setup_for_frequency_noise_measurement.JPG
Attachment 3: 1_micron_setup_for_frequency_noise_measurement_power_levels.png
1_micron_setup_for_frequency_noise_measurement_power_levels.png
  14505   Mon Apr 1 12:01:52 2019 JonUpdateCDS 

I brought c1susaux back online this morning for suspension-channel test scripting. It had been dead for some time. I followed the procedure outlined in #12542. ITMY became stuck during this process, which Gautam tells me always happens since the last vacuum access, but ITMX is not stuck.

  14506   Mon Apr 1 22:33:00 2019 gautamUpdateCDSITMY freed

While Anjali is working on the 1um MZ setup, the pesky ITMY was liberated from the OSEMs. The "algorithm" (a rough scripted equivalent is sketched after the list):

  • Apply a large (-30000 cts) offset to the side coil using the fast system.
  • Approach the zero of the YAW DoF from -2.00V, PIT from +10V (you'll have to jiggle the offsets until the optic is free swinging, and then step the bias down by 0.1). At this point I had the damping off.
  • Once the PIT bias slider reaches -4V, I engaged all damping loops, and brought the optic to its nominal bias position under damping. 
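A rough scripted equivalent of the above, purely for illustration - the channel names below are written from memory and must be verified against the SUS MEDM screens, and the optic should be watched on the OSEM sensor signals while anything like this runs:

    import time
    from epics import caput    # pyepics, run from a control room workstation

    SIDE_OFFSET = 'C1:SUS-ITMY_SDCOIL_OFFSET'    # assumed channel name
    PIT_BIAS    = 'C1:SUS-ITMY_PIT_COMM'         # assumed channel name

    caput(SIDE_OFFSET, -30000)     # large side-coil offset via the fast system

    bias = 10.0                    # start PIT from +10 V, damping off
    while bias > -4.0:
        caput(PIT_BIAS, bias)      # step the bias slider down in 0.1 V increments
        bias -= 0.1
        time.sleep(2)              # watch the OSEM sensors between steps
    # once the optic swings freely near -4 V, engage the damping loops and
    # bring it to the nominal bias position under damping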

While doing this work, I noticed several errors corresponding to EPICS channel conflicts. Turns out the c1susaux2 EPICS server was left running, and the MEDM screens (and possibly several scripts) were confused. There has to be some other way of testing the new crate, on an isolated network or something - please do not leave the modbus service running as it potentially interferes with normal IFO operation. For good measure, I stopped the process and shut down the machine since I saw nothing in the elog about any running tests.

Quote:

ITMY became stuck during this process

  14507   Tue Apr 2 14:53:57 2019 gautamUpdateCDSc1vac added to burt

I deleted references to c1vac1 and c1vac2 (which no longer exist) and added c1vac to the autoburt request file list at /opt/rtcds/caltech/c1/burt/autoburt/requestfilelist

  14508   Tue Apr 2 15:02:53 2019 JonUpdateCDSITMY freed

I renamed all channels on c1susaux2 from "C1:SUS-..." to "C1:SUS2-..." to avoid contention. When the new system is ready to install, those channel names can be reverted with a quick search-and-replace edit.
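When the time comes to revert, the search-and-replace could look something like this sketch (the location of the EPICS .db files on c1susaux2 is an assumption):

    import glob

    # Path to the c1susaux2 database files is a guess -- adjust as needed
    for fname in glob.glob('/cvs/cds/caltech/target/c1susaux2/*.db'):
        with open(fname) as f:
            text = f.read()
        with open(fname, 'w') as f:
            f.write(text.replace('C1:SUS2-', 'C1:SUS-'))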

Quote:

While doing this work, I noticed several errors corresponding to EPICS channel conflicts. Turns out the c1susaux2 EPICS server was left running, and the MEDM screens (and possibly several scripts) were confused. There has to be some other way of testing the new crate, on an isolated network or something - please do not leave the modbus service running as it potentially interferes with normal IFO operation. For good measure, I stopped the process and shut down the machine since I saw nothing in the elog about any running tests.

  14509   Tue Apr 2 18:40:01 2019 gautamUpdateVACVac failure

While glancing at my Vacuum striptool, I noticed that the IFO pressure is 2e-4 torr. There was an "AC power loss" reported by c1vac about 4 hours ago (14:07 local time). We are investigating. I closed the PSL shutter.


Jon and I investigated at the vacuum rack. The UPS was reporting a normal status ("On Line"). Everything looked normal so we attempted to bring the system back to the nominal state. But TP2 drypump was making a loud rattling noise, and the TP2 foreline pressure was not coming down at a normal rate. We wonder if the TP2 drypump has somehow been damaged - we leave it for Chub to investigate and give a more professional assessment of the situation and what the appropriate course of action is.

The PSL shutter will remain closed overnight, and the main volume and annuli are valved off. We spun up TP1 and TP3 and decided to leave them on (but they have negligible load).

Attachment 1: vacFail.png
vacFail.png
  14510   Wed Apr 3 09:04:01 2019 gautamUpdateALSNote about new fiber couplers

The new fiber beam splitters we are ordering, PFC-64-2-50-L-P-7-2-FB-0.3W, have the slow axis working and the fast axis blocked. The way the light is currently coupled into the fibers maximizes the amount of light in the fast axis. So we will have to do a 90deg rotation if we use that part. Probably the easiest thing to do is to put a HWP immediately before the free-space-to-fiber collimator.

Update 6pm: They have an "SB" version of the part with the slow axis blocked and fast axis enabled, same price, so I'll ask Chub to get it.

  14511   Wed Apr 3 09:07:46 2019 gautamUpdateVACVac failure

Overnight pressure trends don't suggest anything went awry after the initial interlock trip. A watchdog script that monitors the vacuum pressure and closes the PSL shutter when the pressure exceeds some threshold needs to be implemented (a minimal sketch follows). Another pending task is to make sure that the backup disk for c1vac is actually bootable and is a plug-and-play replacement.
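A minimal sketch of what such a watchdog could look like (assuming pyepics; the main-volume pressure channel name is a guess and should be checked against the c1vac database):

    import time
    from epics import caget, caput    # pyepics

    P_MAIN   = 'C1:Vac-P1a_pressure'     # main-volume gauge channel (name assumed)
    SHUTTER  = 'C1:AUX-PSL_ShutterRqst'  # PSL shutter request channel
    P_THRESH = 3e-3                      # torr; close the shutter above this

    while True:
        p = caget(P_MAIN)
        if p is not None and p > P_THRESH:
            caput(SHUTTER, 0)            # close the shutter; reopening stays manual
        time.sleep(5)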

Attachment 1: vacFailOvernight.png
vacFailOvernight.png
  14512   Wed Apr 3 10:42:36 2019 gautamUpdateVACTP2 forepump replaced

Bob and Chub concluded that the drypump that serves as TP2's forepump had failed. Steve had told me the whereabouts of a spare Agilent IDP-7. This was meant to be a replacement for the TP3 foreline pump when it failed, but we decided to swap it in while diagnosing the failed drypump (which had 2182 hours continuous running according to the hour counter). Sure enough, the spare pump spun up and the TP2fl pressure dropped at a rate consistent with what is expected. I was then able to spin up TP1, TP2 and TP3. 

However, when opening V4 (the foreline of TP1 pumped by TP2), I heard a loud repeated click track (~5Hz) from the electronics rack. Shortly after, the interlocks shut down all the TPs again, citing "AC power loss". Something is not right, I leave it to Jon and Chub to investigate.

  14513   Wed Apr 3 12:32:33 2019 KojiUpdateALSNote about new fiber couplers

Andrew seems to have an integrated solution of PBS+HWP in a single mount. Or, I wonder if we should use a HWP/QWP before the coupler. I am interested in a general solution for this problem in my OMC setup too.

  14514   Wed Apr 3 16:17:17 2019 JonUpdateVACTP2 forepump replaced

I can't explain the mechanical switching sound Gautam reported. The relay controlling power to the TP2 forepump is housed in the main AC relay box under the arm tube, not in the Acromag chassis, so it can't be from that. I've cycled through the pumpdown sequence several times and can't reproduce the effect. The Acromag switches for TP2 still work fine.

In any case, I've made modifications to the vacuum interlocks that will help with two of the issues:

  1. For the "AC power loss" over-triggering: New logic added requiring the UPS to be out of the "on line power, battery OK" state for ~5 seconds before tripping the interlock. This will prevent electrical transients from triggering an emergency shutdown, as seems to have been the case here (the UPS briefly isolates the load to battery during such events). A minimal sketch of this debounce logic is given after this list.
  2. PSL interlocking: New logic added which directly sets C1:AUX-PSL_ShutterRqst --> 0 (closes the PSL shutter) when the main volume pressure is in the range 3 mtorr-500 torr. Previously there was a channel exposed for this interlock (C1:Vac-interlock_high_voltage), but c1aux was not actually monitoring it. Following the convention of every vac interlock, after the PSL shutter has been closed, it has to be manually reopened. Once the pressure is out of this range, the vac system will stop blocking the shutter from reopening, but it will not perform the reopen action itself. gautam: a separate interlock logic needs to be implemented on c1aux (the shutter machine) that only permits the shutter to be opened if the vac pressure is in the okay range. The SUS watchdog style AND logic in the EPICS database file should work just fine.
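A minimal sketch of the debounce logic in item 1 (illustrative only - the two functions passed in are placeholders, not the actual interlock service code):

    import time

    NOMINAL = 'on line power, battery OK'
    HOLDOFF = 5.0    # seconds the UPS must stay abnormal before tripping

    def watch_ups(read_ups_status, trip_interlock):
        """Trip the interlock only if the abnormal UPS state persists for HOLDOFF s."""
        bad_since = None
        while True:
            if read_ups_status() == NOMINAL:
                bad_since = None                 # transient cleared, reset the timer
            elif bad_since is None:
                bad_since = time.time()          # start timing the abnormal state
            elif time.time() - bad_since > HOLDOFF:
                trip_interlock()                 # sustained power loss -> shut down
                return
            time.sleep(1)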

After finishing this vac work, I began a new pumpdown at ~4:30pm. The pressure fell quickly and has already reached ~1e-5 torr. TP2 current and temp look fine.

Quote:

However, when opening V4 (the foreline of TP1 pumped by TP2), I heard a loud repeated click track (~5Hz) from the electronics rack. Shortly after, the interlocks shut down all the TPs again, citing "AC power loss". Something is not right, I leave it to Jon and Chub to investigate.

Attachment 1: IMG_3180.jpg
IMG_3180.jpg
  14515   Wed Apr 3 18:35:54 2019 gautamUpdateVACPSL shutter re-opened

PSL shutter was re-opened at 6pm local time. IMC was locked. As of 10pm, the main volume pressure is already back down to the 8e-6 level.
