ID   Date   Author   Type   Category   Subject
  14422   Tue Jan 29 22:12:40 2019   gautam   Update   SUS   Alignment prep

Since we may want to close up tomorrow, I did the following prep work:

  1. Cleaned up Y-end suspension electronics setup, connected the Sat Box back to the flange
    • The OSEMs are just sitting on the table right now, so they are just seeing the fully open voltage
    • Post filter insertion, the four face OSEMs report ~3-4% lower open-voltage values compared to before, which is compatible with the transmission spec for the filters (T>95%)
    • The side OSEM is reporting ~10% lower - perhaps I just didn't put the filter on right, something to be looked at inside the chamber
  2. Suspension watchdog restoration
    • I'd shut down all the watchdogs during the Satellite box debacle
    • However, I left ITMY, ETMY and SRM tripped as these optics are EQ-stopped / don't have the OSEMs inserted.
  3. Checked IMC alignment
    • After some hand-alignment of the IMC, it was locked; transmission is ~1200 counts, which is what I remember it being
  4. Checked X-arm alignment
    • Strictly speaking, this has to be done after setting the Y-arm alignment as that dictates the input pointing of the IMC transmission to the IFO, but I decided to have a quick look nevertheless
    • Surprisingly, the ITMX damping doesn't seem to be working very well - the optic is clearly swinging around a lot, and the shadow sensor RMS voltage is ~10s of mV, whereas for all the other optics it is ~1 mV.
    • I'll try the usual cable squishing voodoo

Rather than rush to close up tomorrow, I propose spending the day tomorrow cleaning the peripheral areas of the optic, suspension cage, and chamber. Then on Thursday morning, we can replace the Y-arm optics, try to recover the cavity alignment, and then aim for a Thursday afternoon pumpdown. The main motivation is to reduce the time the optics spend in air between F.C. peeling and going to vacuum.

  14423   Wed Jan 30 11:54:24 2019   gautam   Update   SUS   More alignment prep

[chub, gautam]

  1. ETMY cage was wiped down
    • Targeted potential areas where dust could drift off from and get attracted to a charged HR surface
    • These areas were surprisingly dusty, even left a grey mark on the wipe [Attachment #1] - we think we did a sufficiently thorough job, but unclear if this helps the loss numbers
    • More pictures are on gPhoto
  2. Filters on SD and LR OSEMs were replaced - the open shadow sensor voltages with filters in/out are consistent with the T>95% coating spec.
  3. IPANG beam position was checked 
    • It is already too high, missing the first steering optic by ~0.5 inch, not the greatest photo but conclusion holds [Attachment #2].
    • I think we shouldn't worry about it for this pumpdown, we can fix it when we put in the new PR3.
  4. Cage wiping procedure was repeated on ITMY
    • The cage was much dustier than ETMY
    • However, the optic itself (barrel and edge of HR face) was cleaner
    • All accessible areas were wiped with isopropanol
    • Before/after pics are on gPhoto (even after cleaning, there are some marks on the suspension that look like dust, but these are machining marks)

Procedure tomorrow [comments / suggestions welcome]:

  1. Start with IY chamber
    • Peel first contact with TopGun jet flowing
    • Inspect optic face with green flashlight to check for residual First Contact
    • Replace ITMY suspension cage in its position, clamp it down
    • Release ITMY from its EQ stops
    • Replace OSEMs in ITMY cage, best effort to recover previous alignment of OSEMs in their holders (I have a photo before removal of OSEMs), which supposedly minimized the coupling of the B-R modes into the shadow sensor signals
    • Best effort to have shadow sensor PD outputs at half their fully open voltages (with DC bias voltage applied)
    • Quick check that we are hitting the center of the ITM with the alignment tool
    • Check that the Oplev HeNe is reasonably centered on steering mirrors
    • Tie down OSEM cabling to the ITMY cage with clean copper wire
    • Replace the OSEM wiring tower
    • Release the SRM from its EQ stops
    • Check table leveling
    • Take pictures of everything, check that we have not left any tools inside the chamber
    • Heavy doors on
  2. Next, EY chamber
    • Repeat first seven bullets from the IY chamber, :%s/ITMY/ETMY/g
    • Confirm sufficient clearance between IFO beam axis and the elliptical reflector
    • Check Oplev beam path
    • Check table leveling
    • Take pictures of everything, check that we have not left any tools inside the chamber
    • Heavy doors on
  3. IFO alignment checks - basically follow the wiki, we want to be able to lock both arms (or at least see TEM00 resonances), and see that the PRC and SRC mode flashes look reasonable.
  4. Tighten all heavy doors up
  5. Pump down

All photos have been uploaded to google photos.

Attachment 1: IMG_5958.JPG
Attachment 2: IMG_5962.JPG
  14424   Wed Jan 30 19:25:40 2019   gautam   Update   SUS   Xarm cavity alignment

Squishing cables at the ITMX satellite box seems to have fixed the wandering ITM that I observed yesterday - the sooner we are rid of these evil connectors the better.

I had changed the input pointing of the green injection from EX to mark a "good" alignment of the cavity axis, so I used the green beam to try and recover the X arm alignment. After some tweaking of the ITM and ETM angle bias voltages, I was able to get good GTRX values [Attachment #1], and also see clear evidence of (admittedly weak) IR resonances in TRX [Attachment #2]. I can't see the reflection from ITMX on the AS camera, but I suspect this is because the ITMY cage is in the way. This will likely have to be redone tomorrow after setting the input pointing for the Y arm cavity axis, but hopefully things will converge faster and we can close up sooner. Closing the PSL shutter for now...


I also rebooted the unresponsive c1susaux to facilitate the alignment work tomorrow.

Attachment 1: Xarm.png
Attachment 2: Xarm_IR.png
  14425   Fri Feb 1 01:24:06 2019   gautam   Update   SUS   Almost ready for pumpdown tomorrow

[koji, chub, jon, rana, gautam]

Full story tomorrow, but we went through most of the required pre close-up checks/tasks (i.e. both arms were locked, PRC and SRC cavity flashes were observed). Tomorrow, it remains to 

  1. Confirm clearance between elliptical reflector and ETMY
  2. Confirm leveling of ETMY table
  3. Take pics of ETMY table
  4. Put heavy door on ETMY chamber
  5. Pump down

The ETMY suspension chain needs to be re-characterized (neither the old settings, nor a +/- 1 gain setting worked well for us tonight), but this can be done once we are back under vacuum.

  14426   Fri Feb 1 13:16:50 2019   gautam   Update   SUS   Pumpdown 83 underway

[chub, bob, gautam]

  1. Steps described in previous elog were carried out
  2. EY heavy door was put on at about 1130am.
  3. Pumpdown commenced at ~noon. We are going down at ~3 torr/min.
  14427   Fri Feb 1 14:44:14 2019   gautam   Update   SUS   Y arm FC cleaning and reinstall

[Attachment #1]: ITMY HR face after cleaning. I determined this to be sufficiently clean and re-installed the optic.

[Attachment #2]: ETMY HR face after cleaning. This is what the HR face looks like after 3 rounds of First-Contact application. After the first round, we noticed some arc-shaped lines near the center of the optic's clear aperture. We were worried this was a scratch, but we now believe it to be First-Contact residue, because we were able to remove it after drag wiping with acetone and isopropanol. However, we mistrust the quality of the solvents used - they are not any special dehydrated kind, and we are looking into acquiring some dehydrated solvents for future cleaning efforts.

[Attachment #3]: Top view of ETMY cage meant to show increased clearance between the IFO axis and the elliptical reflector.

Many more photos (including table leveling checks) on the google-photos page for this vent. The estimated time between F.C. peeling and pumpdown is ~24 hours for ITMY and ~15 hours for ETMY, but for the former, the heavy doors were put on ~1 hour after the peeling.

The first task is to fix the damping of ETMY.

Attachment 1: IMG_5974.JPG
Attachment 2: IMG_5986.JPG
Attachment 3: IMG_5992.JPG
  14428   Fri Feb 1 21:52:57 2019   gautam   Update   SUS   Pumpdown 83 underway

[jon, koji, gautam]

  1. IFO is at ~1 mtorr, but pressure is slowly rising because of outgassing presumably (we valved off the turbos from the main volume)
  2. Everything went smoothly -
    • 760 torr to 500 mtorr took ~7 hours (we deliberately kept a slow pump rate)
    • TP3 current was found to rise above 1 A easily as we opened RV2 during the turbo pumping phase, particularly in going from 500 mtorr to 10 mtorr, so we just ran TP2 more aggressively rather than change the interlock condition.
    • The pumpspool is isolated from the main volume - TP1-3 are running (TP2 and TP3 are in Standby mode) but are only exposed to the small pumpspool and RGA volumes.
    • RP1 and RP3 were turned off, and the manual roughing line was disconnected.
    • We will resume the pumping on Monday.

I'm leaving all suspension watchdogs tripped over the weekend as part of the suspension diagonalization campaign...

  14429   Sat Feb 2 21:53:24 2019   Koji   Update   VAC   overnight leak rate

The pressure of the main volume increased from ~1 mtorr to 50 mtorr over the past 24 hours (86 ksec). This rate is about x1000 of the number reported on Jan 10. Do we suspect a vacuum leak?
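
For reference, a minimal sketch of the arithmetic behind the x1000 comparison (assuming the 33,000 liter IFO volume quoted in the Jan 10 entry below):

# Sketch of the leak-rate arithmetic, not a measurement script.
V_ifo = 33e3    # liters, IFO volume taken from the quoted Jan 10 entry

# Jan 10 reference: 247 uTorr -> 264 uTorr over 30,000 s
rate_jan10 = (264e-6 - 247e-6) * V_ifo / 30e3     # ~1.9e-5 torr*L/s (~20 uTorr*L/s)

# This weekend: ~1 mtorr -> 50 mtorr over ~86,400 s
rate_feb02 = (50e-3 - 1e-3) * V_ifo / 86.4e3      # ~1.9e-2 torr*L/s

print(rate_feb02 / rate_jan10)                    # ~1000x, as quoted above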

Quote:

Overnight, the pressure increased from 247 uTorr to 264 uTorr over a period of 30000 seconds. Assuming an IFO volume of 33,000 liters, this corresponds to an average leak rate of ~20 uTorr L / s.

 

Attachment 1: Screen_Shot_2019-02-02_at_21.49.33.png
  14430   Sun Feb 3 15:15:21 2019   gautam   Update   VAC   overnight leak rate

I looked into this a bit today. Did a walkthrough of the lab, didn't hear any obvious hissing (makes sense, that presumably would signal a much larger leak rate).

Attachment #1: Data from the 30 ksec we had the main vol valved off on Jan 10, but from the gauges we have running right now (the CC gauges have not had their HV enabled yet so we don't have that readback).

Attachment #2: Data from ~150 ksec from Friday night till now.

Interpretation: The number quoted from Jan 10 is from the cold-cathode gauge (~20 utorr increase). In the same period, the Pirani gauge reports an increase of ~5 mtorr (=250x the number reported by the cold-cathode gauge). So which gauge do we trust more in this regime? Additionally, the rate at which the annuli pressures are increasing seems consistent between Jan 10 and now, at ~100 mtorr every 30 ksec.

I don't think this is conclusive, but at least the leak rates between Jan 10 and now don't seem that different for the annuli pressures. Moreover, for the Jan 10 pumpdown, we had the IFO at low pressure for several days over the Christmas break, which presumably gave time for some outgassing that was then cleaned up by the TPs on Jan 10, whereas for this current pumpdown, we don't have that luxury.

Do we want to do a systematic leak check before resuming the pumpdown on Monday? The main differences in vacuum I can think of are

  1. Two pieces of Kapton tape are now in the EY chamber.
  2. Possible residue from cleaning solvents in the IY and EY chambers is still outgassing.

This entry by Steve says that the "expected" outgassing rate is 3-5 mtorr per day, which doesn't match either the current observation or that from Jan 10.

Attachment 1: Jan10_data.png
Attachment 2: Feb1_data.png
  14431   Sun Feb 3 20:52:34 2019   Koji   Update   VAC   overnight leak rate

We can pump down (or vent) the annuli. If the leak is between the main volume and the annuli, we will be able to see the effect on the leak rate. If it is a leak of an outer o-ring, again, pumping down (or venting) the annuli should temporarily decrease (or increase) the leak rate..., I guess. If the leak rate does not depend on the pressure of the annuli, we can conclude that it is internal outgassing.

  14432   Mon Feb 4 12:23:24 2019   gautam   Update   VAC   pumpdown 83 - leak tests

[koji, gautam]

As planned, we valved off the main volume and the annuli from the turbo-pumps at ~730 PM PST. At this time, the main volume pressure was 30 uTorr. It started rising at a rate of ~200 uTorr/hr, which translates to ~5 mtorr/day, which is in the ballpark of what Steve said is "normal". However, the calibration of the Hornet gauge seems to be piecewise-linear (see Attachment #1), so we will have to observe overnight to get a better handle on this number.

We decided to vent the IY and EY chamber annular volumes, and check if this made any dramatic change in the main volume pressure increase rate, which would presumably signal a leak from the outside. However, we saw no such increase - so right now, the working hypothesis is still that the main volume pressure increase is being driven by outgassing of something inside the vacuum.

Let's leave things in this state overnight - V1 and V5 closed so that neither the main volume nor the annuli are being pumped, and get some baseline numbers for what the outgassing rate is.

Attachment 1: PD83.png
  14433   Mon Feb 4 20:13:39 2019   gautam   Update   SUS   ETMY suspension oddness

I looked at the free-swinging sensor data from two nights ago, and am struggling with the interpretation. 

[Attachment #1] - Fine resolution spectral densities of the 5 shadow sensor signals (y-axis assumes 1ct ~1um). The puzzling feature is that there are only 3 resonant peaks visible around the 1 Hz region, whereas we would expect 4 (PIT, YAW, POS and SIDE). afaik, Lydia looked into the ETMY suspension diagonalization last, in 2016. Compared to her plots (which are in the Euler basis while mine are in the OSEM basis), the ~0.73 Hz peak is nowhere to be seen. I also think the frequency resolution (<1 mHz) is good enough to be able to resolve two closely spaced peaks, so it looks like due to some reason (mechanical or otherwise), there are only 3 independent modes being sensed around 1 Hz.
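
(For anyone repeating this, a minimal sketch of how such a sub-mHz-resolution spectrum can be computed and peak-searched. The time series here is synthetic - the real data would be fetched from frames/NDS - and the sample rate, decay times and mode frequencies are placeholders:)

import numpy as np
from scipy.signal import welch, find_peaks

# Synthetic stand-in for a free-swinging shadow sensor record: three damped ~1 Hz modes.
fs = 16.0                                   # Hz, assumed sample rate
t = np.arange(0, 40000, 1 / fs)             # ~11 hours of data
x = sum(np.exp(-t / 3000) * np.cos(2 * np.pi * f0 * t)
        for f0 in (0.75, 0.95, 1.00)) + 0.01 * np.random.randn(t.size)

nperseg = int(4096 * fs)                    # ~4096 s segments -> ~0.24 mHz resolution
f, Pxx = welch(x, fs=fs, nperseg=nperseg, window='hann')
asd = np.sqrt(Pxx)                          # cts/rtHz (~um/rtHz if 1 ct ~ 1 um)

band = (f > 0.4) & (f < 1.3)                # window around the ~1 Hz pendulum modes
peaks, _ = find_peaks(asd[band], prominence=5 * np.median(asd[band]))
print(f[band][peaks])                       # frequencies of the resolved resonances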

[Attachment #2] - Koji arrived and we looked at some transfer functions to see if we could make sense of all this. During this investigation, we also came to think that the UL coil actuator electronics chain has some problem. This test was done by driving the individual coils and looking for the 1/f^2 pendulum transfer function shape in the Oplev error signals. The ~4 dB difference between UR/LL and LR is due to a gain imbalance in the coil output filter bank; once we have solved the other problems, we can reset the individual coil balancing using this measurement technique.

[Attachment #3] - Downsampled time-series of the data used to make Attachment #1. The ringdown looks pretty clean, I don't see any evidence of any stuck magnets looking at these signals. The X-axis is in kilo-seconds.

We found that the POS and SIDE local damping loops do not result in instability building up. So one option is to use only Oplevs for angular control, while using shadow-sensor damping for POS and SIDE.

Attachment 1: ETMY_sensors_1_Feb_2019_2230_PST.pdf
Attachment 2: ETMY_UL.pdf
Attachment 3: ETMY_sensors_timeDomain.pdf
  14434   Tue Feb 5 10:11:30 2019   gautam   Update   VAC   leak tests complete, pumpdown 83 resumed

I guess we forgot to close V5, so we were indeed pumping on the ITMY and ETMY annuli, but the other three were isolated - they suggest a leak rate of ~200-300 mtorr/day, see Attachment #1 (consistent with my earlier post).

As for the main volume - according to CC1, the pressure saturates at ~250 uTorr and is stable, while the Pirani P1a reports ~100x that pressure. I guess the cold-cathode gauge is supposed to be more accurate at low pressures, but how well do we believe the calibration on either gauge? Either way, based on last night's test (see Attachment #2), we can set an upper limit of 12 mtorr/day. This is 2-3x the number Steve said is normal, but perhaps this is down to the fact that the outgassing from the main volume is higher immediately after a vent and in-chamber work. It is also a 5x lower rate of pressure increase than what was observed on Feb 2.

I am resuming the pumpdown with the turbo-pumps; let's see how long we take to get down to the nominal operating pressure of 8e-6 torr - it usually takes ~1 week. V1, VASV, VASE and VABS were opened at 1030am PST. Per Chub's request (see #14435), I ran RP1 and RP3 for ~30 seconds; he will check if the oil level has changed.

Quote:
 

Let's leave things in this state overnight - V1 and V5 closed so that neither the main volume nor the annuli are being pumped, and get some baseline numbers for what the outgassing rate is.

Attachment 1: Annuli.png
Attachment 2: MainVol.png
  14435   Tue Feb 5 10:22:03 2019   chub   Update   oil added to RP-1 & 3

I added lubricating oil to roughing pumps RP1 and RP3 yesterday and this morning.  Also, I found a nearly full 5 gallon jug of grade 19 oil in the lab.  This should set us up for quite a while.  If you need to add oil to the roughing pumps, use the oil in the quart bottle in the flammables cabinet.  It is labeled as Leybold HE-175 Vacuum Pump Oil.  This bottle is small enough to fill the pumps in close quarters.

  14436   Tue Feb 5 19:30:14 2019   gautam   Update   VAC   Main volume at 20 uTorr

Pumpdown looks healthy, so I'm leaving the TPs on overnight. At some point, we should probably get the RGA going again. I don't know that we have a "reference" RGA trace that we can compare the scan to, should check with Steve. The high power (1 W) beam has not yet been sent into the vacuum, we should probably add the interlock condition that shuts off the PSL shutter before that.

Attachment 1: PD83.png
  14437   Wed Feb 6 10:07:23 2019   Chub   Update   pre-construction inspection

The Central Plant building will be undergoing seismic upgrades in the near future.  The adjoining north wall along the Y arm will be the first to have this work done, from inside the Central Plant.  Project manager Eugene Kim has explained the work to me and also noted our concerns.  He assured me that the seismic noise from the construction will be minimized and we will always be contacted when the heaviest construction is to be done.

Tomorrow at 11am, I will bring Mr. Kim and a few others from the construction team to look at the wall from inside the lab.  If you have any questions or concerns that you want to have addressed, please email them to me or contact Mr. Kim directly at x4860 or through email at eugene.kim@caltech.edu . 

  14438   Thu Feb 7 13:55:25 2019   gautam   Update   VAC   RGA turned on

[chub, steve, gautam]

Steve came by the lab today. He advised us to turn the RGA on again, now that the main volume pressure is < 20 uTorr. I did this by running the RGAset.py script on c0rga - the temperature of the unit was 22 C in the morning; after ~3 hours of the filament being turned on, it has already risen to 34 C. Steve says this is normal. We also opened VM1 (I had to edit the interlocks.yaml to allow VM1 to open when CC1 < 20uTorr instead of 10uTorr), so that the RGA volume is exposed to the main volume. So the nightly scans should run now; Steve suggests ignoring the first few while the pumpdown is still reaching nominal pressure. Note that we probably want to migrate all the RGA stuff to the new c1vac machine.

Other notes from Steve:

  • RP1 and RP3 should have their oil fully changed (as opposed to just topped up)
  • VABSSCI and VABSSCO are NOT vent valves; they isolate the annuli of the IOO and OMC chambers from the BS chamber annuli. So next time we vent, we should fix this!
  • Leak rate of 3-5 mTorr/day is "normal" once the system has been pumped for a few days. Steve agrees that our observations of the main volume pressure increase is expected, given that we were at atmosphere.
  • Regarding the upcoming CES construction
    • Steve recommends keeping the door along the east arm, as it is useful for bringing equipment into the lab (end door access is limited because of end optical tables)
    • Particle counter data logging should be resumed before the construction starts, so that we can monitor if the lab is getting dirtier
  • OSEM filters (new ones, i.e. made according to specs in D000209) are in the Clean Cabinet (EX). They are individually packaged in little capsules, see Attachment #1. So the ones I installed were actually of 2002 vintage. We have 50 pcs, enough to install new ones on all the core optics + spares.
  14440   Thu Feb 7 19:28:46 2019   gautam   Update   VAC   IFO recovery

[rana, gautam]

The full 1 W is again being sent into the IMC. We have left the PBS+HWP combo installed as Rana pointed out that it is good to have polarization control after the PMC but before the EOM. The G&H mirror setup used to route a pickoff of the post-EOM beam along the east edge of the PSL table to the AUX laser beat setup was deemed too flaky and has been bypassed. Centering on the steering mirror and subsequently the IMC REFL photodiode was done using an IR viewer - this technique allows one to geometrically center the beam on the steering mirror and PD, to the resolution of the eye, whereas the voltage maximization technique using the monitor port and an o'scope doesn't guarantee geometric centering. Nominal IMC transmission of ~15,000 counts has been recovered, and the IMC REFL level is also around 0.12, consistent with the pre-vent levels.

  14441   Thu Feb 7 19:34:18 2019   gautam   Update   SUS   ETMY suspension oddness

I did some tests of the electronics chain today.

  1. Drove a sine-wave using awggui to the UL-EXC channel, and monitored using an o'scope and a DB25 breakout board at J1 of the satellite box, with the flange cable disconnected - while driving 3000 cts amplitude signal, I saw a 2 Vpp signal on the scope, which is consistent with expectations.
  2. Checked resistances of the pin pairs corresponding to the OSEMs at the flange end using a breakout board - all 5 pairs read out ~16-17 ohms.
  3. Rana pointed out that the inductance is the unambiguous FoM here: all coils measured between 3.19 and 3.3 mH according to the LCR meter...

This led me to hypothesize a bad connection between the sat box output J1 and the flange connection cable. Indeed, measuring the OSEM inductance from the DSUB end at the coil-driver board, the UL coil pins showed no inductance reading on the LCR meter, whereas the other 4 coils showed numbers between 3.2-3.3 mH. Suspecting the satellite box, I swapped it out for the spare (S/N 100). This seemed to do the trick: all 5 coil channels read out ~3.3 mH on the LCR meter when measured from the coil driver board end. What's more, the damping behavior seemed more predictable - in fact, Rana found that all the loops were heavily overdamped. For our suspensions, I guess we want the loops to be close to critically damped - overdamping imparts excess displacement noise to the optic, while underdamping doesn't work either. In past elogs, I've seen a directive to aim for Q~5 for the pendulum resonances, so when someone does a systematic investigation of the suspensions, this will be something to look out for. These flaky connectors are proving pretty troublesome; let's start testing some prototype new Sat Boxes with a better connector solution. I think it's equally important to have a properly thought out monitoring connector scheme, so that we don't have to frequently plug/unplug connectors in the main electronics chain, which may lead to wear and tear.
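
(Returning to the damping target: a minimal sketch, with assumed numbers, of what Q~5 implies for the ring-down one would see in the error signals, and how to back out Q from a measured decay:)

import numpy as np

f0 = 0.98                    # Hz, assumed pendulum mode frequency
Q_target = 5.0               # damping target quoted from past elogs

# Amplitude ring-down time constant for a given Q: tau = Q / (pi * f0)
tau = Q_target / (np.pi * f0)                  # ~1.6 s to decay to 1/e
print(f"1/e ring-down time for Q = {Q_target}: {tau:.2f} s")

# Conversely, from a measured ring-down (fit A(t) = A0 * exp(-t / tau_meas)
# to the oscillation envelope), the loop-damped Q is:
tau_meas = 3.2                                 # s, hypothetical fitted decay time
print(f"Implied Q: {np.pi * f0 * tau_meas:.1f}")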

The input and output matrices were reset to their "naive" values - unfortunately, two eigenmodes still seem to be degenerate to within 1 mHz, as can be seen from the below spectra (Attachment #1). Next step is to identify which modes these peaks actually correspond to, but if I can lock the arm cavities in a stable way and run the dither alignment, I may prioritize measurement of the loss. At least all the coils show the expected 1/f**2 response at the Oplev error point now. The coil output filter gains varied by ~ factor of 2 among the 4 coils, but after balancing the gains, they show identical responses in the Oplev - Attachment #2.

Attachment 1: ETMY_sensors.pdf
Attachment 2: postDiag.pdf
  14443   Fri Feb 8 02:00:34 2019   gautam   Update   SUS   ITMY has tendency of getting stuck

As it turns out, ITMY now has a tendency to get stuck. I found it MUCH more difficult to release the optic using the bias jiggling technique; it took me ~2 hours. Best to avoid c1susaux reboots, and if one has to be done, take the precautions that were listed for ITMX - better yet, let's swap in the new Acromag chassis ASAP. I will do the arm locking tests tomorrow.

Attachment 1: Screenshot_from_2019-02-08_02-04-22.png
  14445   Fri Feb 8 20:48:52 2019   gautam   Update   LSC   IFO recovery

Several housekeeping tasks were carried out today in preparation for the Y-arm loss measurement.

  1. The mess around the OMC rack was cleared a bit. The vertex laptop paola now lives there, instead of on the ITMY optical table.
  2. Centering of beam on AS photodiodes on AS table (starting from the first optic in this path at the exit point from the vacuum), adjusted AS camera to bring the spot roughly to the center.
  3. POX/POY locking was restored, GTRY/GTRX levels are healthy. TRY was centered on the Thorlabs PD by triggering the LSC lock on AS110.
  4. Oplevs on all four TMs and BS were centered for post-vent alignment.
  5. ETMY OL transfer function was checked since we have swapped the HeNe during the vent, 4.5 Hz UGF for both DoFs and ~30 deg phase margin. The calibration of the error point to urad needs to be double checked.
  6. There are some huge 60 Hz harmonics in the TRY signal - hunting down the source of this. The one thing I can think of that was changed is that we plugged the c1auxey eurocrate into the ethernet powerstrip, I wonder if this created some kind of ground loop.
    • I checked the signal from the PD with a battery powered scope, no evidence of any 60 Hz in the time domain or scope FFT (Attachment #1, FFT in red and time domain signal in green can be seen).
    • Restored the power of c1auxey eurocrate to its original socket in the back of 1Y4 - harmonics still present --> points to the problem being in the whitening board / ADC electronics?
    • The harmonics only seem to show up when TRY > ~0.5
    • Some elog hunting revealed that this signal is being digitized through a modified D990399. So somehow the signal pollution is happening inside this board? Because from the output of this board, the signal is going straight into the ADC.
    • To confirm, I will temporarily hijack another ADC channel and look at the spectrum. There is apparently some kind of daughter board (D040060), but how 60 Hz is coupling at this stage is unknown to me.
  7. The ASS system for both arms still isn't working properly, to be investigated. The dirty TRY signal probably isn't helping the situation.
Attachment 1: IMG_7307.JPG
  14446   Mon Feb 11 15:41:49 2019   gautam   Update   LSC   TRY 60 Hz solved - but clipping persists

Rich came by the 40m to photocopy some pages from Hobbs, and saw me working on the 60 Hz hunting. As I suspected, the problem was being generated in the D040060. This board receives the photodiode signal single-ended, but has a different power ground than the photodiode (even though the PD is plugged into a power strip that claims to come from 1Y4). The mechanism is not entirely clear - the presence of these 60 Hz features seemed to be dependent on the light level on the TRY photodiode (i.e. they were absent when the PSL shutter is closed, and were more prominent when TRY was 0.9 rather than 0.5) but the PD certainly wasn't saturated - the DC signal was only ~100 mV when viewed on a scope. In any case, Rich suggested the simplest test would be to ground the BNC shield bringing TRY to the rack, to the local ground on the board, which I did using a crocodile clip. This did the trick, the TRY signal RMS is now dominated by the ~1 Hz seismic-driven variation.

On a more pessimistic note - it looks like moving the elliptical reflector did not work, and the clipping in the Y arm persists. I am able to recover TRY~1 with the yaw offset on the ETM (which is still lower than the 1.06-1.07 Koji reported in Aug 2018, but I can believe that being down to the MC transmission being a few % lower, at 15000 cts rather than 15500), while the maximum I see without it is ~0.9. This is puzzling, because when the chamber was open, we saw that there was ~1.5" clearance between the edge of the reflector and the beam on an IR card. I suppose the input pointing could have been off by a small amount. So one of the primary vent objectives wasn't achieved... But I will push ahead with the loss measurement.

  14447   Mon Feb 11 16:38:34 2019   gautam   Update   LSC   ETMY OL calibration updated

Since we changed the HeNe, I updated the calibration factors, and accepted the changes in the SDF.

DOF     OLD [urad/ct]   NEW [urad/ct]
PITCH   140             176
YAW     143             193

Attachment 1: OL_calib_ETMY_PERROR.pdf
Attachment 2: OL_calib_ETMY_YERROR.pdf
  14452   Thu Feb 14 15:37:35 2019   gautam   Update   VAC   Vacromag failure

[chub, gautam]

Summary:

One of the XT1111 units (XT1111a) in the new vacuum system has malfunctioned. So all valves are closed, PSL shutter is also closed, until this is resolved.

Details:

  1. Chub alerted me he had changed the main N2 line pressure, but this did not show up in the trend data. In fact, the trend data suggested that all 3 N2 gauges had stopped logging data (they just held the previous value) since sometime on Monday, see Attachment #1.
  2. We verified that the gauges were being powered, and that the analog voltage output of the gauges made sense in the drill press room ---> so this suggested something was wrong with the electronics at the vacuum rack.
  3. Went to the vacuum rack, saw no obvious indicator lights signalling a fault.
  4. So I restarted the modbus process on c1vac using sudo systemctl restart modbusIOC.service. The way Jon has this set up, this service controls all the sub-processes talking to gauges and TPs, so restarting this master process should have brought everything back.
  5. This tripped the interlock, and all valves got closed.
  6. Once the modbus service restarted, most things came back normally. However, V1, V3, V4 and V5 readbacks were listed as "UNDEF".
  7. The way the interlock code works, it checks a valve state change request against the monitor channel, so all these valves could not be opened.
  8. We confirmed that the valves themselves were operational, by bypassing the interlock logic and directly actuating on the valves - but this is not a safe way of running overnight, so we decided to shut everything down.
  9. We also confirmed that the problem is with one particular Acromag unit - switching the readback Dsub connector to another channel (e.g. V1 --> VM2) showed the expected readback.
  10. As a further check - I connected a Windows laptop with the Acromag software installed to the suspected XT1111 - it reported an error message saying "USB device may be damaged". Plugging into another XT1111 in the crate, I was able to access the unit in the normal way.
  11. The phoenix connector architecture of the Acromags makes it possible to replace this single unit (we have spare XT1111 units) without disturbing the whole system - so barring objections, we plan to do this at 9am tomorrow. The replacement plan is summarized in Attachment #2.

Pressure of the main volume seems to have stabilized - see Attachment #3, so it should be fine to leave the IFO in this state overnight.

Questions:

  1. What caused the original failure of the writing to the ADC channels hooked up to the N2 gauges? There isn't any logging setup from the modbus processes afaik.
  2. What caused the failure of the XT1111? What is the failure mode even? Because some other channels on the same XT1111 are working...
  3. Was it user error? The only operation carried out by me was restarting the modbus services - how did this damage the readback channels for just four valves? I think Chub also re-arranged some wires at the end, but unplugging/re-connecting some cables shouldn't produce this kind of response...

The whole point of the upgrade was to move to a more reliable system - but it seems quite flaky already.

Attachment 1: Screenshot_from_2019-02-14_15-40-36.png
Attachment 2: IMG_7320.JPG
Attachment 3: Screenshot_from_2019-02-14_20-43-15.png
  14453   Thu Feb 14 18:16:24 2019   Jon   Update   VAC   Vacromag failure

I sent Gautam instructions to first try stopping the modbus service, power cycling the Acromag chassis, then restarting the service. I've seen the Acromags go into an unresponsive state after a strong electrical transient or shorted signal wires, and the unit has to be power cycled to be reset.

If this doesn't resolve it, I'll come in tomorrow to help with the Acromag replacement. We have plenty of spares.

Quote:

[chub, gautam]

Summary:

One of the XT1111 units (XT1111a) in the new vacuum system has malfunctioned. So all valves are closed, PSL shutter is also closed, until this is resolved.

 

  14455   Thu Feb 14 23:14:12 2019   gautam   Update   CDS   c1rfm errors

The pressure is still 2e-4 torr according to CC1 so I thought I'd give ASS debugging a go tonight. But the arm transmission signal isn't coming through to the LSC model from the end PDs - so a resurfacing of this problem. Rebooting the sender model, c1scy, did not fix the problem. Moreover, c1susaux is dead. The last time I rebooted it, ITMY got stuck so I'm not going to attempt a revival tonight.

  14456   Fri Feb 15 11:58:45 2019   Jon   Update   VAC   Vac system is back up

The problem encountered with the vac controls was indeed resolved via the recommendation I posted yesterday. The Acromags had gone into a protective state (likely caused by an electrical transient in one of the signals) that could only be cleared by power cycling the units. After resetting the system, the main volume pressure dropped quickly and is now < 2e-5 torr, so normal operations can resume. For future reference, below is the procedure to safely reset these units from a trouble state.

Vacromag Reset Procedure

  • TP2 and TP3 can be left running, but isolate them by closing valves V4 and V5.
  • TP1 can also be left running, but manually flip the operation mode on the front of the controller from REMOTE to LOCAL. This prevents the pump from receiving a "stop" command when its control Acromag shuts down.
  • Close all the pneumatic valves in the system (they'll otherwise close automatically when their control Acromags shut down).
  • On c1vac, stop the modbusIOC service. Sometimes this takes ~1 min to actually terminate.
  • Turn off the Acromags by flipping the "24 V" switch on the back of the chassis.
  • Wait ~10 sec, then turn them back on.
  • Start the modbusIOC service. It may take up to ~1 min for all the readings on the MEDM screen to initialize.
  • Ensure that the rotation speed of TP1,2,3 are still all nominal.
  • If pumps are OK, open V4, V5, and V7, then open V1. This restores the system to the "Maximum pumping speed" state.
  • Flip the TP1 controller operation state back to REMOTE.
  14457   Fri Feb 15 15:22:08 2019   gautam   Update   CDS   c1rfm errors persist

I restarted c1scy and c1rfm (so both sender and receiver models were cycled) and power-cycled the c1iscey and c1sus machines. The TRY PD is certainly seeing light - it is just not getting piped over to c1rfm. dmesg doesn't give any clues. I'm out of ideas.

P.S. The new reality seems to be that getting ITMY stuck in the event of a c1susaux reboot is inevitable. As is the practice for ITMX, I tried slowly ramping the PIT and YAW biases to 0 - but in the process of ramping YAW to 0, the optic got stuck. I am ramping in steps of 0.1 (in units of the PIT/YAW sliders, waiting ~3 seconds between steps); I guess I can try ramping even more slowly.

Update: I power cycled the physical RFM switch. This necessitated reboot of all vertex FEs. But seems like things are back to normal now...

Note: to unstick ITMY, seems like the best approach is:

  1. Jiggle the bias until the SIDE shadow sensor is on average above its half-light level. This is the critical step. A bias of +20000 cts on the fast SIDE output seems to help.
  2. Set YAW bias to -10, ramp down the BIAS in steps of 0.1, watching shadow sensor levels to ensure optic doesn't get stuck again.
  3. Hope for the best. Iterate if necessary.
Quote:

The pressure is still 2e-4 torr according to CC1 so I thought I'd give ASS debugging a go tonight. But the arm transmission signal isn't coming through to the LSC model from the end PDs - so a resurfacing of this problem. Rebooting the sender model, c1scy, did not fix the problem. Moreover, c1susaux is dead. The last time I rebooted it, ITMY got stuck so I'm not going to attempt a revival tonight.

Attachment 1: Screenshot_from_2019-02-15_15-21-47.png
  14458   Fri Feb 15 18:41:18 2019   rana   Update   VAC   Vac system is back up

If the acromags lock up whenever there is an electrical spike, shouldn't we have them on UPS to smooth out these ripples? And wasn't the idea to have some handshake/watchdog system to avoid silently dying computers?

Quote:

The problem encountered with the vac controls was indeed resolved via the recommendation I posted yesterday. The Acromags had gone into a protective state (likely caused by an electrical transient in one of the signals)

  14459   Fri Feb 15 18:42:57 2019   gautam   Update   LSC   TRY 60 Hz solved

A more permanent fix than a crocodile clip was implemented. Should probably look to do this for the X end unit as well.

Attachment 1: IMG_7323.JPG
  14460   Fri Feb 15 19:50:09 2019   rana   Update   VAC   Vac system is back up

The acromags are on the UPS. I suspect the transient came in on one of the signal lines. Chub tells me he unplugged one of the signal cables from the chassis around the time things died on Monday, although we couldn't reproduce the problem doing that again today.

In this situation it wasn't the software that died, but the Acromag units themselves. I have an idea to detect future occurrences using a "blinker" signal: one Acromag outputs a periodic signal which is directly sensed by another Acromag. This can be implemented as another polling condition enforced by the interlock code.
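
A minimal sketch of what that polling condition could look like (channel names here are hypothetical placeholders, pyepics is assumed, and the real implementation would live inside the existing interlock service):

import time
from epics import caget, caput   # pyepics; assumed available on c1vac

# Hypothetical channel names -- NOT the actual c1vac database records
BLINK_OUT = "C1:Vac-blinker_out"   # binary output driven by Acromag A
BLINK_IN  = "C1:Vac-blinker_in"    # binary input sensed by Acromag B

def blinker_alive(timeout=5.0):
    """Toggle the output and check that the readback follows within `timeout`."""
    state = int(not caget(BLINK_OUT))
    caput(BLINK_OUT, state)
    t0 = time.time()
    while time.time() - t0 < timeout:
        if caget(BLINK_IN) == state:
            return True
        time.sleep(0.1)
    return False   # Acromags unresponsive -> raise an alarm / trip the interlock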

Quote:

If the acromags lock up whenever there is an electrical spike, shouldn't we have them on UPS to smooth out these ripples? And wasn't the idea to have some handshake/watchdog system to avoid silently dying computers?

Quote:

The problem encountered with the vac controls was indeed resolved via the recommendation I posted yesterday. The Acromags had gone into a protective state (likely caused by an electrical transient in one of the signals)

 

  14461   Fri Feb 15 20:07:02 2019   Jon   Update   VAC   Updated vacuum punch list

While working on the vac controls today, I also took care of some of the remaining to-do items. Below is a summary of what was done, and what still remains.

Completed today

  • TP2/3 overcurrent interlock raised from 1 to 1.2 A. This was tripping during normal operation as the pump accelerates from low-speed (standby) to normal-speed mode.
  • Interlock conditions on VABSSCO/VABSSCI removed. Per discussion with Steve, these are not vent valves, but rather isolation valves between the BS/IOO/OMC annuli. The interlocks were preventing the valves from opening, and hence the IOO and OMC annuli from being pumped.
  • Channel exposed for interlocking in-vacuum high-voltage drivers. The channel name is C1:Vac-interlock_high_voltage. The vac interlock service sets this channel's value to 0 when the main volume pressure is in the range 3 mtorr-500 torr, and to 1 otherwise.
  • Annuli pumping integrated into the set of recognized states. "Vacuum normal" now refers to TP1 and TP2 pumping on the main volume AND TP3 pumping on all the annuli. The system is currently running in this state.
  • TP1 lowered to the nominal speed setting recommended by Steve: 33.6 krpm (560 Hz).

Still remaining

  • Implement a "blinker" input-output signal loop between two Acromags to detect hardware failures like the one today.
  • Add an AC power monitor to sense extended power losses and automatically put the system into safe shutdown.
  • Migrate the RGA to c1vac. Still some issues getting the serial comm working.
  • Troubleshoot the SuperBee (backup) main volume Pirani gauge. It has not communicated with c1vac since a serial adapter was replaced two weeks ago. Chub thinks the gauge was possibly damaged by arcing during the replacement.
  • Scripting for more automated pumpdowns.
  • Generate a bootable backup hard drive for c1vac, which could be swapped in on a short time scale after a failure.
  14462   Fri Feb 15 21:15:42 2019   gautam   Update   VAC   dd backup of c1vac made
  1. Connected one of the solid-state drives to c1vac. It was /dev/sdb.
  2. Formatted the drive using sudo mkfs -t ext4 /dev/sdb
  3.  Mounted it as /mnt/backup using sudo mount /dev/sdb /mnt/backup
  4. Started a tmux session for the dd, called DDbackup
  5. Started the dd backup using  sudo dd if=/dev/sda of=/dev/sdb bs=64K conv=noerror,sync
  6. Backup completed in 719 seconds: need to test if it works...
controls@c1vac:~$ sudo dd if=/dev/sda of=/dev/sdb bs=64K conv=noerror,sync
[sudo] password for controls: 
^C283422+0 records in
283422+0 records out
18574344192 bytes (19 GB) copied, 719.699 s, 25.8 MB/s
Quote:
 
  • Generate a bootable backup hard drive for c1vac, which could be swapped in on a short time scale after a failure.
  14465   Tue Feb 19 19:03:18 2019   rana   Update   Computers   Martian router -> WPA2

I have swapped our martian router's WiFi security over to WPA2 (AES) from the previous, less-secure, system. Creds are in the secrets-40-red.

  14466   Tue Feb 19 22:52:17 2019   gautam   Update   ASS   Y arm clipping doubtful

In an earlier elog, I had claimed that the suspected clipping of the cavity axis in the Y arm was not solved even after shifting the heater. I now think that it is extremely unlikely that there is still clipping due to the heater. Nevertheless, the ASS system is not working well. Some notes:

  1. The heater has been shifted nearly 1-inch relative to the cavity axis compared to its old position - see Attachment #1 which compares the overhead shot of the suspension cage before and after the Jan 2019 vent.
  2. On Sunday, I was able to recover TRY ~ 1.0 (but not as high as I was able to get by intentionally setting a yaw offset to the ASS) by hand alignment with the spot on ETMY much closer to the center of the optic, judging by the camera. There are offsets on the dither alignment error signals which depend on the dither frequency, so the A2L signals are not good judges of how well centered we are on the optic.
  3. By calculating the power lost by clipping a Gaussian beam cross-section with a rectangular block from one side (an admittedly naive model of clipping), I find that we'd have to be within 15 mm of the line connecting the centers of ITMY and ETMY to even see ~10 ppm loss, see Attachment #2. So it is hard to believe that this is still a problem. Also, see  Attachment #3 which compares side-by-side the view of ETMY as seen through the EY optical table viewport before and after the Jan 2019 vent.
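
(A minimal numerical sketch of that knife-edge estimate, with the beam radius on ETMY as an assumed parameter rather than a measured value:)

import numpy as np
from scipy.special import erfc

# Fraction of power clipped by a straight edge a distance d from the axis of a
# Gaussian beam with 1/e^2 intensity radius w:  P_clip / P = 0.5 * erfc(sqrt(2) * d / w)
def clip_loss(d, w):
    return 0.5 * erfc(np.sqrt(2) * d / w)

w_etmy = 5e-3                  # m, assumed beam radius on ETMY (not a measured value)
for d_mm in (8, 10, 12, 15):
    print(f"edge at {d_mm:2d} mm: {clip_loss(d_mm * 1e-3, w_etmy) * 1e6:.2f} ppm")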

We have to systematically re-commission the ASS system to get to the bottom of this.

Attachment 1: overheadComparison.pdf
Attachment 2: clipping.pdf
Attachment 3: rearComparison.pdf
  14467   Wed Feb 20 18:26:05 2019   gautam   Update   IOO   IPPOS recommissioned

I've suspected that the TTs are drifting significantly over the course of the last couple of days, because despite repeated alignment efforts, the AS beam spot has drifted off the center of the camera view. I tried looking at IPPOS, but found that there was no data. Looking at the table, the QPD was turned backwards, and the DAQ cable wasn't connected (neither at the PD end, nor at 1Y2, where instead, a cable labelled "Spare QPD" was plugged in). Fortunately, the beam was making it out of the vacuum. So as to have a quantitative diagnostic, I reconnected the QPD, turned it the right way round, and adjusted the steering onto it such that with the AS spot on the center of the CCD monitor, the beam is also centered on the QPD. The calibration is uncertain, but at least we will be able to see how much the spot drifts on the QPD over some days. Also, we only have 16 Hz readback of this stuff.
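
(For completeness, the usual quadrant combination used to turn the four QPD segment readbacks into normalized spot-position numbers - the quadrant ordering below is a guess and should be checked against the actual IPPOS wiring:)

def qpd_spot(q1, q2, q3, q4):
    """Normalized spot position from four quadrant signals.
    Assumed quadrant layout:  q1 q2
                              q3 q4   (verify against the actual IPPOS wiring)"""
    total = q1 + q2 + q3 + q4
    pit = ((q1 + q2) - (q3 + q4)) / total   # vertical position, in units of ~beam radius
    yaw = ((q1 + q3) - (q2 + q4)) / total   # horizontal position
    return pit, yaw, total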

I leave it to Chub to take the high-res photo and update the wiki, which was last done in 2012.


Already, in the last ~1 hour, there has been considerable drift - see Attachment #2. The spot, which started at the center of the CCD monitor, has now nearly drifted off the top end. The ITMX and BS Oplev spots have been pretty constant over the same timescale, so it has to be the TTs?

Attachment 1: IMG_7330.JPG
Attachment 2: Screenshot_from_2019-02-20_19-43-27.png
  14468   Wed Feb 20 23:55:51 2019   gautam   Update   ALS   ALS delay line electronics

Summary:

Last year, I worked on the ALS delay line electronics, thinking that we were in danger of saturation. The analysis was incorrect. I find that for RF signal levels between -10 dBm and +15 dBm, assuming 3dB insertion loss due to components and 5 dB conversion loss in the mixer, there is no danger of saturation in the I/F part of the circuit.

Details:

The key is that the MOSFET mixer used in the demodulation circuit drives an I/F current and not voltage. The I-to-V conversion is done by a transimpedance amplifier and not a voltage amplifier. The confusion arose from interpreting the gain of the first stage of the I/F amplifier as 1 kohm/10 ohm = 100. The real figures of merit we have to look at are the current through, and voltage across, the transimpedance resistor.  So I think we should revert to the old setup. This analysis is consistent with an actual test I did on the board, details of which may be found here.

We may still benefit from some whitening of the signal before digitization between 10-100 Hz, need to check what is an appropriate place in the signal chain to put in some whitening, there are some constraints to the circuit topology because of the MOSFET mixer.

One part of the circuit topology I'm still confused by is the choice of impedance-matching transformer at the RF-input of this demod board - why is a 75 ohm part used instead of a 50 ohm part? Isn't this going to actually result in an impedance mismatch given our RG405 cabling?

Update: Having pulled out the board, it looks like the input transformer is an ADT-1-1, and NOT an ADT1-1WT as labelled on the schematic. The former is indeed a 50ohm part. So it makes sense to me now.

Since we have the NF1611 fiber coupled PDs, I'm going to try reviving the X arm ALS to check out what the noise is after bypassing the suspect Menlo PDs we were using thus far. My re-analysis can be found in the attached zip of my ipynb (in PDF form).

Attachment 1: delayLineDemod.pdf.zip
  14469   Fri Feb 22 12:19:46 2019   gautam   Update   IOO   TT coil driver Vmon

To debug the issue of the suspected drifting TTs further, I temporarily hijacked CH0-CH8 of ADC1 in the c1lsc expansion chassis, and connected the "MON" outputs of the coil drivers (D010001) to them via some DB9 breakouts. The idea is to see if the problem is electrical. We should see some  slow drift in the voltage to the TTs correlated with the spot walking off the IPPOS QPD. From the wiring diagram, it doesn't look like there is any monitoring (slow or fast) of the control voltages to the TT coils, this should be factored into the Acromag upgrade of c1iscaux/c1iscaux2. EPICS monitoring should be sufficient for this purpose so I didn't setup any new DQ channels, I'll just look at the EPICS from the IOP model.

Quote:
Already, in the last ~1 hour, there has been considerable drift - see Attachment #2. The spot, which started at the center of the CCD monitor, has now nearly drifted off the top end. The ITMX and BS Oplev spots have been pretty constant over the same timescale, so it has to be the TTs?
  14470   Mon Feb 25 20:20:07 2019   Koji   Update   SUS   DIN 41612 (96pin) shrouds installed to vertex SUS coil drivers

The forthcoming Acromag c1susaux is supposed to use the backplane connectors of the sus euro card modules.

However, the backplane connectors of the vertex sus coil drivers were already used by the fast switches (dewhitening) of c1sus.

Our plan is to connect the Acromag cables to the upper connectors, while the switch channels are wired to the lower connector by soldering jumper wires between the upper and lower connectors on board.

To make the lower 96pin DIN connector available for this, we needed DIN 41612 (96pin) shrouds. Tyco Electronics 535074-2 is the correct component for this purpose. The shrouds have been installed on the backplane pins of the coil driver circuit D010001. The shroud has a 180 deg rotation degree of freedom; its orientation was matched with the ones on the upper connectors.

Attachment 1: P_20190222_175058.jpg
  14471   Wed Feb 27 21:34:21 2019   gautam   Update   General   Suspension diagnosis

In my effort to understand what's going on with the suspensions, I've kicked all the suspensions and shut down the watchdogs at 1235366912. The PSL shutter is closed to avoid trying to lock to the swinging cavity. The primary aims are

  1. To see how much the resonant peaks have shifted w.r.t. the database, if at all - I claim that the ETMY resonances have shifted by a large amount and that one of the resonant peaks has been lost.
  2. To check the status of the existing diagonalization.

All the tests I have done so far (looking at free swinging data, resonant frequencies in the Oplev error signals etc) seem to suggest that the problem is mechanical rather than electrical. I'll do a quick check of the OSEM PD whitening unit in 1Y4 to be sure. But the fact that the same three peaks appear in the OSEM and Oplev spectra suggests to me that the problem is not electrical.

Watchdogs restored at 10 AM PST

  14472   Sat Mar 2 14:19:35 2019   gautam   Update   CDS   FSS Slow servo gains not burt-ed

PSL NPRO PZT voltage showed large low frequency (hour timescale) excursions on the control room StripTool trace, leading me to suspect the slow servo wasn't working as expected. Yesterday evening, I keyed the unresponsive c1psl crate at ~9 PM PST, and had to run the burtrestore to get the PMC locking working. I must have pressed the wrong button on burtgooey or something, because all the FSS_SLOW channels were reset to 0. What's more, their values were not being saved by the hourly burt-snap script, so I don't have any lookback on what these values were. There isn't any detailed record on the elog about what the optimal values for these are, and the most recent reference I could find was Ki=0.1, Kp=Kd=0, which is what I've now set them to. The servo isn't running away, so I'm leaving things in this state; PID tuning can be done later.
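
(For context, a minimal sketch of what an integral-only slow servo step does - steer the NPRO temperature to keep the fast PZT control signal near mid-range. The channel names, setpoint and polling period below are placeholders, not the actual FSS slow servo configuration:)

import time
from epics import caget, caput    # pyepics assumed

Ki, dt = 0.1, 1.0                 # integral gain (Kp = Kd = 0) and assumed polling period

FAST_MON = "C1:PSL-FSS_FAST"      # placeholder: fast (PZT) control monitor
SLOW_DC  = "C1:PSL-FSS_SLOWDC"    # placeholder: slow (temperature) actuator

SETPOINT = 5.0                    # V, assumed mid-range of the PZT monitor
while True:
    err = caget(FAST_MON) - SETPOINT
    # pure integrator; the sign convention depends on the actual actuator polarity
    caput(SLOW_DC, caget(SLOW_DC) - Ki * err * dt)
    time.sleep(dt)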

I also added the FSS Slow servo channels to the burt snapshot requirement file at /cvs/cds/caltech/target/c1psl/autoBurt.req, and confirmed that the snapshots are getting the channels from now onwards.

While looking at the req file, I saw a bunch of *_MOPA* channels and also several other currently unused channels. Probably would benefit from going through these and commenting out all the legacy channels, to minimize disk space wastage (though we compress the snapshot files every few years anyways I guess).

Reminder that this (unrelated) issue still needs to be looked into... Note also that the new vacuum system does not have burt snapshots set up (i.e. it is still trying to get the old channels from the c1vac1 and c1vac2 databases, which, while they overlap significantly with the new system, should probably be set up correctly).

  14473   Sun Mar 3 14:16:31 2019   gautam   Update   IOO   Megatron hard-rebooted

IMC was not locked for the past several hours. Turned out MC autolocker was stuck, and I could not ssh into megatron because it was in some unresponsive state. I had to hard-reboot megatron, and once it came back up, I restarted the MCautolocker, FSS slow servo and nds2 processes. IMC re-locked immediately.

I was pulling long stretches of OSEM data from the NDS2 server (megatron) last night, I wonder if this flakiness is connected. Megatron is still running Ubuntu12.

  14475   Thu Mar 7 01:06:38 2019   gautam   Update   ALS   ALS delay line electronics

Summary:

The restoration of the delay-line electronics is complete. The chassis has not been re-installed yet, I will put it back in tomorrow. I think the calculations and measurements are in good agreement.

Details:

Apart from restoring the transimpedance of the I/F amplifier, I also had to replace the two differential-sending AD8672s in the RF Log detector circuit for both LO and RF paths in the ALS-X board. I performed the same tests as I did the last time on the electronics bench; results will be uploaded to the DCC page for the 40m version of the board. I think the board is performing as advertised, although there is some variation in the noise of the two pairs of I/Q readouts. Sticking with the notation of the HP Application Note for delay line frequency discriminators, here are some numbers for our delay line system:

  • K_{\phi} = 3.7 \ \mathrm{V/rad}  - measured by driving the LO/RF inputs with Fluke/Marconi at 7dBm/0dBm (which are the expected signal levels accounting for losses between the BeatMouth and the demodulator) and looking at the Vpp of the resulting I/F beat signal on a scope. This is assuming we use the differential output of the demodulator (divide by 2 if we use the single-ended output instead).
  • \tau_d = \frac{45 \ \mathrm{m}}{0.75c} \approx 0.2 \mu s [see measurement]
  • K_{d} = K_{\phi}2 \pi \tau_{d} \approx 4 \mu \mathrm{V/Hz} (to be confirmed by measurement by driving a known FM signal with the Marconi)
  • Assuming 1mW of light on our beat PDs and perfect contrast, the phase noise due to shot noise is \pi \sqrt{2\bar{P}\frac{hc}{\lambda}} / 1 \ \mathrm{mW} \approx 60 \ \mathrm{nrad /}\sqrt{\mathrm{Hz}}which is ~ 5 orders of magnitude lower than the electronics noise in equivalent frequency noise at 100 Hz.
  • The noise due to the FET mixer seems quite complicated to calculate - but as a lower bound, the Johnson current noise due to the 182 ohms at each RF input is ~ 10 pA/rtHz. With a transimpedance gain of 1 kohm, this corresponds to ~10 nV/rtHz. 
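
Putting these numbers together (a sketch of the arithmetic only; all values are the ones quoted above):

import numpy as np

K_phi = 3.7                      # V/rad, measured phase discriminant (differential output)
tau_d = 45.0 / (0.75 * 3e8)      # s, delay-line delay (~0.2 us)

K_d = K_phi * 2 * np.pi * tau_d              # V/Hz, frequency discriminant (~4-5 uV/Hz)
adc_noise = 5e-6                             # V/rtHz, assumed ADC input noise
demod_noise = 100e-9                         # V/rtHz, demodulator output noise at 100 Hz

print(f"K_d ~ {K_d * 1e6:.1f} uV/Hz")
print(f"ADC-noise-limited frequency noise ~ {adc_noise / K_d:.1f} Hz/rtHz")
print(f"Demodulator-noise limit ~ {demod_noise / K_d * 1e3:.0f} mHz/rtHz")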

In conclusion: the ALS noise is very likely limited by ADC noise (~1 Hz/rtHz frequency noise for 5uV/rtHz ADC noise). We need some whitening. Why whiten the demodulated signal instead of directly incorporating the whitening into the I/F amplifier input stage? Because I couldn't find a design that satisfies all the following criteria (this was why my previous design was flawed):

  1. The commutating part of the FET mixer must be close to ground potential always.
  2. The loading of the FET mixer is mostly capacitive.
  3. The DC gain of the I/F amplifier is low, with 20-30dB gain at 100 Hz, and then rolled off again at high frequencies for stability and sum-frequency rejection. In fact, it's not even obvious to me that we want a low DC gain - the quantity K_{\phi} is directly proportional to the DC transimpedance gain, and we want that to be large for more sensitive frequency discriminating.

So Rich suggested separating the transimpedance and whitening operations. The output noise of the differential outputs of the demodulator unit is <100 nV/rtHz at 100 Hz, so we should be able to saturate that noise level with a whitening unit whose input referred noise level is < 100 nV/rtHz. I'm going to see if there are any aLIGO whitening board spares - the existing whitening boards are not a good candidate I think because of the large DC signal level.

  14477   Tue Mar 12 22:51:25 2019   gautam   Update   ALS   ALS delay line electronics

This Hanford alog may be of relevance as we are using the aLIGO AA chassis for the IR ALS channels. We aren't expecting any large amplitude high frequency signals for this application, but putting this here in case it's useful someday.

  14478   Wed Mar 13 01:27:30 2019   gautam   Update   ALS   ALS delay line electronics

This test was done, and I determined the frequency discriminant to be \approx 5 \mu \mathrm{V}/\mathrm{Hz} (for an RF signal level of ~2 dBm).

Attachment #1: Measured and predicted value of the DFD discriminant for a few RF signal levels.

  • Methodology was to drive an FM signal (deviation = 25 Hz, fMod = 221 Hz, fCarrier ~ 40 MHz) with the Marconi, and look at the IF spectrum peak height on an SR785 (the conversion from peak height to discriminant is sketched after this list).
  • The "Design" curve is calculated using the circuit parameters, assuming 4dB conversion loss in the mixer itself, and 3dB insertion loss due to various impedance matching transformers and couplers in the RF signal chain. I fudged the insertion/conversion loss numbers to get this curve to line up with the measurements (by eye).
  • For the measurement, I assume the value for FM deviation displayed on the Marconi is an RMS value (this is the best I can gather from the manual). I'll double check by looking at the RF mon spectrum directly on the Agilent NA.
  • X axis calibrated by reading off from the RF power monitor using a DMM and using the calibration data from the bench.
  • I could never get the ratio of peak heights in Ichan/Qchan (or the other way around) to better than ~ 1/8 (by moving the carrier frequency around). Not sure I can explain that - small non-orthogonality between I and Q channels cannot explain this level of leakage.
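
For reference, the "Measured" discriminant is just the ratio of the IF peak height to the FM deviation; a minimal sketch follows (the peak-height value is a made-up example, not a logged number):

    f_dev = 25.0     # Hz, FM deviation set on the Marconi (assumed to be an RMS value)
    V_pk  = 125e-6   # V_rms, height of the 221 Hz line on the SR785 -- made-up example

    K_d = V_pk / f_dev                      # V/Hz
    print(f"K_d ~ {K_d*1e6:.1f} uV/Hz")     # ~5 uV/Hz for these example numbers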

Attachment #2: Measured noise spectrum in the 1Y2 (LSC) electronics rack, calibrated to Hz/rtHz using the discriminant from Attachment #1.

  • Something looks funky with the I channel for X; I'll re-take that spectrum.

I'm still waiting on some parts for the new BeatMouth before giving the whole system a whirl. In the meantime, I'll work on the EX and EY green setups to try and improve the mode-matching and better characterize the expected suppressed frequency noise of the end NPROs - the goal is to rule out unsuppressed frequency noise as the source of the excess low-frequency noise that was seen in the ALS signals.

Bottom lines: 

  1. The DFD noise is at the level of ~ 10 mHz/rtHz above 10 Hz. This justifies the need for whitening before ADC-ing (rough arithmetic after this list).
  2. The measured signal/noise levels in the DFD chain are in good agreement with the "expected" levels from circuit component values and typical insertion/conversion loss values.
  3. Why are there so many 60 Hz harmonics???
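
Rough arithmetic behind bottom line 1, using the numbers from this entry (the 5 uV/rtHz ADC noise is an assumed typical figure):

    import numpy as np

    K_d       = 5e-6     # V/Hz, measured DFD discriminant
    adc_noise = 5e-6     # V/rtHz, assumed ADC noise floor
    dfd_noise = 10e-3    # Hz/rtHz, measured DFD noise above 10 Hz

    adc_freq_noise = adc_noise / K_d                          # Hz/rtHz if digitized directly
    whitening_dB   = 20*np.log10(adc_freq_noise / dfd_noise)  # gain needed to bury ADC noise
    print(f"un-whitened ADC limit ~ {adc_freq_noise:.1f} Hz/rtHz")   # ~1 Hz/rtHz
    print(f"whitening gain needed ~ {whitening_dB:.0f} dB")          # ~40 dB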
Attachment 1: DFDcal.pdf
Attachment 2: DFDnoise.pdf
  14479   Thu Mar 14 23:26:47 2019   Anjali   Update   ALS   ALS delay line electronics

Attachment #1 shows the schematic of the test setup. A signal generator (Marconi) was used to supply the RF input. We observed the IF output under the following three test conditions.

  1. Observed the spectrum with FM modulation (fcarrier of 40 MHz and fmod of 221 Hz )- a peak at 221 Hz was observed.
  2. Observed the noise spectrum without FM modulation.
  3. Observed the noise spectrum after disconnecting the delayed output of the delay line. 
  • The broadband noise level is higher without FM modulation (case 2) than after disconnecting the delayed output of the delay line (case 3).
  • The noise level also increases with increasing RF input power.
  • We need to find the reason for this increase in broadband noise.
Attachment 1: test_setup_ALS_delay_line_electronics.pdf
  14480   Sun Mar 17 00:42:20 2019   gautam   Update   ALS   NF1611 cannot be shot-noise limited?

Summary:

Per the manual (pg12) of the NF 1611 photodiode, the "Input Noise Current" is 16 pA/rtHz. It also specifies that for "Linear Operation", the max input power is 1 mW, which at 1um corresponds to a current shot noise of ~14 pA/rtHz. Therefore,

  1. This photodiode cannot be shot-noise limited if we also want to stay in the spec-ed linear regime.
  2. We don't need to worry so much about the noise figure of the RF amplifier that follows the photodiode. In fact, I think we can use a higher gain RF amplifier with a slightly worse noise figure (e.g. ZHL-3A) as we will benefit from having a larger frequency discriminant with more RF power reaching the delay line.

Details:

Attachment #1: Here, I plot the expected voltage noise due to shot noise of the incident light, assuming 0.75 A/W for InGaAs and 700V/A transimpedance gain. 

  • For convenience, I've calibrated on the twin axes the current shot noise (X) and equivalent amplifier noise figure at a given voltage noise, assuming a 50 ohm system (Y).
  • The 16 pA/rtHz input current noise exceeds the shot noise contribution for powers as high as 1 mW.
  • Even at 0.5 mW power on the PD, we can use the ZHL-3A rather than the Teledyne:
    • This calculation was motivated by some suspicious features in the Teledyne amplifier gain, I will write a separate elog about that. 
    • For the light levels we have, I expect ~3dBm RF signal from the photodiode. With the 24dB of gain from the ZHL-3A, the signal becomes 27dBm, which is smaller than (but close to) the spec-ed max output of the ZHL-3A, which is 29.5 dBm. Is this too close to the edge? (See the numbers collected after this list.)
    • I will measure the gain/noise of the ZHL-3A to get a better answer to these questions.
  • If in the future we get a better photodiode setup that reaches sub-1nV/rtHz (dark/electronics) voltage noise, we may have to re-evaluate what is an appropriate RF amplifier.
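
The back-of-the-envelope numbers behind the shot-noise claim and the ZHL-3A headroom estimate, collected in one place (all values are the ones quoted in this entry):

    import numpy as np

    e   = 1.602e-19     # C
    R   = 0.75          # A/W, InGaAs responsivity
    i_n = 16e-12        # A/rtHz, NF1611 spec-ed input noise current

    for P in (0.5e-3, 1e-3):                       # W incident on the PD
        i_shot = np.sqrt(2 * e * R * P)            # A/rtHz, photocurrent shot noise
        print(f"P = {P*1e3:.1f} mW: shot noise {i_shot*1e12:.0f} pA/rtHz "
              f"vs {i_n*1e12:.0f} pA/rtHz input noise")

    # RF headroom: ~3 dBm from the PD plus 24 dB of ZHL-3A gain
    print(f"ZHL-3A output ~ {3 + 24} dBm vs spec-ed max of 29.5 dBm")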
Attachment 1: PDnoise.pdf
  14481   Sun Mar 17 13:35:39 2019   Anjali   Update   ALS   Power splitter characterization

We characterized the power splitter (Mini-Circuits ZAPD-2-252-S+). The schematic of the measurement setup is shown in Attachment #1. The network/spectrum/impedance analyzer (Agilent 4395A) was used in network analyzer mode for the characterisation, with its RF output enabled. We used another splitter (power splitter #1) to split the RF power such that one part goes to the network analyzer and the other part goes to the splitter under test (power splitter #2). The characterisation results and the comparison with the data sheet values are shown in Attachments #2-4.

Attachment #2 : Comparison of total loss in port 1 and 2

Attachment #3 : Comparison of amplitude unbalance

Attachment #4 : Comparison of phase unbalance

  • From the data sheet: the splitter is wideband, 5 to 2500 MHz, usable from 0.5 to 3000 MHz. We performed the measurement from 1 MHz to 500 MHz (limited by the bandwidth of the network analyzer).
  • It can be seen from Attachments #2 and #4 that there is a sudden increase below ~11 MHz. The reason for this is not clear to me.
  • The measured total loss values for port 1 and port 2 are slightly higher than those specified in the data sheet: the data sheet quotes maximum losses at 450 MHz of 3.51 dB and 3.49 dB for port 1 and port 2 respectively, whereas we measure 3.61 dB and 3.59 dB. It can also be seen from attachment #1 (b) that the expected trend is for the total loss to decrease with increasing frequency, whereas we observe the opposite trend over 11-500 MHz.
  • From the data sheet, the maximum amplitude unbalance in the 5-500 MHz range is 0.02 dB, while the measured maximum is 0.03 dB.
  • Similarly for the phase unbalance: the data sheet specifies a maximum of 0.12 degrees over 5-500 MHz, whereas the measurement shows up to 0.7 degrees in this frequency range.
  • So the observations show that the measured values are slightly higher than those specified in the data sheet (the plotted quantities are formed from the measured transfer functions as in the sketch below).
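
For clarity, the quantities plotted in Attachments #2-4 are formed from the two measured transfer functions (network analyzer to port 1 and port 2) as sketched here; the arrays are placeholders standing in for the exported Agilent 4395A sweep data:

    import numpy as np

    # Placeholder sweep data (stand-ins for the exported Agilent traces)
    f      = np.linspace(1e6, 500e6, 401)                               # Hz
    s21_p1 = np.full(401, 10**(-3.61/20)) * np.exp(1j*0.0)              # port 1, example
    s21_p2 = np.full(401, 10**(-3.59/20)) * np.exp(1j*np.deg2rad(0.5))  # port 2, example

    total_loss_p1 = -20*np.log10(np.abs(s21_p1))   # dB, includes the ideal 3 dB split
    total_loss_p2 = -20*np.log10(np.abs(s21_p2))
    amp_unbalance = np.abs(total_loss_p1 - total_loss_p2)                            # dB
    ph_unbalance  = np.abs(np.angle(s21_p1, deg=True) - np.angle(s21_p2, deg=True))  # deg

    print(amp_unbalance.max(), ph_unbalance.max())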
Attachment 1: Measurement_setup.pdf
Attachment 2: Total_loss.pdf
Attachment 3: Amplitude_unbalance.pdf
Attachment 4: Phase_unbalance.pdf
  14482   Sun Mar 17 21:06:17 2019   Anjali   Update   ALS   Amplifier characterisation

The goal was to characterise the new amplifier (AP1053). For practice, I did the characterisation of the old amplifier. This test is similar to that reported in Elog ID 13602.

  • Attachment #1 shows the schematic of the setup for gain characterisation and Attachment #2 shows the results of gain characterisation. 
  • The gain measurement is comparable with the previous results. From the data sheet, 10 dB gain is guaranteed in the frequency range 10-450 MHz. From our observation, the gain is not flat over this region: we measured a maximum gain of 10.7 dB at 6 MHz, decreasing to ~8.5 dB at 500 MHz.
  • Attachment #3 shows the schematic of the setup for the noise characterisation and Attachment #4 shows the results of the noise measurement.
  • The noise measurement doesn't look right; we probably have to repeat it (one way to convert the measured output noise into a noise figure is sketched below).
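
This is not necessarily how the data in Attachment #4 were reduced, but one standard way to turn a measured output noise density into a noise figure, with made-up example numbers:

    import numpy as np

    kT_dBm  = -174.0    # dBm/Hz, 50 ohm thermal floor at ~290 K
    gain_dB = 10.0      # measured gain at the frequency of interest (example)
    v_out   = 2e-9      # V/rtHz measured at the amplifier output (made-up example)

    out_dBm_per_Hz = 10*np.log10((v_out**2 / 50) / 1e-3)   # output noise PSD, dBm/Hz into 50 ohm
    NF_dB = out_dBm_per_Hz - gain_dB - kT_dBm
    print(f"noise figure ~ {NF_dB:.1f} dB")                # ~3 dB for these numbers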
Attachment 1: Gain_measurement.pdf
Attachment 2: Amplifier_gain.pdf
Attachment 3: noise_measurement.pdf
Attachment 4: noise_characterisation.pdf
  14483   Mon Mar 18 12:27:42 2019   gautam   Update   General   IFO status
  1. c1iscaux2 VME crate is damaged - see Attachment #1. 
    • It is not generating the 12V supply voltage, and so nothing in the crate works.
    • Tried resetting via front panel button, power cycling by removing power cable on rear, all to no effect.
    • Tried pulling out all cards and checking if there was an internal short that was causing the failure - looks like the problem is with the crate itself.
    • Not sure how long this machine has been unresponsive as we don't have any readback of the status of the eurocrate machines.
    • Not a showstopper; mainly we can't control the whitening settings for AS55, REFL55, REFL165 and ALSY.
    • Acromag installation schedule should be accelerated.
    • * Koji reminded me that \text{VME crate} \ \neq \ \text{eurocrate}. The former is used for the slow machines; the latter holds the iLIGO-style electronics boards.
  2. ITMX oplev is dead - see Attachment #2.
    • Lasted ~3 years (installed March 2016).
    • I confirmed that no light is coming out of the laser head on the optical table.
    • I'll ask Chub to replace it this afternoon.
  3. c1susaux is unresponsive
    • I didn't reboot it as I didn't want to spend some hours freeing ITMY. 
    • At some point we will have to bite the bullet and do it.
  4. Input pointing is still not stable
    • I aligned the input pointing using TT1/TT2 to maximize TRX/TRY before lunch, but within an hour the pointing had already drifted.
  5. POX/POY locking is working okay. TRX has large low-frequency fluctuations because of ITMX not having an Oplev servo, should be rectified once we swap out the HeNe.

The goal for this week is to test out the ALS system, so this is kind of a workable state since POX/POY locking is working. But the number of broken things is accumulating fast.

Attachment 1: IMG_7343.JPG
Attachment 2: ITMXOL.png