ID | Date | Author | Type | Category | Subject
15713 | Mon Dec 7 12:38:51 2020 |
gautam | Update | IOO | IMC loop char | Summary:
There seems to be significant phase loss in the TTFSS path, which is limiting the IMC OLTF UGF to <100 kHz.
Details:
See Attachment #1 and #2. The former shows the phase loss, while the latter is just to confirm that the optical gain of the error point is roughly the same, since I noticed this after working on and replacing the RF frequency distribution unit. Unfortunately there have been many other changes also (e.g. the work that Rana and Koji did at the IMC rack, swapping of backplane controls etc etc - maybe they have an OLTF measurement from the time they were working?) so I don't know which is to blame. Off the top of my head, I don't see how the RF source can change the phase lag of the IMC servo at 100 kHz. The only part of the IMC RF chain that I touched was the short cable inside the unit that routes the output of the Wenzel source to the front panel SMA feedthrough. I confirmed with a power meter that the power level of the 29.5 MHz signal at that point is the same before and after my work.
The time domain demod monitor point signals appear somewhat noisier in today's measurement compared to some old data I had from 2018, but I think this isn't significant. Once the SR785 becomes available, I will measure the error point spectrum as well to confirm. One thing I noticed was that like many of our 1U/2U chassis units, the feedthrough returns are shorted to the chassis on the RF source box (and hence presumably also to the rack). The design doc for this box makes many statements about the precautions taken to avoid this, but stops short of saying if the desired behavior was realized, and I can't find anything about it in the elog. Can someone confirm whether the shields of all the connectors on the box were ever properly isolated? My suspicion is that the shorting is happening where the all-metal N-feedthroughs touch the drilled surfaces on the front panel - while the front and back surfaces of the panel are insulating, the machined surfaces are not.
This is an unacceptable state, but no clear ideas for troubleshooting it quickly (short of going piece by piece through the IMC servo chain) occur to me. I still don't understand how the freq source work could have resulted in this problem, but I'm probably overlooking something basic. I'm also wondering why the differential receiving at the TTFSS error point did not require a gain adjustment of the IMC servo - shouldn't the differential-receiving-single-ended-sending have resulted in an overall x0.5 gain?
Update 8 Dec 1200: To test the hypothesis, I bypassed the SR560 based differential receiving and restored the original config. I am then able to run with the original gain settings, and you see in Attachment #4 that the IMC OLTF UGF is back above 100 kHz. It is still a little lower than it was in June 2019, not sure why. There must be some saturation issues somewhere in the signal chain because I cannot preserve the differential receiving and retain 100 kHz UGF, either by raising the "VCO gain" on the MC servo board, setting the SR560 to G=2, or raising the "Common Gain Adjust" on the FSS box by 6 dB. I don't have a good explanation for why this worked for some weeks and failed now - maybe some issue with the SR560? We don't have many working units so I didn't try switching it.
So either there is a whole mess of lines or the frequency noise suppression is limited. Sigh. |
15714 | Mon Dec 7 14:32:02 2020 |
gautam | Update | LSC | New demod phases for POX/POY locking | In the interest of keeping the same servo gains, I tuned the digital demod phases for the POX and POY photodiode signals to put as much of the PDH error signal in the I quadrature as possible. The changes are summarized below (a small sketch of the quadrature rotation is given after the table):
POX / POY demod phases
PD | Old demod phase [deg] | New demod phase [deg]
POX11 | 79.5 | -75.5
POY11 | -106.0 | 116.0
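For illustration only (not the script actually used), the tuning amounts to rotating the demodulated I/Q pair by a trial phase and keeping the phase that leaves the least PDH signal in Q. The data below are random stand-ins for the POX11 I/Q channels.

```python
import numpy as np

def rotate_iq(i_sig, q_sig, phase_deg):
    """Rotate a demodulated I/Q pair by phase_deg (degrees)."""
    phi = np.deg2rad(phase_deg)
    return (i_sig * np.cos(phi) - q_sig * np.sin(phi),
            i_sig * np.sin(phi) + q_sig * np.cos(phi))

# Stand-ins for POX11_I / POX11_Q data taken while the cavity sweeps through resonance
i_sig, q_sig = np.random.randn(2, 4096)

phases = np.arange(-180.0, 180.0, 0.5)
rms_in_q = [np.std(rotate_iq(i_sig, q_sig, p)[1]) for p in phases]
best = phases[np.argmin(rms_in_q)]
print(f"demod phase that minimizes the signal left in Q: {best:.1f} deg")
```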
The old locking settings seem to work fine again. This setting isn't set by the ifoconfigure scripts when they do the burt restore - do we want it to be?
Attachments #1 and #2 show some spectra and TFs for the POX/POY loops. In Attachment #2, the reference traces are from the past, while the live traces are from today. In fact, to have the same UGF as the reference traces (from ~1 year ago), I had to also raise the digital servo loop gain by ~20%. Not sure if this can be put down to a lower modulation depth - at least, at the output on the freq ref box, I measured the same output power (at the 0dB variable attenuator gain setting we nominally run in) before and after the changes. But I haven't done an optical measurement of the modulation depth yet. There is also a hint of less phase margin available at ~100 Hz now compared to a year ago. |
15715 | Mon Dec 7 22:54:30 2020 |
gautam | Update | LSC | Modulation depth measurement | Summary:
I measured the modulation depth at 11 MHz and 55 MHz using an optical beat + PLL setup. Both numbers are ~0.2 rad, which is consistent with previous numbers. More careful analysis is forthcoming (a rough sketch of the conversion from beat amplitudes to modulation depth is given after the details below), but I think this supports my claim that the optical gain for the PDH locking loops should not have decreased.
Details:
- For this measurement, I closed the PSL shutter between ~4pm and ~9pm local time.
- The photodiode used was the NF1611, which I assumed has a flat response in the 1-200 MHz band, and so did not apply any correction/calibration.
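As a rough sketch of how the beat measurement converts to a modulation depth (the careful analysis is still to come): the carrier beat scales as J0(beta) and the first-order sideband beats as J1(beta), so the measured sideband/carrier amplitude ratio can be inverted for beta. The -20 dB ratio below is illustrative, not the measured value.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import jv

def mod_depth(sideband_to_carrier):
    """Invert J1(beta)/J0(beta) = measured sideband/carrier beat amplitude ratio."""
    return brentq(lambda b: jv(1, b) / jv(0, b) - sideband_to_carrier, 1e-6, 1.5)

ratio = 10 ** (-20 / 20)        # illustrative: sidebands ~20 dB below the carrier beat
beta = mod_depth(ratio)
print(f"modulation depth ~ {beta:.2f} rad (small-beta approximation: {2 * ratio:.2f} rad)")
```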
|
15716 | Tue Dec 8 15:07:13 2020 |
gautam | Update | Computer Scripts / Programs | ndscope updated | I updated the ndscope on rossa to a bleeding edge version (0.7.9+dev0) which has many of the fixes I've requested in recent times (e.g. direct PDF export, see Attachment #1). As usual, if you find issues, report them on the issue tracker. The basic functionality for looking at signals seems to be okay so this shouldn't adversely impact locking efforts.
In hindsight - I decided to roll back to 0.7.9, and have the bleeding edge as a separate binary. So if you call ndscope from the command line, you should still get 0.7.9 and not the bleeding edge. |
15717 | Wed Dec 9 11:54:11 2020 |
gautam | Update | Optical Levers | ITMX HeNe replaced | The ITMX Oplev (installed in March 2019) was near end of life judging by the SUM channel (see Attachment #1). I replaced it yesterday evening with a new HeNe head. Output power was ~3.25 mW. The head was labelled appropriately and the Oplev spot was recentered on its QPD. The lifetime of ~20 months is short but recall that this HeNe had already been employed as a fiber illuminator at EX and so maybe this is okay.
Loop UGFs and stability margins seem acceptable to me, see Attachment #2-#3. |
15718 | Wed Dec 9 12:02:04 2020 |
gautam | Update | LSC | POX locking still unsatisfactory | Continuing the IFO recovery - I am unable to recover similar levels of TRX RIN as I had before. Attachment #1 shows that the TRX RIN is ~4x higher in RMS than TRY RIN (the latter is commensurate with what we had previously). The excess is dominated by some low frequency (~1 Hz) fluctuations. The coherence structure is confusing - why is TRY RIN coherent with IMC transmission at ~2 Hz but not TRX? But anyway, it doesn't look like intensity fluctuations on the incident light are to blame (unsurprisingly, since the TRY RIN was okay). I thought it may be because of insufficient low-frequency loop gain - but the loop shape is the same for TRX and TRY. I confirmed that the loop UGF is similar now (red trace in Attachment #2) to what it was ~1 month ago (black trace in Attachment #2). Seismometers don't suggest excess motion at 1 Hz. I don't think the modulation depth at 11 MHz is to blame either. As I showed earlier, the spectrum of the error point is comparable now to what it was previously.
What am I missing? |
15719 | Wed Dec 9 15:37:48 2020 |
gautam | Update | CDS | RFM switch IP addr reset | I suspect what happened here is that the IP didn't get updated when we went from the 131.215.113.xxx system to 192.168.113.xxx system. I fixed it now and can access the web interface. This system is now ready for remote debugging (from inside the martian network obviously). The IP is 192.168.113.90.
Managed to pull this operation off without crashing the RFM network, phew.
BTW, a windows laptop that used to be in the VEA (I last remember it being on the table near MC2 which was cleared sometime to hold the spare suspensions) is missing. Does anyone know where it is? |
15720 | Wed Dec 9 16:22:57 2020 |
gautam | Update | SUS | Yet another round of Sat. Box. switcharoo | As discussed at the meeting, I decided to effect a satellite box swap for the misbehaving MC1 unit. I looked back at the summary pages Vmon for the SRM channels, and found that in the last month or so, there wasn't any significant evidence of glitchiness. So I made the swap at ~4pm today. The sequence of steps was:
- SRM and MC1 watchdogs were disabled.
- Unplugged the two satellite boxes from the vacuum flanges.
- For the record: S/N 102 was installed at MC1, and S/N 104 was installed at SRM. Both were de-lidded, supposedly to mitigate the horrible thermal environment a bit. S/N 104 was the one Koji repaired in Aug 2019 (the serial number isn't visible or noted there, but only one box has jumper wires and Koji's photos show the same jumper wires). In June 2020, I found that the repaired box was glitching again, which is when I swapped it for S/N 102.
- After swapping the two units, I re-enabled the local damping on both optics, and was able to re-lock the IMC with no issues.
One thing I was reminded of is that the motion of the MC1 optic driven by the bias sliders is highly cross-coupled in pitch and yaw - it is far from diagonal. If this is true for the fast actuation path too, that's not great. I didn't check it just now.
While I was working on this, I took the opportunity to also check the functionality of the RF path of the IMC WFS. Both WFS heads seem to now respond to angular motion of the IMC mirror - I once again dithered MC2 and looked at the demodulated signals, and see variation at the dither frequency, see Attachment #1. However, the signals seem highly polluted with strong 60 Hz and harmonics, see the zoomed-in time domain trace in Attachment #2. This should be fixed. Also, the WFS loop needs some re-tuning. In the current config, it actually makes the MC2T RIN worse, see Attachment #3 (reference traces are with WFS loop enabled, live traces are with the loop disabled - sorry for the confusing notation, I overwrote the patched version of DTT that I got from Erik that allows the user legend feature, working on getting that back).
Quote: |
The MC1 suspension has begun to show evidence of glitches again, from Friday/Saturday. You can look at the suspension Vmon tab a few days ago and see that the excess fuzz in the Vmon was not there before. The extra motion is also clearly evident on the MCREFL spot. I noticed this on Saturday evening as I was trying to recover the IMC locking, but I thought it might be Millikan so I didn't look into it further. Usually this is symptomatic of some Satellite box issues. I am not going to attempt to debug this anymore.
|
|
15721 | Wed Dec 9 20:14:49 2020 |
gautam | Update | VAC | UPS failure | Summary:
- The (120V) UPS at the vacuum rack is faulty.
- The drypump backing TP2 is faulty.
- Current status of vacuum system:
- The old UPS is now powering the rack again. Some time ago, I noticed the "replace battery" indicator light on this unit was on. But it is no longer on. So I judged this is the best course of action. At least this UPS hasn't randomly failed before...
- main vol is being pumped by TP1, backed by TP3.
- TP2 remains off.
- The annular volumes are isolated for now while we figure out what's up with TP2.
- The pressure went up to ~1 mtorr (c.f. ~600utorr that is the nominal value with the stuck RV2) during the whole episode but is coming back down now.
- Steve seems to have taken the reliability of the vacuum system with him.
Details:
Around 7pm, the UPS at the vacuum rack seems to have failed. Don't ask me why I decided to check the vacuum screen 10 mins after the failure happened, but the point is, this was a silent failure so the protocols need to be looked into.
Going to the rack, I saw (unsurprisingly) that the 120V UPS was off.
- Pushed the power on button - the LCD screen would briefly light up, say the line voltage was 120 V, and then turned itself off. Not great.
- I traced the power connection to the UPS itself to a power strip under the rack - then I moved the plug from one port to another. Now the UPS stays on. okay...
- but after ~3 mins while I'm hunting for a VGA cable, I hear an incessant beeping. The UPS display has the "Fault" indicator lit up.
- I decided to shift everything back to the old UPS. After the change was made, I was able to boot up the c1vac machine again, and began the recovery process.
- When I tried to start TP2, the drypump was unusually noisy, and I noticed PTP2 bottomed out at ~500 torr (yes torr). So clearly something is not right here. This pump supposedly had its tip-seal replaced by Jordan just 3 months ago. This is not a normal lifetime for the tip seal - we need to investigate in more detail what's going on here...
- Decided that an acceptable config is to pump the main volume (so that we can continue working on other parts of the IFO). The annuli are all <10mtorr and holding, so that's just fine I think.
Questions:
- Are the failures of TP2 drypump and UPS related? Or coincidence? Who is the chicken and who is the egg?
- What's up with the short tip seal lifetime?
- Why did all of this happen without any of our systems catching it and sending an alert??? I have left the UPS connected to the USB/ethernet interface in case anyone wants to remotely debug this.
For now, I think this is a safe state to leave the system in. Unless I hear otherwise, I will leave it so - I will be in the lab another hour tonight (~10pm).
Some photos and a screen-cap of the Vac medm screen attached. |
15722 | Thu Dec 10 11:07:24 2020 |
Chub | Update | VAC | UPS fault | Is that a fault code that you can decipher in the manual, or just a light telling you nothing but your UPS is dead? |
15723 | Thu Dec 10 11:17:50 2020 |
Chub | Update | VAC | UPS fault | I can't find anything in the manual that describes the nature of the FAULT message. In fact, it's not mentioned at all. If the unit detects a fault at its output, I would expect a bit more information. This unit has a programmable level of input error protection, too, usually set at 100%. Still, there is no indication in the manual whether an input issue would be described as a fault; that usually means a short or lifted ground at the output.
Quote: |
Is that a fault code that you can decipher in the manual, or just a light telling you nothing but your UPS is dead?
|
|
15724 | Thu Dec 10 13:05:52 2020 |
Jon | Update | VAC | UPS failure | I've investigated the vacuum controls failure that occurred last night. Here's what I believe happened.
From looking at the system logs, it's clear that there was a sudden loss of power to the control computer (c1vac). Also, the system was actually down for several hours. The syslog shows normal EPICS channel writes (pressure readback updates, etc., and many of them per minute) which suddenly stop at 4:12 pm. There are no error or shutdown messages in the syslog or in the interlock log. The next activity is the normal start-up messaging at 7:39 pm. So this is all consistent with the UPS suddenly failing.
According to the Tripp Lite manual, the FAULT icon indicates "the battery-supported outlets are overloaded." The failure of the TP2 dry pump appears to have caused this. After the dry pump failure, the rising pressure in the TP2 foreline caused TP2's current draw to increase way above its normal operating range. Attachment 1 shows anomalously high TP2 current and foreline pressure in the minutes just before the failure. The critical system-wide failure is that this overloaded the UPS before overloading TP2's internal protection circuitry, which would have shut down the pump, triggering interlocks and auto-notifications.
Preventing this in the future:
First, there are too many electronics on the 1 kVA UPS. The reason I asked us to buy a dual 208/120V UPS (which we did buy) is to relieve the smaller 120V UPS. I envision moving the turbo pumps, gauge controllers, etc. all to the 5 kVA unit and reserving the smaller 1 kVA unit for the c1vac computer and its peripherals. We now have the dual 208/120V UPS in hand. We should make it a priority to get that installed.
Second, there are 1 Hz "blinker" channels exposed for c1vac and all the slow controls machines, each reporting the machine's alive status. I don't think they're being monitored by any auto-notification program (running on a central machine), but they could be. Maybe there already exists code that could be co-opted for this purpose? There is an MEDM screen displaying the slow machine statuses at Sitemap > CDS > SLOW CONTROLS STATUS, pictured in Attachment 2. This is the only way I know to catch sudden failures of the control computer itself. |
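A minimal sketch of the kind of auto-notification suggested above - poll a blinker channel and flag it if it stops counting. The channel name is a placeholder and the alert action is just a print; a real version would send an email/SMS.

```python
import time
import epics  # pyepics

BLINKER = "C1:Vac-c1vac_blinker"   # placeholder name, not the real channel

def watch(channel, interval=5.0):
    """Alert if a 1 Hz blinker channel stops counting (host presumed down)."""
    last = epics.caget(channel)
    while True:
        time.sleep(interval)
        now = epics.caget(channel)
        if now is None or now == last:
            print(f"ALERT: {channel} is stale - host may be down")  # hook up email/SMS here
        last = now

# watch(BLINKER)
```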
15725 | Thu Dec 10 14:29:26 2020 |
gautam | Update | VAC | UPS failure | I don't buy this story - P2 only briefly burped around GPS time 1291608000, which is around 8pm local time, when I was recovering the system.
Today, Jordan talked to Jon Feicht - apparently there is some kind of valve in the TP2 forepump, which only opens ~15-20 seconds after turning the pump on. So the loud sound I was hearing yesterday was just some transient phenomenon. So this morning at ~9am, we turned on TP2. Once again, PTP2 pressure hovered around 500 torr for about 15-20 seconds. Then it started to drop, although both Jordan and I felt that the time it took for the pressure to drop in the range 5 mtorr - 1 mtorr was unusually long. Jordan suspects some "soft-start" feature of the Turbo Pumps, which maybe spins up the pump in a more controlled way than usual after an event like a power failure. Maybe that explains why the pressure dropped so slowly? One thing is for sure - the TP2 controller displayed "TOO HIGH LOAD" yesterday when I tried the first restart (before migrating everything to the older UPS unit). This is what led me to interpret the loud sound on startup of TP2 as indicating some issue with the forepump - as it turns out, this is just the internal valve not having opened yet.
Anyway, we left TP2 on for a few hours, pumping only on the little volume between it and V4, and PTP2 remained stable at 20 mtorr. So we judged it's okay to open V4. For today, we will leave the system with both TP2 and TP3 backing TP1. Given the lack of any real evidence of a failure from TP2, I have no reason to believe there is elevated risk.
As for prioritising UPS swap - my opinion is that it's better to just replace the batteries in the UPS that has worked for years. We can run a parallel reliability test of the new UPS and once it has demonstrated stability for some reasonable time (>4 months), we can do the swap.
I was able to clear the FAULT indicator on the new UPS by running a "self-test". Pressing and holding the "mute" button on the front panel initiates this test, according to the manual, and if all is well, it will clear the FAULT indicator, which it did. I still don't trust this unit and have left all units powered by the old UPS.
Update 1100 Dec 11: The config remained stable overnight so today I reverted to the nominal config of TP3 pumping the annuli and TP2 backing TP1 which pumps the main volume (through the partially open RV2).
Quote: |
According to the Tripp Lite manual, the FAULT icon indicates "the battery-supported outlets are overloaded." The failure of the TP2 dry pump appears to have caused this. After the dry pump failure, the rising pressure in the TP2 foreline caused TP2's current draw to increase way above its normal operating range. Attachment 1 shows anomalously high TP2 current and foreline pressure in the minutes just before the failure. The critical system-wide failure is that this overloaded the UPS before overloading TP2's internal protection circuitry, which would have shut down the pump, triggering interlocks and auto-notifications.
|
|
15727 | Thu Dec 10 14:48:00 2020 |
Yehonathan | Update | BHD | Monte Carlo Simulations | I have rebuilt the MCMC simulation in an OOP fashion and incorporated Lance/Pytickle functionality into it. The usage of the MCMC now is much less messy, hopefully.
I made an example that calculates the closed-loop noise-coupling from SRCL sensing and displacement to DARM in A+. I used the control filters that Kevin defined in his controls example.
The resulting noise budget is in attachment 1. The code is in the 40m/bhd git.
I also investigated why the aLIGO simulations behave so differently from the A+ simulation (see the few previous elogs in this thread) - namely, why the aLIGO results are much less variable, and why the aLIGO simulations barely pass the validity checks while the A+ simulations almost always pass.
The way I check the validity of a kat model is by scanning all the DOFs and checking that the corresponding sensing RFPDs' demodulated signals cross zero (a minimal sketch of this check is given after the list below). Attachment 2 shows these scans for 3 such RFPDs for 3 cases:
A+ model with maxtem = 2
aLIGO model with maxtem = 2
aLIGO model with maxtem = 'off'
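A minimal sketch of the zero-crossing validity check described above, with stand-in arrays in place of the actual kat-model scan outputs:

```python
import numpy as np

def crosses_zero(err_sig):
    """True if the demodulated error signal changes sign somewhere along the scan."""
    s = np.sign(err_sig)
    return bool(np.any(s[:-1] * s[1:] < 0))

# Stand-ins for a kat-model DOF scan: PRCL offsets vs a demodulated RFPD output
offsets = np.linspace(-1e-9, 1e-9, 101)               # metres
err_sig = np.sin(2 * np.pi * offsets / 2e-9)          # toy PDH-like curve
print("error signal crosses zero:", crosses_zero(err_sig))
```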
It seems like the scanning curves for A+ and aLIGO with no HOMs are well behaved and look like normal PDH signals, while the aLIGO with maxtem = 2 curves look funky. I believe that the aLIGO+HOMs curves indicate that the IFO is not really at a good locking point. All the IFO locking was done using the locking methods straight out of the PyKat package. |
15728 | Thu Dec 10 16:24:13 2020 |
gautam | Update | Equipment loan | Noliac PZT --> Paco | I gave one Noliac PZT from the two spares in the metal PMC kit to Paco. There is one spare left in the kit. |
15729 | Thu Dec 10 17:12:43 2020 |
Jon | Update | | New SMA cables on order | As requested, I placed an order for an assortment of new RF cables: SMA male-male, RG405.
They're expected to arrive mid next week. |
15730 | Thu Dec 10 22:45:42 2020 |
gautam | Update | SUS | More spare OSEMs | I acquired several spare OSEMs (in unknown condition) from Paco. They are stored alongside the shipment from UF. |
15731 | Thu Dec 10 22:46:57 2020 |
gautam | Update | ASC | WFS head assembled | The assembly of the head is nearly complete, I thought I'd do some characterization before packaging everything up too nicely. I noticed that the tapped holes in the base are odd-sized. According to the official aLIGO drawing, these are supposed to be 4-40 tapped, but I find that something in between 2-56 and 4-40 is required - so it's a metric hole? Maybe we used some other DCC document to manufacture these parts - does anyone know the exact drawings used? In the meantime, the circuit is placed inside the enclosure with the back panel left open to allow some tuning of the trim caps. The front panel piece for mounting the SMA feedthroughs hasn't been delivered yet so hardware-wise, that's the last missing piece (apart from the aforementioned screws).
Attachment #1 - the circuit as stuffed for the RF frequencies of relevance to the 40m.
Attachment #2 - measured TF from the "Test Input" to Quadrant #1 "RF Hi" output.
- There is reasonable agreement, but not sure what to make of the gain mismatch at most frequencies.
- The photodiode itself hasn't been installed yet, so there will be some additional tuning required to account for the interaction with the photodiode's junction capacitance.
- I noticed that the Qs of the resonances in between the notches are pretty high in this config, but the SPICE model also predicts this, so I'm hopeful that they will be tamed once the photodiode is installed.
- One thing that is worrying is the feature at ~170 MHz. Could be some oscillation of the LM opamp. All the aLIGO WFS test procedure documentation shows measurements only out to 100 MHz. Should we consider increasing the gain of the preamp from x10 to x20 by swapping the feedback resistor from 453 ohms to 1 kohm? Is this a known issue at the sites?
- Any other comments?
Update 11 Dec: For whatever reason, whoever made this box decided to tap 4-40 holes from the bottom (i.e. on the side of the base plate), and didn't thread the holes all the way through, which is why I was unable to get a 4-40 screw in there. To be fair the drawing doesn't specify the depth of the 4-40 holes to be tapped. All the taps we have in the lab have a maximum thread length of 9/16" whereas we need something with at least 0.8" thread length. I'll ask Joe Benson at the physics workshop if he has something I can use, and if not, I'll just drill a counterbore on the bottom side and use the taps we have to go all the way through and hopefully that does the job.
The front panel I designed for the SMA feedthroughs arrived today. Unfortunately, it is impossible for the D-sub shaped holes in this box to accommodate 8 insulated SMA feedthroughs (2 per quadrant for RF low and RF high) - while the actual SMA connector doesn't occupy so much space, the plastic mold around the connector and the nut to hold it are much too bulky. For the AS WFS application, we will only need 4 so that will work, but if someone wants all 8 outputs (plus an optional 9th for the "Test Input"), a custom molded feedthrough will have to be designed.
As for the 170 MHz feature - my open loop modeling in Spice doesn't suggest a lack of phase margin at that frequency so I'm not sure what the cause is there. If this is true, just increasing the gain won't solve the issue (since there is no instability at least by the phase margin metric). Could be a problem with the "Test Input" path I guess. I confirmed it is present in all 4 quadrants. |
15732 | Fri Dec 11 09:28:52 2020 |
rana | Update | BHD | Monte Carlo loop coupling Simulations | Cool. Can you give us Bode plots of the open loop gain for each of the 5 length control loops?
Quote: |
I have rebuilt the MCMC simulation in an OOP fashion and incorporated Lance/Pytickle functionality into it. The usage of the MCMC now is much less messy, hopefully.
|
|
15734 | Mon Dec 14 11:09:28 2020 |
Yehonathan | Update | BHD | Monte Carlo loop coupling Simulations | I spent a few hours monkeying around with the control filters. They are totally made up and also it's my first time trying to design control filters.
The OLTFs of the different length controls are shown in attachment 1.
The open-loop couplings of the DOFS to DARM are shown in attachment 2.
Note that in Lance/Pytickle the convention is that CLTFs are 1/(1 - G), where G is the OLTF.
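A toy illustration of that sign convention (the OLTF below is made up, not one of the loops in this simulation):

```python
import numpy as np

f = np.logspace(0, 4, 200)
G = -10.0 / (1 + 1j * f)        # toy OLTF: gain ~10 at DC, UGF near 10 Hz
cltf = 1.0 / (1.0 - G)          # Lance/Pytickle convention; flip the sign of G for 1/(1 + G)
print(f"|CLTF| at 1 Hz: {abs(cltf[0]):.2f}, at 10 kHz: {abs(cltf[-1]):.3f}")
```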
Quote: |
Cool. Can you give us Bode plots of the open loop gain for each of the 5 length control loops?
|
|
15735 | Tue Dec 15 12:38:41 2020 |
gautam | Update | Electronics | DC power strip | I installed a DC power strip (24 V variant, 12 outlets available) on the NW upright of the 1X1 rack. This is for the AS WFS. Seems to work, all outlets get +/- 24 V DC.
The FSS_RMTEMP channel is very noisy after this work. I'll look into it, but probably some Acromag grounding issue.
In the afternoon, Jordan and I also laid out 4x SMA LMR240 cables and 1x DB15 M/F cable from 1X2 to the NE corner of the AP table via the overhead cable trays. |
15736 | Thu Dec 17 15:23:56 2020 |
gautam | Update | ASC | WFS head characterization | Summary:
I think the WFS head performs satisfactorily.
- The (input-referred) dark noise level at the operating frequency of 55 MHz is ~40pA/rtHz (modelled) and ~60 pA/rtHz (measured, converted to input-referred). See Attachment #1. Attachment #5 has the input referred current noise spectral densities, and a few representative shot noise levels.
- The RF transimpedance gain at the operating frequency is ~500 ohms when driving a 50 ohm load (in good agreement with LTspice model). See Attachment #2 and Attachment #3.
- The resonant gain to notch ratios are all > 30 dB, which is in line with numbers I can find for the WFS installed at the sites (and in good agreement with the LTspice model as well).
- There are a few lines visible in the noise measurement. But these are small enough not to be a show-stopper I think.
Details and remarks:
- Attachment #4 shows a photo of the setup.
- The QPD used was S/N #84.
- The heat sinks have a bunch of washers because the screw holes were not tapped at the time of manufacture.
- There isn't space to have 8 SMA feedthroughs in the D-shaped cutouts, so we can only have the 4 "RF HI" outputs without some major metalwork.
- C9 has been removed in all channels (to isolate the "TEST INPUT").
- I found that some quadrants displayed a ~35 MHz sine-wave of a few mV pk-pk when I had the back of the enclosure off (for tuning the notches). The hypothesis is that this was due to some kind of stray capacitance effect. Anyways, once I closed everything up, for the noise measurement, this peak was no longer visible. With an HP8447A preamp, I measured an RMS voltage of ~2mV rms on an oscilloscope. After undoing the 20 dB gain of the amplifier, each quadrant has an output voltage noise of ~200 uVrms (as returned by the "measure" utility on the scope, I don't know the specifics of how it computes this). Point is, there wasn't any clear sine-wave oscillations like I saw on two channels when the lid was off.
- Some of the lines are present during some measurement times but not others (e.g. Q4 blue vs red curve in Attachment #1). I was doing this work in the elec-bench area of the lab, right next to the network switches etc so not exactly the quietest environment. But anyway, I don't see anything in these measurements that suggest something is seriously wrong.
- In the transfer function measurements, above 150 MHz, there are all sorts of features. But I think this is a measurement artefact (stray cable capacitance etc) and not anything real in the RF signal path. Koji saw similar effects I believe, and I didn't delve further into it.
- The dark noise of the circuit is such that to be shot noise limited, each quadrant needs 10 mA of DC photocurrent. The light bulb we have has a max current rating of 0.25A, with which I could only get 3 mA DC per quadrant. So the 55 MHz sideband power needed to be shot noise limited is ~50 mW - we will never have such high power (a quick sanity check of these numbers is sketched after this list). But I think better performance would need a major re-work of the circuit design (finite Qs of inductors, capacitors etc).
- Regarding the transimpedance gains - in my earlier plots, I omitted the 50ohm input impedance of the AG4395A network analyzer. The numbers I report here are ~half of those earlier in this thread for this reason. In any case, I think this number is what is important, since the ADT-1-1 on the demod board RF input has an input impedance of 50ohm.
- Regarding grounding - the RF ground on the head is actually isolated from the case pretty well. Two locations of concern are (i) the heat sinks for the voltage regulator ICs and (ii) the DB15 connector shield. I've placed electrically insulating (but thermally conducting) pads from TO220 mounting kits between both sets of objects and the case. However, for the Dsub connector, the shape of the pad doesn't quite fit all the way round the connector. So if I over-tighten the 4-40 mounting bolts, at some point, the case gets shorted to the RF ground, presumably because the connector deforms slightly and touches the case in a spot where I don't have the isolating pad installed. I think I've realized a tightness that is mechanically satisfying but electrically isolating.
- I will do the fitting at my leisure but the eye-fit is already suggesting that things are okay I think.
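A quick sanity check of the shot-noise numbers quoted in the list above, assuming ~0.8 A/W responsivity at 1064 nm (my assumption, not a measured value):

```python
e = 1.602e-19        # electron charge [C]
i_dark = 60e-12      # measured input-referred dark noise [A/rtHz]
resp = 0.8           # assumed responsivity at 1064 nm [A/W]

I_dc = i_dark**2 / (2 * e)       # photocurrent where shot noise equals the dark noise
P_quad = I_dc / resp             # optical power per quadrant
print(f"per quadrant: {1e3 * I_dc:.0f} mA, {1e3 * P_quad:.0f} mW; "
      f"whole head: {4e3 * P_quad:.0f} mW")
```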
If the RF experts see some red flags / think there are more tests that need to be performed, please let me know. Big thanks to Chub for patiently supporting this build effort, I'm pleasantly surprised it worked. |
15737 | Fri Dec 18 10:52:17 2020 |
gautam | Update | CDS | RFM errors | As I was working on the IFO re-alignment just now, the rfm errors popped up again. I don't see any useful diagnostics on the web interface.
Do we want to take this opportunity to configure jumpers and set up the rogue master as Rolf suggested? Of course there's no guarantee that will fix anything, and may possibly make it impossible to recover the current state... |
15739 | Sat Dec 19 00:25:20 2020 |
Jon | Update | | New SMA cables on order | I re-ordered the below cables, this time going with flexible, double-shielded RG316-DS. Jordan will pick up and return the RG-405 cables after the holidays.
Quote: |
As requested, I placed an order for an assortment of new RF cables: SMA male-male, RG405.
|
|
15740 | Sat Dec 19 02:42:56 2020 |
Koji | Update | | New SMA cables on order | Our favorite (flexible) RF cable is Belden's 1671J (Jacketed solder-soaked coax cable). It is RG405 compatible. I'm not sure if there are off-the-shelf SMA cables made with 1671J.
|
15741 | Sat Dec 19 20:24:25 2020 |
gautam | Update | Electronics | WFS hardware install | I installed 4 chassis in the rack 1X2 (characterization on the E-bench was deemed satisfactory, I will upload the analysis later). I ran out of hardware to make power cables so only 2 of them are powered right now (1 32ch AA chassis and 1 WFS head interface). The current limit on the +24V Sorensens was raised to maintain a similar margin to the limit given the increased current draw.
Remaining work:
Make 2 more power cables for ISC whitening chassis and quad demod chassis.
Make a 2x 4pin LEMO-->DB9 cable to digitize the FSS and PMC diagnostic channels with the new AA chassis. If RnD cables has a very short turnaround time, might be worth it to give this to them as well.
Connect ADC1 on c1ioo machine to new AA chassis (transfer SCSI cable from existing AA unit to the new one). This will necessarily involve some model changes as well.
Make a short cable to connect 55 MHz output from RFsource box to the LO input on the quad demod chassis.
- Install the WFS head on the AS table at a suitable location. Probably will need a focusing lens as well.
- Connect WFS head to the signal processing electronics (the cables were already laid out by Jordan and I).
Make the necessary CDS model changes (WFS filters, matrices, servos etc). I personally don't see the need for a new model but if anyone feels strongly about separating the IMC WFS and AS WFS we can set up another model.
- Commission the system.
While I definitely bumped various cables, I don't seem to have done any lasting damage to the CDS system (the RFM errors remain of course). |
15743 | Mon Dec 21 18:18:03 2020 |
gautam | Update | CDS | Many model changes | The CDS model changes required to get the AS WFS signals into the RTCDS system are rather invasive.
- We use VCS for these models. Linus Torvalds may question my taste, but I also made local backups of the models, just in case...
- Particularly, the ADC1 card on c1ioo is completely reconfigured.
- I also think it's more natural to do all the ASC computations on this machine rather than the c1lsc machine (it is currently done in the c1ass model). So there are many IPC changes as well.
- I have documented everything carefully, and the compile/install went smoothly.
- Taking down all the FE servers at 1830 local time.
- To propagate the model changes
- To make a hardware change in the c1rfm card in the c1ioo machine to configure it as "ROGUE MASTER 0"
- To clear the RFM errors we are currently suffering from will require a model reboot anyways.
- Recovery was completed by 1930 - the RFM errors are also cleared, and now we have a "ROGUE MASTER 👾" on the network. Pretty smooth, no major issues with the CDS part of the procedure to report.
- The main issue is that in the AA chassis I built, Ch14 (with the first channel as Ch1) has the output saturated to 28V (differential). I'm not sure what kind of overvoltage protection the ADC has - we frequently have the inputs exceed the spec'd +/-20 V (e.g. when the whitening filters are engaged and the cavity is fringing), but pending further investigation, I am removing the SCSI connection from the rear of the AA chassis.
In terms of computational load, the c1ioo model seems to be able to handle the extra load with no issues - ~35us/60us per cycle. The RFM model shows no extra computational time.
After this work, the IMC locking and POX/POY locking, and dither alignment servos are working okay. So I have some confidence that my invasive work hasn't completely destroyed everything. There is some hardware around the rear of 1X2 that I will clear tomorrow. |
15744 | Tue Dec 22 22:11:37 2020 |
gautam | Update | CDS | AA filt repaired and reinstalled | Koji fixed the problematic channel - the issue was a bad solder joint on the input resistors to the THS4131. The board was re-installed. I also made a custom 2x4-pin LEMO-->DB9 cable, so we are now recording the PMC and FSS ERR/CTRL channel diagnostics again (spectra tomorrow). Note that Ch32 is recording some sort of DuoTone signal and so is not usable. This is due to a misconfiguration - ADC0 CH31 is the one which is supposed to be reserved for this timing signal, and not ADC1 as we currently have. When we swap the c1ioo hosts, we should fix this issue.
I also did most of the work to make the MEDM screens for the revised ASC topology, tried to mirror the site screens where possible. The overview screen remains to be done. I also loaded the anti-whitening filters (z:p 150:15) at the demodulated WFS input signal entry points. We don't have remote whitening switching capability at this time, so I'll test the switching manually at some point.
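For reference, a sketch of the anti-whitening shape as I read the "z:p 150:15" notation - a zero at 150 Hz and a pole at 15 Hz, normalized to unity gain at DC (i.e. the inverse of a 15 Hz zero / 150 Hz pole whitening stage):

```python
import numpy as np
import scipy.signal as sig

# zero at 150 Hz, pole at 15 Hz, unity gain at DC
b, a = sig.zpk2tf([-2 * np.pi * 150], [-2 * np.pi * 15], 15 / 150)

f = np.logspace(0, 3, 400)                        # 1 Hz - 1 kHz
_, h = sig.freqs(b, a, worN=2 * np.pi * f)
for f0 in (1, 15, 150, 1000):
    idx = np.argmin(abs(f - f0))
    print(f"{f0:5d} Hz: {20 * np.log10(abs(h[idx])):6.1f} dB")
```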
Quote: |
The main issue is that in the AA chassis I built, Ch14 (with the first channel as Ch1) has the output saturated to 28V (differential). I'm not sure what kind of overvoltage protection the ADC has - we frequently have the inputs exceed the spec'd +/-20 V (e.g. when the whitening filters are engaged and the cavity is fringing), but pending further investigation, I am removing the SCSI connection from the rear of the AA chassis.
|
|
15745 | Wed Dec 23 10:13:08 2020 |
gautam | Update | CDS | Near term upgrades | Summary:
- There is an insufficient number of PCIe slots in the new Supermicro servers that were bought for the BHD upgrade.
- Modulo a "riser card", we have all the parts in hand to put one of the end machines on the Dolphin network. If the Rogue Master doesn't improve the situation, we should consider installing a Dolphin card in the c1iscex server and connecting it to the Dolphin network at the next opportunity.
Details:
Last night, I briefly spoke with Koji about the CDS upgrade plan. This is a follow up.
Each server needs a minimum of two peripheral devices added to the PCIe bus:
- A PCIe interface card that connects the server to the Expansion Chassis (copper or optical fiber depending on distance between the two).
- A Dolphin or RFM card that makes the IPC interconnects.
- I'm pretty certain the expansion chassis cannot support the Dolphin / RFM card (it's only meant to be for ADCs/DACs/BIO cards). At least, all the existing servers in the 40m have at least 2 PCIe cards installed, and I think we have enough to worry about without trying to engineer a hack.
- I attach some photos of new and old Supermicro servers to illustrate my point, see Attachment #1.
As for the second issue, the main question is if we had an open PCIe slot on the c1iscex machine to install a Dolphin card. Looks like the 2 standard slots are taken (see Attachment #1), but a "low profile" slot is available. I can't find what the exact models of the Supermicro servers installed back in 2010 are, but maybe it's this one? It's a good match visually anyways. The manual says a "riser card" is required. I don't know if such a riser is already installed.
Questions I have, Rolf is probably the best person to answer:
- Can we even use the specified host adaptor, HIB25-X4, which is "PCIe Gen2", with the "PCIe Gen3" slots on the new Supermicro servers we bought? Anyway, the fact that the new servers have only 1 PCIe slot probably makes this a moot point.
- Can we overcome this slot limitation by installing the Dolphin / RFM card in the expansion chassis?
- In the short run (i.e. if it is much faster than the full CDS shipment we are going to receive), can we get (from the CDS test stand in Downs or the site)
- A riser card so that we may install the Gen 1 Dolphin card (which we have in hand) in the c1iscex server?
- A compatible (not sure what PCIe generation we need) PCIe host to ECA kit so we can test out the replacement for the Sun Microsystems c1ioo server?
- A spare RFM card (VMIC 5565, also for the above purpose).
- What sort of a test should we run to test the new Dolphin connection? Make a "null channel" differencing the same signal (e.g. TRX) sent via RFM and Dolphin? Or is there some better checksum diagnostic?
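A minimal sketch of the "null channel" idea from the last bullet - difference the two copies of the same signal and look at the residual. The arrays below are stand-ins; in practice they would be DQ channels fetched for the same GPS span.

```python
import numpy as np

fs = 16384
trx_rfm = np.random.randn(4 * fs)          # stand-in for the copy sent over RFM
trx_dolphin = np.roll(trx_rfm, 1)          # stand-in: Dolphin copy, one sample late

null = trx_rfm - trx_dolphin
print(f"null channel RMS: {np.std(null):.3e}")
print(f"RMS after undoing a 1-sample shift: {np.std(trx_rfm - np.roll(trx_dolphin, -1)):.3e}")
```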
|
15747 | Sun Jan 3 16:26:06 2021 |
Koji | Update | SUS | IMC WFS check (Yet another round of Sat. Box. switcharoo) | I wanted to check the functionality of the IMC WFS. I just turned on the WFS servo loops as they were. For the past two hours, they haven't run away. The servo has been left turned on. I don't see any reason to keep it turned off. |
15748 | Wed Jan 6 15:28:04 2021 |
gautam | Update | VAC | Vac rack UPS batteries replaced | [chub, gautam]
The replacement was done this afternoon. The red "Replace Battery" indicator is no longer on. |
15749 | Wed Jan 6 16:18:38 2021 |
gautam | Update | Optical Levers | BS Oplev glitchy | As part of the hunt for why the X arm IR transmission RIN is anomalously high, I noticed that the BS Oplev Servo periodically kicks the optic around - the summary pages are the best illustration of this happening. Looking back in time, these seem to have started ~Nov 23 2020. The HeNe power output has been degrading, see Attachment #1, but this is not yet at the point where the head usually needs replacing. The RIN spectrum doesn't look anomalous to me, see Attachment #2 (the whitening situation for the quadrants is different for the BS and the TMs, which explains the HF noise). I also measured the loop UGFs (using swept sine) - seems funky, I can't get the same coherence now (live traces) between 10-30 Hz that I could before (reference traces) with the same drive amplitude, and the TF that I do measure has a weird flattening out at higher frequencies that I can't explain, see Attachment #3.
The excess RIN is almost exactly in the band that we expect our Oplevs to stabilize the angular motion of the optics in, so maybe needs more investigation - I will upload the loop suppression of the error point later. So far, I don't see any clean evidence of the BS Oplev HeNe being the culprit, so I'm a bit hesitant to just swap out the head... |
15750 | Wed Jan 6 19:00:04 2021 |
gautam | Update | LSC | Phase loss in POX/POY loops | I've noticed that there is some phase loss in the POX/POY locking loops - see Attachment #1, live traces are from a recent measurement while the references are from Nov 4 2018. Hard to imagine a true delay being responsible for so much phase loss at 100 Hz (for scale, see the short delay-phase calculation below). Attachment #2 shows my best effort loop modeling, I think I've got all the pieces, but maybe I missed something (I assume the analog whitening / digital anti-whitening are perfectly balanced; anyway this wasn't messed with anytime recently). The fitter wants to add 560 us (!) of delay, which is almost 10 clock cycles on the RTS, and even so, the fit is poor (I constrain the fitter to a maximum of 600 us delay so maybe this isn't the best diagnostic). Anyway, how can this change be explained? The recent work I can think of that could have affected the LSC sensing was (i) the RF source box re-working, and (ii) the vent. But I can't imagine how either of these would introduce phase loss in the LSC sensing. Note that the digital demod phase has been tuned to put all the PDH signal in the "I" quadrature, which is the condition in which the measurement was taken.
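For scale, the phase lag of a pure delay is phi = 360*f*tau degrees, so the 560 us the fitter wants corresponds to:

```python
tau = 560e-6                      # delay the fitter wants [s]
for f in (100, 300, 1000):        # Hz
    print(f"{f:5d} Hz: {360 * f * tau:6.1f} deg of phase lag")
```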
Probably this isn't gonna affect locking efforts (unless it's symptomatic of some other larger problem). |
15751 | Wed Jan 6 22:47:41 2021 |
gautam | Update | ALS | Noisy ALS | Summary:
I want to get back to locking the interferometer so I can test out the newly installed AS WFS. However, the ALS noise is far too high, at least the transition of arm length control from IR to ALS fails reliably with the same settings that worked so reliably previously. I worked on investigating it a bit today.
Timeline
In the latter half of last year, I was focused on the air-BHD setup, so I wasn't checking in on the ALS noise as regularly.
- On Aug 17, the noise was fine.
- But on Oct 29, the noise is bad (and it continues to remain so, to the point where I cannot lock the interferometer).
- Koji and Anchal confirmed nothing was touched while they were investigating the ALS system, also on Oct 29. The spectra attached in #15650 don't make any sense to me, the noise at 100 Hz cannot be <100mHz/rtHz. So, inconclusive.
Excess noise:
All tests are done with the arm cavity length locked to the PSL frequency using POX. Then, the EX laser is locked to the arm cavity length using the AUX PDH servo. The fluctuations in the beatnote between the two lasers is what is monitored as a diagnostic. See Attachment #1. The reference traces in the top pane are from a known good time. The large excess noise between ~80-200 Hz is what I'm concerned about.
A separate issue that can improve the noise is to track down the noise in the 20-80 Hz band - probably some IMC frequency noise issue.
Noise budget:
See Attachment #2.
- I am pretty confident the electronics after the beat mouth are not to blame - I injected a 50 MHz signal from a Marconi and adjusted the signal amplitude to mimic what we get from the beat mouth. The trace labelled "DFD electronics noise" is the noise in this config.
- The unsuppressed AUX frequency noise was measured with an SR785 (converted to frequency noise units knowing the PDH horn-to-horn voltage and the cavity linewidth; a sketch of this conversion is given after this list). I didn't confirm the sensing noise level (dark noise of the AUX PDH loop), but I figure that at 100 Hz (voltage noise of ~100 uV/rtHz on the SR785), we are above the sensing noise level, and so are truly measuring the in-loop frequency noise of the stabilized AUX laser. I also confirmed that the loop UGF was ~10 kHz and phase margin was ~60 degrees, which are nominal numbers.
- The fact that the excess noise is only in the X arm channel means the PSL frequency is not to blame.
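The voltage-to-frequency conversion mentioned above, sketched with placeholder numbers (not the measured 40m values): in the linear region the PDH error signal swings roughly the horn-to-horn voltage across one cavity linewidth, so dnu ~ dV * FWHM / Vpp.

```python
fwhm = 20e3    # AUX cavity linewidth [Hz] - placeholder
vpp = 1.0      # PDH horn-to-horn voltage [V] - placeholder
dV = 100e-6    # error-point voltage noise at 100 Hz [V/rtHz], from the SR785
print(f"in-loop AUX frequency noise ~ {dV * fwhm / vpp:.1f} Hz/rtHz at 100 Hz")
```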
So what could it be? The only things I can think of are (i) the beat mouth photodiode (NF1611) or (ii) excess noise in the fiber carrying the light from EX to the PSL table (but only on this fiber, and not on the EY fiber). Both seem remote to me - I'll test the former by switching the EX and EY fiber inputs to the beat mouth, but apart from this, I'm out of ideas...
To avoid this kind of issue, we should really have scripted locks of all the basic IFO configs and record the data to summary pages or something - maybe something to do once Guardian is installed, it'd be pretty hacky to do cleanly with shell scripts. |
15752 | Thu Jan 7 19:16:11 2021 |
gautam | Update | ALS | Noisy ALS | I'm also wondering why the error monitors for the X and Y loops report such wildly different spectra for the suppressed frequency noise of the AUX laser relative to the cavity length, see Attachment #1. The y-axis should be approximately Hz/rtHz. In both cases, the servo's error point monitor is connected to the DAQ system via a G=10 SR560. With the SR785, I measure for EX a nice bucket-shaped spectrum, bottoming out at ~10 uV/rtHz around 40 Hz, see Attachment #2. The SR560 should have an input-referred noise much less than this at the G=10 setting. The ADC noise level is only ~5 uV/rtHz, and indeed, the EY spectrum shows the expected shape. So what's up with the EX error mon? Tried swapping out the SR560 for a different unit, no change. And both the SR560 noise, and the ADC noise, check out when everything is checked individually. So some kind of interaction once everything is connected together, but it's only present at EX...
Today, I tried switching the EX and EY fibers going into the beat mouth, but I preserved the channel mapping after the beat mouth by switching the electrical outputs as well (the goal was to make sure that the beat photodiodes weren't the issue here; I think the electronics are already exonerated since driving the channel with a Marconi doesn't produce these noisy features). The EX spectrum remains noisy. I've switched everything back to the nominal configuration now to avoid further confusion. So it would appear that this is real frequency noise that gets added in the EX fiber somehow. What can I do to fix this? The source of coupling isn't at the PSL table, else the EY channel would also show similar features. Visually, nothing seems wrong to me at EX either. So the problem is somehow in the cable tray along which the 40m of fiber is routed? This is already inside some nice foam/tubing setup, what can be done to improve it? Still doesn't explain why it suddenly became noisy... |
15753 | Thu Jan 7 20:07:27 2021 |
Koji | Update | ALS | Noisy ALS | How about resurrecting the PSL table green beat for the X arm to see if the non-fiber setup shows the same level of the freq noise (e.g. the PDH locking became super noisy due to misalignment etc). |
15754 | Thu Jan 7 21:16:22 2021 |
gautam | Update | ALS | Noisy ALS | I thought about it, but wouldn't that show up at the AUX PDH error point? Or because the loop gain is so high there we wouldn't see a small excess? I suppose there could be some clipping on the Faraday or something like that. But the GTRX level and the green REFL DC level when locked are nominal.
Quote: |
How about resurrecting the PSL table green beat for the X arm to see if the non-fiber setup shows the same level of the freq noise (e.g. the PDH locking became super noisy due to misalignment etc).
|
|
15755 | Thu Jan 7 23:25:19 2021 |
Koji | Update | ALS | Noisy ALS | If the sensing noise level of the end PDH degraded for some reason, it'd make the out-of-loop stability worse without degrading the end PDH error level.
It's just speculation.
|
15756 | Fri Jan 8 20:01:11 2021 |
gautam | Update | ALS | Noisy ALS | I did this test today. The excess noise around 100 Hz doesn't show up in the green beat.
See Attachment #1. The setup was as usual:
- X-Arm cavity length stabilized to PSL frequency using the POX locking loop.
- EX laser frequency locked to the X-Arm cavity length using the AUX PDH loop.
- The "BEATX" channel records frequency fluctuations in the beat sensed on the IR beat photodiode, while the "BEATY" channel records frequency fluctuations in the beat sensed on the Green beat photodiode.
- Since the green beat frequency fluctuations are twice that of the IR beat frequency fluctuations, I scaled the former ASD by a factor of 0.5 so as to compare apples to apples.
- At low frequencies, the green beat is noisier, but that channel doesn't show the excess noise at mid frequencies you see in the IR beat. So the AUX PDH sensing noise is not to blame I think.
So, this excess appears to truly be excess phase noise on the fiber (though I have no idea what the actual mechanism could be, or what changed between Aug and Oct 2020 that could explain it - maybe the HEPA?).
For this work, I had to spend some time aligning the two green beams onto the beat photodiode. During this time, I shuttered the PSL, disabled feedback via the FSS servo, turned the HEPA high, and kept the EX green locked to the arm so as to have a somewhat stable beat signal I could maximize. Everything has been returned to nominal settings now (obviously, since I locked the arms to get the data).
You may ask, why do we care. In terms of RMS frequency noise, it would appear that this excess shouldn't matter. But in all my trials so far, I've been unable to transition control of the arm cavity lengths from POX/POY to ALS. I suppose we could try using the green beat, but that has excess low frequency noise (which was the whole point of the fiber coupled setup).
Quote: |
How about resurrecting the PSL table green beat for the X arm to see if the non-fiber setup shows the same level of the freq noise (e.g. the PDH locking became super noisy due to misalignment etc).
|
|
15758 | Mon Jan 11 16:11:51 2021 |
Yehonathan | Update | BHD | Monte Carlo loop coupling Simulations | I dived into the alog to make the OLTFs in the MC_controls example more realistic. I was mainly inspired by these entries:
https://alog.ligo-la.caltech.edu/aLOG/uploads/47116_20190708131007_carmolg_20190702.png
https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=18742
https://alog.ligo-wa.caltech.edu/aLOG/index.php?callRep=20466
and Evan's and Dennis's Theses.
Attachment 1 shows the new OLTFs. I tried to make them go like 1/f around the UGF and fall as quickly as possible at higher frequencies. I didn't do more advanced stability checks.
I also noticed that imbalances and detunings in the MC simulation can change the plants significantly. Especially DARM, CARM, and sometimes PRCL. I added the option to fix some OLTFs throughout the simulation. At every iteration, the simulation computes the required control filter to fix the selected OLTFs such that it will match the OLTFs in the undetuned and balanced IFO.
|
15759 | Mon Jan 11 19:10:10 2021 |
rana | Update | BHD | Monte Carlo loop coupling Simulations |
- looking better, but the CARM plot still looks weird.
- you should plot from 0.01 - 10,000 Hz
- All of the loops should have true integrators below 1 Hz.
- I don't think these loops are stable; the Bode plot is not a good way to check stability for LTI systems since you can be fooled by phase wrapping.
|
15763 | Thu Jan 14 11:46:20 2021 |
gautam | Update | CDS | Expansion chassis from LHO | I picked the boxes up this morning. The inventory per Fil's email looks accurate. Some comments:
- They shipped the chassis and mounting parts (we should still get rails to mount these on, they're pretty heavy to just be supported on 4 rack nuts on the front). idk if we still need the two empty chassis that were requested from Rich.
- Regarding the fibers - one of the fibers is pre-2012. These are known to fail (according to Rolf). One of the two that LHO shipped is from 2012 (judging by S/N, I can't find an online lookup for the serial number), the other is 2011. IIRC, Rolf offered us some fibers so we may want to take him up on that. We may also be able to use copper cables if the distances b/w server and expansion chassis are short.
- The units are fitted with a +24V DC input power connector and not the AC power supplies that we have on all the rest of the chassis. This is probably just gonna be a matter of convenience, whether we want to stick to this scheme or revert to the AC input adaptor we have on all the other units. idk what the current draw will be from the Sorensen - I tested that the boards get power, and with no ADCs/DACs/BIOs, the chassis draws ~1A (read off from DCPS display, not measured with a DMM). ~Half of this is for the cooling fans. It seems like KT @ LLO has offered to ship AC power supplies so maybe we want to take them up on that offer.
- Without the host side OSSI PCIe card, timing interface board, and supermicro servers that actually have enough PCIe slots, we still can't actually run any meaningful test. I ran just a basic diagnostic that the chassis can be powered on, and the indicator LEDs and cooling fans run.
- Some photos of the contents are here. The units are stored along the east arm pending installation.
> Koji,
>
> Barebones on this order.
>
> 1. Main PCIe board
> 2. Backplane (Interface board)
> 3. Power Board
> 4. Fiber (One Stop) Interface Card, chassis side only
> 5. Two One Stop Fibers
> 6. No Timing Interface
> 7. No Binary Cards.
> 8. No ADC or DAC cards
>
> Fil Clara
>
|
15764 | Thu Jan 14 12:19:43 2021 |
Jon | Update | CDS | Expansion chassis from LHO | That's fine, we didn't actually request those. We bought and already have in hand new PCIe x4 cables for the chassis-host connection. They're 3 m copper cables, which was based on the assumption of the time that host and chassis would be installed in the same rack.
Quote: |
- Regarding the fibers - one of the fibers is pre-2012. These are known to fail (according to Rolf). One of the two that LHO shipped is from 2012 (judging by S/N, I can't find an online lookup for the serial number), the other is 2011. IIRC, Rolf offered us some fibers so we may want to take him up on that. We may also be able to use copper cables if the distances b/w server and expansion chassis are short.
|
|
15765 | Thu Jan 14 12:32:28 2021 |
gautam | Update | CDS | Rogue master may be doing something good? | I think the "Rogue Master" setting on the RFM network may be doing some good. 5 mins ago, all the CDS indicators were green, but I noticed an amber light on the c1rfm screen just now (amber = warning). Seems like at GPS time 1294691182, there was some kind of error on the RFM network. But the network hasn't gone down. I can clear the amber flag by running the global diag reset. Nevertheless, the upgrade of all RT systems to Dolphin should not be de-prioritized I think. |
15766
|
Fri Jan 15 15:06:42 2021 |
Jon | Update | CDS | Expansion chassis from LHO | Koji asked me to assemble a detailed breakdown of the parts received from LHO, which I have done based on the high-res photos that Gautam posted of the shipment.
Parts in hand:
Qty | Part | Note(s)
2 | Chassis body |
2 | Power board and cooling fans | As noted in 15763, these have the standard LIGO +24V input connector, which we may want to change
2 | IO interface backplane |
2 | PCIe backplane |
2 | Chassis-side OSS PCIe x4 card |
2 | CX4 fiber cables | These were not requested and are not needed
Parts still needed:
Qty | Part | Note(s)
2 | Host-side OSS PCIe x4 card | These were requested but missing from the LHO shipment
2 | Timing slave | These were not originally requested, but we have recently learned they will be replaced at LHO soon
Issue with PCIe slots in new FEs
Also, I looked into the mix-up regarding the number of PCIe slots in the new Supermicro servers. The motherboard actually has six PCIe slots and is on the CDS list of boards known to be compatible. The mistake (mine) was in selecting a low-profile (1U) chassis that only exposes one of these slots. But at least it's not a fundamental limitation.
One option is to install an external PCIe expansion chassis that would be rack-mounted right above the FE. It is automatically configured by the system BIOS, so it doesn't require any special drivers, and it also supports hot-swapping of PCIe cards. There are also cheap ribbon-cable riser cards that would allow more cards to be connected for testing, although these are less suitable for permanent mounting.
It may still be better to use the machines offered by Keith Thorne from LLO, as they're more powerful anyway. But if there is going to be an extended delay before those can be received, we should be able to use the machines we already have in conjunction with one of these PCIe expansion options. |
15767
|
Fri Jan 15 16:54:57 2021 |
gautam | Update | CDS | Expansion chassis from LHO | Can you please provide a link to this "list of boards"? The only document I can find is T1800302. In that, under "Basic Requirements" (before considering specific motherboards), it is specified that the processor should be clocked @ >3 GHz. The 3 new Supermicros we have are clocked at 1.7 GHz. According to that doc, X10SRi-F boards are used, but with the processor clocked at 3.6 or 3.2 GHz.
Please also confirm that there are no conflicts w.r.t. the generation of the PCIe slots and the interfaces (Dolphin, OSSI) we are planning to use - the new machines we have are "PCIe 2.0" (though I have no idea if this is the same as Gen 2).
Quote: |
The motherboard actually has six PCIe slots and is on the CDS list of boards known to be compatible.
|
As for the CX4 cable - I still think it's good to have these on hand. Not good to be in a situation later where FE and expansion chassis have to be in different racks, and the copper cable can't be used. |
15768
|
Fri Jan 15 17:04:45 2021 |
gautam | Update | LSC | Messed up LSC sensing | I want to lock the PRFPMI again (to commission AS WFS). I have had some success - but in doing the characterization, I find that the REFL port sensing is completely messed up compared to what I had before. Specifically, the MICH and PRCL DoFs have no separation in either the 1f or 3f photodiodes.
- A sensing line driven in PRCL doesn't show up in the AS55 photodiode signal - this is good and as expected.
- For MICH - I set the MICH--->PRM actuation matrix element so as to minimize the height of the peak at the MICH drive frequency that shows up at the PRCL error point. My memory is that I used to be able to pretty much null this signal, but I can't find a DTT spectrum in the elog as evidence. Anyway, the best-effort nulling I can achieve now still leaves a large peak at the PRCL error point. Since the sensing matrix doesn't actually make any sense, I don't know if it is meaningful to even try to calibrate the above qualitative statement into quantitative numbers for the cross coupling in meters.
- With the PRMI locked on 1f error signals (ETMs misaligned, PRCL sensed with REFL11_I, MICH sensed with AS55_Q) - I tried tweaking the digital demod phase of the REFL33 and REFL165 signals. But I find that the MICH and PRCL peaks move in unison as I tweak the demod phase. This suggests to me that both signals are arriving optically in phase at the photodiode, which is weird.
- The phenomenon is seen also in the REFL11 signal.
I did make considerable changes to the RF source box, so the relative phase between the 11 MHz and 55 MHz signals is now different from what it was before. But do we really expect any effect even in the 1f signal? I am not able to reproduce this effect in simulation (Finesse), though I'm using a simplified model. I attach two sensing matrices to illustrate what I mean:
- Attachment #1 is in the PRFPMI state, with the IFO on RF control (CARM on REFL11, PRCL on REFL165_I, MICH on REFL165_Q, DARM on AS55_Q).
- Attachment #2 is between the transition to RF control (CARM and DARM on ALS, PRCL on REFL165_I, MICH on REFL165_Q). The CARM offset is ~4nm (c.f. the linewidth of ~20pm), so the circulating power in the arm cavities is low.
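As a toy sanity check of the demod phase observation above (made-up optical gains, not an IFO model), this is why two signals that arrive at the photodiode optically in phase cannot be separated by the digital demod phase - the rotation acts on both identically, so their relative angle in the I/Q plane never changes:
import numpy as np

# toy complex (I + iQ) responses of one RF photodiode to the two DoFs;
# the amplitudes and optical phases are placeholders
z_prcl = 1.0 * np.exp(1j * 0.0)
z_mich = 0.3 * np.exp(1j * 0.0)   # same optical phase -> the degenerate case seen here

for theta_deg in (0, 30, 60, 90):
    rot = np.exp(-1j * np.radians(theta_deg))   # digital demod phase rotation
    a_prcl = np.degrees(np.angle(z_prcl * rot))
    a_mich = np.degrees(np.angle(z_mich * rot))
    # both signals rotate together, so no choice of demod phase puts them
    # into orthogonal quadratures
    print(theta_deg, round(a_prcl, 1), round(a_mich, 1))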
|
15769
|
Sat Jan 16 18:59:44 2021 |
gautam | Update | LSC | Modulation depth measurement | I decided to analyze the data I took in December more carefully to see if there are any clues about the weird LSC sensing.
Attachment #1 shows the measurement setup.
- The PSL shutter was closed. All feedback to both lasers was disconnected during the measurement. I also disabled the input switch to the FSS Box - so the two laser beams being interfered shouldn't have any modulations on them other than the free running NPRO noise and the main IFO modulations.
- Everything is done in fiber as I had the beams already coupled into collimators and this avoided having to optimize any mode matching on the beat photodiode.
- The pickoff of the PSL is from the collimator placed after the triply resonant EOM that was installed for the air BHD experiment.
- The other beam is the EX laser beam, arriving at the PSL table via the 40m long fiber from the end (this is the usual beam used for ALS).
- I didn't precisely characterize the PLL loop shape, but basically, I wasn't able to increase the SR560 gain any further without breaking the PLL lock. Past experience suggests that the UGF is ~20 kHz, and I was able to get several averages on the AG4395 without the lock being disturbed.
Attachment #2 shows the measured spectrum with the PSL and EX laser frequency offset locked via PLL.
- The various peaks are identified.
- There are several peaks which I cannot explain - any hypotheses for what these might be? Some kind of Sorensen pollution? They aren't multiples of any of the standard RF sources. They are also rather prominent (and stationary during the measurement time, which I think rules out the cause being leakage light from the EY beam, which I had also left connected to the BeatMouth during the measurement).
- In the previous such characterization done by Koji, such spurious extra peaks weren't seen.
- Also, I can't really explain why some multiples of the main modulation are missing (though it could be that my peak finding missed the tiny peaks).
- The measurement setup is very similar to what he had - the important differences are:
- Much of the optical path was fiber coupled.
- Beat photodiode is NF1611, which is higher BW than the PDA10CF.
- The second laser source was the Innolight EX NPRO, as opposed to the Lightwave that was used previously.
- The RF source has been modified, so relative phasing between 11 MHz and 55 MHz is different.
Fitting the measured sideband powers (up to n=7, taking the average of the measured upper and lower sideband powers in the least-squares fit if both are measured, else just that of the one sideband measured) against those expected from a model, I get the best-fit parameters (the modulation depths for the two frequencies and their relative phase) shown in the inline figure.
To be explicit, the residual at each datapoint was calculated per the inline expression.
I think the numbers compare favourably with what Koji reported - the modulation depths are slightly increased, consistent with the RF power out of the RF box being slightly increased after I removed various attenuators etc. Note the large uncertainty on the relative phase between the two modulations - I think this is because there are relatively few sidebands (n=3 is one example) whose functional dependence informs on phi; most of the others do not directly give us any information about this parameter (since we are just measuring powers, not the actual phase of the electric field).
Attachment #3 shows a plot of the measured modulation profile, along with the expected heights plugging the best fit parameters into the model. The size of the datapoint markers is illustrative only - the dependence on the model parameters is complicated and the full covariance would need to be taken into account to put error bars on those markers, which I didn't do.
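For reference, here is a minimal sketch of the kind of fit described above (my own illustration with synthetic data, not the actual analysis code): for phase modulation at Omega and 5*Omega with depths m1, m2 and relative phase phi, the field amplitude at the n-th harmonic of Omega is a sum of Bessel-function products, and the relative sideband powers can be fit with a least-squares routine:
import numpy as np
from scipy.special import jv
from scipy.optimize import least_squares

def sideband_power(n, m1, m2, phi, qmax=5):
    # power of the n-th harmonic of Omega (for unit input field), given phase
    # modulation depths m1 (at Omega) and m2 (at 5*Omega, relative phase phi)
    q = np.arange(-qmax, qmax + 1)
    amp = np.sum(jv(n - 5 * q, m1) * jv(q, m2) * np.exp(1j * q * phi))
    return np.abs(amp) ** 2

n_meas = np.arange(1, 8)   # harmonics up to n = 7

# synthetic "measurements" generated from assumed parameters, just to exercise the fit
true = (0.19, 0.25, 1.5)   # placeholder m1, m2, phi - not the measured values
p_meas = np.array([sideband_power(n, *true) for n in n_meas]) / sideband_power(0, *true)

def residuals(x):
    m1, m2, phi = x
    model = np.array([sideband_power(n, m1, m2, phi) for n in n_meas])
    return model / sideband_power(0, m1, m2, phi) - p_meas

fit = least_squares(residuals, x0=[0.1, 0.1, 0.5])
# note: the powers are even in phi, so its sign is not determined and phi is only
# weakly constrained - consistent with the large uncertainty noted above
print(fit.x)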
Attachment #4 shows a time-domain measurement of the relative phasing between the 11 MHz and 55 MHz signals at the EOM drive outputs on the RF source box. I fit a model there and get a value for the relative phase that is totally inconsistent with what I get from this fit. |
15770
|
Tue Jan 19 13:19:24 2021 |
Jon | Update | CDS | Expansion chassis from LHO | Indeed T1800302 is the document I was alluding to, but I completely missed the statement about >3 GHz speed. There is an option for 3.4 GHz processors on the X10SRi-F board, but back in 2019 I chose against it because it would double the cost of the systems. At the time I thought I had saved us $5k. Hopefully we can get the LLO machines in the near term---but if not, I wonder if it's worth testing one of these to see whether the performance is tolerable.
Can you please provide a link to this "list of boards"? The only document I can find is T1800302....
|
I confirm that PCIe 2.0 motherboards are backwards compatible with PCIe 1.x cards, so there's no hardware issue. My main concern is whether the obsolete Dolphin drivers (requiring linux kernel <=3.x) will work on a new system, albeit one running Debian 8. The OSS PCIe card is automatically configured by the BIOS, so no external drivers are required for that one.
Please also confirm that there are no conflicts w.r.t. the generation of PCIe slots, and the interfaces (Dolphin, OSSI) we are planning to use - the new machines we have are "PCIe 2.0" (though i have no idea if this is the same as Gen 2).
|
|
15773
|
Wed Jan 20 10:13:06 2021 |
gautam | Update | Electronics | HV Power supply bypassing | Summary:
Installing 10uF bypass capacitors on the High Voltage power supply line for the HV coil driver circuit doesn't improve the noise. The excess bump around a few hundred Hz is still present. How do we want to proceed?
Details
- The setup is schematically shown in Attachment #1.
- Physically, the capacitors were packaged into a box, see Attachment #2.
- This box is installed between the HVPS and the 2U chassis in which the circuit is housed, see Attachment #3.
- I measured the noise (using the same setup as shown here, to avoid exposing the SR785 input to any high voltage) for a variety of drive currents. To make a direct comparison, I took two sets of measurements today, one with the decoupling box installed and one without.
- The results are shown in Attachment #4. You can see there is barely any difference between the two cases. I've also plotted the expected noise per a model, and the measured Johnson noise of one of the 25 kohm resistors being used (Ohmite, wirewound) - I just stuck the two legs of the resistor into the SR785 and measured the differential voltage noise. There is a slight excess in the measured Johnson noise compared to what we would expect from the Fluctuation-Dissipation theorem; I'm not sure if this is something to be worried about or if it's just a measurement artefact.
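For reference, a quick sanity check of the expected thermal noise level of a 25 kohm resistor (standard Johnson noise formula, room temperature assumed):
import numpy as np

kB, T, R = 1.380649e-23, 298.0, 25e3   # Boltzmann constant [J/K], temperature [K], resistance [ohm]
e_n = np.sqrt(4 * kB * T * R)          # Johnson voltage noise spectral density
print(e_n)                             # ~2.0e-8 V/rtHz, i.e. ~20 nV/rtHz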
Discussion:
So what do we do about this circuit? For the production version, I can make room on the PCB to install two 10uF film capacitors on the board itself, though that's unlikely to help. I think we've established that
- The excess noise is not due to the Acromag or the input Acromag noise filtering stage of the circuit, since the excess is present even when the input to the HV stage is isolated and shorted to ground.
- There was some evidence of coherence between the supply rails and the output of the HV stage (with input isolated and shorted to ground). The coherence had the "right shape" to explain the excess noise, but the maximum value was only ~0.1 (could have been because I was not measuring directly at the PA95's supply rail pins due to space constraints).
- The impedance of 10uF at 100Hz is ~160 ohms. I don't know what the impedance of the PA95's supply pins is at this frequency (this will determine the coupling of ripple in the HVPS output into the PA95 itself).
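For reference, a one-line check of that number (Z = 1/(2*pi*f*C)):
import numpy as np
f, C = 100.0, 10e-6                # Hz, F
print(1 / (2 * np.pi * f * C))     # ~159 ohms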
Do we have any better bipolar HV supply that I can use to see if that makes any difference? I don't want to use the WFS supplies as it's not very convenient for testing.
Not directly related to this work, but since we have been talking about current requirements, I attach the output of the current-determining script as Attachment #5. For the most part, having 220 ohm resistances on the new HAM-A coil driver boards will lead to ~half the DAC range being eaten up by the slow alignment bias. For things like MC1/MC3, this is fine, but for PRM/SRM/BS we may need to use 100 ohms. Chub has ordered all manner of resistances, so we should have plenty of choices to pick from. |
|