If the sensing noise level of the end PDH degraded for some reason, it'd make the out-of-loop stability worse without any visible degradation of the end PDH error signal level.
It's just speculation.
I thought about it, but wouldn't that show up at the AUX PDH error point? Or because the loop gain is so high there we wouldn't see a small excess? I suppose there could be some clipping on the Faraday or something like that. But the GTRX level and the green REFL DC level when locked are nominal.
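To make the loop-gain argument explicit, here is a toy sketch (made-up numbers, purely illustrative) of the usual in-loop vs out-of-loop bookkeeping for sensing noise N injected at the AUX PDH error point, with open-loop gain G:

```python
# Toy numbers, not measured values.
G = 1e4  # assumed AUX PDH open-loop gain at the frequency of interest
N = 1.0  # sensing noise at the error point, arbitrary units

in_loop_error = N / (1 + G)      # what the PDH error monitor sees: suppressed by the loop
imprinted     = N * G / (1 + G)  # frequency noise written onto the AUX laser: ~N
print(in_loop_error, imprinted)
```

So with high loop gain, an excess in the sensing noise would be essentially invisible at the error point but show up at full size in the beat.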
How about resurrecting the PSL table green beat for the X arm to see if the non-fiber setup shows the same level of frequency noise (e.g. whether the PDH locking became super noisy due to misalignment etc.)?
I'm also wondering why the error monitors for the X and Y loops report such wildly different spectra for the suppressed frequency noise of the AUX laser relative to the cavity length, see Attachment #1. The y-axis should be approximately Hz/rtHz. In both cases, the servo's error point monitor is connected to the DAQ system via a G=10 SR560. With the SR785, I measure for EX a nice bucket-shaped spectrum, bottoming out at ~10 uV/rtHz around 40 Hz, see Attachment #2. The SR560 should have an input-referred noise much less than this at the G=10 setting. The ADC noise level is only ~5 uV/rtHz, and indeed, the EY spectrum shows the expected shape. So what's up with the EX error mon? Tried swapping out the SR560 for a different unit, no change. And both the SR560 noise, and the ADC noise, check out when everything is checked individually. So some kind of interaction once everything is connected together, but it's only present at EX...
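For the record, the quick arithmetic behind that statement (just restating the numbers quoted above):

```python
sr560_gain    = 10
adc_noise     = 5e-6    # V/rtHz, quoted ADC noise floor
err_mon_floor = 10e-6   # V/rtHz, EX error point floor measured with the SR785 near 40 Hz

# Input-referred ADC noise after the G=10 SR560, vs the signal it should be digitizing:
print(adc_noise / sr560_gain, "V/rtHz input-referred ADC noise")
print(err_mon_floor, "V/rtHz measured error point floor")
# Neither the SR560 nor the ADC should be limiting, so the excess in the EX DAQ trace is unexplained.
```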
Today, I tried switching the EX and EY fibers going into the beat mouth, but I preserved the channel mapping after the beat mouth by switching the electrical outputs as well (the goal was to make sure that the beat photodiodes weren't the issue here, I think the electronics are already exonerated since driving the channel with a Marconi doesn't produce these noisy features). The EX spectrum remains noisy. I've switched everything back to the nominal configuration now to avoid further confusion. So it would appear that this is real frequency noise that gets added in the EX fiber somehow. What can I do to fix this? The source of coupling isn't at the PSL table, else the EY channel would also show similar features. Visually, nothing seems wrong to me at EX either. So the problem is somehow in the cable tray along which the 40m of fiber is routed? This is already inside some nice foam/tubing setup, what can be done to improve it? Still doesn't explain why it suddenly became noisy...
I want to get back to locking the interferometer so I can test out the newly installed AS WFS. However, the ALS noise is far too high, at least the transition of arm length control from IR to ALS fails reliably with the same settings that worked so reliably previously. I worked on investigating it a bit today.
In the latter half of last year, I was focused on the air-BHD setup, so I wasn't checking in on the ALS noise as regularly.
All tests are done with the arm cavity length locked to the PSL frequency using POX. Then, the EX laser is locked to the arm cavity length using the AUX PDH servo. The fluctuations in the beatnote between the two lasers are monitored as the diagnostic. See Attachment #1. The reference traces in the top pane are from a known good time. The large excess noise between ~80-200 Hz is what I'm concerned about.
Separately, tracking down the noise in the 20-80 Hz band (probably some IMC frequency noise issue) could also improve the noise.
See Attachment #2.
So what could it be? The only things I can think of are (i) the beat mouth photodiode (NF1611) or (ii) excess noise in the fiber carrying the light from EX to the PSL table (but only on this fiber, and not on the EY fiber). Both seem remote to me - I'll test the former by switching the EX and EY fiber inputs to the beat mouth, but apart from this, I'm out of ideas...
To avoid this kind of issue, we should really have scripted locks of all the basic IFO configs and record the data to summary pages or something - maybe something to do once Guardian is installed, since it'd be hard to do cleanly with shell scripts.
I've noticed that there is some phase loss in the POX/POY locking loops - see Attachment #1, live traces are from a recent measurement while the references are from Nov 4 2018. It's hard to imagine a true delay being responsible for so much phase loss at 100 Hz. Attachment #2 shows my best effort loop modeling, I think I've got all the pieces, but maybe I missed something (I assume the analog whitening / digital anti-whitening are perfectly balanced, anyway this wasn't messed with anytime recently)? The fitter wants to add 560 us (!) of delay, which is almost 10 clock cycles on the RTS, and even so, the fit is poor (I constrain the fitter to a maximum of 600 us delay so maybe this isn't the best diagnostic). Anyway, how can this change be explained? The recent works I can think of that could have affected the LSC sensing were (i) RF source box re-working, and (ii) vent. But I can't imagine how either of these would introduce phase loss in the LSC sensing. Note that the digital demod phase has been tuned to put all the PDH signal in the "I" quadrature, which is the condition in which the measurement was taken.
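As a quick back-of-the-envelope check (not part of the fit), this is how much phase a pure delay of that size costs at 100 Hz:

```python
tau = 560e-6  # s, the delay the fitter wants
f   = 100     # Hz
phase_deg = 360 * f * tau
print(f"{phase_deg:.1f} deg of phase lag at {f} Hz")  # ~20 deg
# For comparison, a single RTS clock cycle (~56-61 us, per the "almost 10 clock cycles" above)
# costs only ~2 deg at 100 Hz.
```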
Probably this isn't gonna affect locking efforts (unless it's symptomatic of some other larger problem).
As part of the hunt why the X arm IR transmission RIN is anomalously high, I noticed that the BS Oplev Servo periodically kicks the optic around - the summary pages are the best illustration of this happening. Looking back in time, these seem to have started ~Nov 23 2020. The HeNe power output has been degrading, see Attachment #1, but this is not yet at the point where the head usually needs replacing. The RIN spectrum doesn't look anomalous to me, see Attachment #2 (the whitening situation for the quadrants is different for the BS and the TMs, which explains the HF noise). I also measured the loop UGFs (using swept sine) - seems funky, I can't get the same coherence now (live traces) between 10-30 Hz that I could before (reference traces) with the same drive amplitude, and the TF that I do measure has a weird flattening out at higher frequencies that I can't explain, see Attachment #3.
The excess RIN is almost exactly in the band that we expect our Oplevs to stabilize the angular motion of the optics in, so maybe needs more investigation - I will upload the loop suppression of the error point later. So far, I don't see any clean evidence of the BS Oplev HeNe being the culprit, so I'm a bit hesitant to just swap out the head...
the replacement was done this afternoon. The red "Replace Battery" indicator is no longer on.
I wanted to check the functionality of the IMC WFS. I just turned on the WFS servo loops as they were. For the past two hours, they didn't run away. The servo has been left turned on. I don't see any reason to keep it turned off.
I don't have omnigraffle - what about uploading the source doc in a format that the excellent (and free) draw.io can handle? I think we can do a much better job of making this diagram reflect reality. There should also be a corresponding diagram for the Acromag system (but that doesn't have to be tied to this task). Megatron (scripts machine) and nodus should be added to that diagram as well.
Please send me any omissions or corrections to the layout.
Last night, I briefly spoke with Koji about the CDS upgrade plan. This is a follow up.
Each server needs a minimum of two peripheral devices added to the PCIe bus:
As for the second issue, the main question is whether there is an open PCIe slot on the c1iscex machine to install a Dolphin card. It looks like the 2 standard slots are taken (see Attachment #1), but a "low profile" slot is available. I can't find the exact models of the Supermicro servers installed back in 2010, but maybe it's this one? It's a good visual match anyway. The manual says a "riser card" is required. I don't know if such a riser is already installed.
Questions I have, Rolf is probably the best person to answer:
Koji fixed the problematic channel - the issue was a bad solder joint on the input resistors to the THS4131. The board was re-installed. I also made a custom 2x4-pin LEMO-->DB9 cable, so we are now recording the PMC and FSS ERR/CTRL channel diagnostics again (spectra tomorrow). Note that Ch32 is recording some sort of DuoTone signal and so is not usable. This is due to a misconfiguration - ADC0 CH31 is the one which is supposed to be reserved for this timing signal, and not ADC1 as we currently have. When we swap the c1ioo hosts, we should fix this issue.
I also did most of the work to make the MEDM screens for the revised ASC topology, tried to mirror the site screens where possible. The overview screen remains to be done. I also loaded the anti-whitening filters (z:p 150:15) at the demodulated WFS input signal entry points. We don't have remote whitening switching capability at this time, so I'll test the switching manually at some point.
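For reference, a minimal sketch of what the z:p = 150:15 anti-whitening stage looks like (assuming the usual convention of unity gain at DC), e.g. with scipy:

```python
import numpy as np
from scipy import signal

# Anti-whitening: zero at 150 Hz, pole at 15 Hz, normalized to 0 dB at DC
# (this undoes a 15:150 whitening stage; values in Hz as in the z:p notation above).
z = [-2 * np.pi * 150]
p = [-2 * np.pi * 15]
k = 15.0 / 150.0  # sets |H(0)| = 1
w = 2 * np.pi * np.logspace(0, 3, 400)
w, h = signal.freqs_zpk(z, p, k, worN=w)
# |h| rolls off from 0 dB below ~15 Hz to -20 dB above ~150 Hz.
```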
The main issue is that in the AA chassis I built, Ch14 (with the first channel as Ch1) has the output saturated to 28V (differential). I'm not sure what kind of overvoltage protection the ADC has - we frequently have the inputs exceed the spec'd +/-20 V (e.g. when the whitening filters are engaged and the cavity is fringing), but pending further investigation, I am removing the SCSI connection from the rear of the AA chassis.
The CDS model changes required to get the AS WFS signals into the RTCDS system are rather invasive.
In terms of computational load, the c1ioo model seems to be able to handle the extra load with no issues - ~35 us used out of the 60 us cycle. The RFM model shows no extra computational time.
After this work, the IMC locking and POX/POY locking, and dither alignment servos are working okay. So I have some confidence that my invasive work hasn't completely destroyed everything. There is some hardware around the rear of 1X2 that I will clear tomorrow.
Attached is the layout for the "intermediate" CDS upgrade option, as was discussed on Wednesday. Under this plan:
Existing FEs stay where they are (they are not moved to a single rack)
Dolphin IPC remains PCIe Gen 1
RFM network is entirely replaced with Dolphin IPC
I just want to point out that if you move all the FEs to the same rack they can all be connected to the Dolphin switch via copper, and you would only have to string a single fiber to every IO rack, rather than the multiple now (for network, dolphin, timing, etc.).
I installed 4 chassis in the rack 1X2 (characterization on the E-bench was deemed satisfactory, I will upload the analysis later). I ran out of hardware to make power cables so only 2 of them are powered right now (1 32ch AA chassis and 1 WFS head interface). The current limit on the +24V Sorensens was raised so that, with the increased current draw, there is a similar margin to the limit as before.
While I definitely bumped various cables, I don't seem to have done any lasting damage to the CDS system (the RFM errors remain of course).
Our favorite (flexible) RF cable is Belden's 1671J (jacketed solder-soaked coax cable). It is compatible with RG405. I'm not sure if there are off-the-shelf SMA cables made with 1671J.
I re-ordered the below cables, this time going with flexible, double-shielded RG316-DS. Jordan will pick up and return the RG-405 cables after the holidays.
As requested, I placed an order for an assortment of new RF cables: SMA male-male, RG405.
As I was working on the IFO re-alignment just now, the rfm errors popped up again. I don't see any useful diagnostics on the web interface.
Do we want to take this opportunity to configure jumpers and set up the rogue master as Rolf suggested? Of course there's no guarantee that will fix anything, and may possibly make it impossible to recover the current state...
I think the WFS head performs satisfactorily.
Details and remarks:
If the RF experts see some red flags / think there are more tests that need to be performed, please let me know. Big thanks to Chub for patiently supporting this build effort, I'm pleasantly surprised it worked.
I installed a DC power strip (24 V variant, 12 outlets available) on the NW upright of the 1X1 rack. This is for the AS WFS. Seems to work, all outlets get +/- 24 V DC.
The FSS_RMTEMP channel is very noisy after this work. I'll look into it, but probably some Acromag grounding issue.
In the afternoon, Jordan and I also laid out 4x SMA LMR240 cables and 1x DB15 M/F cable from 1X2 to the NE corner of the AP table via the overhead cable trays.
I spent a few hours monkeying around with the control filters. They are totally made up and also it's my first time trying to design control filters.
The OLTFs of the different length controls are shown in attachment 1.
The open-loop couplings of the DOFS to DARM are shown in attachment 2.
Note that in Lance/Pytickle the convention is that CLTFs are 1/(1 - G), where G is the OLTF.
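A one-liner keeps the sign conventions straight (a sketch; G is whatever OLTF array comes out of the model):

```python
import numpy as np

def cltf(G, pytickle_convention=True):
    """Closed-loop TF from an open-loop TF G.
    Lance/Pytickle use 1/(1 - G); the other common convention is 1/(1 + G),
    which is the same thing with the loop sign absorbed into G."""
    G = np.asarray(G)
    return 1.0 / (1.0 - G) if pytickle_convention else 1.0 / (1.0 + G)
```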
Cool. Can you give us Bode plots of the open loop gain for each of the 5 length control loops?
I have rebuilt the MCMC simulation in an OOP fashion and incorporated Lance/Pytickle functionality into it. The usage of the MCMC now is much less messy, hopefully.
The assembly of the head is nearly complete, I thought I'd do some characterization before packaging everything up too nicely. I noticed that the tapped holes in the base are odd-sized. According to the official aLIGO drawing, these are supposed to be 4-40 tapped, but I find that something in between 2-56 and 4-40 is required - so it's a metric hole? Maybe we used some other DCC document to manufacture these parts - does anyone know the exact drawings used? In the meantime, the circuit is placed inside the enclosure with the back panel left open to allow some tuning of the trim caps. The front panel piece for mounting the SMA feedthroughs hasn't been delivered yet so hardware-wise, that's the last missing piece (apart from the aforementioned screws).
Attachment #1 - the circuit as stuffed for the RF frequencies of relevance to the 40m.
Attachment #2 - measured TF from the "Test Input" to Quadrant #1 "RF Hi" output.
Update 11 Dec: For whatever reason, whoever made this box decided to tap 4-40 holes from the bottom (i.e. on the side of the base plate), and didn't thread the holes all the way through, which is why I was unable to get a 4-40 screw in there. To be fair the drawing doesn't specify the depth of the 4-40 holes to be tapped. All the taps we have in the lab have a maximum thread length of 9/16" whereas we need something with at least 0.8" thread length. I'll ask Joe Benson at the physics workshop if he has something I can use, and if not, I'll just drill a counterbore on the bottom side and use the taps we have to go all the way through and hopefully that does the job.
The front panel I designed for the SMA feedthroughs arrived today. Unfortunately, it is impossible for the D-sub shaped holes in this box to accommodate 8 insulated SMA feedthroughs (2 per quadrant for RF low and RF high) - while the actual SMA connector doesn't occupy so much space, the plastic mold around the connector and the nut to hold it are much too bulky. For the AS WFS application, we will only need 4 so that will work, but if someone wants all 8 outputs (plus an optional 9th for the "Test Input"), a custom molded feedthrough will have to be designed.
As for the 170 MHz feature - my open loop modeling in Spice doesn't suggest a lack of phase margin at that frequency so I'm not sure what the cause is there. If this is true, just increasing the gain won't solve the issue (since there is no instability at least by the phase margin metric). Could be a problem with the "Test Input" path I guess. I confirmed it is present in all 4 quadrants.
I acquired several spare OSEMs (in unknown condition) from Paco. They are stored alongside the shipment from UF.
They're expected to arrive mid next week.
I gave one Noliac PZT from the two spare in the metal PMC kit to Paco. There is one spare left in the kit.
I made an example that calculates the closed-loop noise-coupling from SRCL sensing and displacement to DARM in A+. I used the control filters that Kevin defined in his controls example.
The resulting noise budget is in attachment 1. The code is in the 40m/bhd git.
I also investigated why the aLIGO simulations behave so differently from the A+ simulation (see a few previous elogs in this thread) - that is, why the aLIGO results are much less variable and barely pass the validity checks, while the A+ simulations almost always pass.
The way I check the validity of a kat model is by scanning all the DOFs and checking that the demodulated signals of the corresponding sensing RFPDs cross zero (a minimal sketch of this check is included below). Attachment 2 shows these scans for 3 such RFPDs for 3 cases:
A+ model with maxtem = 2
aLIGO model with maxtem = 2
aLIGO model with maxtem = 'off'
It seems like the scanning curves for A+ and aLIGO with no HOMs are well behaved and look like normal PDH signals, while the aLIGO maxtem = 2 curves look funky. I believe the aLIGO+HOMs curves indicate that the IFO is not really at a good locking point. All the IFO locking was done using the locking methods straight out of the PyKat package.
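For concreteness, the zero-crossing check itself is simple enough to sketch generically (numpy only; err_sig would be the demodulated RFPD output sampled over the DOF scan):

```python
import numpy as np

def crosses_zero(err_sig):
    """Crude validity check: does the demodulated error signal change sign
    somewhere within the DOF scan? If not, the model is probably not parked
    near a usable locking point for that sensor."""
    s = np.sign(np.asarray(err_sig))
    return bool(np.any(s[:-1] * s[1:] < 0))
```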
I don't buy this story - P2 only briefly burped around GPS time 1291608000, which is around 8pm local time, when I was recovering the system.
Today, Jordan talked to Jon Feicht - apparently there is some kind of valve in the TP2 forepump, which only opens ~15-20 seconds after turning the pump on. So the loud sound I was hearing yesterday was just some transient phenomenon. So this morning at ~9am, we turned on TP2. Once again, PTP2 pressure hovered around 500 torr for about 15-20 seconds. Then it started to drop, although both Jordan and I felt that the time it took for the pressure to drop in the range 5 mtorr - 1 mtorr was unusually long. Jordan suspects some "soft-start" feature of the Turbo Pumps, which maybe spins up the pump in a more controlled way than usual after an event like a power failure. Maybe that explains why the pressure dropped so slowly? One thing is for sure - the TP2 controller displayed "TOO HIGH LOAD" yesterday when I tried the first restart (before migrating everything to the older UPS unit). This is what led me to interpret the loud sound on startup of TP2 as indicating some issue with the forepump - as it turns out, this is just the internal valve not being opened.
Anyway, we left TP2 on for a few hours, pumping only on the little volume between it and V4, and PTP2 remained stable at 20 mtorr. So we judged it's okay to open V4. For today, we will leave the system with both TP2 and TP3 backing TP1. Given the lack of any real evidence of a failure from TP2, I have no reason to believe there is elevated risk.
As for prioritising UPS swap - my opinion is that it's better to just replace the batteries in the UPS that has worked for years. We can run a parallel reliability test of the new UPS and once it has demonstrated stability for some reasonable time (>4 months), we can do the swap.
I was able to clear the FAULT indicator on the new UPS by running a "self-test". Pressing and holding the "mute" button on the front panel initiates this test according to the manual, and if all is well, it clears the FAULT indicator, which it did. I'm still not trusting this unit and have left all units powered by the old UPS.
Update 1100 Dec 11: The config remained stable overnight so today I reverted to the nominal config of TP3 pumping the annuli and TP2 backing TP1 which pumps the main volume (through the partially open RV2).
According to the Tripp Lite manual, the FAULT icon indicates "the battery-supported outlets are overloaded." The failure of the TP2 dry pump appears to have caused this. After the dry pump failure, the rising pressure in the TP2 foreline caused TP2's current draw to increase way above its normal operating range. Attachment 1 shows anomalously high TP2 current and foreline pressure in the minutes just before the failure. The critical system-wide failure is that this overloaded the UPS before overloading TP2's internal protection circuitry, which would have shut down the pump, triggering interlocks and auto-notifications.
I've investigated the vacuum controls failure that occurred last night. Here's what I believe happened.
From looking at the system logs, it's clear that there was a sudden loss of power to the control computer (c1vac). Also, the system was actually down for several hours. The syslog shows normal EPICS channel writes (pressure readback updates, etc., and many of them per minute) which suddenly stop at 4:12 pm. There are no error or shutdown messages in the syslog or in the interlock log. The next activity is the normal start-up messaging at 7:39 pm. So this is all consistent with the UPS suddenly failing.
First, there are too many electronics on the 1 kVA UPS. The reason I asked us to buy a dual 208/120V UPS (which we did buy) is to relieve the smaller 120V UPS. I envision moving the turbo pumps, gauge controllers, etc. all to the 5 kVA unit and reserving the smaller 1 kVA unit for the c1vac computer and its peripherals. We now have the dual 208/120V UPS in hand. We should make it a priority to get that installed.
Second, there are 1 Hz "blinker" channels exposed for c1vac and all the slow controls machines, each reporting the machine's alive status. I don't think they're being monitored by any auto-notification program (running on a central machine), but they could be. Maybe there already exists code that could be co-opted for this purpose? There is an MEDM screen displaying the slow machine statuses at Sitemap > CDS > SLOW CONTROLS STATUS, pictured in Attachment 2. This is the only way I know to catch sudden failures of the control computer itself.
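As a strawman for such an auto-notifier, something like the following (a sketch using pyepics; the channel name is hypothetical, substitute the real blinker PVs), run periodically on a central machine, would catch a frozen c1vac:

```python
import time
import epics  # pyepics

BLINKER_PV = "C1:VAC-C1VAC_BLINK"  # hypothetical PV name - use the actual 1 Hz blinker channel

first = epics.caget(BLINKER_PV)
time.sleep(5)  # a healthy 1 Hz blinker will have changed state by now
if epics.caget(BLINKER_PV) == first:
    print("c1vac blinker appears frozen - send an email/notification here")
```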
I can't find anything in the manual that describes the nature of the FAULT message. In fact, it's not mentioned at all. If the unit detects a fault at its output, I would expect a bit more information. This unit also has a programmable level of input error protection, usually set at 100%. Still, there is no indication in the manual of whether an input issue would be reported as a fault; that usually means a short or lifted ground at the output.
Is that a fault code that you can decipher in the manual, or just a light telling you nothing but your UPS is dead?
Around 7pm, the UPS at the vacuum rack seems to have failed. Don't ask me why I decided to check the vacuum screen 10 mins after the failure happened, but the point is, this was a silent failure so the protocols need to be looked into.
Going to the rack, I saw (unsurprisingly) that the 120V UPS was off.
For now, I think this is a safe state to leave the system in. Unless I hear otherwise, I will leave it so - I will be in the lab another hour tonight (~10pm).
Some photos and a screen-cap of the Vac medm screen attached.
As discussed at the meeting, I decided to effect a satellite box swap for the misbehaving MC1 unit. I looked back at the summary pages Vmon for the SRM channels, and found that in the last month or so, there wasn't any significant evidence of glitchiness. So I decided to effect that swap at ~4pm today. The sequence of steps was:
One thing I was reminded of is that the motion of the MC1 optic driven by the bias sliders is highly cross-coupled in pitch and yaw - the spot moves almost diagonally. If this is true for the fast actuation path too, that's not great. I didn't check it just now.
While I was working on this, I took the opportunity to also check the functionality of the RF path of the IMC WFS. Both WFS heads seem to now respond to angular motion of the IMC mirror - I once again dithered MC2 and looked at the demodulated signals, and see variation at the dither frequency, see Attachment #1. However, the signals seem highly polluted with strong 60 Hz and harmonics, see the zoomed-in time domain trace in Attachment #2. This should be fixed. Also, the WFS loop needs some re-tuning. In the current config, it actually makes the MC2T RIN worse, see Attachment #3 (reference traces are with WFS loop enabled, live traces are with the loop disabled - sorry for the confusing notation, I overwrote the patched version of DTT that I got from Erik that allows the user legend feature, working on getting that back).
The MC1 suspension has begun to show evidence of glitches again, from Friday/Saturday. You can look at the suspension Vmon tab a few days ago and see that the excess fuzz in the Vmon was not there before. The extra motion is also clearly evident on the MCREFL spot. I noticed this on Saturday evening as I was trying to recover the IMC locking, but I thought it might be Millikan so I didn't look into it further. Usually this is symptomatic of some Satellite box issues. I am not going to attempt to debug this anymore.
I suspect what happened here is that the IP didn't get updated when we went from the 131.215.113.xxx system to 192.168.113.xxx system. I fixed it now and can access the web interface. This system is now ready for remote debugging (from inside the martian network obviously). The IP is 192.168.113.90.
Managed to pull this operation off without crashing the RFM network, phew.
BTW, a Windows laptop that used to be in the VEA (I last remember it being on the table near MC2, which was cleared at some point to hold the spare suspensions) is missing. Anyone know where this is?
Continuing the IFO recovery - I am unable to recover similar levels of TRX RIN as I had before. Attachment #1 shows that the TRX RIN is ~4x higher in RMS than TRY RIN (the latter is commensurate with what we had previously). The excess is dominated by some low frequency (~1 Hz) fluctuations. The coherence structure is confusing - why is TRY RIN coherent with IMC transmission at ~2 Hz but not TRX? But anyways, it doesn't look like it's intensity fluctuations on the incident light (unsurprisingly, since the TRY RIN was okay). I thought it may be because of insufficient low-frequency loop gain - but the loop shape is the same for TRX and TRY. I confirmed that the loop UGF is similar now (red trace in Attachment #2) to what it was ~1 month ago (black trace in Attachment #2). Seismometers don't suggest excess motion at 1 Hz. I don't think the modulation depth at 11 MHz is to blame either. As I showed earlier, the spectrum of the error point is comparable now to what it was previously.
What am I missing?
The ITMX Oplev (installed in March 2019) was near end of life judging by the SUM channel (see Attachment #1). I replaced it yesterday evening with a new HeNe head. Output power was ~3.25 mW. The head was labelled appropriately and the Oplev spot was recentered on its QPD. The lifetime of ~20 months is short but recall that this HeNe had already been employed as a fiber illuminator at EX and so maybe this is okay.
Loop UGFs and stability margins seem acceptable to me, see Attachment #2-#3.
I updated the ndscope on rossa to a bleeding edge version (0.7.9+dev0) which has many of the fixes I've requested in recent times (e.g. direct PDF export, see Attachment #1). As usual if you find issue, report it on the issue tracker. The basic functionality for looking at signals seems to be okay so this shouldn't adversely impact locking efforts.
In hindsight - I decided to roll-back to 0.7.9, and have the bleeding edge as a separate binary. So if you call ndscope from the command line, you should still get 0.7.9 and not the bleeding edge.
I measured the modulation depth at 11 MHz and 55 MHz using an optical beat + PLL setup. Both numbers are ~0.2 rad, which is consistent with previous numbers. More careful analysis forthcoming, but I think this supports my claim that the optical gain for the PDH locking loops should not have decreased.
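For reference, one standard way to convert the measured sideband-to-carrier ratio in the PLL beat into a modulation depth (a sketch assuming the usual Bessel-function expansion of phase modulation, not necessarily exactly the analysis I ran):

```python
import numpy as np
from scipy.special import jv  # Bessel functions of the first kind

beta = 0.2  # rad, modulation depth
# First-order RF sideband relative to the carrier, as seen in the beat spectrum:
ratio_db = 20 * np.log10(jv(1, beta) / jv(0, beta))
print(f"{ratio_db:.1f} dBc per first-order sideband")  # ~ -20 dBc for beta = 0.2
```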
In the interest of keeping the same servo gains, I tuned the digital demod phases for the POX and POY photodiode signals to put as much of the PDH error signal in the _I quadrature as possible. The changes are summarized below:
The old locking settings seem to work fine again. This setting isn't set by the ifoconfigure scripts when they do the burt restore - do we want it to be?
Attachments #1 and #2 show some spectra and TFs for the POX/POY loops. In Attachment #2, the reference traces are from the past, while the live traces are from today. In fact, to have the same UGF as the reference traces (from ~1 year ago), I had to also raise the digital servo loop gain by ~20%. Not sure if this can be put down to a lower modulation depth - at least, at the output on the freq ref box, I measured the same output power (at the 0dB variable attenuator gain setting we nominally run in) before and after the changes. But I haven't done an optical measurement of the modulation depth yet. There is also a hint of lesser phase available at ~100 Hz now compared to a year ago.
There seems to be significant phase loss in the TTFSS path, which is limiting the IMC OLTF to <100 kHz.
See Attachment #1 and #2. The former shows the phase loss, while the latter is just to confirm that the optical gain of the error point is roughly the same, since I noticed this after working on and replacing the RF frequency distribution unit. Unfortunately there have been many other changes also (e.g. the work that Rana and Koji did at the IMC rack, swapping of backplane controls etc etc - maybe they have an OLTF measurement from the time they were working?) so I don't know which is to blame. Off the top of my head, I don't see how the RF source can change the phase lag of the IMC servo at 100 kHz. The only part of the IMC RF chain that I touched was the short cable inside the unit that routes the output of the Wenzel source to the front panel SMA feedthrough. I confirmed with a power meter that the power level of the 29.5 MHz signal at that point is the same before and after my work.
The time domain demod monitor point signals appear somewhat noisier in today's measurement compared to some old data I had from 2018, but I think this isn't significant. Once the SR785 becomes available, I will measure the error point spectrum as well to confirm. One thing I noticed was that, like many of our 1U/2U chassis units, the feedthrough returns are shorted to the chassis on the RF source box (and hence presumably also to the rack). The design doc for this box makes many statements about the precautions taken to avoid this, but stops short of saying whether the desired behavior was realized, and I can't find anything about it in the elog. Can someone confirm that the shields of all the connectors on the box were ever properly isolated? My suspicion is that the shorting is happening where the all-metal N-feedthroughs touch the drilled surfaces on the front panel - while the front and back surfaces of the panel are insulating, the machined surfaces are not.
This is an unacceptable state, but no clear ideas for troubleshooting it quickly (without going piece by piece through the IMC servo chain) occur to me. I still don't understand how the freq source work could have resulted in this problem but I'm probably overlooking something basic. I'm also wondering why the differential receiving at the TTFSS error point did not require a gain adjustment of the IMC servo? Shouldn't the differential-receiving-single-ended-sending have resulted in an overall x0.5 gain?
Update 8 Dec 1200: To test the hypothesis, I bypassed the SR560 based differential receiving and restored the original config. I am then able to run with the original gain settings, and you see in Attachment #4 that the IMC OLTF UGF is back above 100 kHz. It is still a little lower than it was in June 2019, not sure why. There must be some saturation issues somewhere in the signal chain because I cannot preserve the differential receiving and retain 100 kHz UGF, either by raising the "VCO gain" on the MC servo board, setting the SR560 to G=2, or raising the "Common Gain Adjust" on the FSS box by 6 dB. I don't have a good explanation for why this worked for some weeks and failed now - maybe some issue with the SR560? We don't have many working units so I didn't try switching it.
So either there is a whole mess of lines or the frequency noise suppression is limited. Sigh.
This work is now complete. The box was characterized and re-installed in 1X2. I am able to (briefly) lock the IMC and see PDH fringes in POX and POY so the lowest order checks pass.
Even though I did not deliberately change anything in the 29.5 MHz path, and I confirmed that the level at the output is the expected 13 dBm, I had to lower the IN1 gain of the IMC servo by 2 dB to have a stable lock - should confirm if this is indeed due to higher optical gain at the IMC error point, or some electrical funkiness. I'm not delving into a detailed loop characterization today - but since my work involved all elements in the RF modulation chain, some detailed characterization of all the locking loops should be done - I will do this in the coming week.
After tweaking the servo gains for the POX/POY loops, I am able to realize the single arm locks as well (though I haven't done the characterization of the loops yet).
I'm leaving the PSL shutter open, and allowing the IMC autolocker to run. The WFS loops remain disabled for now until I have a chance to check the RF path as well.
Unrelated to this work: Koji's swapping back of the backplane cards seems to have fixed the WFS2 issue - I now see the expected DC readbacks. I didn't check the RF readbacks tonight.
Update 7 Dec 2020 1 pm: A ZHL-2 with heat sink attached and a 11.06 MHz Wenzel source were removed from the box as part of this work (the former was no longer required and the latter wasn't being used at all). They have been stored in the RF electronics cabinet along the east arm.
This turned out to be a much more involved project than I expected. The layout is complete now, but I found several potentially damaged sections of cabling (the stiff cables don't have proper strain relief near the connectors). I will make fresh cables tomorrow before re-installing the unit in the rack. Several changes have been made to the layout so I will post more complete details after characterization and testing.
I was poring over minicircuits datasheets today, and I learned that the minicircuits bandpass filters (SBP10.7 and SBP60) are not bi-directional! The datasheet clearly indicates that the Male SMA connector is the input and the Female SMA connector is the output. Almost all the filters were installed the other way around 😱 . I'll install them the right way around now.
I have the setup built for the AA/AI board testing around the PD testing area. Please let me leave it like that for a week or so.
12/4 TF Tested 5 PCBs
12/6 TF Tested 19 PCBs (12min/PCB) - found 1 failure (S2001479 CH1) -> Fixed 12/11
12/8 TF Tested 16 PCBs (12min/PCB)
PSD Tested 4 PCBs (11min/PCB)
12/11 TF Tested 10 PCBs + 1 fixed channel (All channels checked)
PSD Tested 10 PCBs (11min/PCB)
12/14 PSD Tested 4 PCBs (6.5min/PCB) fixed noise issue of 2 ch, TF issue of 1 ch
12/15 PSD Tested 32 PCBs (6.5min/PCB) fixed noise issue of 1ch
Temp dependence measurement
I checked the backplane connection for IMC WFS2 and found that the cables for IMC WFS2 and the IMC demod were swapped during my IMC noise hunting activities. I reverted it just now.
But we need to check if this damaged anything such as the WFS2 head, the WFS2 demod, etc, once the IMC locking is back.
1. That's true. But we are already in that regime with the Var attn at 0dB, aren't we? We can reduce the input to the amp by 1-2dBm, sacrificing the EOM out by that amount (we can compensate this for the demod out by removing the 1dB attn).
2. Not 100% sure but one possible explanation is that we wanted to keep the Marconi output large (or as large as possible) to keep the SNR between the signal and the noise of the driver in Marconi. The attenuator is less noisy compared to the driver noise.
3. My guess is that theoretically we were supposed to have 13dBm input and 20dBm output in design. However, the actual input was as such. We can restore it to the 13dBm input.
I'm open to either approach. If the full replacement requires a lot of machining, maybe I will stick to just the 55 MHz line. But if only a couple of new holes are required, it might be advantageous to do the revamp while we have the box out? What do you think?
BTW, now that I look more closely at the RF chain, I have several questions:
I guess it is feasible to have +17 dBm of 55 MHz signal to plug into the Quad Demod chassis - e.g. drive the 55 MHz input with 20 dBm, pick off 3dBm to the front panel for ASC. Then we can even have several "spare" 55 MHz outputs and still satisfy the 9 dBm input that the ZHL-2 in the 55 MHz chain wants (though again, isn't this dangerously close to the 1dB compression point?). The design doc claims to have done some Optickle modeling, so I guess there isn't really any issue?
Are you going to do a full replacement of the 55MHz system? Or just remove the 7dBm and then implement the proposed modification for the 55MHz line?
Let's use RG405 for better shielding. It is not too stiff. The bending (just once) does not break the cable.
I removed the Frequency Generation box from the 1X2 rack. For the time being, the PSL shutter is closed, since none of the cavities can be locked without the RF modulation source anyways.
Prior to removal, I did the following:
One thing I noticed was that we're using very stiff coax cabling (RG405) inside this box? Do we need to stick with this option? Or can we use the more flexible RG316? I guess RG405 is lower loss, so it's better. I can't actually find any measurement of the shielding performance in my quick google searching but I think the claim on the call yesterday was that RG405 with its solder soaked braids offer superior shielding.
Before doing any modification you should check how much the distributed powers are at the ports.
Also your modification will change the relative phase between 11MHz and 55MHz.
Can you characterize how much phase difference you have between them, maybe using the modulation of the main Marconi? And you might want to adjust it to keep the previous value (or any new value) after the modification by adding a cable inside?