ID | Date | Author | Type | Category | Subject
  15850 | Sun Feb 28 22:53:22 2021 | gautam | Update | LSC | more PRMI checks here: what it is ain't exactly clear

I looked into this a bit more and crossed off some of the points Rana listed. In order to use REFL 55 as a sensor, I had to fix the frequent saturations seen in the MICH signals, at the nominal (flat) whitening gain of +18 dB. The light level on the REFL55 photodiode (13 mW), its transimpedance (400 ohm), and this +18dB (~ x8) gain, cannot explain signal saturation (0.7A/W * 400 V/A * 8 ~ 2.2kV/W, and the PRCL PDH fringe should be ~1 MW/m, so the PDH fringe across the 4nm linewidth of the PRC should only be a couple of volts). Could be some weird effect of the quad LT1125. Anyway, the fix that has worked in the past, and also this time, is detailed here. Note that the anomalously high noise of the REFL55_Q channel in particular remains a problem. After taking care of that, I did the following:
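A back-of-the-envelope check of the saturation argument above, as a minimal Python sketch (all inputs are the numbers quoted in the text; taking half the linewidth as the fringe half-width is my assumption):

resp = 0.7                  # A/W, photodiode responsivity
z_trans = 400.0             # V/A, REFL55 transimpedance
wht = 10 ** (18 / 20)       # +18 dB flat whitening gain, ~8x

gain_v_per_w = resp * z_trans * wht
print(f"signal chain gain: {gain_v_per_w / 1e3:.1f} kV/W")   # ~2.2 kV/W

pdh_slope = 1e6             # W/m, quoted PRCL PDH fringe slope
half_width = 4e-9 / 2       # m, half the quoted 4 nm PRC linewidth

v_peak = gain_v_per_w * pdh_slope * half_width
print(f"peak PDH signal after whitening: ~{v_peak:.1f} V")   # a few volts, short of the rails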

  1. PRMI (ETMs misaligned) locking with sidebands resonant in the PRC was restored - REFL55_I was used for PRCL sensing and REFL55_Q was used for MICH sensing. The locks are acquired nearly instantaneously if the alignment is good, and they are pretty robust, see Attachment #1 (the lock losses were IMC related and not really any PRC/MICH problem).
  2. Measured the loop OLTFs using the usual IN1/IN2 technique. The PRCL loop looks just fine, but the MICH loop UGF is apparently very low. I can't just raise the loop gain because of the feature at ~600 Hz. Not sure what the origin of this is; it isn't present in the analogous TF measurement when the PRMI is locked with the carrier resonant (REFL11_I for PRCL sensing, AS55_Q for MICH sensing). I will post the loop breakdown later.
  3. Re-confirmed that the MICH-->PRCL coupling couldn't be nulled completely in this config either.
    • The effect is a geometric one - a 1 unit change in MICH causes a 1/sqrt(2) change in PRCL. 
    • The actual matrix element that best nulls a MICH drive in the PRCL error point is -0.34 (this has not changed from the PRMI resonant on carrier locking). Why should it be that we can't null this element, if the mechanical transfer functions (see next point) are okay?
  4. Looked at the mechanical actuator TFs again (since we forgot to save plots on Friday), by driving the BS and PRM with sine waves (311.1 Hz), one at a time, and looking at the response in REFL55_I and REFL55_Q. Some evidence of funkiness here already. I can't find any configuration of digital demod phase that gives me a PRCL/MICH sensing ratio of ~100 in REFL55_I and, simultaneously, a MICH/PRCL sensing ratio of ~100 in REFL55_Q. The results are in Attachment #5.
  5. Drove single frequency lines in MICH and PRCL at 311.1 and 313.35 Hz respectively, for 5 minutes, and made the radar plots in Attachments #2 and #3 (a minimal sketch of this line demodulation is given right after this list). Long story short - even in the "nominal" configuration where the sidebands are resonant in the PRC and the carrier is rejected, there is poor separation in sensing. 
    • Attachment #2 is with the digital REFL55 demod phase set to 35 degrees - I thought this gave the best PRCL sensing in REFL55_I (eyeballed it roughly by looking at free-swinging PDH fringes on ndscope).
    • But the test detailed in bullet #4, and Attachment #2 itself, suggested that PRCL was actually being sensed almost entirely in the Q phase signal.
    • So I changed the digital demod phase to -30 degrees (did a more quantitative estimate with free-swinging PDH fringes on ndscope, horn-to-horn voltages etc).
    • The same sine-wave-driving procedure now yields Attachment #3. Indeed, now PRCL is sensed almost perfectly in REFL55_I, but the MICH signal is also nearly entirely in REFL55_I. How can the lock be so robust if this is really true? 
  6. Attachment #4 shows some relevant time domain signals in the PRMI lock with the sidebands resonant. 
    • REFL11_I hovers around 0 when REFL55_I is used to sense and lock PRCL - good. The m/ct calibrations for REFL11_I and REFL55_I are different, so this plot doesn't directly tell us how good the PRCL loop is based on the out-of-loop REFL11_I sensor.
    • ASDC is nearly 0, good.
    • POP22_I is ~200cts (and POP22_Q is nearly 0) - I didn't see any peak at the drive frequency when driving PRCL with a sine wave, so no linear coupling of PRCL to the f1 sideband buildup, which would suggest there is no PRCL offset.
    • Couldn't do the analogous test for AS110 as I removed that photodiode for the AS WFS - it is pretty simple to re-install it, but the ASDC level already doesn't suggest anything crazy here.
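For reference, here is a minimal sketch of the single-frequency line demodulation behind the radar plots of bullet 5, run on placeholder data (the sampling rate and channel handling are my assumptions; the line frequencies are the ones quoted above):

import numpy as np

def demod_line(x, fs, f_line):
    # Demodulate x at f_line; |z| and angle(z) give the radar-plot
    # radius and azimuth for that drive.
    t = np.arange(len(x)) / fs
    return 2 * np.mean(x * np.exp(-2j * np.pi * f_line * t))

fs = 16384                                   # Hz, assumed rate of the LSC signals
t = np.arange(int(5 * 60 * fs)) / fs         # 5 minutes of data, as above
# Placeholder standing in for a REFL55 quadrature while the MICH and PRCL
# lines run at 311.1 Hz and 313.35 Hz:
x = 1.0 * np.cos(2 * np.pi * 311.1 * t + 0.3) + 0.1 * np.cos(2 * np.pi * 313.35 * t)

for name, f_line in [("MICH", 311.1), ("PRCL", 313.35)]:
    z = demod_line(x, fs, f_line)
    print(f"{name}: |z| = {abs(z):.3f}, phase = {np.degrees(np.angle(z)):+.1f} deg")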

Rana also suggested checking if the digital demod phase that senses MICH in REFL55_Q changes from the free-swinging Michelson (PRM misaligned) to the PRMI aligned - we can quantify any macroscopic mismatch in the PRC length using this measurement. I couldn't see any MICH signal in REFL55_Q with the PRM misaligned and the Michelson fringing. Could be that +18dB is insufficient whitening gain, but I ran out of time this afternoon, so I'll check later. I'm also not sure whether the double attenuation by the PRM makes this impossible.

  15853 | Mon Mar 1 16:27:17 2021 | gautam | Update | LSC | PRM violin filter excessive?

The PRM violin filter seems very suboptimal - the gain peaking shows up in the MICH OLTF, presumably via the MICH-->PRM LSC output matrix element. I plot the one used for the BS for comparison in Attachment #1; it seems much more reasonable. Why does the PRM need so many notches? Is this meant to cover some violin modes of PR2/PR3 as well? Do we really need that? Are the PR2/PR3 violin modes really so close in frequency to those of the 3" SOS? I suppose they could be, since the suspension wire is thinner and the mass is lighter, and the two effects nearly cancel - but we don't actuate on PR2/PR3? According to the earlier elog in this thread, this particular filter wasn't deemed offensive and was left on.

Indeed, as shown in Attachment #2, I can realize a much healthier UGF for the MICH loop with just a single frequency notch (black reference trace) rather than the existing "PRvio1,2" filter, FM2 (live red trace). The PR violins are eating a lot of phase at ~600 Hz.

Quote:

We turned off many excessive violin mode bandstop filters in the LSC.

  15854 | Tue Mar 2 13:39:31 2021 | rana | Update | LSC | PRM violin filter excessive?

agreed, seems excessive. I always prefer a bandstop over a notch in case the eigenfrequency wanders, but the bandstop could be made just a few Hz wide.
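For what it's worth, a sketch of what a few-Hz-wide bandstop could look like in scipy - the 628 Hz center and 4 Hz width are placeholder numbers, not the measured PRM violin parameters:

import numpy as np
import scipy.signal as sig

fs = 16384.0                       # Hz, assumed model rate
f0, bw = 628.0, 4.0                # placeholder center frequency and width

# 4th-order elliptic bandstop: deep and narrow, so the phase cost down at
# ~100 Hz (near the MICH UGF) stays small.
sos = sig.ellip(4, 1, 40, [f0 - bw / 2, f0 + bw / 2], btype='bandstop',
                fs=fs, output='sos')

w, h = sig.sosfreqz(sos, worN=np.array([100.0, f0]), fs=fs)
for fi, hi in zip(w, h):
    print(f"{fi:6.1f} Hz: |H| = {abs(hi):.2e}, phase = {np.degrees(np.angle(hi)):+.2f} deg")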

 

  15855 | Tue Mar 2 19:52:46 2021 | gautam | Update | LSC | REFL55 demod board rework

There were multiple problems with the REFL55 demod board. I fixed them and re-installed the board. The TFs and noise measured on the bench now look more like what is expected from a noise model. The noise in-situ also looked good. After this work, my settings for the PRMI sideband lock don't work anymore so I probably have to tweak things a bit, will look into it tomorrow.

  15856 | Wed Mar 3 11:51:07 2021 | Yehonathan | Update | SUS | OSEM testing for SOSs

I finished testing the OSEMs. I put all the OSEMs back in the box. The OSEMs were divided into several bags. I put the OSEM box next to the south flow bench on the floor.

I have uploaded the OSEM catalog to the wiki. I will upload the LED spot images later.

In summary:

  • Total: 64 OSEMs (31 long, 33 short).
  • Perfectly centered LED spots, ready for C&B: 30 (12 long, 18 short).
  • Perfectly centered LED spots, needing some work (missing pigtails, weird screws): 7 (5 long, 2 short).
  • Slightly off-centered (subjective) LED spots, ready for C&B: 20 (7 long, 13 short).
  • Slightly off-centered (subjective) LED spots, needing some work (missing pigtails, weird screws): 4 (all long).
  • Defective, or LED spot way off-center: 3.

  15859 | Wed Mar 3 22:13:05 2021 | gautam | Update | LSC | REFL55 demod board rework

After this work, I measured that the orthogonality was poor. I confirmed on the bench that the PQW-2-90 was busted: pin 2 (0 degree output) showed a sensible signal at half the input level, but pin 6 had far too small an output, and the phase difference was more like 45 degrees, not 90 degrees. I can't find any spares of this part in the lab - however, we do have the equivalent part used in the aLIGO demodulator. Koji has kindly agreed to do the replacement (it requires a bit of jumper wiring because the pin mapping between the two parts isn't exactly identical - in fact, the circuit schematic uses a transformer to do the splitting, but at some unknown point in time the change to the Mini-Circuits part was made). Anyway, until this is restored, I defer the PRMI sideband locking.

Quote:

There were multiple problems with the REFL55 demod board. I fixed them and re-installed the board. The TFs and noise measured on the bench now look more like what is expected from a noise model. The noise in-situ also looked good. After this work, my settings for the PRMI sideband lock don't work anymore so I probably have to tweak things a bit, will look into it tomorrow.

  15860 | Wed Mar 3 23:23:58 2021 | gautam | Update | ALS | Arm cavity scan

I see no evidence of anything radically different from my PSL table optical characterization in the IMC transmitted beam, see Attachment #1. The lines are just a quick indicator of what's what, and no sophisticated peak fitting has been done yet (so the apparent offsets between the transmission peaks and some of the vertical lines are, I believe, just artefacts of my rough calibration). The modulation depths recovered from this scan are in good agreement with what I report in the linked elog, ~0.19 for f1 and ~0.24 for f2. On the bright side, the ALS just worked and didn't require any electronics fudgery from me. So the mystery continues.

  15864 | Thu Mar 4 23:16:08 2021 | Koji | Update | LSC | REFL55 demod board rework

A new hybrid splitter (DQS-10-100) was installed. As the amplification of the final stage is sufficient for the input level of 3 dBm, I have bypassed the input amplification (Attachment 1). One of the mixers was desoldered to check the power level. With a 1 dB attenuator, the output of the last ERA-5 was +17.8 dBm (Attachment 2). (The mixer was then resoldered.)

With LO = 3 dBm, RF = 0 dBm, and delta_f = 30 Hz, the output was 340 mVpp and the phase difference was 88.93 deg (Attachments 3/4; the traces were averaged).

  15867 | Fri Mar 5 13:53:57 2021 | gautam | Update | LSC | REFL55 demod board rework

0 dBm ~ 0.63 Vpp. I guess there is ~4 dB total loss (3 dB from the splitter and 1 dB of total excess loss above theoretical from various components) between the SMA input and each RF input of the JMS-1-H mixer, which has an advertised conversion loss of ~6 dB. So the RF input to each mixer, for 0 dBm at the front panel SMA, is ~-4 dBm (= 0.4 Vpp), and the I/F output is 0.34 Vpp. So the conversion loss is only ~-1.5 dB? Seems really low? I assume the 0.34 Vpp is at the input to the preamp? If it's after the preamp, then the numbers still don't add up, because with the nominal 6 dB conversion loss, the output should be ~2 Vpp? I will check it later.
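For reference, the dBm-to-Vpp arithmetic above as a minimal sketch (the 50 ohm termination is assumed):

import math

def dbm_to_vpp(p_dbm, R=50.0):
    v_rms = math.sqrt(10 ** (p_dbm / 10) * 1e-3 * R)
    return 2 * math.sqrt(2) * v_rms

print(dbm_to_vpp(0))              # ~0.63 Vpp at the front-panel SMA
rf_at_mixer = dbm_to_vpp(-4)      # ~0.40 Vpp after ~4 dB splitter/excess loss
if_out = 0.340                    # Vpp, measured at TP7/TP6 (before the preamp)

conv_loss = 20 * math.log10(if_out / rf_at_mixer)
print(f"implied conversion loss: {conv_loss:.1f} dB")   # ~ -1.5 dB, vs ~ -6 dB advertised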

Quote:

With LO = 3 dBm, RF = 0 dBm, and delta_f = 30 Hz, the output was 340 mVpp and the phase difference was 88.93 deg (Attachments 3/4; the traces were averaged).

  15869 | Fri Mar 5 15:31:23 2021 | Koji | Update | LSC | REFL55 demod board rework

I missed noting: the IF test was done at TP7 and TP6 using Pomona clips, i.e., before the preamp.

 

  15871 | Fri Mar 5 16:24:24 2021 | gautam | Update | LSC | REFL55 demod board re-installed in 1Y2

I don't have a good explanation why, but I too measured numbers similar to Koji's. The overall conversion gain for this board (including the +20 dB gain from the daughter board) was measured to be ~5.3 V/V on the bench, and ~16000 cts/V in the CDS system (100 Hz offset from the LO frequency). It would appear that the effective JMS-1-H conversion loss is <2 dB. Seems fishy, but I can't find anything else obviously wrong with the circuit (e.g. a pre-amp for the RF signal that I missed - there is none).
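As a rough consistency check of the cts/V number - assuming the ADC is a 16-bit, +/-10 V part, which is my assumption, not something stated here:

adc_cts_per_v = 2 ** 16 / 20.0    # ~3277 cts/V for a 16-bit, +/-10 V ADC
bench_gain = 5.3                  # V/V, measured on the bench
print(f"{adc_cts_per_v * bench_gain:.0f} cts/V")   # ~17400, cf. ~16000 measured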

I also attach the result of the measured noise at the outputs of the daughter board (i.e. what is digitized by the ADC), see Attachment #2. Apart from the usual forest of lines of unknown origin, there is still a significant excess above the voltage noise of the OP27, which is expected to be the dominant noise source in this configuration. Nevertheless, considering that we have only 40 dB of whitening gain, we do not expect to see this noise directly in the digitized signal (above the ADC noise of ~1 uV/rtHz). Note that the measured noise today, particularly for the Q channel, is significantly lower than before the changes were made.

  15872 | Fri Mar 5 17:48:25 2021 | Jon | Update | CDS | Front-end testing

Today I moved the c1bhd machine from the control room to a new test area set up behind (west of) the 1X6 rack. The test stand is pictured in Attachment 1. I assembled one of the new IO chassis and connected it to the host.

I/O Chassis Assembly

  • LIGO-style 24V feedthrough replaced with an ATX 650W switching power supply
  • Timing slave installed
  • Contec DO-1616L-PE card installed for timing control
  • One 16-bit ADC and one 32-channel DO module were installed for testing

The chassis was then powered on and LED lights illuminated indicating that all the components have power. The assembled chassis is pictured in Attachment 2.

Chassis-Host Communications Testing

Following the procedure outlined in T1900700, the system failed the very first test of the communications link between chassis and host, which is to check that all PCIe cards installed in both the host and the expansion chassis are detected. The Dolphin host adapter card is detected:

07:06.0 PCI bridge: Stargen Inc. Device 0102 (rev 02) (prog-if 00 [Normal decode])
    Flags: bus master, fast devsel, latency 0
    Bus: primary=07, secondary=0e, subordinate=0e, sec-latency=0
    I/O behind bridge: 00002000-00002fff
    Prefetchable memory behind bridge: 00000000c0200000-00000000c03fffff
    Capabilities: [40] Power Management version 2
    Capabilities: [50] MSI: Enable- Count=1/1 Maskable- 64bit+
    Capabilities: [60] Express Downstream Port (Slot+), MSI 00
    Capabilities: [80] Subsystem: Device 0000:0000
    Kernel driver in use: pcieport

However the OSS PCIe adapter card linking the host to the IO chassis was not detected, nor were any of the cards in the expansion chassis. Gautam previously reported that the OSS card was not detected by the host (though it was not connected to the chassis then). Even now connected to the IO chassis, the card is still not detected. On the chassis-side OSS card, there is a red LED illuminated indicating "HOST CARD RESET" as pictured in Attachment 3. This may indicate a problem with the card on the host side. Still more debugging to be done.

  15873 | Fri Mar 5 22:25:13 2021 | gautam | Update | LSC | PRMI 1f SB locking recovered

Now that the REFL55 signal chain is capable of providing balanced, orthogonal readout of the two quadratures, I was able to recover the 1f SB resonant lock pretty easily. Ran sensing lines for ~5mins, still looks weird. But I didn't try to optimize anything / do other checks (e.g. actuate MICH using ITMs instead of BS) tonight, and I'm craving the Blueberry pie Rana left me. Will continue to do more systematic tests in the next days.

  15874 | Sat Mar 6 12:34:18 2021 | gautam | Update | LSC | Sensing matrix settings messed with

To my dismay, I found today that somebody had changed the oscillator frequencies for the sensing matrix infrastructure we have. The change happened 2 days and 2 hours ago (I write this at ~1230 on Saturday, 3/6), i.e. ~1030am on Thursday. According to the elog, this is when Anchal and Paco were working on the interferometer, but I can find no mention of these settings being changed. Not cool guys 😒 .

This was relatively easy to track down but I don't know what else may have been messed with. I don't understand how anything that was documented in the elog can lead to this weird doubling of the frequencies.

I have now restored the correct settings. The "sensing matrix" I posted last night is obviously useless.

  15875 | Sun Mar 7 15:26:10 2021 | gautam | Update | LSC | Housekeeping + more PRMI
  1. Beam pointing into PMC was tweaked to improve transmission.
  2. AS110 photodiode was re-installed on the AS table - I picked off 30% of the light going to the AS WFS using a beamsplitter and put it on the AS110 photodiode.
  3. Adjusted the ASDC whitening gain - we have been running nominally with +18 dB, but after the Sept 2020 vent, there is ~3x the light incident on the AS55 RFPD (from which the ASDC signal is derived). I want to run the dither alignment servos that use this PD with the same settings as before, hence this adjustment.
  4. Adjusted the digital demod phases of the POP22, POP110 and AS110 signals with the PRMI locked (sidebands resonant). I want these to be useful for debugging the PRMI. The phases were adjusted so that AS110_Q, POP22_I and POP110_I contain the signal (= sideband buildup) when the PRMI is locked.
  5. Ran the actuator calibration routine for the BS, ITMX and ITMY - I'll try and do the PRM and ETMs as well later.
  6. With the PRMI locked (sidebands resonant), looked at the sideband power buildup. POP22 and POP110 remain stable, but there is some low frequency variation in the AS110_Q channel (but not the I channel, so this is really a time varying transmission of the f2 sideband to the dark port). What's that about? Also unsure about those abrupt jumps in the POP22/POP110 signals, see Attachment #1 (admittedly these are slow channels). I don't see any correlation in the MICH control signal.
  7. Measured the loop shapes of the MICH (UGF ~90 Hz, PM ~30 degrees) and PRCL (UGF ~110 Hz, PM ~30 degrees) loops - the stability margins and loop UGFs seem reasonable to me.
  8. Tried nulling the MICH-->PRCL coupling by adjusting the MICH-->PRM matrix element - as has been the case for a while, unable to do any better, and I can't null that line as we expect to be able to.
  9. Not expecting to get anything sensible, but ran some sensing matrix lines (at the correct frequencies this time).
  10. Tried locking the PRMI with MICH actuation to an ITM instead of the BS - I can realize the lock, but the loop OLTF I measure in this configuration is very weird and needs more investigation. I may look into this later this evening.

I was also reminded today of the poor reliability of the LSC whitening electronics. Basically, there may be hidden saturations in all the channels that have a large DC value (e.g. the photodiode DC mon channels) due to the poor design of the cascaded gain stages. I was thinking about using the REFL DC channel to estimate the mode-matching into the PRC, but this has a couple of problems. Electronically, there may be some signal distortion due to the aforementioned problem. But in addition, optically, estimating the mode-matching into the PRC by comparing REFL DC levels in single bounce off the PRM and with the PRMI locked has the problem that the mode-matching is degenerate with the intra-cavity loss, which is of the same order as the mode mismatch (a percent or two, I claim). If Koji or someone else can implement the fix suggested by Hartmut for all the LSC whitening channels, that'd give us more faith in the signals. It may be less work than replacing all the whitening filters with a better design - e.g. the aLIGO ISC whitening filter, which implements the cascaded gain stages using single OP27s and, more importantly, has a 1 kohm series resistance at the input to the op amp (so the preceding stage never has to drive > 10V/1kohm ~ 10mA of DC current) - which would presumably reduce distortion.

  15876 | Sun Mar 7 19:56:27 2021 | Anchal | Update | LSC | Sensing matrix settings messed with

I understand this must be frustrating for you. But we did not change these settings, at least not knowingly. We have documented all the things we did there. The only things I can think of which could possibly change any of those channels are the scripts we ran that are mentioned, and the burt restore that we did on all channels (which wasn't really necessary). We promise to be more vigilant of changes that occur when we are present in the future.

Quote:

To my dismay, I found today that somebody had changed the oscillator frequencies for the sensing matrix infrastructure we have. The change happened 2 days and 2 hours ago (I write this at ~1230 on Saturday, 3/6), i.e. ~1030am on Thursday. According to the elog, this is when Anchal and Paco were working on the interferometer, but I can find no mention of these settings being changed. Not cool guys 😒 .

This was relatively easy to track down but I don't know what else may have been messed with. I don't understand how anything that was documented in the elog can lead to this weird doubling of the frequencies.

I have now restored the correct settings. The "sensing matrix" I posted last night is obviously useless.

 

  15879 | Mon Mar 8 12:54:54 2021 | gautam | Update | Equipment loan | 40m-->Cryo
  1. Busby box
  2. SR554 transformer preamplifier
  15880 | Mon Mar 8 17:09:29 2021 | gautam | Update | SUS | PRM coil actuators heavily imbalanced

I realized I hadn't checked the PRM actuator as thoroughly as I had the others. I used the Oplev as a sensor to check the coil balancing, and I noticed that while all 4 coils show up with the expected 1/f^2 profile at the Oplev error point, the actuator gains seem imbalanced by a factor of ~5. The phase isn't flat because of some filters in the Oplev electronics, I guess. The Oplev loops were disabled for the measurement, and the excitations were small enough that the beam stayed reasonably well centered on the QPD throughout. This imbalance seems very large to me - the values in the coil output filter gains lead me to expect more like a ~10% mismatch in the actuation strengths, and similar tests on other optics in the past, e.g. ETMY, have yielded much more balanced results. I'm collecting some free-swinging PRM data now as an additional check. I verified that all the coils are at least actuatable, by applying a 500 ct step at the offset of the coil output FM, and saw that the optic moved (it was such a test that revealed that MC1 had a busted actuator some time ago). If the eigenmode spectra look as expected, I think we can rule out broken magnets, but I suppose the magnets could still be not well matched in strength?

  15883 | Mon Mar 8 22:01:26 2021 | gautam | Update | LSC | More PRMI

There are still many mysteries remaining - e.g. the MICH-->PRCL contribution still can't be nulled. But for now, I have the settings that keep the PRMI locked fairly robustly with REFL55 I/Q or REFL165 I/Q (I quadrature for PRCL, Q for MICH in both cases), see Attachment #1 and Attachment #2 respectively. For the 1f locking, the REFL55 digital demod phase was fine-tuned to minimize the frequency noise (generated by driving MC2) coupling to the Michelson readout (as the Michelson is supposed to be immune) - the coupling was measured to be ~60 dB larger at the PRCL error point than at MICH. There was still nearly unity coherence between my MC2 drive and the MICH error point demodulated at the drive frequency, but I was not able to null it any better than this. With the PRMI (ETMs misaligned) locked on the 1f signals, I made the measurement shown in Attachment #1 and used it to determine the demod phase that would best enable REFL165_I to be a PRCL sensor. Rana thinks that if there is some subtle effect in the marginally stable PRC, we would not see it unless we do a mode scan (time consuming to set up and execute). So I'm just going to push on with the PRFPMI locking - let's see if the clean arm mode forces a clean TEM00 mode to be resonant in the PRC, and if that can sort out the lack of orthogonality between MICH/PRCL in the 1f sensors (after all, we only care about the 3f signals insofar as they allow us to lock the interferometer). I'll try the PRMI with arms on ALS tomorrow eve.

I have no idea what to make of how the single frequency lines I am driving in MICH and PRCL show up in REFL11 and REFL33 - the signals are apparently completely degenerate (in optical quadrature). How this is possible, even though the PRMI remains stably locked and POP22/POP110/AS110 report stable sideband buildup, is not clear to me.

  15886 | Tue Mar 9 14:30:22 2021 | Yehonathan | Update | SUS | OSEM testing for SOSs

I finished ranking the OSEMS on the OSEM wiki page.

I also moved the OSEM data folder from /home/export/home to /users/public_html and created a soft link instead. I have done the same for the 40m_TIS folder that I uploaded there a while ago.

  15888 | Tue Mar 9 15:19:03 2021 | Koji | Update | SUS | OSEM testing for SOSs

What are the statistics, i.e. # of Good OSEMs, # of OK OSEMs, etc.?

  15890 | Tue Mar 9 16:52:47 2021 | Jon | Update | CDS | Front-end testing

Today I continued with assembly and testing of the new front-ends. The main progress is that the IO chassis is now communicating with the host, resolving the previously reported issue.

Hardware Issues to be Resolved

Unfortunately, though, it turns out one of the two (host-side) One Stop Systems PCIe cards sent from Hanford is bad. After some investigation, I ultimately resolved the problem by swapping in the second card, with no other changes. I'll try to procure another from Keith Thorne, along with some spares.

Also, two of the three switching power supplies sent from Livingston (250W Channel Well PSG400P-89) appear to be incompatible with the Trenton BPX6806 PCIe backplanes in these chassis. The power supply cable has 20 conductors and the connector on the board has 24. The third supply, a 650W Antec EA-650, does have the correct cable and is currently powering one of the IO chassis. I'll confirm this situation with Keith and see whether they have any more Antecs. If not, I think these supplies can still be bought (not obsolete).

I've gone through all the hardware we've received, checked against the procurement spreadsheet. There are still some missing items:

  • 18-bit DACs (Qty 14; but 7 are spares)
  • ADC adapter boards (Qty 5)
  • DAC adapter boards (Qty 9)
  • 32-channel DO modules (Qty 2/10 in hand)

Testing Progress

Once the PCIe communications link between host and IO chassis was working, I carried out the testing procedure outlined in T1900700. This performs a series of checks to confirm basic operation/compatibility of the hardware and PCIe drivers. All of the cards installed in both the host and the expansion chassis are detected and appear correctly configured, according to T1900700. In the below tree, there is one ADC, one 16-ch DIO, one 32-ch DO, and one DolphinDX card:

+-05.0-[05-20]----00.0-[06-20]--+-00.0-[07-08]----00.0-[08]----00.0  Contec Co., Ltd Device 86e2
|                               +-01.0-[09]--
|                               +-03.0-[0a]--
|                               +-08.0-[0b-15]----00.0-[0c-15]--+-02.0-[0d]--
|                               |                               +-03.0-[0e]--
|                               |                               +-04.0-[0f]--
|                               |                               +-06.0-[10-11]----00.0-[11]----04.0  PLX Technology, Inc. PCI9056 32-bit 66MHz PCI <-> IOBus Bridge
|                               |                               +-07.0-[12]--
|                               |                               +-08.0-[13]--
|                               |                               +-0a.0-[14]--
|                               |                               \-0b.0-[15]--
|                               \-09.0-[16-20]----00.0-[17-20]--+-02.0-[18]--
|                                                               +-03.0-[19]--
|                                                               +-04.0-[1a]--
|                                                               +-06.0-[1b]--
|                                                               +-07.0-[1c]--
|                                                               +-08.0-[1d]--
|                                                               +-0a.0-[1e-1f]----00.0-[1f]----00.0  Contec Co., Ltd Device 8632
|                                                               \-0b.0-[20]--
\-08.0-[21-2a]--+-00.0  Stargen Inc. Device 0101
                \-00.1-[22-2a]--+-00.0-[23]--
                                +-01.0-[24]--
                                +-02.0-[25]--
                                +-03.0-[26]--
                                +-04.0-[27]--
                                +-05.0-[28]--
                                +-06.0-[29]--
                                \-07.0-[2a]--

Standalone Subnet

Before I start building/testing RTCDS models, I'd like to move the new front ends to an isolated subnet. This is guaranteed to prevent any contention with the current system, or inadvertent changes to it.

Today I set up another of the Supermicro servers sent by Livingston in the 1X6 test stand area. The intention is for this machine to run a cloned, bootable image of the current fb1 system, allowing it to function as a bootserver and DAQ server for the FEs on the subnet.

However, the hard disk containing the fb1 image appears to be corrupted and will not boot. It seems to have been sitting disconnected in a box since ~2018, which is not a stable way to store data long term. I wasn't immediately able to recover the disk using fsck. I could spend some more time trying, but it might be most time-effective to just make a new clone of the fb1 system as it is now.

  15891 | Tue Mar 9 18:49:28 2021 | Yehonathan | Update | SUS | OSEM testing for SOSs

29 Good OSEMs, of which 1 is questionable (089, with a PD voltage of 1.5V) and 5 need some work (pigtailing; replace/remove/add screws). We have 4 pigtails. Schematics.

20 OK OSEMs (slightly off-centered LED spot), of which 3 need some work (pigtailing; replace/remove/add screws).

13 Bad OSEMs (way off-centered LED spot).

2 Defunct OSEMs.

-------

Ed: KA
Good: 23 complete OSEMs + 5 good ones which need soldering work (there are 4 pigtails; take one from a defunct OSEM).
OK: Use 7 good OSEMs for the sides. And keep some functional OSEMs as spares.

 

  15892 | Wed Mar 10 00:32:03 2021 | gautam | Update | LSC | PRFPMi

The interferometer can nearly be locked again. I was unable to fully hand off control from ALS-->RF, I suspect I may be using the wrong sign on the AO path (or some such other sub-optimal CM board settings). I'll hook up the SR785 and take some TFs tomorrow, that should give more insight into what's what. With the arms held off resonance, the PRMI acquires lock nearly instantly (REFL165 I for PRCL, REFL165 Q for MICH), and can stay locked nearly indefinitely, which is what I need so I can get the RF lock going. However the sensing matrix (for vertex DoFs, arms held off resonance) still makes no sense to me. The MICH loop has ~50 Hz UGF and the PRCL loop ~150 Hz. I think the MICH loop shape can be optimized a little for better low frequency suppression, but this isn't the show-stopper at the moment. For record-keeping, the ALS performance was excellent and other subsystems were nominal tonight.

  15894 | Wed Mar 10 11:55:22 2021 | gautam | Update | SUS | PRM suspension suspect

The procedure is that the optic is kicked to excite it, and allowed to ring down for ~1ksec, with damping turned off. The procedure is repeated 15 times for some averaging. 

Attachment #1 - sensor spectra from yesterday.

Attachment #2 - peaks using the naive diagonalization matrix from yesterday.

Attachment #3 - Data from ~1 year ago. 

The y-axis in all plots is labelled as "cts/rtHz" but these are the DQed channels, which come after a "cts2um" CDS filter - so if that filter is accurate, then the y-axes may be read as um/rtHz.

I wonder if the September 2020 earthquake somehow damaged the PRM suspension, as this experiment would suggest that the problem is not only with the actuation. The data was gathered with the neutral position of the PRM (between kicks) well aligned for the PRMI, and the DC values of all the shadow sensors in this position are close to half-light (~1V, except for the side, which was more like 4V). Hard to say what exactly is happening, since only the PIT DoF has the weird asymmetric peak shape instead of the expected Lorentzian - I would have thought that a damaged wire or broken magnet would affect all 4 DoFs, but the F.C. spring experience on ETMY showed that anything is possible.

  15898 | Wed Mar 10 17:35:47 2021 | gautam | Update | SUS | Spooky action at a distance

As I was sitting in the control room, the PRM suspension watchdog tripped again. This time, there is clearly no seismic activity. Yet the BS suspension also shows a slight disturbance at the same time as the PRM; ITMY shows no perturbation though. My best hypothesis here is that the problem is electrical. In Attachment #1, you can see that all of the sensors go to -6000 cts (whut?) for ~30 seconds. Zooming in to that segment in Attachment #2, it would appear that the light level detected by the PD changed dramatically (went dark?) on all 5 coils. The 4 face coils have the same time constant, but the side has a different one; in any case, this level of light change in half a second is clearly not physical. Then the watchdog trips because this huge apparent motion elicits a kick from the damping loops.

The plots I attach are for the DQed sensor channels, so there is some digital filtering involved. But I confirmed that the signal doesn't go negative if I disable the input to the filter module. So it would seem that the voltage input to the ADC really changed polarity, which seems unphysical. It could be the satellite box or the whitening electronics, I suppose - I think we can exclude bad cabling, as that would just lead to the signals going to 0, whereas it would appear here that they really did change sign (confirmed by looking at the ULPDmon channel, which is digitized by Acromag, and which reports -10 V at the time of the glitch). But why should the BS care about the PRM electronics going wonky?

In addition to an exorcist, we need functioning electronics!


This optic has been hampering my locking attempts all evening. I switched the PRM and SRM satellite boxes, but then I remembered the PRM has the Al foil "hats" to attenuate scattered light. Of course, the Al foil is conducting and can short the OSEM leads. I put some Kapton pieces in between the OSEMs and the foil to try and mitigate this issue, but I suppose over time a piece could have slipped and is making some intermittent contact, shorting the PD anode and cathode (that would explain the PD reporting -10 V instead of some physical value).

If this is the problem we would need a vent to address it. In the daytime I'll measure L and R of the coils to see if the actuator imbalance I reported is also due to the same problem...

  15899 | Wed Mar 10 19:58:27 2021 | gautam | Update | LSC | SR785 hooked up to CM board

In preparation for later this evening. The TT alignment wasn't visibly disturbed.

  15900 | Thu Mar 11 01:45:42 2021 | gautam | Update | LSC | PRFPMi
  1. PRM satellite box indeed seems to have been the culprit - shortly after I swapped it to the SRM, its shadow sensors went dark. I leave the watchdog tripped.
  2. I still was unable to realize the RF only IFO
    • Clearly my old settings don't work, so I tried to go about it systematically. First, try and transition CARM to RF, leave DARM on ALS.
    • As usual, I can realize the state where the arm powers are ~100, and the two paths are blended. 
    • But I'm not able to completely turn off the CARM_A path without blowing the lock.

Pity really, I was hoping to make it much further tonight. I think I'll have to go back to the high BW POX/POY lock, and also check out the conversion efficiency / noise of the daughter board on the REFL11 demod board. Compared to before my work on the RF source, the PRMI lock using REFL11 as an error signal has basically necessitated a change of the digital demod phase by 180 degrees - so I made the appropriate polarity changes in the CM_SLOW and AO paths (the assumption is that CARM in REFL11 would require the same change in digital demod phase, and I think this is a reasonable assumption - indeed, with the arm powers somewhat stable at ~100, if I look at the PDH signal in REFL11 I and Q, it does seem to show up largely in the I quadrature, pre digital phase rotation). Anyway, with so many weird effects (wonky PRM suspension, strange PRMI sensing, etc.), who knows what's going on. This will take a systematic effort.

I defer the electronics characterization to the daytime (if I feel I need it tomorrow, I'll do it; else, Koji has said he can do it on Friday).

Quote:

 I was unable to fully hand off control from ALS-->RF, I suspect I may be using the wrong sign on the AO path (or some such other sub-optimal CM board settings). I'll hook up the SR785 and take some TFs tomorrow, that should give more insight into what's what. 

  15902 | Thu Mar 11 08:13:24 2021 | Paco, Anchal | Update | SUS | IMC First Free Swing Test failed due to typo, restarting now

[Paco, Anchal]

The triggered code went on at 5:00 am today, but a last-minute change I made yesterday to increase the number of repetitions had an error and caused the script to exit, putting everything back to normal. So when we came in this morning, we found the mode cleaner locked continuously after one free swing attempt at 5:00 am. I've fixed the script and ran it for 2 hours starting at 8:10 am. Our plan is to get at least some data to play with while we are here. If the duration is not long enough, we'll try to run this again tomorrow morning. The new script is running in the same tmux session 'MCFreeSwingTest' on Rossa.

10:13 the script finished and IMC recovered lock.

Thu Mar 11 10:58:27 2021

The test ran successfully, with the mode cleaner optics coming back to normal at the end of it. We wrote some scripts to read the data and analyze it. More will come in future posts. No other changes were made today to the systems.

  15903 | Thu Mar 11 14:03:02 2021 | gautam | Update | LSC | AO path

There is some evidence of weird saturation, but the gain balancing (0.8 dB) and orthogonality (~89 deg) for the daughter board on the REFL11 demod board that generates the AO path error signal seem reasonable. This board would probably benefit from the AD797-->OP27 and thick-film-->thin-film swap, but I don't think this is to blame for being unable to execute the RF transition.

  15904 | Thu Mar 11 14:27:56 2021 | gautam | Update | CDS | timesync issue?

I have recently been running into the 4 MB/s data rate limit on testpoints - basically, I can't run the DTT TF and spectrum measurements that I was able to run while locking the interferometer this time last year. AFAIK, the major modification made was the addition of 4 DQ channels for the in-air BHD experiment - assuming the data is transmitted as double precision numbers, I estimate the additional load due to this change was ~500 kB/s (see the sketch below). Probably there is some compression so it is a bit more efficient (this naive calc would suggest we can only record 32 channels, and I counted 41 full rate channels in the model), but still, I can't think of anything else that has changed. Anyway, I removed the unused parts and recompiled/re-installed the models (c1lsc and c1omc). Holding off on a restart until I decide I have the energy to recover the CDS system from the inevitable crash. For documentation, I'm also attaching a screenshot of the schematic of the changes made.
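The rate estimate, spelled out (a sketch; the 16384 Hz rate and 8-byte doubles are assumptions consistent with the text):

n_ch, rate, nbytes = 4, 16384, 8                  # new DQ channels, Hz, bytes/sample
print(f"{n_ch * rate * nbytes / 1e3:.0f} kB/s")   # ~524 kB/s of extra load

# Naive channel budget against the 4 MB/s testpoint limit:
print(f"{4e6 / (rate * nbytes):.0f} channels")    # ~31, cf. the 41 full-rate channels counted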

Anyway, the main point of this elog is that at the compilation stage, I got a warning I've never seen before:

Building front-end Linux kernel module c1lsc...
make[1]: Warning: File 'GNUmakefile' has modification time 13 s in the future
make[1]: warning:  Clock skew detected.  Your build may be incomplete.

This prompted me to check the system time on c1lsc and FB - you can see there is a 1 minute offset (it is not a delay from me issuing the command to the two machines)! I suspect this NTP action is the reason. So maybe a model reboot is in order. Sigh.

  15905 | Thu Mar 11 18:46:06 2021 | gautam | Update | CDS | cds reboot

Since Koji was in the lab I decided to bite the bullet and do the reboot. I've modified the reboot script - now, it prompts the user to confirm that the time recognized by the FEs are the same (use the IOP model's status screen, the GPSTIME is updated live on the upper right hand corner). So you would do sudo date --set="Thu 11 Mar 2021 06:48:30 PM UTC" for example, and then restart the IOP model. Why is this necessary? Who knows. It seems to be a deterministic way of getting things back up and running for now so we have to live with it. I will note that this was not a problem between 2017 and 2020 Oct, in which time I've run the reboot script >10 times without needing to take this step. But things change (for an as of yet unknown reason) and we must adapt. Once the IOPs all report a green "DC" status light on the CDS overview screen, you can let the script take you the rest of the way again.

The main point of this work was to relax the data rate on the c1lsc model, and this worked. It now registers ~3.2 MB/s, down from the ~3.8 MB/s earlier today. I can now measure 2 loop TFs simultaneously. This means that we should avoid adding any more DQ channels to the c1lsc model (without some adjustment/downsampling of others).

Quote:

 Holding off on a restart until I decide I have the energy to recover the CDS system from the inevitable crash.

  15906 | Thu Mar 11 20:18:00 2021 | gautam | Update | LSC | High bandwidth POY

I repeated the high bandwidth POY locking experiment.

  1. The "Q" demod output (SMA) was routed to the common mode board (it appears in the past I used the LEMO "MON" output instead but that shouldn't be a meaningful change).
  2. As usual, slow actuation --> ETMY, fast actuation --> IMC error point.
  3. A loop UGF measurement suggests a bandwidth of ~25 kHz, with ~25 degrees of phase margin. Anyway, the lock was pretty stable.

One thing I am not sure about: when looking at the in-loop error point spectra, the Y-arm error point did not get suppressed to the CM board's sensing noise floor - I would have thought that with the huge amount of gain at ~16 Hz, the usual structure we see in the spectra between 10-30 Hz would be completely squished. Need to think about whether this signals something wrong, because the loop TF measurements seemed as expected to me.

1020pm: plots uploaded. As I made the plot of the spectrum, I realized that I don't have the calibration of the Y-arm error point into displacement noise units, so it's in unphysical units for now. But I think the comment about the hump around 16 Hz not being crushed to some sort of flat electronics noise floor still stands. For the TF plots: when the loop gain is high, this IN1/IN2 technique isn't the best (due to saturation issues), but I don't think there's anything controversial about getting the UGF this way, and the fact that the phase evolves as expected when the various gains are cranked up / boosts enabled makes me think that the CM board itself is just fine.
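For completeness, a sketch of how such an IN1/IN2 open-loop TF estimate can be computed offline, on placeholder data - the convention assumed here (excitation summed in between the two test points, OLG = IN1/IN2 up to a sign) should be checked against the actual loop before trusting it:

import numpy as np
import scipy.signal as sig

fs = 16384.0
rng = np.random.default_rng(0)
# Placeholder time series for the two test points, recorded while a
# swept-sine or broadband excitation is running:
in2 = rng.standard_normal(int(60 * fs))
in1 = 0.5 * np.roll(in2, 3)                       # stand-in "around the loop" response

f, p_xy = sig.csd(in2, in1, fs=fs, nperseg=4096)  # cross spectrum IN2 -> IN1
_, p_xx = sig.welch(in2, fs=fs, nperseg=4096)
olg = p_xy / p_xx                                 # H1 estimate of IN1/IN2
_, coh = sig.coherence(in2, in1, fs=fs, nperseg=4096)
good = coh > 0.95                                 # only trust high-coherence bins
print(f"UGF ~ {f[good][np.argmin(np.abs(np.abs(olg[good]) - 1.0))]:.0f} Hz")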


10am 12 March: I realized that the "Y-arm error point" plotted below is not the true error point - that would be the input to the CM board (before boosts etc.), which we don't monitor digitally. The spectra are plotted for the CM_SLOW input, which already has some transfer function applied to it. In the past, I routed the LEMO "MON" connector on the demod board to the CM board input, and hence had the usual SMA outputs from the demod board going to the digital system. I hypothesize that plotting the spectra for that signal would have shown the expected suppression to the electronics noise floor.

In summary, on the basis of this test, I don't see any red flags with the CM board.

  15908 | Fri Mar 12 03:22:45 2021 | Koji | Update | General | Gaussmeter in the electronics drawer

For magnet strength measurement: There is a gaussmeter in the flukes' drawer (2nd from the top). It turns on and reacts to a whiteboard magnet.

  15909 | Fri Mar 12 03:23:37 2021 | Koji | Update | BHD | DO card (DO-32L-PE) brought from WB

I've brought 4 DO-32L-PE cards from WB for Jon, for the BHD upgrade.

  15910 | Fri Mar 12 03:28:51 2021 | Koji | Update | CDS | cds reboot

I want to emphasize the following:

  • FB's RTC (real-time clock/motherboard clock) is synchronized to NTP time.
  • The other RT machines are not synchronized to NTP time.
  • My speculation is that the RT machine timestamp is produced from the local RT machine time because there is no IRIG-B signal provided. -> If the RTC is more than 1 sec off from the FB time, the timestamp inconsistency occurs. Then it induces "DC" error.
  • For now, we have to use "date" command to match the local RTC time to the FB's time.
     
  • So: If NTP synchronization is properly implemented for the RT machines, we will be free from this silly manual time adjustment.

 

  15911 | Fri Mar 12 11:02:38 2021 | gautam | Update | CDS | cds reboot

I looked into this a bit this morning. I forget exactly what time we restarted the machines, but looking at the timesyncd logs, it appears that the NTP synchronization is in fact working (the log below is for c1sus; it's similar on the other FEs):

-- Logs begin at Fri 2021-03-12 02:01:34 UTC, end at Fri 2021-03-12 19:01:55 UTC. --
Mar 12 02:01:36 c1sus systemd[1]: Starting Network Time Synchronization...
Mar 12 02:01:37 c1sus systemd-timesyncd[243]: Using NTP server 192.168.113.201:123 (ntpserver).
Mar 12 02:01:37 c1sus systemd[1]: Started Network Time Synchronization.
Mar 12 02:02:09 c1sus systemd-timesyncd[243]: Using NTP server 192.168.113.201:123 (ntpserver).

So, the service is doing what it is supposed to (using the FB, 192.168.113.201, as the ntpserver). You can see that the timesync was done a couple of seconds after the machine booted up (validated against "uptime"). Then, the service periodically corrects drifts. I don't know what it means that the time wasn't in sync when we checked with timedatectl or similar. Anyway, like I said, I have successfully rebooted all the FEs without having to do this weird time adjustment >10 times.

I guess what I am saying is, I don't know what action is necessary for "implementing NTP synchronization properly", since the diagnostic logfile seems to indicate that it is doing what it is supposed to.

More worryingly, the time has already drifted in < 24 hours.

Quote:

I want to emphasize the following:

  • FB's RTC (real-time clock/motherboard clock) is synchronized to NTP time.
  • The other RT machines are not synchronized to NTP time.
  • My speculation is that the RT machine timestamp is produced from the local RT machine time because there is no IRIG-B signal provided. -> If the RTC is more than 1 sec off from the FB time, the timestamp inconsistency occurs. Then it induces "DC" error.
  • For now, we have to use "date" command to match the local RTC time to the FB's time.
     
  • So: If NTP synchronization is properly implemented for the RT machines, we will be free from this silly manual time adjustment.
  15912 | Fri Mar 12 11:44:53 2021 | Paco, Anchal | Update | training | IMC SUS diagonalization in progress

[Paco, Anchal]

- Today we spent the morning shift debugging SUS input matrix diagonalization. MC stayed locked for most of the 4 hours we were here, and we didn't really touch any controls.

  15917 | Fri Mar 12 19:44:31 2021 | gautam | Update | LSC | Delay line

I may want to use the delay line phase shifter in 1Y2 to allow remote actuation of the REFL11 demod phase (for the AO path, not the low bandwidth one). I had this working last Feb, but today, I am unable to remotely change the delay. @Koji, it would be great if you could fix this the next time you are in the lab - I bet it's a busted latch IC or some such thing. I did the non-invasive tests - cable is connected, control bits are changing (at least according to the CDS BIO indicators) and the switch controlling remote/local switching is set correctly. The local switching works just fine.

In the meantime, I will keep trying - I am unconvinced we really need the delay line.

  15918 | Fri Mar 12 21:15:19 2021 | gautam | Update | LSC | coronaversary PRFPMi

Attachment #1 - proof that the lock is RF only (A paths are ALS, B paths are RF).

Attachment #2 - CARM OLTF.

Some tuning can be done; the circulating power can be made ~twice as high with some ASC. The vertex is still on 3f control. I didn't get any major characterization done tonight, but it's nice to be back here, a year on I guess.

  15920 | Mon Mar 15 20:22:01 2021 | gautam | Update | ASC | c1rfm model restarted

On Friday, I felt that the ASC performance with the PRFPMI locked was not as good as it used to be, so I looked into the situation a bit more. As part of my ASC model revamp in December, I made a bunch of changes to the signal routing, and my suspicion was that the control signals weren't even reaching the ETMs. My log says that I recompiled and reinstalled the c1rfm model (used to pipe the ASC control signals to the ETMs), and indeed, the file was modified on Dec 21. But for whatever reason, the C1RFM.ini file never picked up the new channels (this model is the Dolphin receiver, since the ASC control signals are sent to it over the Dolphin network from the c1ioo machine, which hosts the C1:ASC- namespace, and the RFM sender to the ETMs - though the latter path already existed). Today, I recompiled, re-installed, and restarted the models, and confirmed that the control signals actually make it to the ETMs. So now we can have the QPD-based ASC loops engaged once again for the PRFPMI lock. The CDS system did not crash 🎉 . See Attachments #1-3.

I checked the loop performance in the POX/POY locked config by first deliberately misaligning the ETMs, and then engaging the loops - seems to work (Attachment #4). The loop shapes have to be tweaked a bit, and I didn't engage the integrators, hence the DC pointing wasn't recovered. Also, I added a line to the script that turns the ASC loops on to set limits for all the loops - in the testing process, one of the loops ran away and I tripped the ETMY watchdog. It has since been recovered. I SDFed a limit of 100 cts just to be on the conservative side for model reboot situations - the value in the script can be raised/lowered as deemed necessary (sorry, I don't know the cts-->urad number off the top of my head).

But the hope is this improves the power buildup, and provides stability so that I can begin to commission the AS WFS system a bit.

  15922 | Tue Mar 16 14:37:36 2021 | Yehonathan | Update | BHD | SOS SmCo magnets Inspection

In the cleanroom, I opened the nickel-plated SmCo magnet box to take a closer look. I handled the magnets with tweezers. I wrapped the tips of the tweezers with some Kapton tape to prevent scratching and magnetization.

I put some magnets on a razor blade and took some close-up pictures of the face of the magnets on both sides. Most of them look like attachment 1.

Some have worn off plating on the edges. The most serious case is shown in attachment 2. Maybe it doesn't matter if we are going to sand them?

I measured the maximum magnetic flux of each magnet by attaching the gaussmeter's flat head to the face of the magnet and moving it around until the maximum value was reached.

For envelope #1 out of 3, the values are below (the magnet ordering is in Attachment 3):

Magnet #   Max Magnetic Field (kG)
1          2.57
2          2.54
3          2.57
4          2.57
5          2.55
6          2.61
7          2.55
8          2.52
9          2.64
10         2.58

Going to continue tomorrow with the rest of the magnets. I left the magnet box and the gaussmeter under the flow bench in the cleanroom.
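A quick look at the spread of the envelope #1 values above (a minimal sketch; the point is the percent-level uniformity, which is what matters for balanced actuation):

import numpy as np

b = np.array([2.57, 2.54, 2.57, 2.57, 2.55, 2.61, 2.55, 2.52, 2.64, 2.58])  # kG
print(f"mean = {b.mean():.3f} kG, std = {b.std(ddof=1):.3f} kG "
      f"({100 * b.std(ddof=1) / b.mean():.1f}%), spread = {b.max() - b.min():.2f} kG")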

  15923 | Tue Mar 16 16:02:33 2021 | Koji | Update | LSC | REFL11 demod retrofitting

I'm going to remove REFL11 demod for the noise check/circuit improvement.

----

  • The module was removed (~4pm). Upon removal, I had to loosen the AS110 LO / I out / Q out connections. Check these connections and tighten their SMAs upon restoration of REFL11.
  • REFL11 configuration / LO: see below, PD: a short thick SMA cable, I OUT: Whitening CH3, Q OUT: Whitening CH4, I MON daughterboard: CM board IN1 (BNC cable)
  • The LO cable for REFL11 was made of soft coax cable (Belden 9239 Low Noise Coax). The vendor specifies that this cable is for audio signals and is NOT recommended for RF purposes [Link to Technical Datasheet (PDF)].
    I'm going to measure the delay of the cable and make a replacement.
  • There is a bunch of PD RF Mon cables connected to many of the demod modules. I suppose that they are connected to the PD calibration system, which hasn't been used for 8 years. And the controllers are going to be removed from the rack soon.
    I'm going to remove these cables.

----

First, I checked the noise levels and the transfer functions of the daughterboard preamp. CH-1 of the SR785 seemed funky (I can't yet tell comprehensively what was wrong with it), so the measurement may be unreliable.

For the replacement of the AD797, I tested the OP27 and the TLE2027. The TLE2027 is similar to the OP27, but slightly faster, less noisy, and better in various other respects.

The replacement of the AD797s and the whatever-film resistors with TLE2027s and thin-film Rs was straightforward for the I phase channel, while the stabilization of the Q phase channel was a struggle (no matter whether I used the OP27 or the TLE2027). It seems that the 1st stage has some kind of instability, and I suffered from a 3 Hz comb up to ~kHz. But the scope didn't show obvious 3 Hz noise.

After quite a bit of struggle, I could tame this strange noise by adjusting the feedback capacitor of the 1st stage. The final transfer functions and noise levels were measured. (To be analyzed later.)

----

Now the REFL11 LO cable has been replaced, from the soft low-noise audio coax (Belden 9239) to a jacketed solder-soaked coax cable (Belden 1671J - RG405 compatible). The original cable indicated a delay of -34.3 deg (@ 11 MHz, 8.64 ns) and a loss of 0.189 dB.

I took an 80-inch length of 1671J cable and measured the delay to be ~40 deg. The length was adjusted using this number, and the resulting cable indicated a delay of -34.0 deg (@ 11 MHz, 8.57 ns) and a loss of 0.117 dB.
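The phase-to-delay-to-length arithmetic behind the trim, as a sketch (the 0.695 velocity factor for solid-PTFE coax is my assumption, not from the text):

c = 299792458.0                    # m/s
f_lo = 11e6                        # Hz

def delay_ns(phase_deg):
    return abs(phase_deg) / (360.0 * f_lo) * 1e9

print(delay_ns(34.3))              # ~8.7 ns, cf. the 8.64 ns quoted for the old cable
print(delay_ns(40.0))              # ~10.1 ns for the uncut 80-inch piece

vf = 0.695                         # assumed velocity factor, solid-PTFE coax
length_m = delay_ns(34.0) * 1e-9 * c * vf
print(f"target length: {length_m:.2f} m (~{length_m / 0.0254:.0f} in)")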

The REFL11 demod module was restored and the cabling around REFL11 and AS110 were restored, tightened, and checked.

----

I've removed the PD mon cables from the NI RF switch. The open ports were plugged with 50 Ohm terminators.

----

I ask the commissioners to make a final check of the REFL11 performance using CDS.

  15924 | Tue Mar 16 16:27:22 2021 | Jon | Update | CDS | Front-end testing

Some progress today towards setting up an isolated subnet for testing the new front-ends. I was able to recover the fb1 backup disk using the Rescatux disk-rescue utility and successfully booted an fb1 clone on the subnet. This machine will function as the boot server and DAQ server for the front-ends under test. (None of these machines are connected to the Martian network or, currently, even the outside Internet.)

Despite the success with the framebuilder, front-ends cannot yet be booted locally because we are still missing the DHCP and FTP servers required for network booting. On the Martian net, these processes are running not on fb1 but on chiara. And to be able to compile and run models later in the testing, we will need the contents of the /opt/rtcds directory also hosted on chiara.

For these reasons, I think it will be easiest to create another clone of chiara to run on the subnet. There is a backup disk of chiara and I attempted to boot it on one of the LLO front-ends, but without success. The repair tool I used to recover the fb1 disk does not find a problem with the chiara disk. However, the chiara disk is an external USB drive, so I suspect there could be a compatibility problem with these old (~2010) machines. Some of them don't even recognize USB keyboards pre-boot-up. I may try booting the USB drive from a newer computer.

Edit: I removed one of the new, unused Supermicros from the 1Y2 rack and set it up in the test stand. This newer machine is able to boot the chiara USB disk without issue. Next time I'll continue with the networking setup.

  15925 | Tue Mar 16 19:04:20 2021 | gautam | Update | CDS | Front-end testing

Now that I think about it, I may only have backed up the root file system of chiara, and not /home/cds/ (symlinked to /opt/ over NFS). I think we never revived the rsync backup to LDAS after the FB fiasco of 2017, else that'd have been the most convenient way to get the files. So you may have to resort to some other technique - e.g. configure the second network interface of the chiara clone to be on the martian network, copy the files over to the local disk, and then disconnect the chiara clone from the martian network (if we really want to keep this test stand completely isolated from the existing CDS network). The /home/cds/ directory is rather large IIRC, but with 2TB on the FB clone, you may be able to get everything needed to get the rtcds system working. It may then be necessary to hook up a separate disk to write frames to if you want to test that part of the system.

Good to hear the backup disk was able to boot though!

Quote:

And to be able to compile and run models later in the testing, we will need the contents of the /opt/rtcds directory also hosted on chiara.

For these reasons, I think it will be easiest to create another clone of chiara to run on the subnet. There is a backup disk of chiara and I attempted to boot it on one of the LLO front-ends, but without success.

  15926   Tue Mar 16 19:13:09 2021 Paco, AnchalUpdateSUSFirst success in Input Matrix Diagonalization

After jumping through a few hoops, we have one successful result in diagonalizing the input matrix for MC1, MC2, and MC3.


Code:

  • Attachment 2 contains the code file. For now, we can only guarantee it to work on Donatella in the base conda environment. Our code is present in scripts/SUS/InMatCalc.
  • We mostly follow the steps mentioned in 4886 and the matlab codes in scripts/SUS/peakFit.
  • The data are first multiplied by the currently used input matrix to get time-series data in the DOF (POS, PIT, YAW, SIDE) basis.
  • Then, the peak frequencies of each resonance are identified.
  • For these results, we did not attempt to fit the peaks with Lorentzians; we simply took the maximum of the PSD as the peak position. This only works if the current input matrix is already good enough; we still have to tune some parameters so that the fitting code works reliably in all cases.
  • A transfer-function (TF) estimate of each sensor's data w.r.t. the UL sensor is taken, and the values around the oscillation peak frequencies are averaged to get the sensing matrix.
  • This matrix is normalized along the DOF axis (columns, in our case) and then inverted.
  • After inversion, another normalization is done along the DOF axis (now rows).
  • Finally, we plot a comparison of the ASDs in the DOF basis using the current input matrix and using our newly calculated (diagonalizing) input matrix. (A minimal code sketch of these steps appears after this list.)
  • You can see in Attachment 1 that, after diagonalization, each DOF shows a resonance only at its own resonance frequency, whereas earlier there was some mixing.
  • The absolute scale of the calculated DOFs might have changed; we need to calibrate them or apply appropriate gain factors in the DOF-basis filter chains.
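
Below is a minimal, runnable sketch of the pipeline above on synthetic data (the sample rate, resonance frequencies, and sensing matrix are illustrative assumptions, not MC1 values; the real code is in scripts/SUS/InMatCalc):

# Synthetic free-swinging test of the input-matrix diagonalization steps.
import numpy as np
from scipy.signal import welch, csd

fs, span = 256, 512                      # assumed sample rate [Hz], span [s]
t = np.arange(0, span, 1 / fs)
rng = np.random.default_rng(0)

# 4 DOFs ringing at distinct frequencies, mixed into 5 sensors
# (UL, UR, LR, LL, SD) by a "true" sensing matrix.
f_dof = [0.60, 0.75, 0.90, 1.05]         # POS, PIT, YAW, SIDE (toy values)
dof_ts = np.array([np.sin(2 * np.pi * f0 * t) for f0 in f_dof])
T_true = np.array([[1.0,  1.0,  1.0, 0.1],
                   [1.0,  1.0, -1.0, 0.1],
                   [1.0, -1.0, -1.0, 0.1],
                   [1.0, -1.0,  1.0, 0.1],
                   [0.1,  0.0,  0.0, 1.0]])
sensors = T_true @ dof_ts + 0.01 * rng.standard_normal((5, t.size))

# Steps 1-2: peak frequencies from the PSD maxima of the DOF channels.
# (Here we cheat and use the synthetic DOF signals directly; the real code
# first projects the sensor data through the current input matrix.)
peaks = []
for x in dof_ts:
    f, pxx = welch(x, fs=fs, nperseg=8192)
    band = (f > 0.4) & (f < 1.2)
    peaks.append(f[band][np.argmax(pxx[band])])

# Step 3: TF estimate of each sensor w.r.t. UL, averaged around each peak.
f, p_ul = welch(sensors[0], fs=fs, nperseg=8192)
S = np.zeros((5, 4))
for j in range(5):
    fc, p_x = csd(sensors[0], sensors[j], fs=fs, nperseg=8192)
    tf = p_x / p_ul
    for i, fp in enumerate(peaks):
        S[j, i] = np.real(tf[np.abs(fc - fp) < 0.05]).mean()

# Step 4: normalize columns (DOF axis), invert, then normalize rows.
S /= np.linalg.norm(S, axis=0)
inmat = np.linalg.pinv(S)
inmat /= np.linalg.norm(inmat, axis=1, keepdims=True)
print(np.round(inmat @ T_true, 2))

If the separation worked, the final product is close to diagonal, i.e. each output of the new matrix responds to only one DOF.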

Next steps:

  • We'll complete our scripts and make them general enough to be used for any optic.
  • We'll combine them all into one single script that can be called from MEDM.
  • In parallel, we'll start from step 2 in 15881.
  • We welcome any suggestions on this first result. Did we actually do it, or are we fooling ourselves?

  15927   Wed Mar 17 00:05:26 2021 gautamUpdateLSCDelay line BIO remote control

While Koji is working on the REFL 11 demod board, I took the opportunity to investigate the non-remote-controllability of the delay line in 1Y2, since the TTs have already been disturbed. Here is what I found today.

  1. First, I brought over the spare delay line from the rack Chiara sits in over to 1Y2. 
    • Connected a Marconi to the input, monitored a -3dB pickoff and the delay line output simultaneously on a 300MHz scope.
    • With the front panel selector set to "Internal", verified that local (i.e. toggling front panel switches) switchability seems to work 👍 
    • Set the front panel switch to "External", and connected the D25 cable from the BIO card in 1Y3 to the back panel of the delay line unit - found that I could not remotely change the delay 😒 
    • I thought it'd be too much of a coincidence if both delay lines had the same failure mode in only the remote switching, so I decided to investigate further up the signal chain.
  2. BIO switching - the CDS BIO bit status MEDM screen seems to respond, indicating that the bits are getting set correctly in software at least. I don't know of any other software indicator for this functionality further down the signal processing chain. So it would seem the BIO card is not actually switching.
  3. The Contec DO cards don't actually source the voltage - they just provide a path for current to flow (or isolate that path). I checked that pin 12 of the rear-panel D25 connector is at +5 V DC relative to ground, as indicated in the schematic (see the P1 connector - this connector isn't a D-sub, it is IDE24, so the mapping to the D-sub pins isn't one-to-one; pin 23 on the former corresponds to pin 12 on the latter), suggesting that the pull-up resistors have the necessary voltage applied to them.
  4. Made a little LED tester breakout board, and saw no switching when I toggled the status of some random bits.
  5. Noted that the bench power supply powering this setup (hacky arrangement from 2015 that never got unhacked) shows a current draw of 0A.
    • I am not sure what the quiescent draw of these boards is - the datasheet says "Power consumption: 3.3VDC, 450mA", but the recommended supply voltage is "12-24V DC (+/-10%)", not 3.3VDC. Presumably the 3.3 VDC figure is the card's draw from the PCIe bus, while the 12-24 V spec refers to the external supply for the isolated outputs, but I'm not sure.
    • To get some insight, I took one of the new Contec DO-32L-PE cards from near Jon's CDS test stand (I've labelled the one I took, lest there be some fault with it in the future) and connected it to a bench supply (pin 18 = +15 V DC, pin 1 = GND). In this condition too, the bench supply reports 0 A current draw.
  6. Ruled out the wrong cable being plugged in - I traced the cable over the cable tray, and seems like it is in fact connecting the BIO card in the c1lsc expansion chassis to the delay line.

So it would seem something is not quite right with this BIO card. The c1lsc expansion chassis, in which this card sits, is notoriously finicky, and this delay line isn't very high priority, so I am deferring more invasive investigation to the next time the system crashes.

* I forgot we have the nice PCB Contec tester board with LEDs - the only downside is that this board has D37 connectors on both ends whereas the delay line wants a D25, necessitating some custom ribbon cable action. But maybe there is a way to use this.

As part of this work, I was in various sensitive areas (1Y3, chiara rack, FE test stand etc) but as far as I can tell, all systems are nominal.

  15929   Wed Mar 17 10:52:48 2021 JordanUpdateSUS3" Ring Adapter for SOS

I have added a 0.1", 45 deg chamfer to the bottom of the adapter ring. This was added for a new placement of the eq stops, since the barrel screws are hard to access/adjust.

This also required a modification to the eq stop bracket, D960008-v2, with 1/4-20 screws angled at 45 deg to line up with the chamfer.

The issue I am running into is that a screw is needed on the back side of the ring as well; otherwise the ring would fall backwards into the OSEMs in the event of an earthquake. The only two points of contact are the front two angled screws; a third is needed on the opposite side of the CoM for stability. This would require another bracket mounted at the back of the SOS tower, but there is very little open real estate because of the OSEMs.


Instead of this whole chamfer route, is it possible/easier to just swap the screws for the barrel eq stops? Instead of a socket-head cap screw, an SS thumb screw such as this would provide more torque when turning and remove the need for a hex wrench.


  15930   Wed Mar 17 11:57:54 2021 Paco, AnchalUpdateSUSTested New Input Matrix for MC1

[Paco, Anchal]

Paco accidentally clicked on C1:SUS-MC1_UL_TO_COIL_SW_1_1 (MC1 POS to UL coil switch) and clicked it back on. We didn't see any loss of lock or anything significant on the large monitor on the left.

Testing the new calculated input matrix

  • Switched off the PSL shutter (C1:PSL-PSL_ShutterRqst)
  • Switched off IMC autolocker (C1:IOO-MC_LOCK_ENABLE)
  • Uploaded the same input matrix as the current one to check the writing function in scripts/SUS/InMatCalc/testingInMat.py. We created a backup text file of the current settings: backupMC1InMat.txt.
  • Uploaded the new input matrix in normalized form. To normalize, we first made each row a unit vector and then multiplied it by the norm of the corresponding row of the current input matrix (see scripts/SUS/InMatCalc/normalizeNewInputMat.py, and the short sketch after this list).
  • Switched ON the PSL shutter (C1:PSL-PSL_ShutterRqst)
  • Switched ON IMC autolocker (C1:IOO-MC_LOCK_ENABLE)
  • Lock was caught immediately. The MC1 wavefront sensor shows the usual movement, nothing crazy.
  • So the new input matrix is digestible by the system, but how effective is it?
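
For clarity, the gain-preserving rescaling described above amounts to something like this (stand-in matrices; the real script is scripts/SUS/InMatCalc/normalizeNewInputMat.py):

# Rescale each row of the new input matrix to the norm of the corresponding
# row of the current matrix, so the downstream DOF filter gains are unchanged.
import numpy as np

rng = np.random.default_rng(1)
cur = rng.standard_normal((4, 5))   # stand-in for the current MC1 input matrix
new = rng.standard_normal((4, 5))   # stand-in for the freshly computed matrix

unit_rows = new / np.linalg.norm(new, axis=1, keepdims=True)
rescaled = unit_rows * np.linalg.norm(cur, axis=1, keepdims=True)

print(np.linalg.norm(rescaled, axis=1) / np.linalg.norm(cur, axis=1))  # ~1s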

< Two inspection people taking pictures of ceiling and portable AC unit passed. They rang the doorbell but someone else let them in. They walked out the back door.>

Testing how good the input matrix for MC1 is:

 

  • We loaded the input-matrix butterfly row into C1:SUS-MC1_LOCKIN_INMATRX_1_4 to _8. This matrix multiplies C1:SUS-MC1_UL_SEN_IN and the corresponding channels for the other sensors, before the calibration to um and the application of other filters.
  • We looked around for a way to load the same filter banks into the LOCKIN1 signal chain of MC1 but couldn't find one, so we just manually added a gain of 0.09 in this chain to at least simulate the calibration factor.
  • We started the oscillator on LOCKIN 1 on MC1 with amplitude 1 and frequency 6 Hz.
  • We added the butterfly-mode column to the actuation output matrix (UL: 1, UR: -1, LL: -1, LR: 1); nothing happened to the lock, probably because of the low amplitude we put in.
  • Then we plotted the ASDs of channels like C1:SUS-MC1_SUSPOS_IN1 (for POS, PIT, YAW, SIDE) to see if a corresponding peak shows up there. It does not; see Attachment 1 (and the toy version of this check sketched below).
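
For reference, a toy version of this check (the sample rate, noise level, and drive coupling are assumptions, not MC1 data):

# Would a 6 Hz lockin drive stand out in a SUSPOS-like ASD?
import numpy as np
from scipy.signal import welch

fs = 2048                                   # assumed channel sample rate [Hz]
t = np.arange(0, 300, 1 / fs)               # 5 minutes of data
x = np.random.default_rng(0).standard_normal(t.size)   # background noise
x += 0.5 * np.sin(2 * np.pi * 6.0 * t)      # hypothetical butterfly->POS leak

f, pxx = welch(x, fs=fs, nperseg=16 * fs)
asd = np.sqrt(pxx)
peak = asd[np.argmin(np.abs(f - 6.0))]
bg = np.median(asd[(f > 5.0) & (f < 7.0)])
print(f"6 Hz / background: {peak / bg:.1f}")   # >> 1 if the drive couples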

Restoring the system:

  • Added 0 to the LOCKIN1 column in MC1 output matrix.
  • Set the LOCKIN1 oscillator to 0 amplitude, 0 Hz.
  • Restored the gain in the LOCKIN1 signal chain of MC1.
  • Added 0 to C1:SUS-MC1_LOCKIN_INMATRX_1_4 to 8.
  • Switched off the PSL shutter (C1:PSL-PSL_ShutterRqst)
  • Switched off IMC autolocker (C1:IOO-MC_LOCK_ENABLE)
  • Wrote back the old matrix using scripts/SUS/InMatCalc/testingInMat3.py, which reads the backup we created.
  • Switched ON the PSL shutter (C1:PSL-PSL_ShutterRqst)
  • Switched ON IMC autolocker (C1:IOO-MC_LOCK_ENABLE)
  15931   Wed Mar 17 14:40:39 2021 YehonathanUpdateBHDSOS SmCo magnets Inspection

Continuing with envelope number 2

Magnet number Magnetic field (kG)
1 2.89
2 2.85
3 2.92
4 2.75
5 2.95
6 2.91
7 2.93
8 2.9
9 2.93
10 2.9
11 2.85
12 2.89
13 2.85
14 2.88
15 2.92
16 2.75
17 2.97
18 2.88
19 2.85
20 2.87
21 2.93
22 2.9
23 2.9
24 2.89
25 2.88
26 2.88
27 2.95
28 2.88
29 2.88
30 2.9
31 2.96
32 2.91
33 2.93
34 2.9
35 2.9
36 3.03
37 2.84
38 2.95
39 2.89
40 2.88
41 2.88
42 2.93
43 2.97
44 2.74
45 2.84
46 2.85
47 2.85
48 2.87
49 2.88
50 2.8

I think I have to redo envelope 1 tomorrow.
