ID | Date | Author | Type | Category | Subject
  15894 | Wed Mar 10 11:55:22 2021 | gautam | Update | SUS | PRM suspension suspect

The procedure is that the optic is kicked to excite it, and allowed to ring down for ~1 ksec with the damping turned off. This is repeated 15 times for some averaging.
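
For reference, a minimal python sketch of how the averaged sensor spectra are formed from the ringdown segments (the sample rate and the use of scipy welch averaging are my assumptions; the real data would be fetched with e.g. nds2 after each kick):

import numpy as np
import scipy.signal as sig

fs = 2048                          # assumed sample rate of the DQed OSEM channels [Hz]
rng = np.random.default_rng(0)
# placeholder for the 15 fetched ~1 ks ringdown segments, damping off
segments = [rng.standard_normal(int(fs * 1000)) for _ in range(15)]

psds = []
for ts in segments:
    f, pxx = sig.welch(ts, fs=fs, nperseg=2**18)   # ~8 mHz resolution for the ~1 Hz peaks
    psds.append(pxx)
asd = np.sqrt(np.mean(psds, axis=0))               # average the PSDs over kicks, then sqrt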

Attachment #1 - sensor spectra from yesterday.

Attachment #2 - peaks using the naive diagonalization matrix from yesterday.

Attachment #3 - Data from ~1 year ago. 

The y-axis in all plots is labelled as "cts/rtHz" but these are the DQed channels, which come after a "cts2um" CDS filter - so if that filter is accurate, then the y-axes may be read as um/rtHz.

I wonder if the September 2020 earthquake somehow damaged the PRM suspension, as this experiment would suggest that the problem is not only with the actuation. The data was gathered with the neutral position of the PRM (between kicks) being well aligned for PRMI, and the DC values of all the shadow sensors in this position are close to half-light (~1V, except for the side, which was more like 4V). It's hard to say what exactly is happening, since only the PIT DoF has the weird asymmetric peak shape instead of the expected Lorentzian - I would have thought that a damaged wire or broken magnet would affect all 4 DoFs, but the F.C. spring experience on ETMY showed that anything is possible.

Attachment 1: PRM_sensorSpectra.pdf
Attachment 2: PRM_pkFitNaive.pdf
Attachment 3: PRM_diagComp.pdf
  15895 | Wed Mar 10 15:00:16 2021 | gautam | Summary | IMC | IMC free swinging prep

Did you fix this issue? It is helpful to post a screenshot of the offending MEDM screen in addition to witticisms. The elog says "sitemap>Shutter>PSL" but I can't find PSL under the dropdown for shutters from Sitemap.

# Moving on to IMC suspension characterization:
- Closed the PSL shutter - to our surprise, the MC was still locked. We thought this would take away any light entering the IMC, but it doesn't. Maybe the IFO Overview needs to show the schematic in a way where this apparent contradiction can't arise: "No light from any laser entering the MC, but it is still locked with a resonating field inside."

  15898 | Wed Mar 10 17:35:47 2021 | gautam | Update | SUS | Spooky action at a distance

As I was sitting in the control room, the PRM suspension watchdog tripped again. This time, there is clearly no seismic activity. Yet, the BS suspension also shows a slight disturbance at the same time as the PRM. ITMY shows no perturbation though. My best hypothesis here is that the problem is electrical. In Attachment #1, you can see that all of the sensors go to -6000 cts (whut?) for ~30 seconds. Zooming in to that segment in Attachment #2, it would appear that the light detected by the LED changed dramatically (went dark?) on all 5 coils. The 4 face coils have the same time constant while the side has a different one; in any case, this level of light change in half a second is clearly not physical. Then the watchdog trips because this huge apparent motion elicits a kick from the damping loops.

The plots I attach are for the DQed sensor channels, so there is some digital filtering involved. But I confirmed that the signal doesn't go negative if I disable the input to the filter module. So it would seem that the voltage input to the ADC really changed polarity, which seems unphysical. Could be the Satellite Box or whitening electronics I suppose - I think we can exclude bad cabling, as that would just lead to the signals going to 0, whereas it would appear here that they really did change sign (confirmed by looking at the ULPDmon channel, which is digitized by Acromag, and which reports -10 V at the time of the glitch). But why should the BS care about the PRM electronics going wonky?

In addition to an exorcist, we need functioning electronics!


This optic has been hampering my locking attempts all evening. I switched the PRM and SRM satellite boxes, but then I remembered the PRM has the Al foil "hats" to attenuate scattered light. Of course, the Al foil is conducting and can short the OSEM leads. I put some kapton pieces between the OSEMs and the foil to try and mitigate this issue, but I suppose over time one could have slipped and is now making intermittent contact, shorting the PD anode and cathode (that would explain the PD reporting -10 V instead of some physical value).

If this is the problem we would need a vent to address it. In the daytime I'll measure L and R of the coils to see if the actuator imbalance I reported is also due to the same problem...

Attachment 1: PRMtrip.png
Attachment 2: PRMtrip_zoom.png
  15899 | Wed Mar 10 19:58:27 2021 | gautam | Update | LSC | SR785 hooked up to CM board

In preparation for later this evening. The TT alignment wasn't visibly disturbed.

  15900 | Thu Mar 11 01:45:42 2021 | gautam | Update | LSC | PRFPMi
  1. PRM satellite box indeed seems to have been the culprit - shortly after I swapped it to the SRM, its shadow sensors went dark. I leave the watchdog tripped.
  2. I was still unable to realize the RF-only IFO
    • Clearly my old settings don't work, so I tried to go about it systematically. First, try and transition CARM to RF, leave DARM on ALS.
    • As usual, I can realize the state where the arm powers are ~100, and the two paths are blended. 
    • But I'm not able to completely turn off the CARM_A path without blowing the lock.

Pity really, I was hoping to make it much further tonight. I think I'll have to go back to the high BW POX/POY lock, and also check out the conversion efficiency / noise of the daughter board on the REFL11 demod board. Compared to before my work on the RF source, the PRMI lock using REFL11 as an error signal has basically necessitated a change of the digital demod phase by 180 degrees - so I made the appropriate polarity changes in the CM_SLOW and AO paths. (The assumption is that CARM in REFL11 would require the same change in digital demod phase, and I think this is a reasonable assumption - indeed, with the arm powers somewhat stable at ~100, the PDH signal in REFL11 does seem to show up largely in the I quadrature, pre digital phase rotation.) Anyway, with so many weird effects (wonky PRM suspension, strange PRMI sensing, etc.), who knows what's going on. This will take a systematic effort.

I defer the electronics characterization to the daytime (if I feel like I need it tomorrow I'll do it; else, Koji has said he can do it on Friday).

Quote:

 I was unable to fully hand off control from ALS-->RF, I suspect I may be using the wrong sign on the AO path (or some such other sub-optimal CM board settings). I'll hook up the SR785 and take some TFs tomorrow, that should give more insight into what's what. 

  15903 | Thu Mar 11 14:03:02 2021 | gautam | Update | LSC | AO path

There is some evidence of weird saturation, but the gain balancing (0.8dB) and orthogonality (~89 deg) for the daughter board on the REFL11 demod board that generates the AO path error signal seem reasonable. This board would probably benefit from the AD797-->Op27 and thick-film-->thin-film swap, but I don't think this is to blame for being unable to execute the RF transition.

Attachment 1: IMG_9127.HEIC
  15904 | Thu Mar 11 14:27:56 2021 | gautam | Update | CDS | timesync issue?

I have recently been hitting the 4MB/s data rate limit on testpoints - basically, I can't run the DTT TF and spectrum measurements that I was able to while locking the interferometer this time last year. AFAIK, the major modification made since was the addition of 4 DQ channels for the in-air BHD experiment - assuming the data is transmitted as double precision numbers, I estimate the additional load due to this change was ~500 kB/s. Probably there is some compression, so it is a bit more efficient (this naive calc would suggest we can only record 32 channels, and I counted 41 full-rate channels in the model), but still, I can't think of anything else that has changed. Anyway, I removed the unused parts and recompiled/re-installed the models (c1lsc and c1omc). Holding off on a restart until I decide I have the energy to recover the CDS system from the inevitable crash. For documentation, I'm also attaching a screenshot of the schematic of the changes made.
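
For the record, the back-of-the-envelope rate estimate as a quick python check (the 16384 Hz model rate and double-precision transmission are my assumptions):

n_new_dq = 4                # DQ channels added for the in-air BHD experiment
f_model  = 16384            # full model rate [Hz] (assumed)
b_per    = 8                # bytes per sample, double precision (assumed)
print(n_new_dq * f_model * b_per / 1e3, "kB/s")                          # ~524 kB/s
print(4 * 2**20 / (f_model * b_per), "full-rate channels fit in 4 MB/s") # 32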

Anyway, the main point of this elog is that at the compilation stage, I got a warning I've never seen before:

Building front-end Linux kernel module c1lsc...
make[1]: Warning: File 'GNUmakefile' has modification time 13 s in the future
make[1]: warning:  Clock skew detected.  Your build may be incomplete.

This prompted me to check the system time on c1lsc and FB - you can see there is a 1 minute offset (it is not just a delay between me issuing the command to the two machines)! I suspect this NTP action is the reason. So maybe a model reboot is in order. Sigh.

Attachment 1: timesync.png
Attachment 2: c1lscMods.png
  15905 | Thu Mar 11 18:46:06 2021 | gautam | Update | CDS | cds reboot

Since Koji was in the lab, I decided to bite the bullet and do the reboot. I've modified the reboot script - now, it prompts the user to confirm that the times recognized by the FEs are the same (use the IOP model's status screen; the GPSTIME is updated live in the upper right hand corner). So you would do sudo date --set="Thu 11 Mar 2021 06:48:30 PM UTC" for example, and then restart the IOP model. Why is this necessary? Who knows. It seems to be a deterministic way of getting things back up and running for now, so we have to live with it. I will note that this was not a problem between 2017 and Oct 2020, during which time I ran the reboot script >10 times without needing to take this step. But things change (for an as yet unknown reason) and we must adapt. Once the IOPs all report a green "DC" status light on the CDS overview screen, you can let the script take you the rest of the way again.
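
A sketch of the kind of offset check the script now prompts for (the hostnames, and doing this over ssh from a workstation, are my assumptions - the authoritative reference is the GPSTIME on the IOP status screens):

import subprocess, time

for fe in ["c1lsc", "c1sus", "c1ioo"]:      # illustrative list of FE hostnames
    remote = int(subprocess.check_output(["ssh", fe, "date +%s"]))
    offset = remote - int(time.time())
    print(fe, "offset [s]:", offset)        # flag anything much more than ~1 s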

The main point of this work was to relax the data rate on the c1lsc model, and this worked. It now registers ~3.2 MB/s, down from the ~3.8 MB/s earlier today. I can now measure 2 loop TFs simultaneously. This means that we should avoid adding any more DQ channels to the c1lsc model (without some adjustment/downsampling of others).

Quote:

 Holding off on a restart until I decide I have the energy to recover the CDS system from the inevitable crash.

Attachment 1: CDSoverview.png
  15906 | Thu Mar 11 20:18:00 2021 | gautam | Update | LSC | High bandwidth POY

I repeated the high bandwidth POY locking experiment.

  1. The "Q" demod output (SMA) was routed to the common mode board (it appears in the past I used the LEMO "MON" output instead but that shouldn't be a meaningful change).
  2. As usual, slow actuation --> ETMY, fast actuation --> IMC error point.
  3. Loop UGF measurement suggests a bandwidth of ~25kHz, with ~25 degrees of phase margin. Anyway, the lock was pretty stable.

One thing I am not sure about - when looking at the in-loop error point spectra, the Y-arm error point did not get suppressed to the CM board's sensing noise floor. I would have thought that with the huge amount of gain at ~16 Hz, the usual structure we see in the spectra between 10-30 Hz would be completely squished. Need to think about whether this is signalling something wrong, because the loop TF measurements seemed as expected to me.

1020pm: plots uploaded. As I made the plot of the spectrum, I realized that I don't have the calibration for the Y-arm error point into displacement noise units, so it's in unphysical units for now. But I think the comment about the hump around 16 Hz not being crushed to some sort of flat electronics noise floor still stands. For the TF plots, when the loop gain is high, this IN1/IN2 technique isn't the best (due to saturation issues), but I don't think there's anything controversial about getting the UGF this way, and the fact that the phase evolves as expected when the various gains are cranked up / boosts enabled makes me think that the CM board itself is just fine.
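
For context, a crude python sketch of the suppression I would naively expect at 16 Hz, modeling the OLTF as a pure 1/f slope through the ~25 kHz UGF (the real loop has boosts, so the actual low-frequency gain should be even higher):

ugf, f = 25e3, 16.0
G = ugf / f                                                   # |OLTF| at 16 Hz, ~1600
print("in-loop / free-running at 16 Hz ~", 1 / abs(1 + G))    # ~6e-4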


10am 12 March: I realized that the "Y-arm error point" plotted below is not the true error point - that would be the input to the CM board (before boosts etc.), which we don't monitor digitally. The spectra are plotted for the CM_SLOW input, which already has some transfer function applied to it. In the past, I routed the LEMO "MON" connector on the demod board to the CM board input, and hence had the usual SMA outputs from the demod board going to the digital system. I hypothesize that plotting the spectra for that signal would have shown the expected suppression to the electronics noise floor.

In summary, on the basis of this test, I don't see any red flags with the CM board.

Attachment 1: OLGevolution.pdf
Attachment 2: inLoopSpec.pdf
  15911 | Fri Mar 12 11:02:38 2021 | gautam | Update | CDS | cds reboot

I looked into this a bit this morning. I forgot exactly what time we restarted the machines, but looking at the timesyncd logs, it appears that the NTP synchronization is in fact working (log below is for c1sus; similar on other FEs):

-- Logs begin at Fri 2021-03-12 02:01:34 UTC, end at Fri 2021-03-12 19:01:55 UTC. --
Mar 12 02:01:36 c1sus systemd[1]: Starting Network Time Synchronization...
Mar 12 02:01:37 c1sus systemd-timesyncd[243]: Using NTP server 192.168.113.201:123 (ntpserver).
Mar 12 02:01:37 c1sus systemd[1]: Started Network Time Synchronization.
Mar 12 02:02:09 c1sus systemd-timesyncd[243]: Using NTP server 192.168.113.201:123 (ntpserver).

So, the service is doing what it is supposed to (using the FB, 192.168.113.201, as the ntpserver). You can see that the timesync was done a couple of seconds after the machine booted up (validated against "uptime"). Then, the service periodically corrects drifts. I don't know what it means that the time wasn't in sync when we checked it using timedatectl or similar. Anyway, like I said, I have successfully rebooted all the FEs >10 times without having to do this weird time adjustment.

I guess what I am saying is, I don't know what action is necessary for "implementing NTP synchronization properly", since the diagnostic logfile seems to indicate that it is doing what it is supposed to.

More worryingly, the time has already drifted in < 24 hours.

Quote:

I want to emphasize the following:

  • FB's RTC (real-time clock/motherboard clock) is synchronized to NTP time.
  • The other RT machines are not synchronized to NTP time.
  • My speculation is that the RT machine timestamp is produced from the local RT machine time because there is no IRIG-B signal provided. -> If the RTC is more than 1 sec off from the FB time, the timestamp inconsistency occurs. Then it induces "DC" error.
  • For now, we have to use "date" command to match the local RTC time to the FB's time.
     
  • So: If NTP synchronization is properly implemented for the RT machines, we will be free from this silly manual time adjustment.
Attachment 1: timeDrift.png
  15913 | Fri Mar 12 12:32:54 2021 | gautam | Summary | SUS | Coil Rs & Ls for PRM/BS/SRM

For consistency, today I measured both the BS and PRM actuator balancing using the same technique, and I don't find as serious an imbalance for the BS as in the PRM case. The Oplev laser source is common to both BS and PRM, but the QPDs are of course distinct.

BTW, I thought the expected resistance of the coil windings of the OSEM is ~13 ohms, while the BS/PRM OSEMs report ~1-2 ohms. Is this okay?

Quote:
 
  • All the PRM coils look well-matched in terms of the inductance. Also, I didn't find a significant difference from BS coils.
Attachment 1: BS_actuator.pdf
Attachment 2: PRMact.pdf
  15915 | Fri Mar 12 13:48:53 2021 | gautam | Summary | SUS | Coil Rs & Ls for PRM/BS/SRM

I didn't repeat Koji's measurement, but he reports the expected ~3.2mH per coil on all the BS and PRM coils.

Quote:

ugh. sounds bad - maybe a short. I suggest measuring the inductance; that's usually a clearer measurement of coil health

  15917 | Fri Mar 12 19:44:31 2021 | gautam | Update | LSC | Delay line

I may want to use the delay line phase shifter in 1Y2 to allow remote actuation of the REFL11 demod phase (for the AO path, not the low bandwidth one). I had this working last Feb, but today, I am unable to remotely change the delay. @Koji, it would be great if you could fix this the next time you are in the lab - I bet it's a busted latch IC or some such thing. I did the non-invasive tests - cable is connected, control bits are changing (at least according to the CDS BIO indicators) and the switch controlling remote/local switching is set correctly. The local switching works just fine.

In the meantime, I will keep trying - I am unconvinced we really need the delay line.

  15918 | Fri Mar 12 21:15:19 2021 | gautam | Update | LSC | coronaversary PRFPMi

Attachment #1 - proof that the lock is RF only (A paths are ALS, B paths are RF).

Attachment #2 - CARM OLTF.

Some tuning can be done - the circulating power can be made ~twice as high with some ASC. The vertex is still on 3f control. I didn't get any major characterization done tonight, but it's nice to be back here, a year on I guess.

Attachment 1: PRFPMI.png
Attachment 2: CARM_OLTF.pdf
  15920 | Mon Mar 15 20:22:01 2021 | gautam | Update | ASC | c1rfm model restarted

On Friday, I felt that the ASC performance when the PRFPMI was locked was not as good as it used to be, so I looked into the situation a bit more. As part of my ASC model revamp in December, I made a bunch of changes to the signal routing, and my suspicion was that the control signals weren't even reaching the ETMs. My log says that I recompiled and reinstalled the c1rfm model (used to pipe the ASC control signals to the ETMs), and indeed, the file was modified on Dec 21. But for whatever reason, the C1RFM.ini file never picked up the new channels (this model is the Dolphin receiver, since the ASC control signals are sent to it over the Dolphin network from the c1ioo machine which hosts the C1:ASC- namespace, and the RFM sender to the ETMs, though that latter path already existed). Today, I recompiled, re-installed, and restarted the models, and confirmed that the control signals actually make it to the ETMs. So now we can have the QPD-based ASC loops engaged once again for the PRFPMI lock. The CDS system did not crash 🎉 . See Attachments #1-3.

I checked the loop performance in the POX/POY locked config by first deliberately misaligning the ETMs, and then engaging the loops - seems to work (Attachment #4). The loop shapes have to be tweaked a bit and I didn't engage the integrators, hence the DC pointing wasn't recovered. Also, added a line to the script that turns the ASC loops on to set limits for all the loops - in the testing process, one of the loops ran away and I tripped the ETMY watchdog. It has since been recovered. I SDFed a limit of 100 cts just to be on the conservative side for model reboot situations - the value in the script can be raised/lowered as deemed necessary (sorry, I don't know the cts-->urad number off the top of my head).

But the hope is this improves the power buildup, and provides stability so that I can begin to commission the AS WFS system a bit.

Attachment 1: RFM.png
Attachment 2: CDSoverview.png
Attachment 3: RFMchans.png
Attachment 4: ASCloops.png
  15925 | Tue Mar 16 19:04:20 2021 | gautam | Update | CDS | Front-end testing

Now that I think about it, I may only have backed up the root file system of chiara, and not /home/cds/ (symlinked to /opt/ over NFS). I think we never revived the rsync backup to LDAS after the FB fiasco of 2017, else that'd have been the most convenient way to get files. So you may have to resort to some other technique (e.g. configure the second network interface of the chiara clone to be on the martian network and copy over files to the local disk, and then disconnect the chiara clone from the martian network (if we really want to keep this test stand completely isolated from the existing CDS network) - the /home/cds/ directory is rather large IIRC, but with 2TB on the FB clone, you may be able to get everything needed to get the rtcds system working). It may then be necessary to hook up a separate disk to write frames to if you want to test that part of the system out.

Good to hear the backup disk was able to boot though!

Quote:

And to be able to compile and run models later in the testing, we will need the contents of the /opt/rtcds directory also hosted on chiara.

For these reasons, I think it will be easiest to create another clone of chiara to run on the subnet. There is a backup disk of chiara and I attempted to boot it on one of the LLO front-ends, but without success.

  15927 | Wed Mar 17 00:05:26 2021 | gautam | Update | LSC | Delay line BIO remote control

While Koji is working on the REFL 11 demod board, I took the opportunity to investigate the non-remote-controllability of the delay line in 1Y2, since the TTs have already been disturbed. Here is what I found today.

  1. First, I brought over the spare delay line from the rack Chiara sits in over to 1Y2. 
    • Connected a Marconi to the input, monitored a -3dB pickoff and the delay line output simultaneously on a 300MHz scope.
    • With the front panel selector set to "Internal", verified that local (i.e. toggling front panel switches) switchability seems to work 👍 
    • Set the front panel switch to "External", and connected the D25 cable from the BIO card in 1Y3 to the back panel of the delay line unit - found that I could not remotely change the delay 😒 
    • I thought it'd be too much of a coincidence if both delay lines have the same failure mode for the remote switching part only, so I decided to investigate further up the signal chain.
  2. BIO switching - the CDS BIO bit status MEDM screen seems to respond, indicating that the bits are getting set correctly in software at least. I don't know of any other software indicator for this functionality further down the signal processing chain. So it would seem the BIO card is not actually switching.
  3. The Contec DO cards don't actually source the voltage - they just provide a path for current to flow (or isolate said path). I checked that pin 12 of the rear panel D25 connector is at +5 V DC relative to ground as indicated in the schematic (see P1 connector - this connector isn't a Dsub, it is IDE24, so the mapping to the Dsub pins isn't one-to-one, but pin 23 on the former corresponds to pin 12 on the latter), suggesting that the pull up resistors have the necessary voltage applied to them.
  4. Made a little LED tester breakout board, and saw no switching when I toggled the status of some random bits.
  5. Noted that the bench power supply powering this setup (hacky arrangement from 2015 that never got unhacked) shows a current draw of 0A.
    • I am not sure what the quiescent draw of these boards is - the datasheet says "Power consumption: 3.3VDC, 450mA", but the recommended supply voltage is "12-24V DC (+/-10%)", not 3.3VDC, so I'm not sure what to make of that.
    • To try and get some insight, I took one of the new Contec-32L-PE cards we got from near Jon's CDS test stand (I've labelled the one I took lest there be some fault with it in the future), and connected it to a bench supply (pin 18 = +15V DC, pin1 = GND). But in this condition, the bench supply reports 0A current draw.
  6. Ruled out the wrong cable being plugged in - I traced the cable over the cable tray, and seems like it is in fact connecting the BIO card in the c1lsc expansion chassis to the delay line.

So it would seem something is not quite right with this BIO card. The c1lsc expansion chassis, in which this card sits, is notoriously finicky, and this delay line isn't very high priority, so I am deferring more invasive investigation to the next time the system crashes.

* I forgot we have the nice PCB Contec tester board with LEDs - the only downside is that this board has D37 connectors on both ends whereas the delay line wants a D25, necessitating some custom ribbon cable action. But maybe there is a way to use this.

As part of this work, I was in various sensitive areas (1Y3, chiara rack, FE test stand etc) but as far as I can tell, all systems are nominal.

  15932 | Wed Mar 17 15:02:06 2021 | gautam | Update | safety | Door to outside from control room was unlocked

I came into the lab a few mins ago and found the back door open. I closed it. Nothing obvious seems amiss.

Caltech security periodically checks if this door is locked but it's better if we do it too if we use this door for entry/exit.

  15933 | Wed Mar 17 15:04:20 2021 | gautam | Update | Electronics | Ribbon cable for chassis

I had asked Chub to order 100ft ea of 9, 15 and 25 conductor ribbon cable. These arrived today and are stored in the VEA alongside the rest of the electronics/chassis awaiting assembly.

Attachment 1: IMG_9139.jpg
  15935 | Thu Mar 18 01:12:31 2021 | gautam | Update | LSC | PRFPMi
  1. Integrated >1 hour at RF only control, high circulating powers tonight.
    • All of the locklosses were due to me typing a wrong number / turning on the wrong filter.
    • So the lock seems pretty stable, at least on the 20 minute timescale.
    • No idea why given the various known broken parts.
  2. Did a bunch of characterization.
    • DARM OLTF - Attachment #1. The reference is when DARM is under ALS control.
    • CARM OLTF - Attachment #2. Seems okay.
    • Sensing matrix - Attachment #3. The CARM and DARM phases seem okay. Maybe the CARM phase can be tuned a bit with the delay line, but I think we are within 10 degrees.
  3. TRX/TRY between 300-400, with large fluctuations mostly angular. So PRG ~17-22, to answer Koji's question in the meeting today.
    • This is similar to what I had before the vent of Sep 2020.
    • Not surprising to me, since I claim that we are in the regime where the recycling gain is limited by the flipped folding mirrors.
  4. Tried to tweak the ASC (QPD only) by looking at the step responses, but I could never get the loop gains such that I could close an integrator on all the loops.

I need to think a little bit about the ASC commissioning strategy. On the positive side:

  1. REFL11 board seems to perform at least as well as before.
  2. ALS performance made me (as Pep would say) so, so happy.
  3. The whole lock acquisition sequence takes ~5 mins if the PRMI catches lock quickly (5/7 times tonight).
  4. Process seems repeatable.

Things to think about:

  1. How to get the AS WFS in the picture?
  2. What does the (still) crazy sensing matrix mean? I think it's not possible to transfer vertex control to 1f signals with this kind of sensing.
  3. What does it mean that the PRM actuation seems to work, even though the coils are imbalanced by a factor of 3-5, and the coil resistances read out <2 ohms???
  4. What's going on at the ALS-->CARM transition? The ALS noise is clearly low enough that I can sit inside the CARM linewidth. Yet, there seems to be some offset between what ALS thinks is the resonant point, and what the REFL11 signal thinks is the resonant point. I am kind of able to "power through" this conflict, but the IMC error point (=AO path) is not very happy during the transition. It worked 8/8 times tonight, but would be good to figure out how to make this even more robust.
Attachment 1: DARM_OLTF_20210317.pdf
Attachment 2: CARMTF_20210317.pdf
Attachment 3: PRFPMI_Mar_17sensMat.pdf
  15940 | Thu Mar 18 13:12:39 2021 | gautam | Update | Computer Scripts / Programs | Omnigraffle vs draw.io

What is the advantage of Omnigraffle compared to draw.io? The latter also has a desktop app, and for creating drawings, seems to have all the functionality that Omnigraffle has, see for example here. draw.io doesn't require a license and I feel this is a much better tool for collaborative artwork. I really hate that I can't even open my old omnigraffle diagrams now that I no longer have a license.

Just curious if there's some major drawback(s), not like I'm making any money off draw.io.

Quote:

After Anchal left for his test, I took the time to set up the iMAC station so that Stephen (and others) can remote desktop into it to use Omnigraffle.

  15941 | Thu Mar 18 18:06:36 2021 | gautam | Update | Electronics | Modified Sat Amp and Coil Driver

I uploaded the annotated schematics (to be more convenient than the noise analysis notes linked from the DCC page) for the HAM-A coil driver and Satellite Amplifier.

  15944 | Fri Mar 19 11:18:25 2021 | gautam | Update | LSC | PRMI investigations: what IS the matrix??

From Finesse simulation (and also analytic calcs), the expected PRCL optical gain is ~1 MW/m (there is a large uncertainty, let's say a factor of 5, because of unknown losses e.g. PRC, Faraday, steering mirrors, splitting fractions on the AP table between the REFL photodiodes). From the same simulation, the MICH optical gain in the Q-phase signal is expected to be a factor of ~10 smaller. I measured the REFL55 RF transimpedance to be ~400 ohms in June last year, which was already a little lower than the previous number I found on the wiki (Koji's?) of 615 ohms. So we expect, across the ~3nm PRCL linewidth, a PDH horn-to-horn voltage of ~1 V (equivalently, the optical gain in units of V/m for PRCL is ~0.3 GV/m).
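
For the record, that arithmetic as a quick python check (the 0.8 A/W responsivity is my assumption; the other numbers are quoted above):

P_per_m = 1e6                  # PRCL optical gain from Finesse [W/m], uncertain to ~5x
resp    = 0.8                  # assumed InGaAs responsivity [A/W]
Z_rf    = 400                  # measured REFL55 RF transimpedance [ohm]
V_per_m = P_per_m * resp * Z_rf
print(V_per_m / 1e9, "GV/m")                 # ~0.3 GV/m
print(V_per_m * 3e-9, "V horn-to-horn")      # across the ~3 nm PRCL linewidth, ~1 V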

In the measurement, the MICH gain is indeed ~10x smaller than the PRCL gain. However, the measured optical gain (~0.1 GV/m, and this is after the x10 gain of the daughter board) is ~10 times smaller than what is expected (after accounting for the various splitting fractions on the AP table between REFL photodiodes). We've established that the modulation depth isn't to blame, I think. I will check (i) the REFL55 transimpedance, (ii) the cable loss between the AP table and 1Y2, and (iii) whether the beam is well centered on the REFL55 photodiode.

Basically, with the 400ohm transimpedance gain, we should be running with a whitening gain of 0dB before digitization as we expect a signal of O(1V). We are currently running at +18dB.

Quote:

Then I put the RF signal directly into the scope and saw that the 55 MHz signal is ~30 mVpp into 50 Ohms. I waited a few minutes with triggering to make sure I was getting the largest flashes. Why is the optical/RF signal so puny? This is ~100x smaller than I think we want... it's OK to saturate the RF stuff a little during lock acquisition as long as the loop can suppress it so that the RMS is < 3 dBm in the steady state.

  15949 | Fri Mar 19 22:24:54 2021 | gautam | Update | LSC | PRMI investigations: what IS the matrix??

I did all these checks today. 

Quote:

I will check (i) REFL55 transimpedance, (ii) cable loss between AP table and 1Y2 and (iii) is the beam well centered on the REFL55 photodiode.

  1. The transimpedance was measured to be ~420 ohms at 55 MHz (-4.3 dB relative to the assumed 700V/A of the NF1611), so close to what I measured in June (the data download didn't work apparently and so I don't have a plot but it can readily be repeated). The DC levels also checked out - with 20mA drive current for the Jenne laser, I measured ~2.3 V on the NF1611 (10kohm DC transimpedance) vs ~13mV on the DC output of the REFL55 PD (50 ohm DC transimpedance).
  2. Time domain confirmation of the above statement is seen in Attachment #1. The Agilent was used to drive the Jenne laser with 0dBm RF signal @ 55 MHz. Ch1 (yellow) is the REFL55 PD output, Ch2 (blue) is the NF1611 RFPD, measured at the AP table (sorry for the confusing V/div setting).
  3. Re-connected the cabling at the AP table, and measured the signal at 1Y2 using the scope Rana conveniently left there, see Attachment #2. Though the two scopes are different, the cable+connector loss estimated from the Vpp of the signal at the AP table vs that at 1Y2 is 1.5 dB, which isn't outrageous I think.
  4. For the integrated test, I left the AM laser incident on the REFL55 photodiode, reconnected all the cabling to the CDS system, and viewed the traces on ndscope, see Attachment #3. Again, I think all the numbers are consistent. 
    • REFL55 demod board has an overall conversion gain (including the x10 gain of the daughter board preamp) of ~5V I/F per 1V RF.
    • There is a flat 18 dB whitening gain.
    • The digitized signal was ~13000 cts-pp - assuming 3276.8 cts/V, that's ~4 Vpp. Undoing the flat whitening gain and the conversion efficiency, I get 13000 / 3276.8 / (10^(18/20)) / 5 ~ 100 mVpp, which is in good agreement with Attachment #3 (pardon the thin traces, I didn't realize it looked so bad until I closed everything). The arithmetic is sketched below.
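
As a quick python check of that chain (all numbers as quoted above):

cts_pp = 13000
adc    = 3276.8              # cts/V
wht    = 10**(18/20)         # +18 dB flat whitening gain
demod  = 5                   # V (I/F) per V (RF), including the x10 daughter board
print(cts_pp / adc / wht / demod * 1e3, "mVpp at the RF input")   # ~100 mVpp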

So it would seem that there is nothing wrong with the sensing electronics. I also think we can rule out any funkiness with the modulation depths since they have been confirmed with multiple different measurements.

One thing I checked was the splitting ratios on the AP table. Jenne's diagram is still accurate (assuming the components are labelled correctly). Let's assume 0.8 W makes it through the IMC to the PRM - then I would expect, according to the linked diagram, 0.8 W * 0.8 * (1-5.637e-2) * 0.8 * 0.1 * 0.5 * 0.9 ~ 22 mW to make it onto the REFL55 PD. With the PRM aligned and the beam centered on the PD (using the DC monitor, but I also looked through an IR viewer and it looked pretty well centered), I measured a 500 mV DC level. Assuming 50 ohm DC transimpedance, that's 500 / 50 / 0.8 ~ 12.5 mW of power on this photodiode, which, while consistent with what's annotated on Jenne's diagram, is ~50% off from expectation. Is the uncertainty in the Faraday transmission and IMC transmission enough to account for this large deviation?
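
The same power budget in python form (the 0.8 A/W responsivity is my assumption; everything else is from the diagram / measurement):

P_in = 0.8                                            # W through the IMC (assumed)
P_exp = P_in * 0.8 * (1 - 5.637e-2) * 0.8 * 0.1 * 0.5 * 0.9
print(P_exp * 1e3, "mW expected on REFL55")           # ~22 mW
V_dc, Z_dc, resp = 0.5, 50, 0.8                       # measured V, DC transimpedance, A/W
print(V_dc / Z_dc / resp * 1e3, "mW inferred from the DC level")   # ~12.5 mW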

If we want more optical gain, we'd have to put more light on this PD. I suppose we could have ~10x the power, since that's what is on IMC REFL when the MC is unlocked? If we want a x100 increase in optical gain, we'd also have to increase the transimpedance by 10. I'll double check the simulation, but I'm inclined to believe that the sensing electronics are not to blame.


Unconnected to this work but I feel like I'm flying blind without the wall StripTool traces so I restored them on zita (ran /opt/rtcds/caltech/c1/scripts/general/startStrip.sh).

Attachment 1: IMG_9140.jpg
Attachment 2: IMG_9141.jpg
Attachment 3: REFL55.png
  15953 | Mon Mar 22 16:29:17 2021 | gautam | Update | ASC | Some prep for AS WFS commissioning
  1. Added rough cts2mW calibration filters to the quadrants, see Attachment #1. The number I used (see the sketch after this list) is:
              0.85 A/W (InGaAs responsivity) * 500 V/A (RF transimpedance) * 10 V/V (IQ demod conversion gain) * 1638.4 cts/V (ADC calibration)
  2. Recovered FPMI locking. Once the arms are locked on POX / POY, I lock MICH using AS55_Q as a sensor and BS as an actuator with ~80 Hz UGF.
  3. Phased the digital demod phases such that while driving a sine wave in ETMX PIT, I saw it show up only in the "I" quadrant signals, see Attachment #2.
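
The product from item 1, as a quick python check:

cts_per_W = 0.85 * 500 * 10 * 1638.4     # responsivity * transimpedance * demod * ADC
print(cts_per_W / 1e3, "cts/mW")         # ~6963 cts/mW
print(1e3 / cts_per_W, "mW/ct")          # ~1.44e-4, the cts2mW filter gain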

The idea is to use the FPMI config, which is more easily accessed than the PRFPMI, to set up some tests, measure some TFs etc, before trying to commission the more complicated optomechanical system.

Attachment 1: AS_WFS_head.png
Attachment 2: WFSquads.pdf
  15956 | Wed Mar 24 00:51:19 2021 | gautam | Update | LSC | Schnupp asymmetry

I used the Valera technique to measure the Schnupp asymmetry to be ≈ 3.5 cm, see Attachment #1. The measured data are shown as points, and the zero crossing is estimated using a linear fit. I repeated the measurement 3 times for each arm to see if I get consistent results - seems like I do. Subtle effects like possible differential detuning of each arm cavity (since the measurement is done one arm at a time) are not included in the error analysis, but I think it's not controversial to say that our Schnupp asymmetry has not changed by a huge amount from past measurements. Jamie set a pretty high bar with his plot, which I've tried to live up to. 
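
The fit itself is simple; a minimal python sketch of the zero-crossing estimate and its spread over the 3 repeats per arm (the data here are synthetic placeholders):

import numpy as np

def zero_crossing(x, y):
    a, b = np.polyfit(x, y, 1)       # linear fit y = a*x + b
    return -b / a

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 11)           # swept variable (placeholder units)
repeats = [(x, 0.8 * (x - 0.3) + 1e-3 * rng.standard_normal(x.size)) for _ in range(3)]
crossings = [zero_crossing(*r) for r in repeats]
print(np.mean(crossings), "+/-", np.std(crossings, ddof=1))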

Attachment 1: Lsch.pdf
  15958 | Wed Mar 24 15:24:13 2021 | gautam | Update | LSC | Notes on tests

For my note-taking:

  1. Lock PRMI with ITMs as the MICH actuator. Confirm that the MICH-->PRCL contribution cannot be nulled. ✅  [15960]
  2. Lock PRMI on REFL165 I/Q. Check if transition can be made smoothly to (and from?) REFL55 I/Q.
  3. Lock PRMI. Turn sensing lines on. Change alignment of PRM / BS and see if we can change the orthogonality of the sensing.
  4. Lock PRMI. Put a razor blade in front of an out-of-loop photodiode, e.g. REFL11 or REFL33. Try a few different masks (vertical half / horizontal half and L/R permutations) and see if the orthogonality (or lack thereof) is mask-dependent.
  5. Double check the resistance/inductance of the PRM OSEMs by measuring at 1X4 instead of flange. ✅  [15966]
  6. Check MC spot centering.

If I missed any of the tests we discussed, please add them here.

  15960 | Wed Mar 24 22:54:49 2021 | gautam | Update | LSC | New day, new problems

I thought I'd get started on some of the tests tonight. But I found that this problem had resurfaced. I don't know what's so special about the REFL55 photodiode - as far as I can tell, the other photodiodes at the REFL port are running with comparable light incident on them, similar flat whitening gain, etc. The whitening electronics are known to be horrible because they use the quad LT1125 - but why is only this channel problematic? To describe the problem in detail:

  • I had checked the entire chain by putting an AM field on the REFL 55 photodiode, and corroborating the pk-to-pk (counts) value measured in CDS with the "nominal" setting of +18dB flat whitening gain against the voltage measured by a "reference" PD, in this case a fiber coupled NF1611.
  • In the above test, I confirmed that the measured signal was consistent with the value reported by the NF1611.
  • So, at least on Friday, the entire chain worked just fine. The PRMI PDH fringes were ~6000cts-pp in this condition.
  • Today, I found that while trying to acquire PRMI lock, the PDH fringes witnessed in REFL55 were saturating the ADC. I lowered the whitening gain to 0 dB (so a factor of 8). Then the PDH fringes were ~20,000cts-pp. So, overall, the gain of the chain seems to have gone up by a factor of ~25. 
  • Given my NF1611 based test, the part of the chain I am most suspicious of is the whitening filter. But I don't have a good mechanism that explains this observation. It can't be as simple as the input impedance of the LT1125 being lowered due to internal saturations, because that would have the opposite effect - we would measure a tiny signal instead of a huge one.

I request Koji to look into this, time permitting, tomorrow. In the slightly longer term, we cannot run the IFO like this - the frequency of occurrence is much too high, and the "fix" seems random to me; why should sweeping the whitening gain fix the problem? There was some suggestion of cutting the PCB trace and putting in a resistor to limit the current draw on the preceding stage, but this PCB is ancient and I believe some traces are buried in internal layers. At the same time, I am guessing it's too much work to completely replace the whitening electronics with the aLIGO style units. Anyone have any bright ideas?


Anyway, I managed to lock the PRMI (ETMs misaligned) using REFL165 I/Q. Then, instead of using the BS as the MICH actuator, I used the two ITMs (equal magnitude, opposite sign in the LSC output matrix).

  • The digital demod phase in this config is different from what is used when the arm cavities are in play (under ALS control). Probably the difference is telling us something about the reflectivity of the arm cavity for various sideband fields, from which we can extract some useful info about the arm cavity (length, losses etc). But that's not the focus here - the correct digital demod phase was 11 degrees. See Attachment #1 for the sensing matrix. I've annotated it with some remarks.
  • The signals appear much more orthogonal when actuating on the ITMs. However, I was still only able to null the MICH line sensed in the PRCL sensor to a ratio of 1/5 (while looking at peaks on DTT). I was unable to do better by fine tuning either the digital demod phase or the relative actuation strength on each ITM.
  • The PRCL loop had a UGF of ~120 Hz, MICH loop ~60 Hz.
  • With the PRMI locked in this config, I tried to measure the appropriate loop gain and sign if I were to use the REFL55 photodiode instead of REFL165 - but I didn't have any luck. Unsurprising given the known electronics issues I guess...

I didn't get around to running any of the other tests tonight, will continue tomorrow.


Update Mar 26: Attachments #2 and #3 show that there is clearly something wrong with the whitening electronics associated with the REFL55 channels - with the PSL shutter closed (so the only "signal" being digitized should be the electronics noise at the input of the whitening stage), the I and Q channels don't show similar profiles, and moreover, are not consistent (the two screenshots are from two separate sweeps). I don't know what to make of the parts of the sweep that don't show the expected "steps". Until ndscope gets a log-scaled y-axis option, we have to live with the poor visualization of the gain steps, which are dB (rather than linearly) spaced. For this particular case, StripTool isn't an option either, because the Q channel has a negative offset, and I opted against futzing with the cabling at 1Y2 to give a small fixed positive voltage instead. I will emphasize that on Friday this problem was not present, because the gain balance of the I and Q channels was good to within 1 dB.

Attachment 1: PRMI3f_noArmssensMat.pdf
Attachment 2: REFL55_whtGainStepping.png
Attachment 3: REFL55_whtGainStepping2.png
  15963 | Thu Mar 25 14:16:33 2021 | gautam | Update | CDS | c1auxey assembly

It might be a good idea to configure this box for the new suspension config - modern Satellite Amp, HV coil driver etc. It's a good opportunity to test the wiring scheme, "cross-connect" type adapters etc.

Quote:

Next, the feedthroughs need to be wired and the channels need to be bench tested.

  15965 | Thu Mar 25 15:31:24 2021 | gautam | Update | IOO | WFS servos

The servos are almost certainly not optimal - but we have the IFO sort of working, so before we make any changes, let's make a strong case for them. Once the loop TFs and noises (e.g. the sensing noise reinjection you may have seen) are fully characterized and a new loop is shown to perform better, then we can make the changes; but until then, let's continue using the "nominal" configuration and keep all the WFS loops on. I turned everything back on.

BTW, MC2_ASCPIT_IN1 isn't the correct channel to measure the sensing noise re-injection; you need some other sensor, e.g. whether the MC transmission is (de)stabilized. 0-20 Hz is where I expect the WFS is actually measuring above the sensing noise.

  15966 | Thu Mar 25 16:02:15 2021 | gautam | Summary | SUS | Repeated measurement of coil Rs & Ls for PRM/BS

Method

Since I am mainly concerned with the actuator part of the OSEM, I chose to do this measurement at the output cables for the coil drivers in 1X4. See schematic for pin-mapping. There are several parts in between my measurement point and the actual coils but I figured it's a good check to figure out if measurements made from this point yield sensible results. The slow bias voltages were ramped off under damping (to avoid un-necessarily kicking the optics when disconnecting cables) and then the suspension watchdogs were shutdown for the duration of the measurement.

I used an LCR meter to measure R and L - as prescribed by Koji, the probe leads were shorted and the readback nulled to return 0. Then for R, I corroborated the values measured with the LCR meter against a Fluke DMM (they turned out to be within +/- 0.5 ohms of the value reported by the BK Precision LCR meter which I think is reasonable).

Result

PRM
Pin 1-9   (UL) / R = 30.6 Ω / L = 3.23 mH
Pin 2-10  (LL) / R = 30.3 Ω / L = 3.24 mH
Pin 3-11  (UR) / R = 30.6 Ω / L = 3.25 mH
Pin 4-12  (LR) / R = 31.8 Ω / L = 3.22 mH
Pin 5-13  (SD) / R = 30.0 Ω / L = 3.25 mH

BS
Pin 1-9   (UL) / R = 31.7 Ω / L = 3.29 mH
Pin 2-10  (LL) / R = 29.7 Ω / L = 3.26 mH
Pin 3-11  (UR) / R = 29.8 Ω / L = 3.30 mH
Pin 4-12  (LR) / R = 29.7 Ω / L = 3.27 mH
Pin 5-13  (SD) / R = 29.0 Ω / L = 3.24 mH

Conclusions

On the basis of this measurement, I see no problems with the OSEM actuators - the wire resistances to the flange seem comparable to the nominal OSEM resistance of ~13 ohms, but this isn't outrageous I guess. But I don't know how to reconcile this with Koji's measurement at the flange - I guess I can't definitively rule out the wire resistance being 30 ohms and the OSEMs being ~1 ohm as Koji measured. How to reconcile this with the funky PRM actuator measurement? Possibilities, the way I see it, are:

  1. Magnets on PRM are weird in some way. Note that the free-swinging measurement for the PRM showed some unexpected features.
  2. The imbalance is coming from somewhere in the drive chain - it could be a busted current buffer, for example.
  3. The measurement technique was wrong.
  15967 | Thu Mar 25 17:39:28 2021 | gautam | Update | Computer Scripts / Programs | Spot position measurement scripts "modernized"

I want to measure the spot positions on the IMC mirrors. We know that they can't be too far off center. Basically, I did the bare minimum to get these scripts in /opt/rtcds/caltech/c1/scripts/ASS/MC/ running on rossa (python3 mainly). I confirmed that I get some kind of spot measurement from this, but I'm not sure of the data quality / calibration to convert the demodulated response into mm of decentering on the MC mirrors. Perhaps it's something the MC suspension team can look into - it seems implausible to me that we are off by 5 mm in PIT and YAW on MC2? The spot positions I get are (in mm from the center):

MC1P        MC2P        MC3P        MC1Y        MC2Y        MC3Y
0.640515   -5.149050    0.476649   -0.279035    5.715120   -2.901459

A future iteration of the script should also truncate the number of significant figures per a reasonable statistical error estimation.

Attachment 1: MCdecenter202103251735_mcmirror0.pdf
Attachment 2: MCdecenter202103251735_mcdecenter0.pdf
  15968 | Thu Mar 25 18:05:04 2021 | gautam | Update | Electronics | Stuffed HV coil drivers received from Screaming Circuits

I think the only parts missing for assembly now are the 4 2U chassis. The PA95s need to be soldered on as well (they didn't arrive in time to send to SC). The stuffed boards are stored under my desk. I inspected one board and it looks fine, but of course we will need to run some actual bench tests to be sure.

  15973 | Mon Mar 29 17:07:17 2021 | gautam | Summary | SUS | MC3 new Input Matrix not providing stable loop

I suppose you've tried doing the submatrix approach, where SIDE is excluded for the face DoFs? Does that give a better matrix? To me, it's unreasonable that the side OSEM senses POS motion more than any single face OSEM, as your matrix suggests (indeed the old one does too). If/when we vent, we can try positioning the OSEMs better.

  15974 | Mon Mar 29 17:11:54 2021 | gautam | Update | SUS | MC2 Coil Balancing updates

For this technique to work, (i) the WFS loops must be well tuned and (ii) the beam must be well centered on MC2. I am reasonably certain neither is true. For MC2 coil balancing, you can use a HeNe - there is already one on the table (not powered) - and I guess you can use the MC2 trans QPD as a sensor; the MC won't need to be locked, so you can temporarily hijack that QPD (please don't move anything on the table unless you're confident of recovering everything; it should be possible to do all of this with an additional steering mirror you can install and then remove once your test is done). Then you can do any variant of the techniques available once you have an optical lever, e.g. single coil drive, pringle mode drive etc., to do the balancing.

I think Hang had some technique he tried recently as well, maybe that is an improvement.

  15977 | Mon Mar 29 19:32:46 2021 | gautam | Update | LSC | REFL55 whitening checkout

I repeated the usual whitening board characterization test of:

  • driving a signal (using awggui) into the two inputs of the whitening board using a spare DAC channel available in 1Y2
  • demodulating the response using the LSC sensing matrix infrastructure
  • Stepping the whitening gain, incrementing it in 3dB steps, and checking if the demodulated lock-in outputs increase in the expected 3dB steps.

Attachment #1 suggests that the steps are equal (3dB) in size, but note that the "Q" channel shows only ~half the response of the I channel. The drive is derived from a channel of an unused AI+dewhite board in 1Y2, split with a BNC tee, and fed to the two inputs on the whitening filter. The input impedance is expected to be the same on each channel, and so each channel should see the same signal, but I see a large asymmetry. All of this checked out a couple of weeks ago (since we saw ellipses and not circles), so I'm not sure what changed in the meantime, or if this is symptomatic of some deeper problem.
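
For reference, a small python sketch of the step-size check in the third bullet above (the demod magnitudes here are placeholders for the measured lock-in outputs):

import numpy as np

amps = np.array([1.0, 1.41, 2.0, 2.83, 4.0])    # demod magnitude at successive gain steps
steps_db = 20 * np.log10(amps[1:] / amps[:-1])
print(steps_db)                                  # healthy stages should give ~3 dB each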

Usually, doing this and then restoring the cabling returns the signal levels of REFL55 to nominal. Today it did not - at the nominal whitening gain setting of +18dB flat gain, when the PRMI is fringing, the REFL55 inputs frequently report ADC overflows. Needless to say, all my attempts this evening to transition length control of the vertex from REFL165 to REFL55 failed.

I suppose we could try shifting the channels from Ch3, Ch4 to the (physical) Ch5 and Ch6 on this whitening filter, which were formerly used to digitize the ALS DFD outputs and are currently unused, and see if that improves the situation; but this will require a recompile of the RTCDS model and a consequent CDS bootfest, which I'm not willing to undertake today. If anyone decides to do this test, let's also take the opportunity to debug the BIO switching for the delay line.

Attachment 1: REFL55wht.png
  15983 | Thu Apr 1 00:50:06 2021 | gautam | Update | SUS | PRMdiag

I spent some time investigating the PRM this evening, trying out some of the stuff we discussed in the meeting.

  1. I used one of the SUS lockin oscillators to drive the butterfly mode (UL=+1, UR=-1, LL=-1, LR=+1) of the optic, @4.45 Hz, amplitude=450 cts (Oplev loops were engaged during the measurement).
  2. Used the sensing matrix infrastructure to demodulate the Oplev PIT/YAW error signals, using the other lockin channel (so that oscillator is just for demodulation, it doesn't actuate on the suspension).
  3. The digital demod phase was adjusted to put all of the demodulated signal in one ("I") quadrature.
  4. The balancing of the coils was perturbed by adding small amounts of the naive PIT (YAW) DoF to the pringle-mode actuation, while simultaneously looking to minimize the demodulated signal in YAW (PIT); the actuation vectors are sketched after this list.
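
A minimal python sketch of the perturbed actuation vector (the PIT sign convention here is my assumption of the usual one):

import numpy as np

butterfly = np.array([+1, -1, -1, +1])    # UL, UR, LL, LR, as in item 1
pit       = np.array([+1, +1, -1, -1])    # naive PIT (assumed convention)
eps       = 0.005                         # 0.5%, one of the perturbations tried
drive = butterfly + eps * pit             # perturbed pringle-mode drive
print(drive)
# scan eps (both signs) and look for the value that minimizes the demodulated
# Oplev YAW signal; that eps reflects the coil imbalance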

Basically, my finding tonight was that I could not improve things (make the pringle mode actuation witnessed by the Oplev QPD smaller) by perturbing the butterfly actuation with +/- 0.05%, 0.5% or 1% of PIT (I didn't try YAW, or other values of PIT, as none of these seemed to do any good). It seems highly unlikely that the existing coil gains (these come after the output matrix) and the actual coil/magnet pairs are so perfectly tuned, so there must be something wrong with my method. I'll try more combos tomorrow. Separately, I verified that the naive PIT (YAW) actuation moves the optic mainly (i.e. to the eye) in PIT (YAW), as judged by the REFL spot on the camera and the readback of the Oplev QPD.

For this work, I made a few changes to filter banks:

  1. Added 2Hz wide BPs centered around 4.45 Hz to the "SIGNAL" FM of the BS and PRM suspension lockins.
  2. Added 100mHz LPFs to the I and Q demodulated outputs.
  3. Made a copy of Kiwamu's perturbcoilbalance_fourosem.py (in scripts/SUS) to add little bit of PIT/YAW to the pringle actuation.

I noticed that the filters/switch states/gains for LOCKIN1 and LOCKIN2 are not consistent within either the PRM or BS suspension, or across suspensions. Several filter INs/OUTs were also disabled - something for the SUSdiag team to note: whenever this is scripted, the script should check that the signal is indeed making it end-to-end.

  15987 | Thu Apr 1 18:48:45 2021 | gautam | Update | SUS | MC2 Coil Balancing Test Results Success??

In these results, can you also include the new matrix and what the relative imbalances were?

  15990 | Fri Apr 2 01:26:41 2021 | gautam | Update | Electronics | REFL55 chain checkout again, seems fine

[koji, gautam]

Summary:

We could not find problems with any individual piece of the REFL55 electronics chain, from photodiode to ADC. Nevertheless, the PRMI fringes witnessed by REFL55 are ~10x higher than ~two weeks ago, when the PRMI could be repeatably and reliably locked using REFL55 signals (ETMs misaligned).

Details:

  1. Koji prepared a spare whitening board. However, before he swapped it in, he checked the existing board and found no problems with it.
    • A 20mV, 100 Hz input signal was injected into the two REFL55 channels on the whitening board.
    • The flat whitening gain was set to +45 dB.
    • The signal levels seen in CDS was consistent with what is expected given an ADC conversion factor of 3276.8 cts/V.
  2. Tried putting the REFL55 demodulated outputs into the next two channels, 5&6, (currently unused) on the same whitening board.
    • After setting the whitening gains of these two channels also to +18dB, the saturation of the ADCs when the PRMI was fringing persisted.
  3. With the dark noise of the whitening filter, we enabled/disabled the on board frequency dependent whitening, and reasoned that the time domain increase in RMS seemed reasonable. So we decided to investigate parts of the electronics chain upstream of the whitening board, since we couldn't find anything obviously wrong with the whitening board.
  4. Injected -10dBm RF signal (=0.2 Vpp) into the RF input on the REFL55 demod board, and saw ~3500 cts-pp signal in CDS. This is totally consistent with my recent characterization of 16,000 cts/V for this demod board at the "nominal" + 18dB whitening gain setting. So the demodulator seems to function as advertised.
  5. Decided to repeat my test of using the Jenne laser to test the whole chain end-to-end.
    • In summary, we recovered the results (RF transimpedance of the PD, and signal levels in CDS for a known AM determined by the reference NF1611 PD) I reported there.
    • So it would seem that the entire REFL55 electronics chain performs as expected.
    • The only remaining explanation is that the optical gain of the PRMI has increased - but how?? 
    • Similar jumps in the REFL55 signal levels have occurred multiple times in the past, and each time, I was able to recover the "nominal" performance by this procedure (though I have no idea why that should work at all).
    • So I am highly skeptical that this has anything to do with the IFO optical gain, but that is the only difference between our AM laser based test and the "live" operating conditions when the signals are saturated.

Discussion and next steps:

Q: Koji asked me what the problem is with this apparent increased optical gain - can't we just compensate by decreasing the whitening gain?
A: I am unable to transition control of the PRMI (no ETMs) from 3f to 1f, even after reducing the whitening gain on the REFL55 channels to prevent the saturation. So I think we need to get to the bottom of whatever the problem is here.

Q: Why do we need to transfer the control of the vertex to the 1f signals at all?
A: I haven't got a plot in the elog, but from when I had the PRFPMI locked last year, the DARM noise between 100 Hz and 1 kHz had high coherence with the MICH control signal. I tried some feedforward to try and cancel it, but never got anywhere. It isn't a quantitative statement, but the 1f signals are expected to be cleaner.

Koji pointed out that the MICH signal is visible in the REFL55 channels even when the PRM is misaligned, so I'm gonna look back at the trend data to see if I can identify when this apparent increase in the signal levels occurred and if I can identify some event in the lab that caused it. We also discussed using the ratio of MICH signals in REFL and AS to better estimate the losses in the REFL path - the Faraday losses in particular are a total unknown, but in the AS path, there is less uncertainty since we know the SRM transmission quite precisely, and I guess the 6 output steering mirrors can be assumed to be R=99%. 

  15992   Fri Apr 2 15:17:23 2021 gautamSummaryGeneralHEPA AC cord replacement

From the last failure, I had ordered 2 extra capacitors (they are placed on top of the PSL enclosure, above where the capacitors would normally be installed). If the new capacitors lasted <6 months, that may be symptomatic of some deeper problem, e.g. the HEPA fans themselves needing replacement. We don't really have a good diagnostic of when the failure happened, since we don't have any channel recording the state of the fans.

Quote:

I think the PSL HEPA (both 2 units) are not running. The switches were on. And the variac was changed from 60% to 0%~100% a few times but no success.
I have no troubleshooting power anymore today. The main HEPA switch was turned off.

  15993   Fri Apr 2 15:22:54 2021 gautamUpdateSUSMatrix results, new measurement set to trigger

How should I try to understand why PIT and YAW are so different? A quick comparison against the ideal output matrix is sketched below the quote.

Quote:
New output matrix for MC2 (C1:SUS-MC2_TO_COIL_ii_jj_GAIN)
        POS   PIT       YAW
  UL    1     1.022     0.6554
  UR    1     0.9776   -1.2532
  LL    1    -0.9775    1.2532
  LR    1    -1.0219   -0.6554
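For orientation, a minimal sketch of one way to look at it: compare the measured matrix to the ideal +/-1 output matrix for four symmetric face coils, so the fractional deviations quantify the imbalance. The matrix values are copied from the quote; the interpretation (coil/magnet gain imbalance vs. sensing geometry) is an assumption, not an established diagnosis.

    import numpy as np

    # Measured MC2 output matrix (rows: UL, UR, LL, LR; cols: POS, PIT, YAW)
    measured = np.array([[1.0,  1.0220,  0.6554],
                         [1.0,  0.9776, -1.2532],
                         [1.0, -0.9775,  1.2532],
                         [1.0, -1.0219, -0.6554]])

    # Ideal output matrix for four symmetric face coils
    ideal = np.array([[1.0,  1.0,  1.0],
                      [1.0,  1.0, -1.0],
                      [1.0, -1.0,  1.0],
                      [1.0, -1.0, -1.0]])

    # PIT magnitudes are within ~2% of ideal, while the YAW entries are
    # ~25-35% off (and pairwise mirrored) - the asymmetry in question.
    frac_dev = (np.abs(measured) - np.abs(ideal)) / np.abs(ideal)
    print(np.round(frac_dev, 3))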
  15994   Sat Apr 3 00:42:40 2021 gautamUpdateLSCPRFPMI locking with half input power

Summary:

I wanted to put my optomechanical instability hypothesis to the test. So I decided to cut the input power to the IMC by ~half and try locking the PRFPMI. However, this did not improve the stability of the buildup in the arm cavities while the control was solely on the ALS error signal.

Details:

  1. The waveplate I installed for this purpose was rotated until the MC RFPD DCMON channel reported ~half its nominal value.
  2. I adjusted the IMC servo gains appropriately to compensate. IMC lock was readily realized.
  3. I increased the whitening gains on the POX, POY and REFL165 photodiodes by 6dB, to compensate for the reduced light levels (the arithmetic is sketched after this list).
    • One day soon, we will have remote power control, and it'd be nice to have this process be automated.
    • Really, we should have de-whitening filters that undo these flat gains in addition to undoing the frequency dependent whitening.
    • I'm not sure the quality of the electronics is good enough though, for the changing electronics offsets to not be a problem.
    • One possibility is that we can normalize some signals by the DC light level at that port, but I still think compensating the changing optical gain as far upstream as possible is best, and the whitening gain is the convenient stage to do this.
  4. Recovered single arm POX/POY locking. 
  5. Then I decided to try and lock the PRFPMI with the reduced input power.
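For reference, a minimal sketch of the whitening-gain compensation arithmetic from step 3, assuming the error-signal voltage scales linearly with input power (true for PDH-type signals, since the carrier and sideband field amplitudes each scale as sqrt(P)):

    import numpy as np

    P_ratio = 0.5  # input power cut to ~half of nominal
    # Signal voltage is proportional to power, so the signal lost in dB is:
    gain_comp_dB = 20 * np.log10(1.0 / P_ratio)
    print(f"add ~{gain_comp_dB:.1f} dB of whitening gain")  # ~6.0 dB, as used above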

Basically, with some tweaks to the loop gains, it worked - see Attachment #1. Note that the lower right axis shows the IMC transmission, which is ~7500 cts, vs the nominal ~15,000 cts.

Discussion:

Cutting the input power did not have the effect I hoped it would. Basically, I was hoping to zero the optical CARM offset while the IFO was entirely under ALS control, and have the arm transmission be stable (or at least, stay in the linear regime of REFL11). However, the observation was that the IFO did the usual "buzzing" in and out of the linear regime. Right now, this is not at all a problem - once the IR error signal is blended in, and DC control authority is transferred to that signal, the lock acquisition can proceed just fine. And I guess it is cool that we can lock the IFO at ~half the input power - something to keep in mind when we have the remote-controlled waveplate; maybe we always want to lock at the lowest power possible, such that optomechanical transients are not a problem.

I also don't think this test directly disputes my claim that the residual CARM noise when the arm cavities are under purely ALS control is smaller than the CARM linewidth.

What does this mean for my hypothesis? I still think it is valid, maybe the power has to be cut even further for the optomechanics to not be a problem. In Finesse (see Attachment #2), with 0.3 W input power to the back of the PRM, and with best guesses for the 40m optical losses in the PRC and arms, I still see that considerable phase can be eaten up due to the optomechanical resonance around ~100 Hz, which is where the digital CARM loop UGF is. So I guess it isn't entirely unreasonable that the instability didn't go away?
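As a back-of-the-envelope check on this, here is a sketch of the power scaling, assuming the feature is a radiation-pressure (optical spring) effect whose stiffness is proportional to circulating power - a scaling argument only, not a substitute for the Finesse model:

    import numpy as np

    # Optical spring stiffness K_opt ~ P, so the spring resonance frequency ~ sqrt(P).
    f_nominal = 100.0   # Hz, approximate location of the optomechanical feature
    P_ratio = 0.5       # input power cut to half
    f_half = f_nominal * np.sqrt(P_ratio)
    print(f"{f_half:.0f} Hz")  # ~71 Hz - still uncomfortably close to the ~100 Hz CARM UGF
    # To move the feature a decade below the UGF, the power would have to drop ~100x.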


After this work, I undid all the changes I made for the low power lock test. I confirmed that IMC locking, POX/POY locking, and the dither alignment systems all function as expected after I reverted the system.

Attachment 1: PRFPMIlock_1301464998_1301465238.pdf
PRFPMIlock_1301464998_1301465238.pdf
Attachment 2: CARMplant.pdf
CARMplant.pdf
  15996   Mon Apr 5 22:26:01 2021 gautamUpdateLSCPRMI 1f locking (no ETMs) recovered

Since it seems like the entire electronics chain has no obvious failure, I decided to compensate for the apparent increased optical gain by turning the flat whitening gain down from +18dB to 0dB. Then, after some fiddling around with alignment, settings etc, I was able to lock the PRMI once again, with the ETMs misaligned, using REFL55_I to sense PRCL, and REFL55_Q to sense MICH. Some sensing matrices attached. Some notes:

  1. I made some changes to the presentation so that the units of the sensing matrix are now in [W/m] sensed on a photodiode. 
    • The info printed on the plot is also more verbose, I now indicate explicitly the actuator driven to make the measurement, and also the drive frequency. The various gains used to convert counts to Watts on a photodiode are also indicated.
    • I thought about printing the actual matrix too but haven't arrived at a clean prez style yet.
    • This is to facilitate easier comparison to Finesse models / analytic calcs.
    • I take into account all the gains from the photodetector to the servo error point where the measurement is made. However, the splitting fractions between the various photodiodes are not included, so you will have to account for those yourself when comparing to a Finesse model.
    • For example, in pg2 of Attachment #1, the measured gain of PRCL sensed in REFL55_I is ~2e6 W/m. But only ~4% of IFO REFL ends up on the REFL55 photodiode. So this measurement would be consistent with a Finesse-simulated optical gain of ~50MW/m, which is in the ballpark of what I do actually see (the conversion is sketched after this list).
  2. There seems to be reasonable agreement between Finesse and these measurements. But why should the old settings / locking have worked at all then?
  3. I tried two schemes for MICH actuation today.
    • The first was the usual BS + PRM combo, and I got the sensing matrix on pg 1 of Attachment #1. With this scheme, the MICH/PRCL orthogonality is a joke.
    • Then I changed the MICH actuator to the ITMs, and got the sensing matrix on pg 2 of Attachment #1. With this scheme, the orthogonality looks much better. I think the slight non-orthogonality in the 11/33 MHz photodiodes may even be reasonable, since the 11 MHz field isn't a good sensor of the anti-symmetric modes, but I have to confirm by calculation/simulation. In any case, the separation of signals is much cleaner when the ITMs are used to control MICH.
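To make the conversion in the example above explicit, a minimal sketch (the ~4% REFL splitting fraction is the number quoted above, and should be treated as approximate):

    measured_gain = 2e6      # W/m, PRCL sensed in REFL55_I, as measured on the photodiode
    refl_fraction = 0.04     # ~4% of IFO REFL ends up on the REFL55 PD
    ifo_optical_gain = measured_gain / refl_fraction
    print(f"{ifo_optical_gain:.1e} W/m")  # ~5e7 W/m = ~50 MW/m, comparable to Finesse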

So there is clearly something funky with the nominal MICH actuation scheme (BS suspension, PRM suspension, or both), which we should get to the bottom of before trying any low noise locking. I think using the ITMs as the MICH actuator in the full lock will not be a good low noise strategy, as we would then be "polluting" all our suspended optics with our control loops, which seems highly suboptimal for technical noise sources like coil driver noise etc.

Attachment 1: PRMI_Apr5sensMat_consolidated.pdf
PRMI_Apr5sensMat_consolidated.pdf PRMI_Apr5sensMat_consolidated.pdf
  16006   Wed Apr 7 22:48:48 2021 gautamUpdateIOOWaveplate commissioning

Summary:

I spent an hour this evening checking out the remote waveplate operation. Basic remote operation was established 👍 . To run a test on the main beam (or any beam for that matter), we need to lay out some long cabling and install the controller in a rack; I will work with Jordan in the coming days to do these things. Apart from the hardware, some EPICS channels will need to be added to the c1ioo.db file, and a python script will need to be set up as a service to allow remote operation.

Part numbers:

  • The controller is a NewFocus ESP300.
  • The waveplate stage is a PR50CC. The waveplate itself that is mounted has a 1" diameter (clear aperture is more like 21mm), which I think is ~twice the size of the waveplates we have in the lab - good thing Livingston shipped us the waveplate itself too. It is labelled QWPO-1064-10-2, so it should be a half wave plate as we want, but I didn't explicitly check with a linearly polarized beam today. Before any serious high power tests, we can first contact clean the waveplate to avoid any burning of dirt. The damage threshold is rated as 1 MW/cm^2, and I estimate that we will be well below this threshold for any power levels (<30W) we are planning to put through this waveplate. For a 100um radius beam with 30W, the peak intensity is ~0.2 MW/cm^2 (see the sketch after this list). Since this is 20% of the rated damage threshold, it may be better to enforce that the beam be >200um going through this waveplate.
  • The dimensions of the mount look compatible with the space we have on the PSL table (though of course once the amplifier comes into the picture, we will have to change the layout). Maybe it's better to keep everything downstream of the PMC fixed - then we just re-position the seed beam (i.e. NPRO) and amplifier, and mode-match the output of the amplifier to the PMC.
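A minimal sketch of the peak-intensity estimate above, assuming a Gaussian beam (peak intensity I0 = 2P/(pi*w^2)):

    import numpy as np

    def peak_intensity_MW_cm2(power_W, w_cm):
        """Peak intensity of a Gaussian beam with 1/e^2 radius w, in MW/cm^2."""
        return 2 * power_W / (np.pi * w_cm**2) / 1e6

    print(peak_intensity_MW_cm2(30, 100e-4))  # ~0.19 MW/cm^2 - 20% of the 1 MW/cm^2 rating
    print(peak_intensity_MW_cm2(30, 200e-4))  # ~0.05 MW/cm^2 - the proposed >200um beam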

Electrical tests:

  1. First, I connected a power cord to the ESP300 and powered it on - the front display lit up and displayed a bunch of diagnostics, and said something to the effect of "No stage connected".
  2. Next, I connected the rotary mount to "Axis #1": Male DB25 on the stage to female DB25 on the rear of the ESP300. The stage was recognized.
  3. Used the buttons on the front panel to rotate the waveplate, and confirmed visually that rotation was happening 👍 . I didn't calibrate the actual degrees of rotation against the readback on the front panel, but 45 degrees on the panel looked like 45 degrees rotation of the physical stage so seems fine.

RS232 tests:

  • This unit only has a 9-pin D-sub connector for remote interfacing, via the RS232 protocol. The c1psl Supermicro host was designated as the computer with which I would attempt remote control.
  • To test, I decided to use a serial-USB adapter. Since this is only a single unit, no need to get an RS232-ethernet interface like the one used in the vacuum rack, but if there are strong opinions otherwise we can adopt some other wiring/control philosophy.
  • No drivers needed to be installed - the host recognized the adapter immediately. I then shifted the waveplate and controller assembly to inside the VEA; they are sitting on a cart behind 1X2. Once the controller was connected to the USB-serial adapter cable, it was registered at /dev/ttyUSB0 immediately. I had to chown this port to the controls user to access it using python serial.
  • Initially, I was pleasantly surprised to find not one but TWO projects on PyPi that already claimed to do what I want! Sadly, neither NewportESP 1.1 nor PyMeasure 0.9.0 actually worked - the former is for python2 (and the string typesetting has changed for PySerial compatibility with python3), while the latter seems to be optimized for Labview interfacing and didn't play so nice with the serial-USB adapter. I didn't want to spend >10 mins on this, and I know enough python serial to do the interfacing myself, so I pushed ahead. Good thing we have several pySerial experts in the group now, if any of you want to figure out how we can make either of these two utilities actually work for us - there is also this repo which claims to work for python 3, but I didn't try it because it isn't a managed package.
  • The command list is rather intimidating - it runs to some 100 (!) pages. Nevertheless, I used some basic commands to read back the serial number of the controller, and also succeeded in moving the stage around by issuing the "PR" (relative move) command appropriately 👍 (a minimal sketch of this kind of interaction follows this list). BTW, I forgot to test the motor enable/disable, which is an essential channel I think.
  • I think we actually only need a very minimal set of commands, so we don't need to read all 100 pages of instructions:
    • motor enable/disable
    • absolute and relative rotations
    • readback of the current position
    • readback of the moving status
    • a stop command
    • an interlock
  • Note that as a part of this work, in addition to chowning /dev/ttyUSB0, I installed the two aforementioned python packages on c1psl. I saw no reason to manually restart the modbus and latch services running on it, and I don't believe this work would have impacted the correct functioning of either of those two services, but be aware that I was poking around on c1psl. I was also reminded that the system python on this machine is 2.7 - basically, only the latch service that takes care of the gains for the IMC servo board are dependent on python (and my proposed waveplate control script will be too), but we should really upgrade the default python to 3.7/3.8.
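For the record, a minimal sketch of the kind of serial interaction described above, using PySerial directly. The port settings are the ESP300 defaults as I recall them, and the exact command strings should be checked against the (100-page!) manual before trusting this:

    import serial

    # ESP300 default serial settings - verify against the manual / front panel setup
    esp = serial.Serial('/dev/ttyUSB0', baudrate=19200, timeout=1, rtscts=True)

    def cmd(s):
        """Send one ESP300 command, CR/LF terminated, and return any reply."""
        esp.write((s + '\r\n').encode('ascii'))
        return esp.readline().decode('ascii').strip()

    print(cmd('VE?'))   # controller firmware version
    cmd('1MO')          # enable the axis 1 motor
    cmd('1PR10')        # rotate axis 1 by +10 degrees (relative move)
    print(cmd('1TP'))   # read back the current position
    cmd('1MF')          # disable the motor when done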

Next steps:

Satisfied that the unit works basically as expected, I decided to stop for today. My thinking is that we can have the ESP300 installed in 1X1 or 1X2 (depending on where space is more readily available). I have uploaded a cartoon here so people can comment if they like/dislike my plan.

  • We need to use a long-ish cable to run from 1X1/1X2, where the controller will be housed, to the PSL enclosure. Livingston did ship one such long cable (still on Rana's table), but I didn't check whether its length is sufficient, or test its functionality.
  • We need to set up some EPICS channels for the rotation stage angle, motor ENABLE/DISABLE, a "move stage" button, motion status, and maybe a channel to control the rotation speed? 
  • We need a python script that is reading from / writing to these EPICS channel in a while loop. Should be straightforward to setup something to run like the latch.py service that has worked decently reliably for ~a year now. afaik, there isn't a good way to run this synchronously, and the delay in sending/completing the execution of some of the serial commands might be ~1 second, but for the purpose of slowly ramping up the power, this shouldn't be a problem.
  • One question I do have is, what is the strategy to protect the IFO from the high power when the lock is lost? Surely we are not gonna rely on this waveplate for any fast actuation? With the current input power of 1W, the MCREFL photodiode sees ~100mW when the IMC loses lock. So if the final input power is 35W, do we wanna change the T=10% beamsplitter in the MCREFL path to keep this ratio?

Once everything is installed, we can run some tests to see if the rotary motion disturbs the PSL in any meaningful way. Photos here.

Attachment 1: remotePowCtrl.pdf
remotePowCtrl.pdf
  16022   Tue Apr 13 17:47:07 2021 gautamUpdateIOOWaveplate commissioning - software prepared

I spent some time today setting up a workable user interface to control the waveplate.

  1. Created some EPICS database records at /cvs/cds/caltech/target/ESP300.db. These are all soft channels. This required a couple of restarts of the modbus service on c1psl - as far as I can tell, everything has come back up without problems.
  2. Hacked newportESP to make it work, mainly some string encoding BS in the python2-->python3 paradigm shift.
  3. Made a python script at /cvs/cds/caltech/target/ESP300.py that is based on similar services I've set up for the CM servo and IMC servo boards (a minimal sketch of the idea follows this list). I have not yet set this up to run as a service on c1psl, but that is pretty trivial.
  4. Made a minimal MEDM screen, see Attachment #1. It is saved at  /opt/rtcds/caltech/c1/medm/c1psl/C1PSL_POW_CTRL.adl and can be accessed from the "PSL" tab on sitemap. We can eventually "calibrate" the angular position to power units.
  5. Confirmed that I can move the waveplate using this MEDM screen.
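For concreteness, a minimal sketch of what such a soft-channel-to-serial bridge can look like, using pyepics and the hacked newportESP package. The channel names below are hypothetical placeholders for illustration, not the records actually created in ESP300.db:

    import time
    import epics
    from newportESP import ESP  # the package hacked for python3, as above

    esp = ESP('/dev/ttyUSB0')
    stage = esp.axis(1)

    while True:
        # Hypothetical channel names, for illustration only
        target = epics.caget('C1:PSL-POW_WAVEPLATE_SETPOINT')
        if epics.caget('C1:PSL-POW_WAVEPLATE_MOVE'):
            stage.move_to(target, wait=True)   # blocking absolute move
            epics.caput('C1:PSL-POW_WAVEPLATE_MOVE', 0)
        epics.caput('C1:PSL-POW_WAVEPLATE_READBACK', stage.position)
        time.sleep(1)  # serial round trips take ~1 s; a slow loop is fine for power ramps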

So this system is ready to be installed once Jordan and I find some time to lay out cabling + install the ESP300 controller in a rack.

At the moment, there is no high power and there is minimal risk of damaging anything, but someone should double check my logic to make sure that we aren't gonna burn the precious IFO optics. We should also probably hook up a hardware interlock to this controller.

I went through some aLIGO documentation and believe that they are using a custom-made potentiometer-based angle sensor rather than the integrated Newport (or similar) sensor+motor. My reading of the situation was that there were several problems to do with hysteresis, the "find home" routine etc. I guess for our purposes, none of these are real problems, as long as we are careful not to randomly rotate the waveplate through a full 180 degrees and go through the full fringe in the process. We need to think of a clever way to guard against careless / accidental MEDM button presses / slider drags.


Unrelated to this work: I haven't been in the lab for ~a week so I took the opportunity today to go through the various configs (POX/POY/PRMI resonant carrier etc). I didn't make a noise budget for each config but at least they can be locked 👍 . I also re-aligned the badly misaligned PMC and offloaded the somewhat large DC WFS offsets (~100 cts, which I estimate to be ~150 nNm of torque, corresponding to ~50 urad of misalignment) to the IMC suspensions' slow bias voltages. 

Attachment 1: remoteHWP.png
remoteHWP.png
  16023   Tue Apr 13 19:24:45 2021 gautamUpdatePSLHigh power operations

We (rana, yehonathan and i) briefly talked about having high power going into the IFO. I worked on some calcs a couple of years ago, that are summarized here. There is some discussion in the linked page about how much power we even need. In summary, if we can have

  • T_PMC ~85% which is what I measured it to be back in 2019
  • T_IMC * T_inputFaraday ~60% which is what I estimate it to be now
  • 98% mode matching into the IMC
  • power recycling gain of 40-45 once we improve the folding mirror situation in the recycling cavities
  • and a gain of 270-280 in the arm cavities (20-30ppm round trip loss)

then we can have an overall gain of ~2400 from laser to each arm cavity (since the BS divides the power equally between the two arms). The easiest place to get some improvement is to improve T_IMC * T_inputFaraday. If we can get that up to ~90%, then we can have an overall gain of ~4000, which is I think the limit of what is possible with what we have.
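A minimal sketch of the power-budget arithmetic above (the throughputs and gains are the estimates from the list; small changes in the assumed losses move the answer by a few hundred):

    def gain_to_arm(T_pmc, T_imc_far, mode_match, prg, arm_gain):
        """Overall power gain from laser to ONE arm; the BS splits the power in two."""
        return T_pmc * T_imc_far * mode_match * prg * arm_gain / 2

    now      = gain_to_arm(0.85, 0.60, 0.98, 40, 270)   # ~2700, ballpark of the ~2400 quoted
    improved = gain_to_arm(0.85, 0.90, 0.98, 42, 275)   # ~4300, i.e. the ~4000 "limit"
    print(f"{now:.0f}, {improved:.0f}")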

We also talked about the EOM. At the same time, I had also looked into the damage threshold as well as the clipping losses associated with the finite aperture of our EOM, which is a NewFocus 4064 (KTP is the Pockels medium). The results are summarized in Attachments #1 and #2 respectively. Rana thinks the EOM can handle a factor of ~3 greater power than the rated damage threshold of 20W/mm^2.

Attachment 1: intensityDist.pdf
intensityDist.pdf
Attachment 2: clippingLoss.pdf
clippingLoss.pdf
  16025   Wed Apr 14 12:27:10 2021 gautamUpdateGeneralLab left open again

Once again, I found the door to the outside in the control room open when I came in ~1215pm. I closed it.

  16028   Wed Apr 14 14:52:42 2021 gautamUpdateGeneralIFO State

The C1:IFO-STATE variable is actually a bunch of bits (16 to be precise), and the 16-bit word they form, converted to decimal, is what is written to the EPICS channel. It was reported on the call today that the nominal value of the variable when the IMC is locked was "8", while it has become "10" today. In fact, this has nothing to do with the IMC. You can see that the "PMC locked" bit is set in Attachment #1. This is done in the AutoLock.sh PMC autolocker script, which was run a few days ago. Nominally, I just lock the PMC by moving some sliders, and I neglect to set/unset this bit.

Basically, there is no anomalous behavior; the bit arithmetic is sketched below. This is not to say that the situation cannot be improved. Indeed, we should get rid of the obsolete states (e.g. FSS Locked, MZ locked), and add some other states like "PRMI locked". While there is nothing wrong with setting these bits at the end of execution of some script, a better way would be to configure the EPICS record to automatically set / unset itself based on some diagnostic channels. For example, the "PMC locked" bit should be set if (i) PMC REFL is <0.1 AND (ii) PMC TRANS is >0.65 (the exact thresholds are up for debate). Then we are truly recording the state of the IFO, and not relying on some script to write to the bit (I haven't thought through whether there are edge cases where we would need an unreasonable number of diagnostic channels to determine whether we are in a certain state).
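A minimal sketch of the bit arithmetic, assuming (consistent with the 8 --> 10 observation) that "IMC locked" is bit 3 and "PMC locked" is bit 1 - the actual bit assignments should be read off the MEDM screen:

    IMC_LOCKED = 1 << 3   # = 8  (assumed bit assignment)
    PMC_LOCKED = 1 << 1   # = 2  (assumed bit assignment)

    state = IMC_LOCKED        # the "nominal" value reported: 8
    state |= PMC_LOCKED       # AutoLock.sh sets the PMC bit: 8 + 2 = 10
    print(state, bin(state))  # 10 0b1010

    # Decode by testing individual bits, not by comparing decimal values:
    print(bool(state & IMC_LOCKED), bool(state & PMC_LOCKED))  # True True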

Attachment 1: IFOSTATE.png
IFOSTATE.png
  16032   Wed Apr 14 19:48:18 2021 gautamUpdatePSLLaser amplifier

A couple of years ago, I got some info from Terra about the amplifier setup at the sites - sharing here in case there is some useful info in there (our setup will be rather different, but it looks to me like our amp is a 2017 vintage, and it may be that the performance is not the same as reported in the 2019 paper).

collection of docs (table layout in 'Proposed....setup') : https://dcc.ligo.org/LIGO-T1700046

LVC 70W presentation: https://dcc.ligo.org/LIGO-G1800538 

I guess we should double check that the beam size everywhere (in vacuum and on the PSL table) is such that we don't exceed any damage thresholds for the mirrors used. 

  16033   Wed Apr 14 23:55:34 2021 gautamUpdateElectronicsHV Coil driver assembly

I've occupied the southernmost electronics bench for assembling the 4 production-version HV coil driver chassis. I estimate it will take me 3 days, and have left a sign indicating as much. Once the chassis assembly is done, I will need to occupy the northernmost bench, where the bench supplies are, to run some functionality tests / noise measurements; unless there are objections, I will move the Acromag box which has been sitting there.

ELOG V3.1.3-