ID | Date | Author | Type | Category | Subject
  15905 | Thu Mar 11 18:46:06 2021 | gautam | Update | CDS | cds reboot

Since Koji was in the lab, I decided to bite the bullet and do the reboot. I've modified the reboot script - now, it prompts the user to confirm that the times recognized by the FEs are the same (use the IOP model's status screen; the GPSTIME is updated live in the upper right-hand corner). So you would do sudo date --set="Thu 11 Mar 2021 06:48:30 PM UTC", for example, and then restart the IOP model. Why is this necessary? Who knows. It seems to be a deterministic way of getting things back up and running for now, so we have to live with it. I will note that this was not a problem between 2017 and Oct 2020, during which time I ran the reboot script >10 times without needing to take this step. But things change (for an as-yet-unknown reason) and we must adapt. Once the IOPs all report a green "DC" status light on the CDS overview screen, you can let the script take you the rest of the way again.
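For reference, a minimal sketch of what that clock check could look like, assuming passwordless ssh to the FEs (the hostname list here is illustrative, not a complete inventory):

import subprocess

# Print each front-end's UTC time as a Unix timestamp; any FE whose clock
# disagrees with the others needs the 'sudo date --set=...' treatment above.
frontends = ["c1lsc", "c1sus", "c1ioo"]  # illustrative list, edit as needed

for host in frontends:
    out = subprocess.run(["ssh", host, "date", "-u", "+%s"],
                         capture_output=True, text=True, timeout=10)
    print(host, out.stdout.strip())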

The main point of this work was to reduce the data rate of the c1lsc model, and it worked. The model now registers ~3.2 MB/s, down from ~3.8 MB/s earlier today, and I can now measure 2 loop TFs simultaneously. This means we should avoid adding any more DQ channels to the c1lsc model (without some adjustment/downsampling of others).

Quote:

 Holding off on a restart until I decide I have the energy to recover the CDS system from the inevitable crash.

  15904 | Thu Mar 11 14:27:56 2021 | gautam | Update | CDS | timesync issue?

I have recently been hitting the 4 MB/s data rate limit on testpoints - basically, I can't run the DTT TF and spectrum measurements that I was able to while locking the interferometer, which I certainly could this time last year. AFAIK, the major modification made was the addition of 4 DQ channels for the in-air BHD experiment - assuming the data is transmitted as double-precision numbers, I estimate the additional load due to this change was ~500 kB/s. Probably there is some compression, so it is a bit more efficient (this naive calc would suggest we can only record 32 channels, and I counted 41 full-rate channels in the model), but still, I can't think of anything else that has changed. Anyway, I removed the unused parts and recompiled/re-installed the models (c1lsc and c1omc). Holding off on a restart until I decide I have the energy to recover the CDS system from the inevitable crash. For documentation, I'm also attaching a screenshot of the schematic of the changes made.
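A worked version of that estimate (assuming 16384 Hz full-rate channels and 8-byte doubles, no compression):

rate_hz = 16384          # full data rate of the DQ channels (assumed)
bytes_per_sample = 8     # double precision
added_load = 4 * rate_hz * bytes_per_sample           # = 524288 B/s ~ 500 kB/s
naive_cap = 4 * 2**20 / (rate_hz * bytes_per_sample)  # = 32 channels in 4 MiB/s
print(added_load, naive_cap)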

Anyway, the main point of this elog is that at the compilation stage, I got a warning I've never seen before:

Building front-end Linux kernel module c1lsc...
make[1]: Warning: File 'GNUmakefile' has modification time 13 s in the future
make[1]: warning:  Clock skew detected.  Your build may be incomplete.

This prompted me to check the system time on c1lsc and FB - you can see there is a 1 minute offset (it is not a delay from me issuing the command to the two machines)! I suspect this NTP action is the reason. So maybe a model reboot is in order. Sigh.

  15903 | Thu Mar 11 14:03:02 2021 | gautam | Update | LSC | AO path

There is some evidence of weird saturation, but the gain balancing (0.8 dB) and orthogonality (~89 deg) of the daughter board on the REFL11 demod board that generates the AO path error signal seem reasonable. This board would probably benefit from the AD797 --> Op27 and thick-film --> thin-film swaps, but I don't think this is to blame for being unable to execute the RF transition.

  15902 | Thu Mar 11 08:13:24 2021 | Paco, Anchal | Update | SUS | IMC First Free Swing Test failed due to typo, restarting now

[Paco, Anchal]

The triggered code ran at 5:00 am today, but a last-minute change I made yesterday to increase the number of repetitions had an error and caused the script to exit, putting everything back to normal. So when we came in this morning, we found the mode cleaner locked continuously after one free-swing attempt at 5:00 am. I've fixed the script and ran it for 2 hours starting at 8:10 am. Our plan is to get at least some data to play with while we are here. If the duration is not long enough, we'll try to run this again tomorrow morning. The new script is running in the same tmux session 'MCFreeSwingTest' on Rossa.

10:13 the script finished and IMC recovered lock.

Thu Mar 11 10:58:27 2021

The test ran successfully, with the mode cleaner optics coming back to normal at the end of it. We wrote some scripts to read and analyze the data. More will come in future posts. No other changes were made to the systems today.

  15901 | Thu Mar 11 02:10:06 2021 | Koji | Summary | BHD | BHD Platform vertical dimensions

Stephen and I discussed the nominal heights of the BHD platform components.

  • The beam height from the stack is 5.5"
  • The platform height is 1.5" and the thickness of 0.4", according to the VOPO suspension, which we want to be compatible with.
  • Thus the beam height on the BHD platform is 4".
  • The VOPO platform has a minimum 0.1" gap from the installation surface when it is suspended.
  • When the BHD platform is fixed on the table, we'll use positioners that are fixed on the stack table. The BHD platform is then fixed to the positioners rather than fixing the entire platform on the stack. This leaves us the option to suspend the platform in the future. The number of positioners is TBD.
  • Looking at the head size for 1/4-20 socket head screws, it'd be nice to have a thickness of 0.5" for the positioners. This makes the thin part of the stiffener 0.6" thick.
  • These numbers are nominal for the initial design and are subject to change based on the FEA simulations to determine the resonant frequencies of the body modes.
  15900 | Thu Mar 11 01:45:42 2021 | gautam | Update | LSC | PRFPMi
  1. PRM satellite box indeed seems to have been the culprit - shortly after I swapped it to the SRM, its shadow sensors went dark. I leave the watchdog tripped.
  2. I still was unable to realize the RF only IFO
    • Clearly my old settings don't work, so I tried to go about it systematically. First, try and transition CARM to RF, leave DARM on ALS.
    • As usual, I can realize the state where the arm powers are ~100 and the two paths are blended.
    • But I'm not able to completely turn off the CARM_A path without blowing the lock.

Pity really, I was hoping to make it much further tonight. I think I'll have to go back to the high-BW POX/POY lock, and also check out the conversion efficiency / noise of the daughter board on the REFL11 demod board. Compared to before my work on the RF source, the PRMI lock using REFL11 as an error signal has basically necessitated a change of the digital demod phase by 180 degrees - so I made the appropriate polarity changes in the CM_SLOW and AO paths (the assumption is that CARM in REFL11 would require the same change in digital demod phase, which I think is reasonable - indeed, with the arm powers somewhat stable at ~100, the PDH signal in REFL11 does seem to show up largely in the I quadrature, pre digital phase rotation). Anyway, with so many weird effects (wonky PRM suspension, strange PRMI sensing, etc.), who knows what's going on. This will take a systematic effort.

I defer the electronics characterization to the daytime (if I feel like I need it tomorrow I'll do it; else Koji has said he can do it on Friday).

Quote:

 I was unable to fully hand off control from ALS-->RF, I suspect I may be using the wrong sign on the AO path (or some such other sub-optimal CM board settings). I'll hook up the SR785 and take some TFs tomorrow, that should give more insight into what's what. 

  15899 | Wed Mar 10 19:58:27 2021 | gautam | Update | LSC | SR785 hooked up to CM board

In preparation for later today evening. The TT alignment wasn't visibly disturbed.

  15898 | Wed Mar 10 17:35:47 2021 | gautam | Update | SUS | Spooky action at a distance

As I was sitting in the control room, the PRM suspension watchdog tripped again. This time, there is clearly no seismic activity. Yet the BS suspension also shows a slight disturbance at the same time as the PRM; ITMY shows no perturbation, though. My best hypothesis is that the problem is electrical. In Attachment #1, you can see that all of the sensors go to -6000 cts (whut?) for ~30 seconds. Zooming in to that segment in Attachment #2, it would appear that the light detected by the PD changed dramatically (went dark?) on all 5 coils. The 4 face coils have the same time constant but the side has a different one; in any case, this level of light change in half a second is clearly not physical. The watchdog then trips because this huge apparent motion elicits a kick from the damping loops.

The plots I attach are for the DQed sensor channels, so there is some digital filtering involved. But I confirmed that the signal doesn't go negative if I disable the input to the filter module. So it would seem that the voltage input to the ADC really changed polarity, which seems unphysical. It could be the Satellite Box or the whitening electronics, I suppose - I think we can exclude bad cabling, as that would just lead to the signals going to 0, whereas here they appear to have really changed sign (confirmed by looking at the ULPDmon channel, which is digitized by an Acromag and reports -10 V at the time of the glitch). But why should the BS care about the PRM electronics going wonky?

In addition to an exorcist, we need functioning electronics!


This optic has been hampering my locking attempts all evening. I switched the PRM and SRM satellite boxes, but then I remembered the PRM has the Al-foil "hats" to attenuate scattered light. Of course, the Al foil is conducting and can short the OSEM leads. I put some Kapton pieces between the OSEMs and the foil to try to mitigate this issue, but I suppose one could have slipped over time and be making intermittent contact, shorting the PD anode and cathode (that would explain the PD reporting -10 V instead of some physical value).

If this is the problem we would need a vent to address it. In the daytime I'll measure L and R of the coils to see if the actuator imbalance I reported is also due to the same problem...

  15897 | Wed Mar 10 15:35:25 2021 | Paco, Anchal | Summary | IMC | IMC free swinging experiment set to trigger at 5:00 am

A tmux session named "MCFreeSwingTest" will run on Rossa. This session is running the script scripts/SUS/freeSwingMC.py (also attached), which will trigger at 5:00 am to impart a 30000-count kick to MC1, MC2, and MC3 after shutting the PSL shutter and disabling the MC autolocker. It will let them swing freely for 1050 sec and will repeat 15 times to allow some averaging. At the end, it will undo all the changes it made and switch the autolocker back on for the IMC. The script is set to restore all changes if it fails at any point or a Ctrl-C is detected.
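A minimal sketch of one kick-and-swing cycle in the spirit of that script (the shutter and autolocker channels are the ones named later in this thread; kicking via the PIT_OFFSET channel is my assumption for illustration, not necessarily what freeSwingMC.py actually does):

import time
import epics

optics = ["MC1", "MC2", "MC3"]
KICK = 30000       # counts, from the entry above
SWING_SEC = 1050   # free-swing duration per repetition

epics.caput("C1:PSL-PSL_ShutterRqst", 0)   # close the PSL shutter
epics.caput("C1:IOO-MC_LOCK_ENABLE", 0)    # disable the MC autolocker

for optic in optics:
    epics.caput(f"C1:SUS-{optic}_PIT_OFFSET", KICK)   # apply the kick...
time.sleep(1)
for optic in optics:
    epics.caput(f"C1:SUS-{optic}_PIT_OFFSET", 0)      # ...and remove it

time.sleep(SWING_SEC)   # optics swing freely while the DAQ records
# restore steps (reopen shutter, re-enable autolocker) omitted here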

  15896 | Wed Mar 10 15:29:58 2021 | Anchal | Summary | IMC | IMC free swinging prep

No, we didn't fix the issue. We'll post some screenshots tomorrow. By "sitemap>Shutter>PSL" we meant that in the Shutter MEDM window, we clicked on the PSL close button. As pointed out later, it switches C1:AUX-PSL_ShutterRqst, while the PSL shutter switch on the Lock MC MEDM screen switches C1:PSL-PSL_ShutterRqst. We were not sure if this was intentional, so we didn't change anything.

  15895 | Wed Mar 10 15:00:16 2021 | gautam | Summary | IMC | IMC free swinging prep

Did you fix this issue? It is helpful to post a screenshot of the offending MEDM screen in addition to witticisms. The elog says "sitemap>Shutter>PSL" but I can't find PSL under the dropdown for shutters from Sitemap.

# Moving on to IMC suspensions characterization:
- Closed the PSL shutter; to our surprise, the MC was still locked. We thought this would take away any light from the IMC, but it doesn't. Maybe the IFO Overview needs to show the schematic in a way where this doesn't happen: "No light from any laser entering the MC but it still is locked with a resonating field inside."

  15894 | Wed Mar 10 11:55:22 2021 | gautam | Update | SUS | PRM suspension suspect

The procedure is that the optic is kicked to excite it and then allowed to ring down for ~1 ksec with damping turned off. The procedure is repeated 15 times for some averaging.
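For reference, the averaging could look something like this - a sketch assuming the DQed sensor segments have already been fetched as arrays (the 2048 Hz rate is an assumption):

import numpy as np
from scipy.signal import welch

fs = 2048  # Hz, assumed DQ rate of the SUS sensor channels

def averaged_asd(segments):
    """segments: list of 1-D arrays, one per ~1000 s ringdown."""
    psds = []
    for x in segments:
        f, p = welch(x, fs=fs, nperseg=512 * fs)  # ~2 mHz resolution
        psds.append(p)
    return f, np.sqrt(np.mean(psds, axis=0))      # averaged ASD, cts/rtHz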

Attachment #1 - sensor spectra from yesterday.

Attachment #2 - peaks using the naive diagonalization matrix from yesterday.

Attachment #3 - Data from ~1 year ago. 

The y-axes in all plots are labelled "cts/rtHz", but these are the DQed channels, which come after a "cts2um" CDS filter - so if that filter is accurate, then the y-axes may be read as um/rtHz.

I wonder if the September 2020 earthquake somehow damaged the PRM suspension, as this experiment would suggest that the problem is not only with the actuation. The data was gathered with the neutral position of the PRM (between kicks) well aligned for the PRMI, and the DC values of all the shadow sensors in this position are close to half-light (~1 V, except for the side, which was more like 4 V). It is hard to say what exactly is happening, since only the PIT DoF has the weird asymmetric peak shape instead of the expected Lorentzian - I would have thought that a damaged wire or broken magnet would affect all 4 DoFs, but the F.C. spring experience on ETMY showed that anything is possible.

  15893 | Wed Mar 10 11:46:22 2021 | Paco, Anchal | Summary | IMC | IMC free swinging prep

[Paco, Anchal]

# Initial State
- MC is locked. The PRM monitor shows some oscillations.
- POP monitor shows light flashing once in a while.
- AS monitor shows one beam along with some other flashing beam around it.
- PRM Watchdog is tripped and shutdown. Everything else is normal except for overload on SRM OpLevs.
- Donatella got a mouse promotion

# Reenabling PRM watchdog:
- The custom reEnablePRMWatchdog.py has been deleted.
- Tried enabling the coil outputs manually and switching watchdog to Normal.
- Again saw large fluctuations like yesterday.
- Probably still the same issue: the currently calculated actuations to the coils are in the range -600 to -900 and give an impulse to the optics when suddenly turned on.
- Waiting for PRM to damp down a little.
- Today we plan to change the position bias on PRM C1:SUS-PRM_POS_OFFSET instead of changing biases in pitch and yaw.
- Changing C1:SUS-PRM_POS_OFFSET from 0 to +/- 100 without enabling the coils: it seems the upper and lower coils are anticorrelated with just a change of position, so going back to changing pitch.
- Changing C1:SUS-PRM_PIT_OFFSET from 0 -> 780. Switched on watchdog to normal.
- PRM damped down. OpLev errors are also within range.
- Enabled both OpLevs.

# Try locking Y-Arm
- IFO>CONFIGURE>YARM>Restore YARM (POY) using Donatella. Saw a bunch of python error messages in the call complaining about being unable to find some python 2 files. Closed it with Ctrl-C after it got stuck.
- Tried running it on Pianosa, the script ran without error but Y-Arm didn't lock.

# Try locking X-Arm
- IFO>CONFIGURE>XARM>Restore XARM (POX) on Donatella. Again a bunch of OSError messages. Donatella is not configured properly to run scripts.
- Tried running it on Pianosa; the script ran without error, but the X-Arm didn't lock.
- This might mean that both arms are misaligned or the BS/PRM is misaligned.
- Moved around C1:SUS-PRM_PIT_OFFSET and C1:SUS-PRM_YAW_OFFSET to see if the transmitted light is misaligned. Both arms are set to acquire lock if possible. No luck.

# Hypothesis: The Arm cavity is not aligned within itself (ITM-ETM)
- Will try to lock X-Arm with green light while tuning the ETMX. Hopefully the BS and ITM are aligned so that once we align ETMX to get a green lock, the IR will also lock from the other side.
- Running IFO>CONFIGURE>XARM>Restore XARM (ALS) on Pianosa. No lock; moving forward with tuning the ETMX pitch and yaw offsets. Nothing changed. Brought them back to the same values.

[Rana joined, Anchal moved to Rossa from Pianosa]

# Moving on to IMC suspensions characterization:
- Closed the PSL shutter; to our surprise, the MC was still locked. We thought this would take away any light from the IMC, but it doesn't. Maybe the IFO Overview needs to show the schematic in a way where this doesn't happen: "No light from any laser entering the MC but it still is locked with a resonating field inside."
- Shut the IMCR shutter (hoping that would unlock the IMC); still nothing happened.
- Tried shutting the PSL shutter from Rossa; still nothing happened to the MC lock.
- Closed the shutter via IOO>Lock MC>Close PSL, and this unlocked the IMC. Found that this shutter channel is C1:PSL-PSL_ShutterRqst, while the one from sitemap>Shutter>PSL changes C1:AUX-PSL_ShutterRqst. Some clarification on these MEDM screens would be nice.
- Disabled the MC autolocker from the IOO>Lock MC screen (C1:IOO-MC_LOCK_ENABLE).
- Checked scripts/SUS/freeswing.py to understand how the kick is delivered and the optic is left to swing freely.
- Next, we are looking at the C1SUS_MC1 screen to understand what channels to read during data acquisition.
- In the sensor matrix, we see an INMON for each sensor, which is probably the raw counts data from the OSEMs. Rana mentioned that OSEM data comes out in units of microns. These are C1:SUS-MC1_ULSEN_OUTPUT (and so on for UR, LL, LR, SD).

- In prep for finishing, recovered Autolocker by first opening the PSL mechanical shutter, then re-enabling the Autolocker. The IMC lock didn't immediately recover, and we saw some fuzz on the PSL-FSS_FAST trace, so we closed the shutter again, waited a minute, then re-opened it and MC caught its lock.
 

  15892 | Wed Mar 10 00:32:03 2021 | gautam | Update | LSC | PRFPMi

The interferometer can nearly be locked again. I was unable to fully hand off control from ALS-->RF, I suspect I may be using the wrong sign on the AO path (or some such other sub-optimal CM board settings). I'll hook up the SR785 and take some TFs tomorrow, that should give more insight into what's what. With the arms held off resonance, the PRMI acquires lock nearly instantly (REFL165 I for PRCL, REFL165 Q for MICH), and can stay locked nearly indefinitely, which is what I need so I can get the RF lock going. However the sensing matrix (for vertex DoFs, arms held off resonance) still makes no sense to me. The MICH loop has ~50 Hz UGF and the PRCL loop ~150 Hz. I think the MICH loop shape can be optimized a little for better low frequency suppression, but this isn't the show-stopper at the moment. For record-keeping, the ALS performance was excellent and other subsystems were nominal tonight.

  15891 | Tue Mar 9 18:49:28 2021 | Yehonathan | Update | SUS | OSEM testing for SOSs

29 Good OSEMs, of which 1 is questionable (089, PD voltage of 1.5 V) and 5 need some work (pigtailing, replace/remove/add screws). We have 4 pigtails. Schematics.

20 OK OSEMs (slightly off-centered LED spot), of which 3 need some work (pigtailing, replace/remove/add screws).

13 Bad OSEMs (way off-centered LED spot)

2 Defunct OSEMs

-------

Ed: KA
Good: 23 complete OSEMs + 5 good ones that need soldering work (there are 4 pigtails; take one from a defunct OSEM).
OK: use 7 good OSEMs for the sides, and keep some functional OSEMs as spares.

 

  15890 | Tue Mar 9 16:52:47 2021 | Jon | Update | CDS | Front-end testing

Today I continued with assembly and testing of the new front-ends. The main progress is that the IO chassis is now communicating with the host, resolving the previously reported issue.

Hardware Issues to be Resolved

Unfortunately, though, it turns out one of the two (host-side) One Stop Systems PCIe cards sent from Hanford is bad. After some investigation, I ultimately resolved the problem by swapping in the second card, with no other changes. I'll try to procure another from Keith Thorne, along with some spares.

Also, two of the three switching power supplies sent from Livingston (250W Channel Well PSG400P-89) appear to be incompatible with the Trenton BPX6806 PCIe backplanes in these chassis. The power supply cable has 20 conductors and the connector on the board has 24. The third supply, a 650W Antec EA-650, does have the correct cable and is currently powering one of the IO chassis. I'll confirm this situation with Keith and see whether they have any more Antecs. If not, I think these supplies can still be bought (not obsolete).

I've gone through all the hardware we've received, checked against the procurement spreadsheet. There are still some missing items:

  • 18-bit DACs (Qty 14; but 7 are spares)
  • ADC adapter boards (Qty 5)
  • DAC adapter boards (Qty 9)
  • 32-channel DO modules (Qty 2/10 in hand)

Testing Progress

Once the PCIe communications link between the host and IO chassis was working, I carried out the testing procedure outlined in T1900700. This performs a series of checks to confirm basic operation/compatibility of the hardware and PCIe drivers. All of the cards installed in both the host and the expansion chassis are detected and appear correctly configured, according to T1900700. In the tree below, there is one ADC, one 16-ch DIO, one 32-ch DO, and one DolphinDX card:

+-05.0-[05-20]----00.0-[06-20]--+-00.0-[07-08]----00.0-[08]----00.0  Contec Co., Ltd Device 86e2
|                               +-01.0-[09]--
|                               +-03.0-[0a]--
|                               +-08.0-[0b-15]----00.0-[0c-15]--+-02.0-[0d]--
|                               |                               +-03.0-[0e]--
|                               |                               +-04.0-[0f]--
|                               |                               +-06.0-[10-11]----00.0-[11]----04.0  PLX Technology, Inc. PCI9056 32-bit 66MHz PCI <-> IOBus Bridge
|                               |                               +-07.0-[12]--
|                               |                               +-08.0-[13]--
|                               |                               +-0a.0-[14]--
|                               |                               \-0b.0-[15]--
|                               \-09.0-[16-20]----00.0-[17-20]--+-02.0-[18]--
|                                                               +-03.0-[19]--
|                                                               +-04.0-[1a]--
|                                                               +-06.0-[1b]--
|                                                               +-07.0-[1c]--
|                                                               +-08.0-[1d]--
|                                                               +-0a.0-[1e-1f]----00.0-[1f]----00.0  Contec Co., Ltd Device 8632
|                                                               \-0b.0-[20]--
\-08.0-[21-2a]--+-00.0  Stargen Inc. Device 0101
                \-00.1-[22-2a]--+-00.0-[23]--
                                +-01.0-[24]--
                                +-02.0-[25]--
                                +-03.0-[26]--
                                +-04.0-[27]--
                                +-05.0-[28]--
                                +-06.0-[29]--
                                \-07.0-[2a]--

Standalone Subnet

Before I start building/testing RTCDS models, I'd like to move the new front ends to an isolated subnet. This is guaranteed to prevent any contention with the current system, or inadvertent changes to it.

Today I set up another of the Supermicro servers sent by Livingston in the 1X6 test stand area. The intention is for this machine to run a cloned, bootable image of the current fb1 system, allowing it to function as a bootserver and DAQ server for the FEs on the subnet.

However, the hard disk containing the fb1 image appears to be corrupted and will not boot. It seems to have been sitting disconnected in a box since ~2018, which is not a stable way to store data long-term. I wasn't immediately able to recover the disk using fsck. I could spend some more time trying, but it might be most time-effective to just make a new clone of the fb1 system as it is now.

  15889 | Tue Mar 9 15:22:56 2021 | Koji | Summary | SUS | PRM suspension

I just saw the PRM watchdog trip at ~15:20 local (23:20 UTC). I restored the PRM, but I saw that only the side watchdog had tripped.
Again at 15:27

17:55 - I found the PRM oscillating while the watchdogs were not tripped. I turned off the OPLEV servos and this made the PRM calm down. But I didn't turn on the OPLEVs after the past two trips. How were the OPLEVs turned on???

Ah, I'm sorry, I missed the line saying that Gautam was running the free-swinging test on the PRM.
The two kicks starting from 23:08:50 and from 23:26:31 were spoiled. Did that make the measurement completely wasted?

 

  15888 | Tue Mar 9 15:19:03 2021 | Koji | Update | SUS | OSEM testing for SOSs

What were the statistics, i.e., # of Good OSEMs, # of OK OSEMs, etc.?

  15887 | Tue Mar 9 14:37:26 2021 | gautam | Summary | SUS | PRM suspension

The PRM watchdog tripped at ~5 AM this morning. The cause is unclear - the seismometer reports elevated activity ~10 minutes before the ringdown starts (as judged using the OSEMs), but the other optics didn't seem to receive as much of an impulse (I only show the BS sensors here, as the BS sits on the same stack as the PRM). Anyway, it certainly wasn't me trying to make life difficult for the morning team.

I was able to restore the damping with reEnableWatchdogs.py. I am now running some suspension tests on the PRM by letting it swing freely so please let that finish. I plan to attempt some locking this evening.

Quote:

[Paco, Anchal]

- Upon arrival, MC is locked, and we can see light in MON5 (PRM) (usually dark).

  15886 | Tue Mar 9 14:30:22 2021 | Yehonathan | Update | SUS | OSEM testing for SOSs

I finished ranking the OSEMS on the OSEM wiki page.

I also moved the OSEM data folder from /home/export/home to /users/public_html and created a soft link instead. I have done the same for the 40m_TIS folder that I uploaded there a while ago.

  15885 | Tue Mar 9 12:41:29 2021 | Koji | Summary | Electronics | Investigation on the invacuum Dsub cables

I believe the aLIGO style invac dsub cables and the conventional 40m ones are incompatible.
While the aLIGO spec is that Pin 1 (in-vac) is connected to the shield, Pin 13 (in-vac) is the one for the conventional cable. I still have to check whether Pin 13 is really connected to the shield, but we had trouble with this before for the IO TTs: https://nodus.ligo.caltech.edu:8081/40m/7864.
(At least one of the existing end cables did not show this Pin13-chamber connection. However, the cables at the OMC/IMC chambers did indicate this feature. So the cables are already inhomogeneous.)

- Which way do we want to go? Our electronics have been updated to the aLIGO spec (new Sat Amp, OMC electronics, etc.), so I think we should start making the shift to the aLIGO spec.

- Attachment Top: The new coil drivers can be used together with the old cables using a custom DB25 cable (in-air).

- Attachment Mid: The combination of the conventional OSEM wiring and the aLIGO in-vac cable causes a conflict: Pin 1, which is connected to the shield, is used for the PD bias.

- Attachment Bottom: This can be solved by shifting the OSEMs by one pin.

Notes:
o The aLIGO cables have 12 twisted pair wires, but paired signals do not share a twisted pair.
   --- No. This can't be solved by rotating the connectors.
o This modification should be done only for the new suspension.
   --- In principle, we can apply this change to any SOSs. However, this action involves the vent. We probably want to install the new electronics for the existing suspensions before the vent.
o ^- This means that we have to have two types of custom DB25 in-air cables.
   --- Each cable should handle "Shield wire" from the sat amp correctly.

Related Links:

Active TT Pin Issue
https://nodus.ligo.caltech.edu:8081/40m/7863
and the thread

Hacky solution
https://nodus.ligo.caltech.edu:8081/40m/7869

Photo
https://photos.google.com/u/1/album/AF1QipOEDi7iBdS4EHcpM7GBbv9l6FiJx-Tkt1I2eSFA
Active TT Pin Swapping (December 21, 2012)

TT Wiring Diagram (Wiki)
https://wiki-40m.ligo.caltech.edu/Suspensions/Tip_Tilts_IO

  15884 | Tue Mar 9 10:57:06 2021 | Paco, Anchal | Summary | IMC | XARM lock and POX spectra

[Paco, Anchal]

- Upon arrival, MC is locked, and we can see light in MON5 (PRM) (usually dark).

# XARM locking
- Read through "XARM POX" script (path='/cvs/cds/rtcds/caltech/c1/burt/c1configure/c1configureXarm')
- Before running the script, we noticed the PRM watchdog is down, so we manually repeat the procedure from last time, but see more swinging even though the watchdog is active.
- Run a reEnablePRMWatchdogs.py script (a copy of reEnableWatchdogs.py with optics=['PRM']), which had the same effect. 
- We manually disable the watchdog to recover the state we first encountered, and wait for the beam in MON5 to come to rest.
    - The question is; is it fine to lock Xarm with PRM watchdog down?
    - To investigate this, we look at the effect of the offset on the unwatchdog-PRM.
    - Manually change 'PRM_POS_OFFSET' to 200, and -800 (which is the value used in the script) with no effect on the PRM swinging.
- Moving on, run IFO > CONFIGURE > ! (X Arm) > RESTORE XARM (XARM POX), and ... success.

# MC-POX noise spectra
- With XARM locked, open diaggui and take spectra for C1:LSC-POX11_I_ERR_DQ, C1:LSC-POX11_Q_ERR_DQ, C1:IOO-MC_F_DQ
- Lost XARM lock while we were figuring out unit conversions...
    - Assuming 2.631e-13 m/count (elog 6941) and using the arm length L = 37.79 m and wavelength 1064.1 nm, we get a calibration factor of 2.631e-13 * c / (2*L*lambda) ~ 0.98 Hz/count (worked out in the sketch after this list)
    - (FAQ?, how to find/compute/measure the correct calibration factors?)
- Relock XARM, retake spectra. Attachment 1 has plots for the POX11_I/Q_ERR_DQ spectra (cts/rtHz; we couldn't find the relevant calibration) and MC_F_DQ (Hz/rtHz, referring to 15576; we couldn't get the units to show on the y scale.)
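The calibration arithmetic from the list above, worked out:

c = 299792458.0           # m/s
m_per_count = 2.631e-13   # m/count, from elog 6941
L = 37.79                 # m, arm length
lam = 1064.1e-9           # m, laser wavelength
print(m_per_count * c / (2 * L * lam))   # ~0.98 Hz/count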

# MC-POY noise spectra (attempt)
- Now, run IFO > CONFIGURE > ! (Y Arm) > RESTORE YARM (YARM POY), and XARM locks (why?)
    - Could PRM watchdog being down be the cause? 
- Try C1ASS > (YARM) ! More Scripts > ON, and looked at YARM PIT/YAW striptool. 
- C1ASS > (YARM) ! Freeze Outputs, then OFF
- Go back to IFO > CONFIGURE > ! (Y Arm) > Align YARM  (ASS ON: Unfreeze), try running this then Freeze, then OFF Zero Outputs.
- Try RESTORE YARM (POY) again, still not working.
- Try RESTORE YARM ALS, then try again after opening the shutter, but also fail to lock AUX.
    - Is the PRM WD behind some evil misalignment? Will move forward with XARM bc it is happy.

# ARM locking
- Attempted the IFO > CONFIGURE > ! (X Arm) > RESTORE Xarm (XARM ALS) but green failed to lock and we lost XARM lock.
- Try to recover XARM lock... success. It's nice to have a (repeatable) checkpoint.
- Attempt YARM lock. Not successful. It just seems like the lock Triggers are not raised (misalignment?)
    - From C1SUS_ETMY, try changing the bias "C1:SUS-ETMY_YAW_OFFSET" manually to reduce the OPLEV_YERROR. Changed from -47 to -57.
    - Retry YARM lock script... no luck
    - From C1SUS_PRM, try changing the bias "C1:SUS-PRM_PIT_OFFSET" manually to reduce OPLEV errors. Changed from 34 to 22 with no effect, then realized the coil outputs are disabled because the WD is down...
    - So we do the following BIAS changes "C1:SUS-PRM_PIT_OFFSET" = 34 > 770 and "C1:SUS-PRM_YAW_OFFSET" = 134 > -6
    - Enable all Coil Outputs, turn WD to Normal, turn OPLEVs ON, (this time the beam does not swing like crazy).
    - Fine tune BIASes "C1:SUS-PRM_PIT_OFFSET" = 770 > 805  and "C1:SUS-PRM_YAW_OFFSET" = -6 > 65
        - Saw YARM locking briefly, then unlocking, but we stopped once the OPLEV_ERRs no longer overloaded (from magnitudes > 50 to ~ 40).
- Retry YARM lock... no luck
    - From C1SUS_ETMY, try changing the bias "C1:SUS-ETMY_PIT_OFFSET" from -1 to 6. 

Stop for the day. Leave XARM locked, MC locked. 

  15883 | Mon Mar 8 22:01:26 2021 | gautam | Update | LSC | More PRMI

There are still many mysteries remaining - e.g. the MICH-->PRCL contribution still can't be nulled. But for now, I have the settings that keep the PRMI locked fairly robustly with REFL55I/Q or REFL165I/Q (I quadrature for PRCL, Q for MICH in both cases), see Attachment #1 and Attachment #2 respectively. For the 1f locking, the REFL55 digital demod phase was fine-tuned to minimize the frequency noise (generated by driving MC2) coupling to the Michelson readout (as the Michelson is supposed to be immune) - the coupling was measured to be ~60dB larger at the PRCL error point than MICH. There was still nearly unity coherence between my MC2 drive and the MICH error point demodulated at the drive frequency, but I was not able to null it any better than this. With the PRMI (ETMs misaligned) locked on the 1f signals, I measured Attachment #1 and used it to determine the demod phase that would best enable REFL165_I to be a PRCL sensor. Rana thinks that if there is some subtle effect in the marginally stable PRC, we would not see it unless we do a mode scan (time consuming to set up and execute). So I'm just going to push on with the PRFPMI locking - let's see if the clean arm mode forces a clean TEM00 mode to be resonant in the PRC, and if that can sort out the lack of orthogonality between MICH/PRCL in the 1f sensors (after all, we only care about the 3f signals in as much as they allow us to lock the interferometer). I'll try the PRMI with arms on ALS tomorrow eve.

I have no idea what to make of how the single-frequency lines I am driving in MICH and PRCL show up in REFL11 and REFL33 - the signals are apparently completely degenerate (in optical quadrature). How this is possible even though the PRMI remains stably locked and POP22/POP110/AS110 report stable sideband buildup is not clear to me.
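For reference, the digital demod phase adjustments discussed throughout these entries amount to a rotation of the measured (I, Q) pair; a minimal sketch:

import numpy as np

def rotate_iq(i, q, phi_deg):
    """Rotate the demodulated (I, Q) signals by phi_deg."""
    phi = np.deg2rad(phi_deg)
    return (i * np.cos(phi) + q * np.sin(phi),
            -i * np.sin(phi) + q * np.cos(phi))

# e.g. the 180 degree change mentioned for REFL11 just flips the sign of both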

  15882 | Mon Mar 8 20:11:51 2021 | rana | Frogs | Computer Scripts / Programs | activate_matlab out of control on Megatron

There were a zillion processes trying to activate (this is the initial activation after the initial installation) Matlab 2015b on megatron, so I killed them all. Was someone logged in to megatron and trying to run Matlab sometime in 2020? If so, speak now, or I will send the out-of-control-process brute squad after you!

  15881 | Mon Mar 8 19:22:56 2021 | rana | Summary | SUS | IMC suspension characterization

Herewith, I describe an adventure

  1. Balance the OSEM input matrix using the free swinging data (see prev elogs).
  2. Balance the coil actuation by changing the digital coil gains. This should be done above 10 Hz using optical levers, or some IMC readout (like the WFS). At the end of this process, you should put a pringle vector into the column of the SUS output matrix that corresponds to one of the SUS OSC/LOCKIN screens. Verily, the pringle excitation should produce no signal in MC_F or da WFS.
  3. use the Malik doc on the single suspension to design feed-forward filters for the SUS COIL filter banks. You can get the physical parameters using the design documents on DCC / 40m wiki and then modify them a bit based on the eigenfrequencies in the free swinging data.
  4. Model the 2x2 system which includes longitudinal and pitch motion. Consider how accurate the filters must be to maintain a cross-coupling of < 3% in the 0.5-2 Hz band.
  5. Is this decoupling forsooth still maintained when you close the SUS damping loops in the model? If not, why so?
  6. Make step response measurements of the damping loops and record/plot data. Use physical units of um/urad for the y-axes. How much is the step response cross-coupling?
  7. Consider the IMC noise budget: are the low pass filters in the damping loops low-passing enough? How much damping is demasiado (considering the CMRR of the concrete slab for seismic waves)?
  8. Can we use Radhika's AAA representation to auto-tune the FF and damping filters? It would be very slick to be able to do this with one button click.

gautam: For those like me who don't know what the AAA representation is: the original algorithm is here, and Lee claims his implementation of it in IIRrational is better, see his slides.
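As a numerical sketch of what step 1 amounts to (assuming the amplitude of each eigenmode in each OSEM has already been extracted from the free-swing spectra; the placeholder matrix below stands in for those measurements):

import numpy as np

# Rows = OSEM sensors (UL, UR, LR, LL); columns = eigenmodes (POS, PIT, YAW, SIDE).
A = np.eye(4)          # placeholder for the measured peak-amplitude matrix
M = np.linalg.inv(A)   # input matrix: sensor signals -> DoF estimates
M /= np.max(np.abs(M), axis=1, keepdims=True)   # optional row normalization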

  15880 | Mon Mar 8 17:09:29 2021 | gautam | Update | SUS | PRM coil actuators heavily imbalanced

I realized I hadn't checked the PRM actuator as thoroughly as I had the others. I used the Oplev as a sensor to check the coil balancing, and I noticed that while all 4 coils show up with the expected 1/f^2 profile at the Oplev error point, the actuator gains seem imbalanced by a factor of ~5. The phase isn't flat because of some filters in the Oplev electronics, I guess. The Oplev loops were disabled for the measurement, and the excitations were small enough that the beam stayed reasonably well centered on the QPD throughout. This seems very large to me - the values in the coil output filter gains lead me to expect more like a ~10% mismatch in the actuation strengths, and similar tests on other optics in the past, e.g. ETMY, have yielded much more balanced results. I'm collecting some free-swinging PRM data now as an additional check. I verified that all the coils are at least actuatable, by applying a 500 ct step at the offset of the coil output FM, and saw that the optic moved (it was such a test that revealed that MC1 had a busted actuator some time ago). If the eigenmode spectra look as expected, I think we can rule out broken magnets, but I suppose the magnets could still be not well matched in strength?

  15879 | Mon Mar 8 12:54:54 2021 | gautam | Update | Equipment loan | 40m-->Cryo
  1. Busby box
  2. SR554 transformer preamplifier
  15878 | Mon Mar 8 12:40:35 2021 | gautam | Summary | training | Investigate how-to XARM locking

For arm locking, the "Restore Xarm (XARM POX)" script from the "IFO_CONFIGURE" MEDM screen should get you there (I just checked it and it works fine). It is worth getting the hang of the PDH signal chain (read what the script is doing and map it onto the signal chain) so you get a feel for where there may be offsets or saturations, what the trigger logic is, etc. The LSC overview screen is supposed to be pretty intuitive (if you think it can be improved, I'd love to hear it, but please don't change it without documenting), and there are also the webviews of the simulink models (these are read-only, so feel free to click around; for the LSC, the c1lsc model is the relevant one).

  15877 | Mon Mar 8 12:01:02 2021 | Paco, Anchal | Summary | training | Investigate how-to XARM locking

[Paco, Anchal]

- Started zoom stream; thanks to whoever installed it!
- Spent some time trying to understand how anything we did last Thursday led to the sensing matrix change, but still cannot figure it out.
- Tracking back through our actions: at ~10:30 we ran burt restore with the 08:19/*.snap files, and for lack of a better suspect, we blame that action for now.

# ARM locking??
- Reading (not running) the scripts/XARM/lockXarm.py script and trying to understand the workflow. It is pretty confusing that the result last time was to lock the Yarm.
- It looks like this script was a copy of lockYarm.py, and was never updated (there's a chance we ran it for the first time last thursday)
- *Is there a script to lock the Arms?* Or should we write one? To write one, we first attempt a manual procedure;
    1. No need to change RFPD InMTRX
    2. All filters inputs / outputs are enabled 
    3. Outputs from XARM and YARM in the Output matrix are already going to ETMX and ETMY
      - Maybe we can have the ARM lock engage by changing the MC directly?
    4. Change C1:SUS-MC2_POS_OFFSET from -38 to 0, and enable C1:SUS-MC2_POS_OFFSET_ON
    5. Manually scan MC2_POS_OFFSET to 250 (nothing happens), then -250, then back to -38 (WFS1 PIT and YAW changed a little, but then returned to their nominal values)
      - Or maybe we need to provide the right gain...
    6. Disabled C1:SUS-MC2_POS_OFFSET_ON (back to nominal state)
    7. Look into manually changing C1:LSC-XARM_GAIN;
      From the command line using python:
      >>> import epics
      >>> ch_name = 'C1:LSC-XARM_GAIN'
      >>> epics.caput(ch_name, 0.155)  # nominal = 0.150
      - Could be unrelated, but we noted a slow spike on C1:PSL-FSS_PCDRIVE (definitely from before we changed anything)
      - Still nothing is happening
    8. Changed the gain to 0.175, then back to 0.150, no effect... then 0.2, 0.3 ...
      - Stop and check SUS_Watchdogs (should not have changed?) and everything remains nominal
      - Revert all changes symmetrically.
      - Could we have missed enabling FM1?
      - Briefly lost MC lock, but it came back on its own (probably unrelated)

- Wrap it up for the day. In summary; no harm done to our knowledge.

  15876 | Sun Mar 7 19:56:27 2021 | Anchal | Update | LSC | Sensing matrix settings messed with

I understand this must be frustrating for you, but we did not change these settings, knowingly at least. We have documented all the things we did there. The only things I can think of that could possibly change any of those channels are the scripts we ran (which are mentioned) and the burt restore we did on all channels (which wasn't really necessary). We promise to be more vigilant about changes that occur while we are present in the future.

Quote:

To my dismay, I found today that somebody had changed the oscillator frequencies for the sensing matrix infrastructure we have. The change happened 2 days and 2 hours ago (I write this at ~1230 on Saturday, 3/6), i.e. ~1030am on Thursday. According to the elog, this is when Anchal and Paco were working on the interferometer, but I can find no mention of these settings being changed. Not cool guys 😒 .

This was relatively easy to track down but I don't know what else may have been messed with. I don't understand how anything that was documented in the elog can lead to this weird doubling of the frequencies.

I have now restored the correct settings. The "sensing matrix" I posted last night is obviously useless.

 

  15875 | Sun Mar 7 15:26:10 2021 | gautam | Update | LSC | Housekeeping + more PRMI
  1. Beam pointing into PMC was tweaked to improve transmission.
  2. AS110 photodiode was re-installed on the AS table - I picked off 30% of the light going to the AS WFS using a beamsplitter and put it on the AS110 photodiode.
  3. Adjusted the ASDC whitening gain - we have been running nominally with +18 dB, but after the Sept 2020 vent, there is ~3x the light incident on the AS55 RFPD (from which the ASDC signal is derived). I want to run the dither alignment servos that use this PD with the same settings as before, hence this adjustment.
  4. Adjusted the digital demod phases of the POP22, POP110 and AS110 signals with the PRMI locked (sideband resonant). I want these to be useful for debugging the PRMI. The phases were adjusted so that AS110_Q, POP22_I and POP110_I contain the signal (= sideband buildup) when the PRMI is locked.
  5. Ran the actuator calibration routine for BS, ITMX and ITMY - I'll try to do the PRM and ETMs as well later.
  6. With the PRMI locked (sidebands resonant), looked at the sideband power buildup. POP22 and POP110 remain stable, but there is some low frequency variation in the AS110_Q channel (but not the I channel, so this is really a time varying transmission of the f2 sideband to the dark port). What's that about? Also unsure about those abrupt jumps in the POP22/POP110 signals, see Attachment #1 (admittedly these are slow channels). I don't see any correlation in the MICH control signal.
  7. Measured the loop shapes of the MICH (UGF ~90 Hz, PM ~30 degrees) and PRCL (UGF ~110 Hz, PM ~30 degrees) loops - the stability margins and loop UGFs seem reasonable to me.
  8. Tried nulling the MICH-->PRCL coupling by adjusting the MICH-->PRM matrix element - as has been the case for a while, I was unable to do any better, and I can't null that line as we expect to be able to.
  9. Not expecting to get anything sensible, but ran some sensing matrix lines (at the correct frequencies this time).
  10. Tried locking the PRMI with MICH actuation to an ITM instead of the BS - I can realize the lock but the loop OLTF I measure with this configuration is very weird, needs more investigation. I may look into this later today evening.

I was also reminded today of the poor reliability of the LSC whitening electronics. Basically, there may be hidden saturations in all the channels that have a large DC value (e.g. the photodiode DC mon channels) due to the poor design of the cascaded gain stages. I was thinking about using the REFL DC channel to estimate the mode-matching into the PRC, but this has a couple of problems. Electronically, there may be some signal distortion due to the aforementioned problem. But in addition, optically, estimating the mode-matching into the PRC by comparing REFL DC levels in single bounce off the PRM and with the PRMI locked has the problem that the mode-matching is degenerate with the intra-cavity loss, which is of the same order as the mode mismatch (a percent or two, I claim). If Koji or someone else can implement the fix suggested by Hartmut for all the LSC whitening channels, that'd give us more faith in the signals. It may be less work than replacing all the whitening filters with a better design (e.g. the aLIGO ISC whitening filter, which implements the cascaded gain stages using single OP27s and, more importantly, has a 1 kohm series resistor at the input to the op amp - so the preceding stage never has to drive more than 10 V / 1 kohm ~ 10 mA of DC current - would presumably reduce distortion).

  15874 | Sat Mar 6 12:34:18 2021 | gautam | Update | LSC | Sensing matrix settings messed with

To my dismay, I found today that somebody had changed the oscillator frequencies for the sensing matrix infrastructure we have. The change happened 2 days and 2 hours ago (I write this at ~1230 on Saturday, 3/6), i.e. ~1030am on Thursday. According to the elog, this is when Anchal and Paco were working on the interferometer, but I can find no mention of these settings being changed. Not cool guys 😒 .

This was relatively easy to track down but I don't know what else may have been messed with. I don't understand how anything that was documented in the elog can lead to this weird doubling of the frequencies.

I have now restored the correct settings. The "sensing matrix" I posted last night is obviously useless.

  15873 | Fri Mar 5 22:25:13 2021 | gautam | Update | LSC | PRMI 1f SB locking recovered

Now that the REFL55 signal chain is capable of providing balanced, orthogonal readout of the two quadratures, I was able to recover the 1f SB resonant lock pretty easily. Ran sensing lines for ~5mins, still looks weird. But I didn't try to optimize anything / do other checks (e.g. actuate MICH using ITMs instead of BS) tonight, and I'm craving the Blueberry pie Rana left me. Will continue to do more systematic tests in the next days.

  15872 | Fri Mar 5 17:48:25 2021 | Jon | Update | CDS | Front-end testing

Today I moved the c1bhd machine from the control room to a new test area set up behind (west of) the 1X6 rack. The test stand is pictured in Attachment 1. I assembled one of the new IO chassis and connected it to the host.

I/O Chassis Assembly

  • LIGO-style 24V feedthrough replaced with an ATX 650W switching power supply
  • Timing slave installed
  • Contec DO-1616L-PE card installed for timing control
  • One 16-bit ADC and one 32-channel DO module were installed for testing

The chassis was then powered on and LED lights illuminated indicating that all the components have power. The assembled chassis is pictured in Attachment 2.

Chassis-Host Communications Testing

Following the procedure outlined in T1900700, the system failed the very first test of the communications link between the chassis and host, which is to check that all PCIe cards installed in both the host and the expansion chassis are detected. The Dolphin host adapter card is detected:

07:06.0 PCI bridge: Stargen Inc. Device 0102 (rev 02) (prog-if 00 [Normal decode])
    Flags: bus master, fast devsel, latency 0
    Bus: primary=07, secondary=0e, subordinate=0e, sec-latency=0
    I/O behind bridge: 00002000-00002fff
    Prefetchable memory behind bridge: 00000000c0200000-00000000c03fffff
    Capabilities: [40] Power Management version 2
    Capabilities: [50] MSI: Enable- Count=1/1 Maskable- 64bit+
    Capabilities: [60] Express Downstream Port (Slot+), MSI 00
    Capabilities: [80] Subsystem: Device 0000:0000
    Kernel driver in use: pcieport

However, the OSS PCIe adapter card linking the host to the IO chassis was not detected, nor were any of the cards in the expansion chassis. Gautam previously reported that the OSS card was not detected by the host (though it was not connected to the chassis then). Even now, connected to the IO chassis, the card is still not detected. On the chassis-side OSS card, there is a red LED illuminated indicating "HOST CARD RESET", as pictured in Attachment 3. This may indicate a problem with the card on the host side. Still more debugging to be done.

  15871 | Fri Mar 5 16:24:24 2021 | gautam | Update | LSC | REFL55 demod board re-installed in 1Y2

I don't have a good explanation why, but I too measured numbers similar to what Koji measured. The overall conversion gain for this board (including the +20 dB gain from the daughter board) was measured to be ~5.3 V/V on the bench, and ~16000 cts/V in the CDS system (100 Hz offset from the LO frequency). It would appear that the effective JMS-1-H conversion loss is <2 dB. Seems fishy, but I can't find anything else obviously wrong with the circuit (e.g. a pre-amp for the RF signal that I missed; there is none).

I also attach the measured noise at the outputs of the daughter board (i.e. what is digitized by the ADC); see Attachment #2. Apart from the usual forest of lines of unknown origin, there is still a significant excess above the voltage noise of the OP27, which is expected to be the dominant noise source in this configuration. Nevertheless, considering that we have only 40 dB of whitening gain, it is not expected that we see this noise directly in the digitized signal (above the ADC noise of ~1 uV/rtHz). Note that the measured noise today, particularly for the Q channel, is significantly lower than before the changes were made.

  15870 | Fri Mar 5 15:32:53 2021 | Koji | Summary | Electronics | A bunch of electronics received

The parts will be ordered by Koji. Update: the components for the additional BIO I/F have been ordered.

  15869 | Fri Mar 5 15:31:23 2021 | Koji | Update | LSC | REFL55 demod board rework

Forgot to note: the IF test was done at TP7 and TP6 using Pomona clips, i.e. before the preamp.

 

  15868 | Fri Mar 5 15:03:28 2021 | gautam | Summary | Electronics | A bunch of electronics received

The PCBs for the D1002593 BIO I/F (5 pcs each of D1001050 and D1001266) were received (from JLCPCB) today. idk what the status of the parts (Digikey?) is.

Quote:

Received additional front/rear panels. Updated the original entry and Wiki [Link]

  15867 | Fri Mar 5 13:53:57 2021 | gautam | Update | LSC | REFL55 demod board rework

0 dBm ~ 0.63 Vpp. I guess there is ~4 dB total loss (3 dB from the splitter and 1 dB of total excess loss above theoretical from various components) between the SMA input and each RF input of the JMS-1-H mixer, which has an advertised conversion loss of ~6 dB. So the RF input to each mixer, for 0 dBm at the front-panel SMA, is ~-4 dBm (= 0.4 Vpp), and the I/F output is 0.34 Vpp. So the conversion loss is only ~1.5 dB? Seems really low. I assume the 0.34 Vpp is at the input to the preamp? If it's after the preamp, then the numbers still don't add up, because with the nominal 6 dB conversion loss, the output should be ~2 Vpp. I will check it later.
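The dBm <-> Vpp bookkeeping used above (into 50 ohms):

import math

def dbm_to_vpp(dbm, r=50.0):
    p = 1e-3 * 10 ** (dbm / 10)   # power in W
    return 2 * math.sqrt(2) * math.sqrt(p * r)

print(dbm_to_vpp(0))     # ~0.63 Vpp at the front-panel SMA
print(dbm_to_vpp(-4))    # ~0.40 Vpp at each mixer RF input
print(dbm_to_vpp(-10))   # ~0.20 Vpp after a nominal 6 dB conversion loss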

Quote:

With LO at 3 dBm, RF at 0 dBm, and delta_f = 30 Hz, the output Vpp was 340 mV and the phase difference was 88.93 deg. (Attachment 3/4; the traces were averaged)

  15866 | Fri Mar 5 00:53:09 2021 | Koji | Summary | Electronics | A bunch of electronics received

Received additional front/rear panels. Updated the original entry and Wiki [Link]

 

  15865 | Thu Mar 4 23:57:35 2021 | Koji | Summary | Electronics | Inspection of the new custom dsub cables

I inspected the new custom DSub cables (which came from Texas).

The shelled version gives us some chance to inspect/modify the internal connections (good).
The wires are well insulated. The conductors are wrapped with foil, and then everything is enclosed in a braided tube shield. The braid is soldered to one of the connectors. (Attachments 3/4 show the soldering of a conductor, exposed by intentionally removing some of the insulation.)

It wasn't clear if the conductors are twisted or not (probably not).

  15864 | Thu Mar 4 23:16:08 2021 | Koji | Update | LSC | REFL55 demod board rework

A new hybrid splitter (DQS-10-100) was installed. As the amplification of the final stage is sufficient for the input level of 3 dBm, I have bypassed the input amplification (Attachment 1). One of the mixers was desoldered to check the power level. With a 1 dB ATTN, the output of the last ERA-5 was +17.8 dBm (Attachment 2). (The mixer was resoldered.)

With LO at 3 dBm, RF at 0 dBm, and delta_f = 30 Hz, the output Vpp was 340 mV and the phase difference was 88.93 deg. (Attachment 3/4; the traces were averaged)
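One way to extract the gain balance and phase difference from such averaged beat traces is a single-bin demodulation at delta_f; a sketch with synthetic placeholder traces standing in for the scope data:

import numpy as np

fs, f0 = 16384, 30.0     # sample rate (assumed) and beat frequency
t = np.arange(fs) / fs   # 1 s of data
i_t = 0.17 * np.sin(2 * np.pi * f0 * t)                       # placeholder I
q_t = 0.17 * np.sin(2 * np.pi * f0 * t + np.deg2rad(88.93))   # placeholder Q

def phasor(x):
    return np.dot(x, np.exp(-2j * np.pi * f0 * t))   # single-bin DFT at f0

ratio = phasor(q_t) / phasor(i_t)
print(20 * np.log10(abs(ratio)))     # gain balance in dB (0 for these traces)
print(np.rad2deg(np.angle(ratio)))   # phase difference, ~88.93 deg here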

  15863 | Thu Mar 4 15:48:26 2021 | Koji | Summary | PEM | Watchdog tripped, Optics damped back

EQs seen on Summary pages
https://nodus.ligo.caltech.edu:30889/detcharsummary/day/20210304/pem/seismic_blrms/

  15862 | Thu Mar 4 11:59:25 2021 | Paco, Anchal | Summary | LSC | Watchdog tripped, Optics damped back

Gautam came in and noted that the optics' damping watchdogs had been tripped by a >5 magnitude earthquake somewhere off the coast of Australia. So, under guided assistance, we manually damped the optics as follows:

  • Using the scripts/SUS/reEnableWatchdogs.py script we re-enabled all the watchdogs.
  • Everything except SRM was restored to stable state.
  • Then we clicked on SRM in SUS->Watchdogs, disabled the Oplevs, and shut down the watchdog.
  • We temporarily changed the watchdog threshold to 1000 to allow damping.
  • We enabled all the coil outputs manually, then enabled the watchdog by clicking on Normal.
  • Once the SRM was damped, we shut down the watchdog, brought the threshold back to 215, and restarted it.

Gautam also noticed that the MC autolocker had been turned OFF by me (Anchal); we turned it back on and the MC engaged the lock again. All good, no harm done.

  15861 | Thu Mar 4 10:54:12 2021 | Paco, Anchal | Summary | LSC | POY11 measurement, tried to lock Green Yend laser

[Paco, Anchal]

- First ran burtgooey as last time.

- Installed pyepics on base environment of donatella

ASS XARM:
- Clicked on ON in the drop down of "! More Scripts" below "! Scripts XARM" in C1ASS.adl
- Clicked on "Freeze Outputs" in the same menu after some time.
- Noticed that the sensing and output matrices of the ASS for the XARM and YARM look very different. The reason is probably that the YARM outputs go to the 4 TT1/2 P/Y DoFs instead of the BS P/Y used for the XARM. What are these TT1/2?

(Probably unrelated, but the MC unlocked and kept trying to lock for about 10 minutes, eventually attaining lock.)

Locking XARM:
- From scripts/XARM we ran lockXarm.py from outside any conda environment using python command.
- Weirdly, we see that YARM is locked??? But XARM is not. Maybe this script is old.
- C1:LSC-TRY-OUTPUT went to around 0.75 (units unknown) while C1:LSC-TRX-OUTPUT is fluctuating around 0 only.

POY11 Spectrum measurement when YARM is locked:
- Created our own template as we couldn't find an existing one in users/Templates.
- Template file and data in Attachment 2.
- It is interesting to see that most of the noise is in the I quadrature, mostly between 10 and 100 Hz.
- Given the arm is supposed to be much calmer than the MC, this noise should be mostly due to mode cleaner noise.
- We are not sure what units C1:LSC-POY11_I_ERR_DQ has, so the y scale is shown without units.


Trying to lock Green YEND laser to YARM:
- We opened the Green Y shutter.
- We ensured that when the temperature slider of green Y is moved up, the beatnote goes up.
- The arm was POY-locked from the previous step.
- Ran the script scripts/YARM/Lock_ALS_YARM.py from outside any conda environment using the python command.
- This locked the green laser but unlocked the YARM POY lock.

Things moving around:
- The last step must have made all the suspension controls unstable.
- We see the PRM and SRM QPDs moving a lot.
- Then we did burt restore to /opt/rtcds/caltech/c1/burt/autoburt/today/08:19/*.snap to go back to the state before we started changing things today.

[Paco left for vaccine appointment]

- However, the unstable state didn't change after the restore. I see a lot of movement in ITMX/Y, and now PRM and BS also; movement in WFS1 and MC2T as well.
- I closed the PSL shutter as well, to hopefully disengage any loops that are still running unstably.
- But at this point, it seems that the optics are just oscillating and need time to come back to rest. Hopefully we didn't cause too much harm today :(.
 


My guess on what happened:

  • Our use of Lock_ALS_YARM.py probably created an unstable configuration in the LSC matrix and was the start of the issue.
  • On seeing the PRM fluctuate so much, we thought we should just burt restore everything. But that was a hammer of a solution.
  • This hammer probably changed the suspension position values suddenly, causing an impulse to all the optics. So everything started oscillating.
  • Now the MC WFS is waiting for the MC to lock before it stabilizes the mode cleaner, but the MC autolocker is unable to lock because the optics are oscillating. Chicken-and-egg issue.
  • I'm not aware of how to manually restore the state now. My only guess is that if we wait a few hours, everything should calm down enough that the MC can be locked and the WFS servo can be switched on.
  15860 | Wed Mar 3 23:23:58 2021 | gautam | Update | ALS | Arm cavity scan

I see no evidence of anything radically different from my PSL table optical characterization in the IMC transmitted beam; see Attachment #1. The lines are just a quick indicator of what's what, and no sophisticated peak fitting has been done yet (so the apparent offsets between the transmission peaks and some of the vertical lines are just artefacts of my rough calibration, I believe). The modulation depths recovered from this scan are in good agreement with what I report in the linked elog: ~0.19 for f1 and ~0.24 for f2. On the bright side, the ALS just worked and didn't require any electronics fudgery from me. So the mystery continues.
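For reference, a small-modulation-index sketch of how the depths fall out of such a scan (the peak-height ratio below is a placeholder, not the measured value):

import numpy as np
from scipy.special import j0, j1

ratio = 0.009   # placeholder: sideband/carrier transmission peak ratio
# For phase modulation, sideband/carrier power ratio ~ (J1(m)/J0(m))^2.
m = 2 * np.sqrt(ratio)            # small-m approximation, ~0.19 here
print(m, (j1(m) / j0(m)) ** 2)    # check: reproduces the assumed ratio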

  15859 | Wed Mar 3 22:13:05 2021 | gautam | Update | LSC | REFL55 demod board rework

After this work, I measured that the orthogonality was poor. I confirmed on the bench that the PQW-2-90 was busted: pin 2 (0 degree output) showed a sensible signal at half the input, but pin 6 had far too small an output, and the phase difference was more like 45 degrees, not 90 degrees. I can't find any spares of this part in the lab - however, we do have the equivalent part used in the aLIGO demodulator. Koji has kindly agreed to do the replacement (it requires a bit of jumper wiring action because the pin mapping between the two parts isn't exactly identical - in fact, the circuit schematic uses a transformer to do the splitting, but at some unknown point in time the change to the Mini-Circuits part was made). Anyway, until this is restored, I defer the PRMI sideband locking.

Quote:

There were multiple problems with the REFL55 demod board. I fixed them and re-installed the board. The TFs and noise measured on the bench now look more like what is expected from a noise model. The noise in-situ also looked good. After this work, my settings for the PRMI sideband lock don't work anymore so I probably have to tweak things a bit, will look into it tomorrow.

  15857 | Wed Mar 3 12:00:58 2021 | Paco, Anchal | HowTo | IMC | MC_F ASD

[Paco, Anchal]

- Saved BURT backup in /users/anchal/BURTsnaps/
- Copied existing code for the mode cleaner noise budget from /users/rana/mat/mc. Will work on this from home to convert it into the new pynb way.

Get baseline IMC measurements (passive):
- MC_F:
  - What is MC_F? Let's find out.
  - In the MC_F Cal window titled 'C1IOO-MC_FREQ', we toggled the ON/OFF switch off and back on again.
  - Using diaggui, we measured the ASD of the MC_F channel in units of counts/rtHz.
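An offline equivalent of that diaggui measurement might look like this (a sketch; the placeholder array stands in for data fetched from C1:IOO-MC_F_DQ, and the 16384 Hz rate is an assumption):

import numpy as np
from scipy.signal import welch

def asd(x, fs):
    f, p = welch(x, fs=fs, nperseg=int(16 * fs))   # 16 s Hann-windowed segments
    return f, np.sqrt(p)                           # ASD in counts/rtHz

fs = 16384                      # Hz, assumed DQ rate
x = np.random.randn(64 * fs)    # placeholder for the fetched time series
f, a = asd(x, fs)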

[Rana, Paco]

- Using diaggui, measured the ASD from a template (under /users/Templates) and overlaid the 1/f noise of the NPRO (Attachment 1)

[Anchal, Paco]

- WFS Master
  - Went through the schematic and tried to understand what is happening.
  - Accidentally switched on MC WF relief (python 3). A bunch of things were displayed in a terminal for a while, and then we Ctrl-C'd it.
  - The only change we noticed was a slight increase in WFS1 Yaw, and a corresponding decrease in WFS1 Pitch, WFS2 Pitch, and WFS2 Yaw.
  - We need to find out what this script does.


Future work:

  • Create an automated script for taking the MC_F_DQ spectrum and comparing it against a reference trace.
  • Use pynb to create a noise budget for mode cleaner.
  • Identify excess noise between 10-40 Hz.
  • Configure output matrix in WFS Master to reduce the noise. Automate this process as well.
  15856 | Wed Mar 3 11:51:07 2021 | Yehonathan | Update | SUS | OSEM testing for SOSs

I finished testing the OSEMs and put them all back in the box, divided into several bags. I put the OSEM box next to the south flow bench on the floor.

I have uploaded the OSEM catalog to the wiki. I will upload the LED spot images later.

In summary:

Total 64 OSEMs: 31 long, 33 short.

Perfectly centered LED spots, ready for C&B: 30 OSEMs (12 long, 18 short).

Perfectly centered LED spots, need some work (missing pigtails, weird screws): 7 OSEMs (5 long, 2 short).

Slightly off-centered (subjective) LED spots, ready for C&B: 20 OSEMs (7 long, 13 short).

Slightly off-centered (subjective) LED spots, need some work (missing pigtails, weird screws): 4 OSEMs, all long.

Defective OSEMs or LED spot way off-center: 3.

  15855 | Tue Mar 2 19:52:46 2021 | gautam | Update | LSC | REFL55 demod board rework

There were multiple problems with the REFL55 demod board. I fixed them and re-installed the board. The TFs and noise measured on the bench now look more like what is expected from a noise model. The noise in-situ also looked good. After this work, my settings for the PRMI sideband lock don't work anymore so I probably have to tweak things a bit, will look into it tomorrow.
