  16849   Thu May 12 20:11:18 2022   Anchal   Update   BHD   BHD BS output beams steered out to ITMY table

I successfully steered the two output beams from the BHD BS out to the ITMY table today. This required significant changes on the table, but I was able to coarsely rebalance the table and then recover YARM flashing with fine tuning of ITMY.

  • The counterweights were at the North end of the table, in the way of one of the BHD output beams.
  • So I noted the level meter readings and removed those counterweights.
  • I also needed to remove the cable post for ITMY and SRM that was in the center of the table.
  • I installed a new cable post which is just for SRM and is behind AS2. ITMY's cable post is next to it on the other edge of the table. This is to ensure that the BHD board can come in later without disturbing the existing layout.
  • I got three Y1-45P mirrors and one Y1-0 mirror. The Y1-0 mirror was not mounted, so I removed an unlabeled older optic and put the Y1-0 on its mount.
  • Note that some light (significant enough to be visible on my card) is leaking through the 45P mirrors. We need to make sure we aren't losing too much power to this.
  • Both beams are steered through the center of the window; they separate outside and do not clip any of the existing optics outside. (See Attachment 1: the red beam in the center is the ITMY oplev input beam, and the two IR beams are the outputs from the BHD BS.)
  • Also note that I didn't find any LO beam while doing this work. I only used AS beam to align the path.
  • I centered the ITMY oplev at the end.

Next steps:

  • The LO path needs to be tuned up and cleared again. We need to match the beams on the BHD BS as well.
  • Set up steering mirrors and photodiodes on the in-air table outside ITMY.
Attachment 1: signal-2022-05-12-201844.jpeg
  16850   Thu May 12 20:24:29 2022   yuta   Update   BHD   POX and POY investigation

[Anchal, Yuta]

We checked the POX and POY RF signal chains as a sanity check, since the Xarm cannot be locked stably in IR, unlike the Yarm.
The POX beam seems to be healthy. This issue doesn't prevent us from closing the vacuum tank.

POY
 - RF PD has SPB-10.7+ and ZFL-500NL+ attached to the RF output.
 - At the demodulation electronics rack, SMA connectors are used everywhere.
 - With the Yarm flashing at ~1, the RF output is ~24 mVpp right after the RF PD, ~580 mVpp after the SPB-10.7+ and ZFL-500NL+, and ~150 mVpp right before the demodulation box.
 - There is roughly a factor of 4 loss (580 mVpp -> 150 mVpp) in the cabling from the POY RF PD to the demodulation rack.
 - Laser power at the POY RF PD was measured to be 16 uW.

POX
 - RF PD doesn't have amplifiers attached.
 - At the demodulation electronics rack, an N connector is used.
 - With the Xarm flashing at ~1, the RF output is ~30 mVpp right after the RF PD, and ~20 mVpp right before the demodulation box.
 - The loss in the cabling from the POX RF PD to the demodulation rack is small compared with that for POY.
 - Laser power at the POX RF PD was measured to be 16 uW.

Summary
 - POX and POY RF PDs are receiving almost the same amount of power.
 - POY has a larger error signal than POX because of the RF amplifier, but its cable loss is higher (a quick consistency check of these numbers follows below).
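
A quick consistency check of the quoted Vpp numbers (a sketch added for clarity; it simply treats the Vpp ratios as amplitude gains and losses):

import math

def db(ratio):
    """Express an amplitude ratio in dB."""
    return 20 * math.log10(ratio)

# Vpp values quoted above, measured with the arms flashing at ~1
poy_pd, poy_amp, poy_rack = 24e-3, 580e-3, 150e-3  # POY: at PD, after SPB-10.7+/ZFL-500NL+, at demod rack
pox_pd, pox_rack = 30e-3, 20e-3                    # POX: at PD (no amplifier), at demod rack

print(f"POY amplifier chain gain: {poy_amp/poy_pd:.1f}x ({db(poy_amp/poy_pd):.1f} dB)")
print(f"POY cable loss to rack  : {poy_amp/poy_rack:.1f}x ({db(poy_amp/poy_rack):.1f} dB)")
print(f"POX cable loss to rack  : {pox_pd/pox_rack:.1f}x ({db(pox_pd/pox_rack):.1f} dB)")

This gives roughly 24x (~28 dB) from the POY amplifier chain, ~4x (~12 dB) cable loss on POY, and only ~1.5x (~3.5 dB) on POX.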

Conclusion
 - There might be something wrong in the electronics, but we can close the vacuum tanks.

Attachment 1: POY.JPG
  16851   Fri May 13 14:26:00 2022   JC   Update   Alignment   LO2 Beam

[Yehonathan, JC]

Yehonathan and I attempted to align the LO2 beam today through the BS chamber and the ITMX chamber. We found the LO2 beam was blocked by the POKM1 mirror. During this attempt, I tapped TT2 with the laser card; this caused the mirror to shake and settle into a new position. Afterwards, while putting the door back on ITMX, one of the older cables was pulled and its insulation was torn. This caused some major issues, and we have not been able to restore either of the arms to their original alignment.

  16852   Fri May 13 18:42:13 2022   Paco   Update   Alignment   ITMX and ITMY sat amp failures

[Yuta, Anchal, Paco]

As described briefly by JC, there were multiple failure modes going on during this work segment. blush


ITMX SatAmp SAGA

Indeed, the 64-pin crimp cable from the gold sat amp box broke while work around the ITMX chamber was ongoing. We found the right 64-pin replacement head and moved on to fix the connector in situ. After a first attempt, we suddenly lost all damping on the vertex SUS (driven by these old sat amp electronics) because our c1susaux Acromag chassis stopped working. Looking around the 1x5 rack electronics, we noted that one of the ±20 VDC Sorensens was at 11.6 VDC, drawing 6.7 A of current (nominally this supply draws under 5 A!), so we realized we had not connected the ITMX sat amp correctly, and the DC rail voltage drop knocked out the Acromag power as well, tripping all the other watchdogs devil ...

We fixed this by first unplugging the shorted cable from the rack (at which point the supply went back to 20 VDC, 4.7 A) and then carefully redoing the crimp connector. The second attempt was successful, and we restored the c1susaux modbusIOC service (i.e. the slow controls).


ITMY SatAmp SAGA

As we restored the slow controls and damped most vertex suspensions, we noticed the ITMY UL and SD OSEMs were reading 0 counts on both the slow and fast ADCs. crying We suspected we had pulled some wires while busy with the ITMX sat amp saga. We found that the side OSEM LEMO cable was very loose on the whitening board; in fact, we have had no side OSEM signal on ITMY for some time. We fixed this. Nevertheless, the UL channel remained silent... We then did the following tests:

  • Tested the PD mon outputs on the whitening card. We realized the whitening cards were mislabeled, with ITMX and ITMY flipped angry. We have now labeled them correctly.
  • Tested input DB15 cable with breakout board.
  • Went to the ITMY sat amp box and used the satellite box TESTER 2 on J1. It seemed correct.
  • We opened the chamber, tested the in-vacuum segments, they all were ok.
  • We flipped UR-UL OSEMs and found that the UL OSEM is healthy and works fine on UR channel.
  • We tested the in-air cable between satellite box and vacuum flange and it was ok too.
  • We suspected that the satellite box tester was lying, so we replaced the satellite box with the spare old MC1 satellite box, and indeed that solved the issue.

DO NOT TRUST THE SATELLITE BOX TESTER 2.


Current state:

  • IMC locking normally.
  • All suspensions are damping properly.
  • Oplevs are not centered.
  • No flashing on either of the arms. We had no luck in ~20 min of attempts with only the input injection changed.
  • On kicking PR3, we do see some flashing on XARM, which means the XARM cavity at least is somewhat aligned.
  • All tasks planned before the pumpdown still remain to be done. We just lost the whole day.
  16853   Sat May 14 08:36:03 2022   Chris   Update   DAQ   DAQ troubleshooting

I heard a rumor about a DAQ problem at the 40m.

To investigate, I tried retrieving data from some channels under C1:SUS-AS1 on the c1sus2 front end. DQ channels worked fine, testpoint channels did not. This pointed to an issue involving the communication with awgtpman. However, AWG excitations did work. So the issue seemed to be specific to the communication between daqd and awgtpman.

daqd logs were complaining of an error in the tpRequest function: error code -3/couldn't create test point handle. (Confusingly, part of the error message was buffered somewhere, and would only print after a subsequent connection to daqd was made.) This message signifies some kind of failure in setting up the RPC connection to awgtpman. A further error string is available from the system to explain the cause of the failure, but daqd does not provide it. So we have to guess...

One of the reasons an RPC connection can fail is if the server name cannot be resolved. Indeed, address lookup for c1sus2 from fb1 was broken:

$ host c1sus2
Host c1sus2 not found: 3(NXDOMAIN)

In /etc/resolv.conf on fb1 there was the following line:

search martian.113.168.192.in-addr.arpa

Changing this to search martian got address lookup on fb1 working:

$ host c1sus2
c1sus2.martian has address 192.168.113.87

But testpoints still could not be retrieved from c1sus2, even after a daqd restart.

In /etc/hosts on fb1 I found the following:

192.168.113.92  c1sus2

Changing the hardcoded address to the value returned by the nameserver (192.168.113.87) fixed the problem.

It might be even better to remove the hardcoded addresses of front ends from the hosts file, letting DNS function as the sole source of truth. But a full system restart should be performed after such a change, to ensure nothing else is broken by it. I leave that for another time.
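
To catch this class of mismatch in the future, something like the following could be run on fb1 (a minimal sketch; the front-end list is illustrative, and it uses the host utility to query DNS directly, bypassing /etc/hosts, just as in the checks above):

import subprocess

FRONT_ENDS = ["c1sus", "c1sus2", "c1lsc", "c1ioo", "c1iscex", "c1iscey"]  # illustrative list

def hosts_file_entry(name, path="/etc/hosts"):
    """Return the address listed for `name` in /etc/hosts, or None."""
    with open(path) as f:
        for line in f:
            fields = line.split("#")[0].split()
            if len(fields) >= 2 and name in fields[1:]:
                return fields[0]
    return None

def dns_entry(name):
    """Return the address DNS gives for `name` (via the `host` utility), or None."""
    out = subprocess.run(["host", name], capture_output=True, text=True).stdout
    for line in out.splitlines():
        if "has address" in line:
            return line.split()[-1]
    return None

if __name__ == "__main__":
    for fe in FRONT_ENDS:
        static, dynamic = hosts_file_entry(fe), dns_entry(fe)
        if static and dynamic and static != dynamic:
            print(f"{fe}: /etc/hosts says {static}, DNS says {dynamic}  <-- mismatch")
        else:
            print(f"{fe}: hosts={static}  dns={dynamic}")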

  16854   Mon May 16 10:49:01 2022   Anchal   Update   DAQ   DAQ troubleshooting

[Anchal, Paco, JC]

Thanks Chris for the fix. We are able to access the testpoints now, but we started facing another issue this morning; we are not sure whether it is related to what you did.

  • The C1:LSC-TRX_OUT and C1:LSC-TRY_OUT channels are stuck at zero.
  • These were the channels we used until last Friday to align the interferometer.
  • These channels are routed through the c1rfm FE model (Reflected Memory model is the name, I think). They carry the IR transmission photodiode monitors at the two ends of the interferometer, which are first logged into the local FEs as C1:SUS-ETMX_TRX and C1:SUS-ETMY_TRY.
  • These channels are then fed to C1:SCX-RFM_TRX -> C1:RFM_TRX -> C1:RFM-LSC_TRX -> C1:LSC-TRX, and similarly for the Y side.
  • We are able to see channels in the end FE filtermodule testpoints (C1:SUS-ETMX_TRX_OUT & C1:SUS-ETMY_TRY_OUT)
  • However, we are unable to see the same signal in the c1rfm filter module testpoints like C1:RFM_TRX_IN1, C1:RFM_TRY_IN1, etc. (see the sketch right after this list for a quick way to poll each hop in the chain).
  • There is an IPC error shown on the CDS FE status screen for c1rfm on c1sus, but we remember this being red for a long time and have been ignoring it so far, since everything was working regardless.
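
To localize where the signal disappears along that chain, one could poll the slow readbacks of each hop with pyepics, for example (a rough sketch; it assumes pyepics is installed on the workstations and that each filter module has the usual _OUT16 EPICS readback -- the exact record names, e.g. C1:RFM-TRX_OUT16 vs C1:RFM_TRX_OUT16, should be checked against the MEDM screens):

from epics import caget  # pyepics, assumed available on the control-room workstations

# Readbacks for each hop of the transmission chain described above (names are a guess;
# verify the dash/underscore convention against the actual MEDM screens)
CHAIN = {
    "XARM": ["C1:SUS-ETMX_TRX_OUT16", "C1:RFM-TRX_OUT16", "C1:LSC-TRX_OUT16"],
    "YARM": ["C1:SUS-ETMY_TRY_OUT16", "C1:RFM-TRY_OUT16", "C1:LSC-TRY_OUT16"],
}

for arm, channels in CHAIN.items():
    print(f"--- {arm} transmission chain ---")
    for ch in channels:
        val = caget(ch, timeout=2.0)
        print(f"{ch:30s} {val}")  # None means the channel did not connect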

The steps we have tried to fix this are:

  • Restart all the FE models on c1lsc, c1sus, and c1ioo (without restarting the computers themselves), and then burt restore.
  • Restart all the FE models on c1iscex and c1iscey (only the c1iscey computer was restarted), and then burt restore.

The above steps did not fix the issue. Since we have the testpoints (C1:SUS-ETMX_TRX_OUT & C1:SUS-ETMY_TRY_OUT) to monitor the transmission levels for now, we are going ahead with our upgrade work without resolving this issue. Please let us know if you have any insights.

  16855   Mon May 16 12:59:27 2022   Chris   Update   DAQ   DAQ troubleshooting

It looks like the RFM problem started a little after 2am on Saturday morning (attachment 1). It’s subsequent to what I did, but during a time of no apparent activity, either by me or others.

The pattern of errors on c1rfm (Attachment 2) looks very much like the one previously reported by Gautam (errors on all IRFM0 IPCs). Maybe the fix described in Koji's follow-up will work again (involving hard reboots).

Attachment 1: timeseries.png
Attachment 2: err.png
  16856   Mon May 16 13:22:59 2022   yuta   Update   BHD   REFL and AS paths aligned at AP table

After the Xarm and Yarm were aligned by Anchal et al., I aligned the AS and REFL paths on the AP table.
The REFL path was already almost perfectly aligned.

REFL path
 -REFL beam centered on the REFL camera
 -Aligned so that the REFL55 and REFL33 RFPDs give maximum analog DC outputs when ITMY was misaligned to avoid the MICH fringe
 -Aligned so that REFL11 gives maximum C1:LSC-REFL11_I_ERR (the analog DC output on the REFL11 RFPD seemed not to be working)

AS path
 -AS beam centered on the AS camera. The AS beam seems to be clipped on the right side when viewed at the viewport from the -Y side.
 -Aligned so that AS55 gives maximum C1:LSC-ASDC_OUT16 (the analog DC output on the AS55 RFPD seemed not to be working)
 -Aligned so that AS110 gives maximum analog DC output

Attachment 1: REFLPOP.JPG
Attachment 2: POPAS.JPG
  16857   Mon May 16 14:46:35 2022   Tommy   Update   Electronics   RFSoC MTS Work

We followed the manual's guide for setting up multi-tile synchronization (MTS) to sync on an external signal. In the xrfdc package, we updated the RFdc class to have RunMTS, SysRefEnable, and SysRefDisable functions, as prescribed on page 180 of the manual. Then we ran the new functions in the notebook and read the DAC outputs on an oscilloscope. The DACs were not synced. We were also unable to get FIFO latency readings.
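
For reference, below is a rough sketch of what such wrapper methods might look like. It assumes the xrfdc package's cffi binding (referred to here as _lib/_ffi, with the driver instance handle stored as self._instance) exposes the RFdc driver's MTS calls (XRFdc_MultiConverter_Init, XRFdc_MultiConverter_Sync, XRFdc_MTS_Sysref_Config); the attribute names and binding details are assumptions, not the actual code we ran:

import xrfdc
from xrfdc import _ffi, _lib  # assumed: cffi FFI object and compiled libxrfdc binding

XRFDC_DAC_TILE = 1  # tile-type code from the xrfdc driver headers
XRFDC_MTS_OK = 0    # success code returned by XRFdc_MultiConverter_Sync

def _new_sync_config(tiles):
    """Allocate and initialize a MultiConverter sync config for the given tile bitmask."""
    cfg = _ffi.new("XRFdc_MultiConverter_Sync_Config*")
    _lib.XRFdc_MultiConverter_Init(cfg, _ffi.NULL, _ffi.NULL)
    cfg.Tiles = tiles
    return cfg

def SysRefEnable(self):
    """Turn SYSREF on (call before RunMTS)."""
    _lib.XRFdc_MTS_Sysref_Config(self._instance, _new_sync_config(0x1), _new_sync_config(0x0), 1)

def SysRefDisable(self):
    """Turn SYSREF off once alignment is complete."""
    _lib.XRFdc_MTS_Sysref_Config(self._instance, _new_sync_config(0x1), _new_sync_config(0x0), 0)

def RunMTS(self, dac_tiles=0x1):
    """Run DAC multi-tile sync; return per-tile FIFO latency readbacks on success."""
    cfg = _new_sync_config(dac_tiles)
    status = _lib.XRFdc_MultiConverter_Sync(self._instance, XRFDC_DAC_TILE, cfg)
    if status != XRFDC_MTS_OK:
        raise RuntimeError(f"DAC MTS failed with status {status}")
    return [cfg.Latency[i] for i in range(4)]

# Attach the helpers to the RFdc class so they can be called from the notebook
xrfdc.RFdc.RunMTS = RunMTS
xrfdc.RFdc.SysRefEnable = SysRefEnable
xrfdc.RFdc.SysRefDisable = SysRefDisable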

  Draft   Mon May 16 16:13:01 2022   Yehonathan   Update   BHD   Initial BHD modeling: Damped suspension model

I was finally able to set up a stable suspension model with the help of Yuta, and I'm ready to start doing some noise budgeting in MICH with BHD readout.

 

 
