40m Log, Page 335 of 335
  16849 | Thu May 12 20:11:18 2022 | Anchal | Update | BHD | BHD BS output beams steered out to ITMY table

I successfully steered the two output beams from the BHD BS out to the ITMY table today. This required significant changes on the table, but I was able to bring the table back to a coarse balance and then recover YARM flashing by fine-tuning ITMY.

  • The counterweights were kept at the North end of the table, which was in the way of one of the BHD output beams.
  • I noted the level meter positions and removed those counterweights.
  • I also needed to remove the cable post for ITMY and SRM that was in the center of the table.
  • I installed a new cable post which is just for SRM and is behind AS2. ITMY's cable post is next to it on the other edge of the table. This is to ensure that the BHD board can come in later without disturbing the existing layout.
  • I got three Y1-45P mirrors and one Y1-0 mirror. The Y1-0 mirror was not installed on a mount, so I removed an older, unlabeled optic and put the Y1-0 on its mount.
  • Note that some light (significant enough to be visible on my card) is leaking out of the 45P mirrors. We need to make sure we aren't losing too much power this way.
  • Both beams are steered through the center of the window, they are separating outside and not clipping on any of the existing optics outside. (See attachment 1, the red beam in the center is the ITMY oplev input beam and the two IR beams are the outputs from BHD BS).
  • Also note that I didn't find any LO beam while doing this work. I only used AS beam to align the path.
  • I centered the ITMY oplev at the end.

Next steps:

  • LO path needs to be tuned up and cleared off again. We need to match the beams on BHD BS as well.
  • Setup steering mirrors and photodiodes on the outside table on ITMY.
Attachment 1: signal-2022-05-12-201844.jpeg
  16850 | Thu May 12 20:24:29 2022 | yuta | Update | BHD | POX and POY investigation

[Anchal, Yuta]

We checked the POX and POY RF signal chains as a sanity check, since the X arm, unlike the Y arm, cannot be locked stably in IR.
The POX beam seems to be healthy, and this issue doesn't prevent us from closing the vacuum tank.

POY:
 - The RF PD has an SPB-10.7+ (bandpass filter) and a ZFL-500NL+ (amplifier) attached to the RF output.
 - At the demodulation electronics rack, SMA connectors are used everywhere.
 - With the Y arm flashing at ~1, the RF output is ~24 mVpp right after the RF PD, ~580 mVpp after the SPB-10.7+ and ZFL-500NL+, and ~150 mVpp just before the demodulation box.
 - There is roughly a factor of 3 loss in the cabling from the POY RF PD to the demodulation rack.
 - Laser power at the POY RF PD was measured to be 16 uW.

POX:
 - The RF PD has no amplifiers attached.
 - At the demodulation electronics rack, an N connector is used.
 - With the X arm flashing at ~1, the RF output is ~30 mVpp right after the RF PD and ~20 mVpp just before the demodulation box.
 - Losses in the cabling from the POX RF PD to the demodulation rack are small compared with those for POY.
 - Laser power at the POX RF PD was measured to be 16 uW.

 - POX and POY RF PDs are receiving almost the same amount of power.
 - POY has a larger error signal than POX because of its RF amplifier, but its cable loss is also higher.

 - There might still be an issue in the electronics, but it does not block closing the vacuum tanks.
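For reference, the measured peak-to-peak voltages above can be turned into approximate cabling losses in dB. This is a quick sketch, not part of the original measurement; the mVpp values are the ones quoted above, and a voltage ratio is converted to dB as 20·log10:

```python
import math

def vpp_ratio_db(v_out_mvpp, v_in_mvpp):
    """Convert a peak-to-peak voltage ratio to dB (20 * log10)."""
    return 20 * math.log10(v_out_mvpp / v_in_mvpp)

# POY: ~580 mVpp after the SPB-10.7+/ZFL-500NL+, ~150 mVpp at the demod box
poy_cable_loss_db = vpp_ratio_db(150, 580)   # about -11.7 dB, i.e. a factor ~3.9

# POX: ~30 mVpp after the RF PD, ~20 mVpp at the demod box
pox_cable_loss_db = vpp_ratio_db(20, 30)     # about -3.5 dB, i.e. a factor ~1.5

print(f"POY cabling: {poy_cable_loss_db:.1f} dB, POX cabling: {pox_cable_loss_db:.1f} dB")
```

The POY amplitude ratio (~3.9) is consistent with the "roughly a factor of 3" loss quoted above, and much larger than the POX cabling loss.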

Attachment 1: POY.JPG
  16851 | Fri May 13 14:26:00 2022 | JC | Update | Alignment | LO2 Beam

[Yehonathan, JC]

Yehonathan and I attempted to align the LO2 beam today through the BS chamber and the ITMX chamber. We found the LO2 beam was blocked by the POKM1 mirror. During this attempt, I tapped TT2 with the laser card, which caused the mirror to shake and settle into a new position. Afterwards, while putting the door back on ITMX, one of the older cables was pulled and its insulation was torn. This caused some major issues, and we have not been able to restore either of the arms to its original standing.

  16852 | Fri May 13 18:42:13 2022 | Paco | Update | Alignment | ITMX and ITMY sat amp failures

[Yuta, Anchal, Paco]

As described briefly by JC, there were multiple failure modes going on during this work segment.


Indeed, the 64-pin crimp cable from the gold sat amp box broke while work around the ITMX chamber was ongoing. We found the right 64-pin head replacement and moved on to fix the connector in situ. After a first attempt, we suddenly lost all damping on the vertex SUS (driven by these old sat amp electronics) because our c1susaux Acromag chassis stopped working. Looking around the 1X5 rack electronics, we noted that one of the +/- 20 VDC Sorensen supplies was at 11.6 VDC, drawing 6.7 A (well above its nominal draw of ~4.7 A!). We realized we had not connected the ITMX sat amp correctly; the resulting DC rail voltage drop took out the Acromag power as well, tripping all the other watchdogs...

We fixed this by first unplugging the shorted cable from the rack (at which point the supply went back to 20 VDC, 4.7 A) and then carefully redoing the crimp connector. The second attempt was successful, and we restored the c1susaux modbusIOC service (i.e., slow controls).


As we restored the slow controls and damped most vertex suspensions, we noticed the ITMY UL and SD OSEMs were reading 0 counts on both the slow and fast ADCs. We suspected we had pulled some wires while busy with the ITMX sat amp saga. We found that the side OSEM LEMO cable was very loose on the whitening board; in fact, we have had no side OSEM signal on ITMY for some time. We fixed this. Nevertheless, the UL channel remained silent, so we ran the following tests:

  • Tested the PD mon outputs on the whitening card. We realized the whitening cards were mislabeled, with ITMX and ITMY flipped. We have relabeled them appropriately.
  • Tested input DB15 cable with breakout board.
  • Went to the ITMY sat amp box and used the satellite box TESTER 2 on J1. It seemed correct.
  • We opened the chamber, tested the in-vacuum segments, they all were ok.
  • We flipped UR-UL OSEMs and found that the UL OSEM is healthy and works fine on UR channel.
  • We tested the in-air cable between satellite box and vacuum flange and it was ok too.
  • We suspected that the satellite box tester was lying, so we replaced the satellite box with the spare old MC1 satellite box, and indeed that solved the issue.


Current state:

  • IMC locking normally.
  • All suspensions are damping properly.
  • Oplevs are not centered.
  • No flashing on either of the arms. We had no luck in ~20 min of attempts with only the input injection changed.
  • On kicking PR3, we do see some flashing on XARM, which means the XARM cavity at least is somewhat aligned.
  • All remaining pre-pumpdown tasks still remain; we lost the whole day.
  16853 | Sat May 14 08:36:03 2022 | Chris | Update | DAQ | DAQ troubleshooting

I heard a rumor about a DAQ problem at the 40m.

To investigate, I tried retrieving data from some channels under C1:SUS-AS1 on the c1sus2 front end. DQ channels worked fine, testpoint channels did not. This pointed to an issue involving the communication with awgtpman. However, AWG excitations did work. So the issue seemed to be specific to the communication between daqd and awgtpman.

daqd logs were complaining of an error in the tpRequest function: error code -3/couldn't create test point handle. (Confusingly, part of the error message was buffered somewhere, and would only print after a subsequent connection to daqd was made.) This message signifies some kind of failure in setting up the RPC connection to awgtpman. A further error string is available from the system to explain the cause of the failure, but daqd does not provide it. So we have to guess...

One of the reasons an RPC connection can fail is if the server name cannot be resolved. Indeed, address lookup for c1sus2 from fb1 was broken:

$ host c1sus2
Host c1sus2 not found: 3(NXDOMAIN)

In /etc/resolv.conf on fb1 there was the following line:

search martian.113.168.192.in-addr.arpa

Changing this to search martian got address lookup on fb1 working:

$ host c1sus2
c1sus2.martian has address

But testpoints still could not be retrieved from c1sus2, even after a daqd restart.

In /etc/hosts on fb1 I found the following:  c1sus2

Changing the hardcoded address to the value returned by the nameserver fixed the problem.

It might be even better to remove the hardcoded addresses of front ends from the hosts file, letting DNS function as the sole source of truth. But a full system restart should be performed after such a change, to ensure nothing else is broken by it. I leave that for another time.
