I successfully steered the two output beams from the BHD BS out to the ITMY table today. This required significant changes on the table, but I was able to coarsely rebalance the table and then recover YARM flashing with fine tuning of ITMY.
We checked the POX and POY RF signal chains as a sanity check, since the Xarm cannot be locked stably in IR, unlike the Yarm.
The POX beam seems to be healthy. This issue doesn't prevent us from closing the vacuum tanks.
- POY RF PD has an SPB-10.7+ and a ZFL-500NL+ attached to its RF output.
- At the demodulation electronics rack, SMA connectors are used everywhere.
- With the Yarm flashing at ~1, the RF output is ~24 mVpp right after the RF PD, ~580 mVpp after the SPB-10.7+ and ZFL-500NL+, and ~150 mVpp right before the demodulation box.
- There is roughly a factor of 3 loss in the cabling from the POY RF PD to the demodulation rack.
- Laser power at the POY RF PD was measured to be 16 uW.
- POX RF PD doesn't have amplifiers attached.
- At the demodulation electronics rack, an N connector is used.
- With the Xarm flashing at ~1, the RF output is ~30 mVpp right after the RF PD and ~20 mVpp right before the demodulation box.
- Loss in the cabling from the POX RF PD to the demodulation rack is small compared with that for POY.
- Laser power at the POX RF PD was measured to be 16 uW.
- POX and POY RF PDs are receiving almost the same amount of power.
- POY has a larger error signal than POX because of its RF amplifier, but its cable loss is also higher (a quick numerical comparison is sketched below).
- There might still be an issue somewhere in the electronics, but we can close the vacuum tanks.
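For reference, here is a quick numerical comparison of the two chains, using only the peak-to-peak voltages quoted above (a minimal python sketch; nothing else is assumed):

# Net gain/loss through each chain, from the measured peak-to-peak voltages above
poy_pd, poy_amp, poy_demod = 24e-3, 580e-3, 150e-3   # V: at PD / after amplifiers / at demod box
pox_pd, pox_demod = 30e-3, 20e-3                     # V: at PD / at demod box

print(f"POY amplifier chain gain : x{poy_amp / poy_pd:.1f}")
print(f"POY cable loss           : x{poy_amp / poy_demod:.1f}")
print(f"POX cable loss           : x{pox_pd / pox_demod:.1f}")
print(f"POY/POX signal at demod  : x{poy_demod / pox_demod:.1f}")

This gives an amplifier gain of roughly x24 for POY, and shows POY arriving at the demodulation rack with roughly 7-8 times more signal than POX despite the larger cable loss.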
Yehonathan and I attempted to align the LO2 beam today through the BS chamber and ITMX chamber. We found the LO2 beam was blocked by the POKM1 mirror. During this attempt, I tapped TT2 with the laser card, which caused the mirror to shake and damp into a new position. Afterwards, when putting the door back on ITMX, one of the older cables was pulled and its insulation was torn. This caused some major issues, and we have not been able to restore either of the arms to their original standing.
[Yuta, Anchal, Paco]
As described briefly by JC, there were multiple failure modes going on during this work segment.
Indeed, the 64 pin crimp cable from the gold sat amp box broke while work around the ITMX chamber was ongoing. We found the right 64 pin head replacement around and moved on to fix the connector in-situ. After a first attempt, we suddenly lost all damping on the vertex SUS (driven by these old sat amp electronics) because our c1susaux acromag chassis stopped working. After looking around the 1X5 rack electronics, we noted that one of the +/- 20 VDC Sorensens was at 11.6 VDC, drawing 6.7 A of current (nominally this supply draws just under 5 A!), so we realized we had not connected the ITMX sat amp correctly, and the DC rail voltage drop busted the acromag power as well, tripping all the other watchdogs...
We fixed this by first unplugging the shorted cable from the rack (at which point the supply went back to 20 VDC, 4.7 A) and then carefully redoing the crimp connector. The second attempt was successful, and we restored the c1susaux modbusIOC service (i.e. slow controls).
As we restored the slow controls and damped most vertex suspensions, we noticed the ITMY UL and SD OSEMs were reading 0 counts on both the slow and fast ADCs. We suspected we had pulled some wires while busy with the ITMX sat amp saga. We found that the side OSEM LEMO cable was very loose on the whitening board; in fact, we have had no side OSEM signal on ITMY for some time. We fixed this. Nevertheless, the UL channel remained silent... We then did the following tests:
DO NOT TRUST THE SATELLITE BOX TESTER 2.
I heard a rumor about a DAQ problem at the 40m.
To investigate, I tried retrieving data from some channels under C1:SUS-AS1 on the c1sus2 front end. DQ channels worked fine, testpoint channels did not. This pointed to an issue involving the communication with awgtpman. However, AWG excitations did work. So the issue seemed to be specific to the communication between daqd and awgtpman.
daqd logs were complaining of an error in the tpRequest function: error code -3/couldn't create test point handle. (Confusingly, part of the error message was buffered somewhere, and would only print after a subsequent connection to daqd was made.) This message signifies some kind of failure in setting up the RPC connection to awgtpman. A further error string is available from the system to explain the cause of the failure, but daqd does not provide it. So we have to guess...
One of the reasons an RPC connection can fail is if the server name cannot be resolved. Indeed, address lookup for c1sus2 from fb1 was broken:
$ host c1sus2
Host c1sus2 not found: 3(NXDOMAIN)
In /etc/resolv.conf on fb1, the search domain was set incorrectly. Changing this to "search martian" got address lookup on fb1 working:
$ host c1sus2
c1sus2.martian has address 192.168.113.87
But testpoints still could not be retrieved from c1sus2, even after a daqd restart.
In /etc/hosts on fb1 I found a hardcoded entry for c1sus2 with a stale address.
Changing the hardcoded address to the value returned by the nameserver (192.168.113.87) fixed the problem.
It might be even better to remove the hardcoded addresses of front ends from the hosts file, letting DNS function as the sole source of truth. But a full system restart should be performed after such a change, to ensure nothing else is broken by it. I leave that for another time.
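Along the same lines, here is a minimal sketch (not an existing script) that compares every hardcoded entry in /etc/hosts against what the nameserver returns via the host command, so a stale address like the c1sus2 one above gets flagged. The file path and parsing details are the only assumptions:

#!/usr/bin/env python3
import re
import subprocess

def dns_lookup(name):
    # Query the nameserver directly with `host`, bypassing /etc/hosts
    out = subprocess.run(["host", name], capture_output=True, text=True)
    m = re.search(r"has address (\S+)", out.stdout)
    return m.group(1) if m else None

def hosts_entries(path="/etc/hosts"):
    # Map every hostname in /etc/hosts to its hardcoded address
    entries = {}
    with open(path) as f:
        for raw in f:
            fields = raw.split("#", 1)[0].split()
            if len(fields) >= 2:
                for name in fields[1:]:
                    entries[name] = fields[0]
    return entries

if __name__ == "__main__":
    for host, addr in sorted(hosts_entries().items()):
        dns_addr = dns_lookup(host)
        if dns_addr is None:
            print(f"{host}: no DNS record (hosts file says {addr})")
        elif dns_addr != addr:
            print(f"{host}: hosts file has {addr}, DNS has {dns_addr}")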
[Anchal, Paco, JC]
Thanks Chris for the fix. We are able to access the testpoints now, but we started facing another issue this morning; not sure how it is related to what you did.
The steps we have tried to fix this are:
These steps did not fix the issue. Since we have the testpoints (C1:SUS-ETMX_TRX_OUT & C1:SUS-ETMY_TRY_OUT) for now to monitor the transmission levels, we are going ahead with our upgrade work without resolving this issue. Please let us know if you have any insights.
It looks like the RFM problem started a little after 2am on Saturday morning (attachment 1). It’s subsequent to what I did, but during a time of no apparent activity, either by me or others.
The pattern of errors on c1rfm (attachment 2) looks very much like this one previously reported by Gautam (errors on all IRFM0 IPCs). Maybe the fix described in Koji's followup will work again (involving hard reboots).
After the Xarm and Yarm were aligned by Anchal et al., I aligned the AS and REFL paths on the AP table.
The REFL path was already almost perfectly aligned.
- REFL beam centered on the REFL camera.
- Aligned so that the REFL55 and REFL33 RFPDs give maximum analog DC outputs, with ITMY misaligned to avoid the MICH fringe.
- Aligned so that REFL11 gives maximum C1:LSC-REFL11_I_ERR (the analog DC output on the REFL11 RFPD seemed to be not working).
- AS beam centered on the AS camera. The AS beam seems to be clipped on the right side when viewed at the viewport from the -Y side.
- Aligned so that AS55 gives maximum C1:LSC-ASDC_OUT16 (the analog DC output on the AS55 RFPD seemed to be not working).
- Aligned so that AS110 gives maximum analog DC output.
We followed the manual's guide for setting up MTS to sync on an external signal. In the xrfdc package, we updated the RFdc class to have RunMTS, SysRefEnable, and SysRefDisable functions, as prescribed on page 180 of the manual. Then we attempted to run the new functions in the notebook and read the DAC signal outputs on an oscilloscope. The DACs were not synced. We were also unable to get FIFO latency readings.
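For context, this is roughly how those user-added methods get invoked from the notebook. This is only a sketch: the bitstream name and IP instance name are assumptions, and only the method names come from our modification of the RFdc class.

from pynq import Overlay

ol = Overlay("rfsoc_mts_test.bit")      # bitstream name is an assumption
rfdc = ol.usp_rf_data_converter_0       # xrfdc.RFdc object; instance name is an assumption

rfdc.SysRefEnable()   # our added method: enable the external SYSREF used by MTS
rfdc.RunMTS()         # our added method: run multi-tile synchronization across the DAC tiles
rfdc.SysRefDisable()  # our added method: turn SYSREF back off afterwards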
I was finally able to set up a stable suspension model with the help of Yuta, and I'm now ready to start doing some MICH noise budgeting with BHD readout. (Tip: it turns out that with the zpk function in Matlab you should multiply the poles and zeros by -2*pi to match the zpk TFs in Foton.)
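For anyone doing the same comparison in python instead of Matlab, here is a minimal sketch of that conversion using scipy. The example zero/pole values are placeholders; the -2*pi scaling is the only thing taken from the tip above, so check the overall gain convention against a Foton bode export:

import numpy as np
from scipy import signal

def foton_zpk_to_sdomain(zeros_hz, poles_hz, gain):
    # Foton-style roots are given in Hz; multiply by -2*pi to get s-domain roots
    z = -2 * np.pi * np.asarray(zeros_hz, dtype=complex)
    p = -2 * np.pi * np.asarray(poles_hz, dtype=complex)
    return signal.ZerosPolesGain(z, p, gain)

# Placeholder example: a 10 Hz zero / 1 Hz pole stage, evaluated over 0.1-100 Hz
sys = foton_zpk_to_sdomain([10.0], [1.0], 1.0)
f = np.logspace(-1, 2, 201)
w, mag, phase = signal.bode(sys, w=2 * np.pi * f)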
I copied all the filters from the suspension MEDM screens into Matlab. Those filters were concatenated with a single pendulum suspension TF with poles at [0.05e-1+1i, 0.05e-1-1i] and a gain of 4 N/kg.
I multiplied the OLTF by the real gains of the ADC/DAC, OSEMs, coil driver, and coils. I ignore whitening/dewhitening for now. The OLTF was calculated with no additional ad-hoc gain.
Attachment 1 shows the calculated open-loop transfer function.
Attachment 2 shows OLTF of ETMY measured last week.
Attachment 3 shows the step and impulse responses of the closed-loop system.
[Anchal, Paco, Yuta]
The SRM oplev injection and detection paths interfere heavily with POY11. Due to the limited optical access, I suggest we try steering POYM1 in yaw and adapting the RFPD path accordingly.
I centered the WFS1 PD so that the IMC WFS servo does not go out of range.
[Anchal, Paco, Yuta, JC]
After agreement from Yuta/Anchal, I moved POYM1 in yaw to clear the aforementioned path, and Ian restored the POY11 RFPD path. The demodulation phase might need to be corrected afterwards, before any locking attempts.
Current OSEM sensor values with all the suspensions aligned are attached.
For BS, ITMX, ETMX, ITMY, ETMY, PRM, SRM, LO1, and LO2, the sensors out of the range [200, 800] are marked; for PR2, PR3, SR2, AS1, and AS4, the ones out of the range [6000, 24000] are marked.
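A minimal python sketch of that marking rule (only the ranges and optic lists come from above; the sensor names and example readings are placeholders):

# Suspensions whose OSEM sensor counts should sit in [200, 800]
GROUP_A = ('BS', 'ITMX', 'ETMX', 'ITMY', 'ETMY', 'PRM', 'SRM', 'LO1', 'LO2')
# Suspensions whose OSEM sensor counts should sit in [6000, 24000]
GROUP_B = ('PR2', 'PR3', 'SR2', 'AS1', 'AS4')

def out_of_range(optic, readings):
    # Return the sensors whose values fall outside the nominal window for this optic
    lo, hi = (200, 800) if optic in GROUP_A else (6000, 24000)
    return {osem: val for osem, val in readings.items() if not lo <= val <= hi}

# Placeholder example: an ITMY with UL and SD reading near zero would get marked
print(out_of_range('ITMY', {'UL': 2, 'UR': 512, 'LL': 430, 'LR': 615, 'SD': 1}))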
[JC, Tega, Chub]
Today we installed the 200 lb doors on the end station chambers.
[Paco, Anchal, Yuta]
Today, in short we:
Prep for closing and pump down.
[Chub, JC, Jordan, Yuta, Yehonathan, Paco]
Closed in the following order:
After closing the heavy doors, we tried to reduce the GTRY clipping using PR2, PR3, ITMY, and ETMY. During this adventure, we also aligned the GRY injection beam by hand. Rotating a waveplate for the GRY injection made GRY lock stably with GTRY of ~0.3.
[Vacuum gauge sensors]
Paco informed me that the FRG sensor EPICS channels are not available in dataviewer, so I added them to the slow channels ini file (/opt/rtcds/caltech/c1/chans/daq/C0EDCU.ini). I also commented out the old CC1, CC2, CC3, and CC4 gauges. A service restart is required for them to become available, but this cannot be done right now because it would adversely affect the progress of the upgrade work, so it will be done at a later date.
git repo - https://git.ligo.org/40m/vac
Finally incorporated the FRGs into the main modbusIOC service, and everything seems to be working fine. I have also removed the old sensors (CC1, CC2, CC3, CC4, PTP1, IG1) from the serial client list, along with their corresponding EPICS channels. Furthermore, the interlock service python script has been updated so that all occurrences of the old sensors (turns out to be only CC1) were replaced by the corresponding new FRG sensor (FRG1), and a redundancy was added for P1a: the interlock condition is replicated with P1a replaced by FRG1, since both gauges sense the main volume pressure.
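For illustration, a minimal sketch of that redundancy (this is not the actual interlock code; the channel names and the trip threshold are assumptions):

from epics import caget  # pyepics

# Both gauges read the main volume, so the P1a condition is replicated with FRG1.
MAIN_VOLUME_GAUGES = ("C1:Vac-P1a_pressure", "C1:Vac-FRG1_pressure")  # assumed channel names
TRIP_THRESHOLD = 3e-3  # torr; placeholder value

def main_volume_interlock_tripped():
    # True if either main-volume gauge reads above the trip threshold
    return any(caget(ch) > TRIP_THRESHOLD for ch in MAIN_VOLUME_GAUGES)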
[JC, Jordan, Paco, Chub]
We began with the pumpdown this morning. We started with the annulus volume and proceeded as follows:
1. Isolate the RGA volume by closing valves VM3 and V7.
2. Open valves VASE, VASV, VABSSCT, VABS, VABSSCO, VAEV, and VAEE, in that order.
3. Open VA6 to allow P3, FRG3, and PAN to equalize.
4. Turn on RP1 and RP3, then open V6 to begin the pumpdown. Manually turn RV1 to slowly change the differential. (OPEN SLOWLY)
We did have to replace gauge PAN because it was reading a signal error. In addition, we found the cable is a bit sketchy and has a sharp bend; the signal comes in and out when the cable is fiddled with.
I modified the script freeSwing.py to use the damping loop output switches to free the optic, instead of the watchdog or the coil output filters. This ensures that the free swing test is done at the nominal position of the optic. I started tests for LO1, LO2, AS2, AS4, PR2, PR3, and SR2 in a tmux session named freeSwing on rossa. A rough sketch of the change is below.
Note: the LO2 face OSEMs are hardly sensitive to any motion right now due to the excessive pitch offset required for the LO beam. We should relieve this offset to LO1 and rerun this test later.
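The sketch (not the actual script): the damping loops are assumed to follow the usual SUSPOS/SUSPIT/SUSYAW/SUSSIDE naming, and the _OUTPUT switch channel written here is hypothetical; the real switch lives in the corresponding filter module.

from epics import caput  # pyepics

DAMPING_LOOPS = ("SUSPOS", "SUSPIT", "SUSYAW", "SUSSIDE")

def free_optic(optic):
    # Free the optic by switching off its damping-loop outputs (instead of
    # tripping the watchdog or the coil output filters), so the free-swing
    # test starts from the nominal alignment
    for dof in DAMPING_LOOPS:
        caput(f"C1:SUS-{optic}_{dof}_OUTPUT", 0)  # hypothetical switch channel name

for optic in ("LO1", "LO2", "AS2", "AS4", "PR2", "PR3", "SR2"):
    free_optic(optic)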