ID   Date   Author   Type   Category   Subject
  16569   Tue Jan 11 10:23:18 2022   Paco   Update   Electronics   ITMY feedthroughs and in-vac cables installed - part II

[Paco, Chub]

The ITMY 10" flange with 4 DSUB-25 feedthroughs has been installed with the cables connected on the in-vac side. This is the second of two flanges, and it includes 4 cables ordered vertically in stacks of 2 & 2 for AS1-1, AS1-2, AS4-1, and AS4-2 respectively. No major incidents during this one, except a note that all the bolts were extremely dirty and covered with gunk, so we gave them a quick wipe with wet cloths before reinstalling them.

  16570   Tue Jan 11 10:46:07 2022   Tega   Update   CDS   SUS screen debugging

Seen. Thanks.

Red Arrow: The channel was labeled incorrectly as INMON instead of OUTPUT

Green Arrow: OK, I will create a custom medm screen for this.

Blue arrow: Hmm, OK I will look into this. Doing this work remotely is a pain as the medm response is quite slow for poking around.

Orange circle: OK, I'll move this to the left side of the line.

Note to self: I also noticed another error on the side (LPYS blue box just before the sum). The channel points to YAW instead of SIDE, so that needs to be fixed as well.

Quote:

Indicated by the red arrow:
Even when the side damping servo is off, the number appears at the input of the output matrix

Indicated by the green arrows:
The face magnets and the side magnets use different ADCs. How about opening a custom ADC panel that accommodates all ADCs at once? Same for the DAC.

Indicated by the blue arrows:
This button opens a custom FM window. When the pitch gain was modified with a ramping time, the pitch and yaw gains grow at the same time even though only the pitch gain was modified.

Indicated by the orange circle:
The numbers are not indicated here, but they are input-related numbers (for watchdogging) rather than output-related numbers. It is confusing to place them here.

 

  16571   Tue Jan 11 10:58:58 2022   Tega   Update   SUS   Temporary watchdog

Started working on this. First, I created a git repo for tracking: https://git.ligo.org/40m/susaux.git

I have looked through the folder to see what needs doing, and I think I know what needs to be done for the final setup by following the same pattern as the other optics. I am listing the steps below:

- Create a database file for each BHD optic, say C1_SUS-AUX_LO1.db, by copying another optic's database file (say C1_SUS-AUX_SRM.db) and replacing the optic name.

- Insert a new line "C1:SUS-LO1_LATCH_OFF" in the file autoBurt_watchdogs.req

- Populate the file autoBurt.req with the appropriate channels for LO1

- Populate the file C1SUSaux_post.sh with the corresponding commands for LO1

- Add the line dbLoadDatabase("/cvs/cds/caltech/target/c1susaux/C1_SUS-AUX_LO1.db") to the file C1SUSaux.cmd
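Since the same copy-and-rename has to be repeated for every BHD optic, the first step could be scripted. A minimal Python sketch (the target paths follow the pattern above; the optic list and the idea of scripting it at all are my own assumptions, not a tested tool):

import os

TARGET = "/cvs/cds/caltech/target/c1susaux"
TEMPLATE = os.path.join(TARGET, "C1_SUS-AUX_SRM.db")
BHD_OPTICS = ["LO1", "LO2", "AS1", "AS4", "PR2", "PR3", "SR2"]  # assumed list

with open(TEMPLATE) as f:
    template = f.read()

for optic in BHD_OPTICS:
    outpath = os.path.join(TARGET, "C1_SUS-AUX_%s.db" % optic)
    with open(outpath, "w") as f:
        # swap the optic name everywhere in the copied database file
        f.write(template.replace("SRM", optic))
    print("wrote", outpath)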

 

For the temporary watchdog, we comment out everything I have just described and do only what comes next.

My question is the following:

I understand that we need to use the OUT16 slow channel as a temporary watchdog input since we don't currently have access to the slow channels because the Acromag units have not been installed. My guess from Koji's instructions is that we need to update the channels in the last two fields, "INPA" and "INPB", below:

record(calc,"C1:SUS-LO1_UL_CALC")
{
        field(DESC,"ANDs Enable with Watchdog")
        field(SCAN,"1 second")
        field(PHAS,"1")
        field(PREC,"2")
        field(HOPR,"40")
        field(LOPR,"-40")
        field(CALC,"A&B")
        field(INPA,"C1:SUS-LO1_UL_COMM  PP  NMS")
        field(INPB,"C1:SUS-LO1_LATCH_OFF  PP  MS")
}

Suppose we replace the channel for INPA with C1:SUS-LO1_ULCOIL_OUT16; what about INPB? Is this even the right thing to do? I am just guessing here.
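To make my guess concrete, here is the logic I have in mind, sketched as a plain Python/pyepics loop (the coil list, the SD channel name, and the trip threshold are assumptions on my part; the real implementation stays in the db file):

import time
import epics

COILS = ["UL", "UR", "LR", "LL", "SD"]  # coil names assumed
THRESHOLD = 10000                       # counts; made-up trip level

while True:
    for coil in COILS:
        # INPA-like input: the fast coil output we can actually read
        out = epics.caget("C1:SUS-LO1_%sCOIL_OUT16" % coil)
        if out is not None and abs(out) > THRESHOLD:
            # INPB-like latch: trip once, stays 0 until manually reset
            epics.caput("C1:SUS-LO1_LATCH_OFF", 0)
    time.sleep(1)  # same cadence as the SCAN "1 second" field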

 

  16573   Tue Jan 11 13:43:14 2022   Koji   Update   SUS   Temporary watchdog

I don't remember the syntax of the db file, but this calc channel computes A&B, where A and B correspond to INPA and INPB.

        field(CALC,"A&B")
        field(INPA,"C1:SUS-LO1_UL_COMM  PP  NMS")
        field(INPB,"C1:SUS-LO1_LATCH_OFF  PP  MS")

What is this LATCH doing?

 

  16574   Tue Jan 11 14:21:53 2022   Paco   Update   Electronics   BS feedthroughs and in-vac cables installed

[Paco, Yehonathan, Chub]

The BS chamber 10" flange with 4 DSUB-25 feedthroughs has been installed with the cables connected on the in-vac side. This is the second of two flanges, and it includes 4 cables ordered vertically in stacks of 2 & 2 for LO2-1, LO2-2, PR3-1, and PR3-2 respectively.

  16575   Tue Jan 11 15:21:16 2022   Anchal   Update   BHD   PR2 transmission calculation

I did this simple calculation assuming 1 W of power from the laser and 10% transmission past the IMC. We would go ahead with the V6-704/V6-705 ATFilms 3/8" optic. It would bring the PRC gain down to ~30 but would provide plenty of light for the LO beam and alignment.

  16582   Thu Jan 13 16:08:00 2022   Yehonathan   Update   BHD   gluing magnets after AS1/4 misfortune

{Yehonathan, Anchal, Paco}

In the cleanroom, we removed AS1 and AS4 from their SOS towers. We removed the mirrors from the adapters and put them in their boxes. The broken magnets were collected from the towers, and their surfaces were cleaned, as were the magnet sockets on the two adapters and on the side block from which the magnets were knocked off.

We prepared our last batch of glue (more glue was ordered three days ago) and glued 2 side magnets and 2 face magnets. We also took the chance to apply glue to the counterweights on the thick-optic adapters, so there is no need to look for alternatives for now.

The PEEK screws and nuts were assembled on the thick-optic SOS towers in place of the metal screws and nuts that were used as upper back EQ stops.

  16583   Thu Jan 13 17:10:55 2022   Anchal   Update   BHD   PR2 transmission calculation

I corrected the calculation by adding the losses at the arm cavity ends times the arm cavity finesse, and by taking into account the folding mirrors of the cavity. I used the exact formula for the finesse calculation and divided it by pi to get the PRC gain from there. Attachment 3 is the notebook with the calculations I made.

Note that using V6-704 would provide 35 mW of LO power when the PRFPMI is locked and 113 uW for alignment, but would bring the PRC gain down to 17.5.

The pre-2010 ITM (if it is still an option) would provide 12 mW of LO power when the PRFPMI is locked and 28 uW for alignment, but would keep the PRC gain at 24.6.

I still have to do a curvature check on the V6-704 optic.

  16584   Fri Jan 14 03:07:04 2022   Koji   Update   BHD   PR2 transmission calculation

I opened the notebook but I was not sure where you have the loss per bounce for the arm cavity.

    PRC_RT_Loss = 2 * PR3_T + 2 * PR2_T + 2 * Arm_Cavity_Finesse * ETM_T + PRM_T

Do you count the arm reflection loss to be only 2 * 13ppm * 450 = 1.17%?
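(For reference, that number in one line of Python:)

# 2 arm reflections x 13 ppm ETM transmission x finesse of 450
print(2 * 13e-6 * 450)  # 0.0117, i.e. ~1.17%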

  16585   Fri Jan 14 11:00:29 2022   Anchal   Update   BHD   PR2 transmission calculation

Yeah, I counted the loss from the arm cavities as the transmission through the ETMs on each bounce. I assumed the Michelson to be perfectly aligned, so that no light reaches the dark port. Should I use some other number for the round-trip loss in the arm cavity?

  16586   Fri Jan 14 12:01:21 2022   Anchal   Update   Electronics   BS & ITMY feedthroughs labeled and connected to Sat Amps

I labeled all the newly installed flanges and connected the in-air cables (40m/16530) to the appropriate ports. These cables are connected to the CDS system on the 1Y1/1Y0 racks through the satellite amplifiers, so all the new optics can now be damped as soon as they are placed. We need to make more DB9 plugs for setting "Acquire" mode on the HAM-A coil drivers since our binary input system is not ready yet. Right now we only have 2 such plugs, which means only one optic can be damped at a time.

 

  16587   Fri Jan 14 13:46:25 2022   Anchal   Update   BHD   PR2 transmission calculation updated

I updated the arm cavity round-trip losses due to scattering. Yehonathan told me that each arm cavity loses 50 ppm per round trip in addition to the transmission losses. With the updated arm cavity loss:

               PRFPMI LO Power (mW)   Unlocked PRC LO Power (uW)   PRC Gain
pre-2010 ITM   8                      28                           15.2
V6-704         24                     113                          12

 

  16588   Fri Jan 14 14:04:51 2022   Paco   Update   Electronics   RFSoC 2x2 board arrived

The Xilinx RFSoC 2x2 board arrived right before the winter break, so this is an overdue elog. I unboxed it; it came with two ~15 cm SMA M-M cables, an SD card preloaded with the ARM processor image and a few overlay jupyter notebooks, a two-piece AC/DC adapter (like a laptop charger), and a USB 3.0 cable. I got a 1U box and lid and assembled a prototype enclosure to hold the board, though this need not be a permanent solution (see Attachment #1). I drilled 4 thru holes in the bottom of the box to hold the board in place. A large component exceeds the 1U height, but it is thin enough to clear one of the thin slits at the top (I believe it is a fuse of some sort). Then I found a brand new front panel and drilled 4x 13/32" thru holes in the front for SMA F-F connectors.

I powered the board and quickly ran its tutorial notebooks, including a spectrum analyzer and signal generators, just to check that it works normally. The board exposes 2 fast RF ADCs and 2 RF DACs, 12-bit and 14-bit respectively, running at up to 4 GSps.

  16589   Fri Jan 14 17:33:10 2022   Yehonathan   Update   BHD   AS4 resurrection

{Yehonathan, Anchal}

When we came in this morning, the gluing of the magnets turned out to be 100% successful. The side blocks and counterweights were assembled. We suspended AS4 and adjusted the roll balance and the magnet height (attachments 1, 2). The OpLev was slightly realigned.

The pitch was balanced. We had to compensate for the pitch shift due to the locking of the counterweights. Once we got a good pitch balance, the motion spectrum was taken (attachment 3). Major peaks are at 755 mHz, 953 mHz, and 1040 mHz.

Previous peaks were at 755 mHz, 964 mHz, and 1.062 Hz, so not much has changed. We pushed back the OSEMs, adjusted the OSEM plate, and locked it tightly. We locked the EQ stops and transferred AS4, wrapped in foil, to the vacuum chamber. We opened the foil inside the chamber; no magnets were broken and everything seems to be intact. We connected the OSEMs to CDS.

  16597   Wed Jan 19 14:41:23 2022   Koji   Update   BHD   Suspension Status

Is this the correct status? Please directly update this entry.


LO1 [Glued] [Suspended] [Balanced] [Placed In Vacuum] [OSEM Tuned] [Damped]
LO2 [Glued] [Suspended] [Balanced] [Placed In Vacuum] [OSEM Tuned] [Damped]
AS1 [Glued] [Suspended] [Balanced] [Placed In Vacuum] [OSEM Tuned] [Damped]

AS4 [Glued] [Suspended] [Balanced] [Placed In Vacuum] [OSEM Tuned] [Damped]
PR2 [Glued] [Suspended] [Balanced] [Placed In Vacuum] [OSEM Tuned] [Damped]
PR3 [Glued] [Suspended] [Balanced] [Placed In Vacuum] [OSEM Tuned] [Damped]
SR2 [Glued] [Suspended] [Balanced] [Placed In Vacuum] [OSEM Tuned] [Damped]


Last updated: Fri Jan 28 10:34:19 2022

  16598   Wed Jan 19 16:22:48 2022   Anchal   Update   BHD   PR2 transmission calculation updated

I have further updated my calculation. Please find the results in the attached pdf.

Following is the description of calculations done:


Arm cavity reflection:

The reflection from the arm cavity is calculated with the simple FP cavity reflection formula, absorbing all round-trip cavity scattering losses (between 50 ppm and 200 ppm) into the ETM transmission loss.

The effective field reflectivity of the ETM is then

r_{\rm ETMeff} = \sqrt{1 - T_{\rm ETM} - L_{\rm RT}}

r_{\rm arm} = \frac{-r_{\rm ITM} + r_{\rm ETMeff}e^{2i \omega L/c}}{1 - r_{\rm ITM} r_{\rm ETMeff}e^{2 i \omega L/c}}

The magnitude and phase of this reflection are plotted on page 1 against the round-trip loss and the deviation of the cavity length from resonance. Note that the arm round-trip loss does not affect the sign of the reflection from the cavity, at least over the range of values taken here.


PRC Gain

The Michelson in the PRFPMI is assumed to be perfectly aligned, so one end of the PRC is taken to be the arm cavity reflection calculated above, at resonance. The other end of the cavity is modeled as a single mirror whose effective transmission lumps together that of the PRM, 2 times PR2, and 2 times PR3. The effective reflectivity of the PRM is then calculated as:

r_{\rm PRMeff} = \sqrt{1 - T_{\rm PRM} - 2T_{\rm PR2} - 2T_{\rm PR3}}

t_{\rm PRM} = \sqrt{T_{\rm PRM}}

Note that the field transmission of the PRM is calculated with the original PRM power transmission value, so that the PR2 and PR3 transmission losses do not artificially increase the field transmission of the PRM in our calculations. The field gain inside the PRC is then calculated as:

g = \frac{t_{\rm PRM}}{1 - r_{\rm PRMeff} r_{\rm arm}e^{2 i \omega L/c}}

From this, the power recycling cavity gain is calculated as:
G_{\rm PRC} = |g|^2

The variation of the PRC gain is shown on page 2 with respect to the arm cavity round-trip losses and the PR2 transmission. Note that a gain of ~40 is calculated for any PR2 transmission below 1000 ppm. The black vertical lines mark the optics whose transmission was measured. If V6-704 is used, the PRC gain would vary between 15 and 10 depending on the arm cavity losses. With the pre-2010 ITM, the PRC gain would vary between 30 and 15.


LO Power

The LO power when the PRFPMI is locked is calculated by assuming 1 W of input power to the IMC. The IMC is assumed to pass 10% of the power (L_{\rm IMC}=0.1). This power is then multiplied by the PRC gain and the PR2 transmission to get the LO power.

P_{\rm LO, PRFPMI} = P_{\rm in} L_{\rm IMC}G_{\rm PRC}T_{\rm PR2}

Page 3 shows the result of this calculation. Note that for V6-704 the LO power would be between 35 mW and 15 mW, and for the pre-2010 ITM it would be between 15 mW and 5 mW, depending on the arm cavity losses.

The power available during alignment is simply given by:
P_{\rm LO, align, PRM} = P_{\rm in} L_{\rm IMC} T_{\rm PRM} T_{\rm PR2}

P_{\rm LO, align, no PRM} = P_{\rm in} L_{\rm IMC} T_{\rm PR2}

If we remove the PRM from the input path, we would have sufficient light to work with for both candidate optics.


I have attached the notebook used to do these calculations. Please let me know if you find any mistake in this calculation.
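For quick reference, here is a condensed sketch of the formulas above in Python (the transmission and loss values are illustrative placeholders, not the measured numbers; with these placeholders it roughly reproduces the V6-704 numbers quoted above):

import numpy as np

# Illustrative placeholder parameters (NOT the measured values)
T_ITM = 0.014      # ITM power transmission
T_ETM = 13e-6      # ETM power transmission
L_RT  = 50e-6      # arm round-trip scattering loss (50-200 ppm scanned above)
T_PRM = 0.0565     # PRM power transmission (from G_PRC,crit = 1/T_PRM = 17.7)
T_PR2 = 0.022      # PR2 power transmission (V6-704-like, assumed)
T_PR3 = 50e-6      # PR3 power transmission (assumed)
P_in, L_IMC = 1.0, 0.1   # input power [W] and IMC throughput as assumed above

# Arm cavity reflection on resonance, loss absorbed into the ETM
r_ITM = np.sqrt(1 - T_ITM)
r_ETM_eff = np.sqrt(1 - T_ETM - L_RT)
r_arm = (-r_ITM + r_ETM_eff) / (1 - r_ITM * r_ETM_eff)   # phase factor = 1

# PRC gain with PR2/PR3 losses lumped into an effective PRM reflectivity
r_PRM_eff = np.sqrt(1 - T_PRM - 2 * T_PR2 - 2 * T_PR3)
g = np.sqrt(T_PRM) / (1 - r_PRM_eff * r_arm)
G_PRC = np.abs(g) ** 2

print("G_PRC = %.1f" % G_PRC)                                           # ~15.7
print("P_LO locked = %.1f mW" % (1e3 * P_in * L_IMC * G_PRC * T_PR2))   # ~34 mW
print("P_LO align = %.0f uW" % (1e6 * P_in * L_IMC * T_PRM * T_PR2))    # ~124 uW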

  16599   Wed Jan 19 18:15:34 2022   Yehonathan   Update   BHD   AS1 resurrection

Today I suspended AS1. Anchal helped me with the initial hanging of the optic. Attachments 1 and 2 show the roll balance and side magnet height. Attachment 3 shows the motion spectra.

The major peaks are at 668mHz, 821mHz, 985mHz.

For some reason, I was not able to balance the pitch with 2 counterweights as I did with the rest of the thin optics (and with AS1 before). Inserting the weights all the way was not enough to bring the reflection up to the iris aperture used for preliminary balancing. I was able to do so with a single counterweight (attachment 4). I'm afraid something is wrong here, but I couldn't find anything obvious. It is also worth noting that the yaw resonance at 668 mHz is different from the 755 mHz we got for all the other optics. Maybe one or more of the wires is not clamped correctly on the side blocks?

The OSEMs were pushed into the OSEM plate, and the plates were adjusted such that the magnets are at the centers of the face OSEMs. The wires were clamped and cut from the winches. The SOS is ready for installation.

Also, I added a link to the OSEM assignments spreadsheet to the suspension wiki.

I uploaded some pictures of the PEEK EQ stops, both on the thick and thin optics, to the Google Photos account.

  16600   Wed Jan 19 21:39:22 2022   Tega   Update   SUS   Temporary watchdog

After some work on the reference database file, we now have a template for temporary watchdog implementation for LO1 located here "/cvs/cds/caltech/target/c1susaux/C1_SUS-AUX_LO1.db".

Basically, what I have done is swap the EPICS asyn analog input readouts for the COIL and OSEM channels to accessible medm channels, and write the watchdog enable/disable out to the coil filter SW2 switch. Everything else in the file remains the same. I am worried about some of the conversions, but the only way to know more is to see the output on the medm screen.

To test, I restarted c1su2, but this did not make the LO1 database available, so I am guessing that we also need to restart c1sus, which can be done tomorrow.

  16601   Thu Jan 20 00:26:50 2022   Koji   Update   SUS   Temporary watchdog

As the new db file is made for c1susaux: 1) it needs to be configured to be read by c1susaux, 2) it requires restarting c1susaux, 3) it needs to be recorded by FB, and 4) that requires restarting FB.
(This may not be the exact procedure, but conceptually it is like this.)

cf.
https://wiki-40m.ligo.caltech.edu/How_To/Add_or_rename_a_daq_channel

 

  16602   Thu Jan 20 01:48:02 2022   Koji   Update   BHD   PR2 transmission calculation updated

The IMC is not that lossy. The IMC output is supposed to be ~1 W.

The critical coupling condition is G_PRC = 1/T_PRM = 17.7. If we really have L_arm = 50 ppm, we will be very close to critical coupling. Maybe we are OK with such a condition, as our testing time would be much longer in PRMI than in PRFPMI in the first phase. If the arm loss turns out to be higher, we'll be saved by falling into undercoupling.
When the PRC is close to critical coupling (like the 50 ppm case), we roughly have 2 x T_PRC and T_arm almost equal, so each beam will have ~1/3 of the input power, i.e. ~300 mW. That's probably too much even for the two OMCs (i.e. 4 DCPDs). That's OK; we can reduce the input power by a factor of 3~5.

Quote:

LO Power

LO power when PRFPMI is locked is calculated by assuming 1 W of input power to IMC. IMC is assumed to let pass 10% of the power (L_{\rm IMC}=0.1).

 

  16603   Thu Jan 20 12:10:51 2022   Yehonathan   Update   BHD   AS1 resurrection

I was wondering whether I should take AS1 down to redo the wire clamping on the side blocks. I decided to take the OpLev spectrum again to be more certain. Attachments 1, 2, and 3 show 3 spectra taken at different times.

They all show the same peaks: 744 mHz, 810 mHz, and 1 Hz. So I think something went wrong with yesterday's measurement. I will not take AS1 down for now. We still need to apply some glue to the counterweight.

  16604   Thu Jan 20 16:42:55 2022   Paco   Update   BHD   AS4 OSEMs installation - part 2

[Paco]

It turns out the shifting was likely due to the table level. Because I didn't take care the first time to "zero" the level of the table as I tuned the OSEMs, the installation was bogus. So today I took the time to:

a) Shift AS4 close to the center of the table.

b) Use the clean level tool to pick a reference plane. To do this, I iteratively placed two counterweights (from the ETMX flow bench) at two locations on the breadboard such that the table was nominally balanced in this configuration at some reference plane z0. The counterweight placement is of course temporary; as soon as we make further changes, such as the final placement of the AS4 SOS or the installation of AS1, their positions will need to change to recover z = z0.

c) Install OSEMs until I was happy with the damping. ** Here, I noticed the new suspension screens had been misconfigured (probably c1sus2 rebooted and we don't have any BURT snapshot), so I quickly restored the input and output matrices.


SUSPENSION STATUS UPDATED HERE

  16606   Thu Jan 20 17:21:21 2022   Tega   Update   SUS   Temporary watchdog

Temp software watchdog now operational for LO1 and the remaining optics!

Koji helped me understand how to write to switches, and we tried for a while to turn off only the output switch of the filters instead of writing a zero, which resets everything in the filter.

Eventually, I was able to move this effort forward by realizing that I can pass the control trigger along multiple records using the forwarding option 'FLNK'. When I added this field to the trigger block, record(dfanout,"C1:SUS-LO1_PUSH_ALL"), and to the subsequent calculation blocks, record(calcout,"C1:SUS-LO1_COILSWa") through record(calcout,"C1:SUS-LO1_COILSWd"), everything started working correctly.

Quote:

After some work on the reference database file, we now have a template for temporary watchdog implementation for LO1 located here "/cvs/cds/caltech/target/c1susaux/C1_SUS-AUX_LO1.db".

Basically, what I have done is swap the EPICS asyn analog input readout for the COIL and OSEM to accessible medm channels, then write out watchdog enable/disable to coil filter SW2 switch. Everything else in the file remains the same. I am worried about some of the conversions but the only way to know more is to see the output on the medm screen.

To test, I restarted c1su2 but this did not make the LO1 database available, so I am guessing that we also need to restart the c1sus, which can be done tomorrow.

 

  16607   Thu Jan 20 17:34:07 2022   Koji   Update   BHD   V6-704/705 Mirror now @Downs

The PR2 candidate V6-704/705 mirrors (Qty2) are now @Downs. Camille picked them up for the measurements.

To identify the mirrors, I labeled them (on the box) as M1 and M2. Also, the HR side was checked to be the side pointed to by the arrow mark on the barrel; e.g., Attachment 1 shows the HR side up.

  16608   Thu Jan 20 18:16:29 2022   Anchal   Update   BHD   AS4 set to trigger free swing test

AS4 is set to go through a free-swinging test at 10 pm tonight. We have used this script (Git/40m/scripts/SUS/InMatCalc/freeSwing.py) reliably in the past, so we expect no issues; it has an error-catching block that restores all changes at the end of the test or if something goes wrong.

To access the test, on allegra, type:

tmux a -t AS4

Then you can kill the script if required with Ctrl-C; it will restore all changes while exiting.
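For reference, the kick-and-restore pattern the script follows looks roughly like this (a minimal sketch, not the actual freeSwing.py; the channel names, kick size, and durations are made up):

import time
import epics

OPTIC = "AS4"
# Channels to save and restore (names assumed for illustration)
chans = ["C1:SUS-%s_LATCH_OFF" % OPTIC, "C1:SUS-%s_ULCOIL_OFFSET" % OPTIC]
saved = {ch: epics.caget(ch) for ch in chans}

try:
    epics.caput(chans[0], 0)       # disable damping via the watchdog latch
    epics.caput(chans[1], 30000)   # kick the optic with a coil offset
    time.sleep(1)
    epics.caput(chans[1], 0)
    time.sleep(3600)               # free-swing data collection period
finally:
    for ch, val in saved.items():  # restore on normal exit, error, or Ctrl-C
        epics.caput(ch, val)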

  16611   Fri Jan 21 12:46:31 2022   Tega   Update   CDS   SUS screen debugging

All done (almost)! I still have not sorted out the issue of the pitch and yaw gains growing together when modified with a ramping time. An image of the custom ADC and DAC panel is attached.

 

Quote:

Seen. Thanks.

 
Quote:

Indicated by the red arrow:
Even when the side damping servo is off, the number appears at the input of the output matrix

Indicated by the green arrows:
The face magnets and the side magnets use different ADCs. How about opening a custom ADC panel that accommodates all ADCs at once? Same for the DAC.

Indicated by the blue arrows:
This button opens a custom FM window. When the pitch gain was modified with a ramping time, the pitch and yaw gains grow at the same time even though only the pitch gain was modified.

Indicated by the orange circle:
The numbers are not indicated here, but they are input-related numbers (for watchdogging) rather than output-related numbers. It is confusing to place them here.

 

 

  16612   Fri Jan 21 14:51:00 2022   Koji   Update   BHD   V6-704/705 Mirror now @Downs

Camille @Downs measured the surfaces of these M1 and M2 mirrors using the Zygo.

Result of the ROC measurements:
M1: ROC = 2076 m (convex)
M2: ROC = 2118 m (convex)
Here are screenshots. One file shows the entire surface and the other shows the central 30mm.
  16613   Fri Jan 21 16:40:10 2022   Anchal   Update   BHD   AS4 Input Matrix Diagonalization performed.

The free swinging test was successful. I ran the input matrix diagonalization code (/opt/rtcds/caltech/c1/Git/40m/scripts/SUS/InMAtCalc/sus_diagonalization.py) on the AS4 free-swinging data collected last night. The logfile and results are stored in /opt/rtcds/caltech/c1/Git/40m/scripts/SUS/InMAtCalc/AS4 directory. Attachment 1 shows the power spectral density of the DOF basis data (POS, PIT, YAW, SIDE) before and after the diagonalization. Attachment 2 shows the fitted peaks.


Free Swinging Resonance Peak Fits
        Resonant Frequency [Hz]   Q     A
POS     1.025                     337   3647
PIT     0.792                     112   3637
YAW     0.682                     227   1228
SIDE    0.993                     164   3094

AS4 New Input Matrix
        UL       UR       LR       LL       SIDE
POS     0.844    0.707    0.115    0.253    -1.646
PIT     0.122    0.262    -1.319   -1.459   0.214
YAW     1.087    -0.901   -0.874   1.114    0.016
SIDE    0.622    0.934    0.357    0.045    3.822

The new matrix was loaded into the AS4 input matrix, and at the very least it resulted in no control-loop oscillations. I'll compare the performance of the loops soon.

 

  16616   Mon Jan 24 17:12:27 2022   rana   Update   BHD   AS4 Input Matrix Diagonalization performed.

I think our suspension input matrix diagonalization is usually not so robust because we only choose an inverting matrix that gives the best separation for a single suspension alignment.

i.e. we have seen in the past that adjusting the bias for the alignment makes the matrix inversion not work well. Sometimes people turn OFF the alignment bias before making the ringdown, and that makes the whole measurement invalid.

This is because the sensitivity of the OSEMs to longitudinal and/or transverse motion is significantly different for different alignment.

I wonder if there's a way we can choose a better matrix by putting in random gain errors on the shadow sensor signals and then finding the matrix which gives the best diag under an ensemble of gain errors.
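A toy version of that test might look like this (all numbers invented: a 4x4 sensing matrix, 5% random gain errors, and the worst off-diagonal leakage as the score):

import numpy as np

rng = np.random.default_rng(0)

S = np.eye(4) + 0.2 * rng.standard_normal((4, 4))  # toy "true" sensing matrix

def worst_leakage(M, n_trials=500, gain_err=0.05):
    # Worst off-diagonal coupling of M @ diag(g) @ S over random sensor gains g
    worst = 0.0
    for _ in range(n_trials):
        g = 1 + gain_err * rng.standard_normal(4)
        P = M @ np.diag(g) @ S
        P = P / np.diag(P)[:, None]        # normalize each row by its diagonal
        worst = max(worst, np.abs(P - np.eye(4)).max())
    return worst

M0 = np.linalg.inv(S)                      # conventional choice: plain inverse
best, score = M0, worst_leakage(M0)
for _ in range(200):                       # crude random search around M0
    trial = best + 0.01 * rng.standard_normal((4, 4))
    s = worst_leakage(trial)
    if s < score:
        best, score = trial, s

print("plain inverse:  worst leakage %.3f" % worst_leakage(M0))
print("ensemble-tuned: worst leakage %.3f" % score)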

  16617   Mon Jan 24 17:58:21 2022   Yehonathan   Update   BHD   PR2 Suspension

I picked up the PR2 mirrors (labeled M1, M2) from Anchal's table and took them to the cleanroom. On inspection, I spotted some dust particles on M1. I wasn't able to remove them with clean air, so I decided to use M2, which looked much cleaner. I wasn't able to discern any wedge angle on the optic. I inserted the optic into a thin-optic adapter. The optic is thicker than I expected, so I used long screws for the mirror clamping. I expect the pitch balance to shift towards the front of the mirror, so I assembled only 1 counterweight for now. The side blocks with wires in them were installed.

I engraved the SOS and installed the winches on it. Paco came in and helped me hang the optic. Looking at the wire hanging angle, I realized that 1 counterweight at the front is not enough. I installed a second counterweight at the back and observed that I can cross the balancing point.

I locked the EQ stops. Suspension work continues tomorrow...

  16619   Mon Jan 24 20:48:48 2022   Anchal   Update   BHD   AS4 Input Matrix Diagonalization performed.

I agree, that's an interesting idea. But does that mean that there is an always-working inverse matrix solution, or that any solution will be vulnerable to the alignment biases?

I think we can also calculate the matrix rotation required as a function of the dc biases and do that rotation in the simulink model.

Quote:

I think our suspension input matrix diagonalization is not so robust usually because we only choose a inverting matrix which gives the best separation for a single suspension alignment.

i.e. we have seen in the past that adjusting the bias for the alignment makes the matrix inversion not work well. Sometime people turn OFF the alignment bias before making the ringdown and that makes the whole measurement invalid.

This is because the sensitivity of the OSEMs to longitudinal and/or transverse motion is significantly different for different alignment.

I wonder if there's a way we can choose a better matrix by putting in random gain errors on the shadow sensor signals and then finding the matrix which gives the best diag under an ensemble of gain errors.

 

  16624   Tue Jan 25 18:37:12 2022   Yehonathan   Update   BHD   PR2 Suspension

PR2's side magnet height was adjusted and its roll was balanced (attachments 1, 2). I verified that the OpLev beam is still aligned. The pitch was balanced: first using an iris for rough adjustment, then with the QPD. I locked the counterweight setscrew.

I turned off the HEPAs, damped PR2, and measured the QPD spectra (attachment 3). Major peaks are at 690 mHz, 953 mHz, and 1.05 Hz. I screwed the lower OSEM plate back on. The wires were clamped to the suspension block and cut, and the winch adapter plate was removed. I wanted to push the OSEMs into the OSEM plates, but the wiki is down so I can't tell what the plan was; this will have to wait for tomorrow. Also, here as with AS1, we need to apply glue to the counterweights.

  16634   Mon Jan 31 10:39:19 2022   Jordan   Update   VAC   TP1 and Manual Gate Valve Removal

Jordan, Chub

Today, Chub and I removed TP1 and the failed manual gate valve from the pumping spool.

First, P2 needed to be vented in order to remove TP1. TP1 has a purge valve on the side of the pump, which we slowly opened, bringing the P2 volume up to atmosphere. Although this was not vented with dry air/N2, using the purge valve eliminated the need to vent the RGA volume.

Then we disconnected the TP1 foreline and removed TP1 plus the 8" flange reducer, then the gate valve. All of the removed hardware looked good, so there is no need to replace bolts/nuts; only new gaskets are needed. TP1 and the failed valve are sitting on a cart wrapped in foil next to the pumping station.

  16636   Tue Feb 1 20:16:09 2022   Tega   Update   BHD   PR2 candidate mirror analysis

git repo: git@git.ligo.org:tega-edo/charmirrormap.git

The analysis code takes in a set of raw images (10 in our case) for each mirror, calculates the Zernike aberration coefficients for each image, and then takes their average. This average is used to reconstruct the mirror height map. Finally, the residual error between the reconstructed image and the raw data is calculated.

We repeat the analysis for different fields of view (FoV), namely 10 mm, 20 mm, 30 mm, 40 mm, and 46.5 mm, and save the results in the output folder of the repo.

The analysis output for a 10 mm FoV aperture at the mirror center is shown in the attachment. The three images show the input data, the reconstructed mirror surface map, and the residual error.
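For a feel of what the code does, the core of it is essentially a least-squares fit of Zernike modes on the unit disk followed by reconstruction. A minimal sketch (only the first 6 modes, with stand-in random data; not the repo code):

import numpy as np

def zernike_design(n_pix):
    # First 6 Zernike modes (piston, tilts, defocus, astigmatisms)
    # sampled on a unit-disk grid; returned as columns of a design matrix.
    y, x = np.mgrid[-1:1:1j * n_pix, -1:1:1j * n_pix]
    r, t = np.hypot(x, y), np.arctan2(y, x)
    mask = r <= 1.0
    modes = [np.ones_like(r), r * np.cos(t), r * np.sin(t),
             2 * r**2 - 1, r**2 * np.cos(2 * t), r**2 * np.sin(2 * t)]
    return np.column_stack([m[mask] for m in modes]), mask

def fit_zernike(height_map):
    A, mask = zernike_design(height_map.shape[0])
    coeffs, *_ = np.linalg.lstsq(A, height_map[mask], rcond=None)
    recon = np.full(height_map.shape, np.nan)
    recon[mask] = A @ coeffs
    return coeffs, recon, height_map - recon  # coefficients, map, residual

# As in the text: average the coefficients over the 10 raw maps, then
# reconstruct once from the averaged coefficients (stand-in data here).
maps = [np.random.randn(128, 128) for _ in range(10)]
avg_coeffs = np.mean([fit_zernike(m)[0] for m in maps], axis=0)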

  16643   Thu Feb 3 10:25:59 2022   Jordan   Update   VAC   TP1 and Manual Gate Valve Install

Jordan, Chub

Chub and I installed the new manual gate valve (Nor-Cal GVM-6002-CF-K79) and reinstalled TP1. The new gate valve was placed with the sealing side towards the main 40m volume; then TP1 was installed on top and the foreline reattached to TP1.

This valve has a hard stop in the actuator to prevent over torquing.

 

  16644   Thu Feb 3 14:47:12 2022   Chub   Update   Electronics   new UPS in place

Received the new 1100VA APC UPS today and placed it at the bottom of the valve rack.  I'd connected the battery and plugged the unit into the AC outlet, but did not turn it on due to the power outage this weekend.

  16646   Fri Feb 4 10:04:47 2022   Chub   Update   General   dish soap and clean scrub sponges!

Bought dish soap and scrub sponges today and placed them under the sink with the other dish supplies.

  16648   Mon Feb 7 09:00:26 2022   Paco   Update   General   Scheduled power outage recovery

[Paco]

Started recovering from the scheduled (Feb 05) power outage, basically time-reversing through this list.


== Office area ==

  • Power martian network switches, WiFi routers on the north-rack.
  • Power windows (CAD) machine on.

== Main network stations ==

  • Power on nodus, try ping (fail).
  • Power on network switches, try ping (success), try ssh controls@nodus.ligo.caltech.edu (success).
  • Power on chiara to serve names for other stations, try ssh chiara (success).
  • Power on fb1, try ping (success), try ssh fb1 (success).
  • Power on paola (xend laptop), viviana (yend laptop), optimus, megatron.

== Control workstations ==

  • Power on zita (success)
  • Power on giada (success), run system upgrade.
  • Power on donatella (success)
  • Power on allegra (fail)  **
  • Power on pianosa (success)
  • Power on rossa (success)
  • From nodus, started elog (success).

== PSL + Vertex instruments ==

  • Turn on newport PD power supplies on PSL table.
  • Turn TC200 temp controller on (setpoint --> 36.9 C)
  • Turn on two oscilloscopes in PSL table.
  • Turn on PSL (current setpoint --> 2.1 A, other settings seem nominal)
  • Turn on Thorlabs HV pzt supply.
  • Turn on ITMX OpLev / laser instrument AC strip.

== YEND and XEND instruments ==

  • Turn XEND AUX pump on (current setpoint --> 1.984 A)
  • Turn XEND AUX SHG oven on (setpoint --> 37.1 C) (see green beam)
  • Turn XEND AUX shutter controller on.
  • Turn DCPD supply and OpLev supply AC strip on.
  • Turn YEND AUX pump on (fail) *
    • With the controller on STDBY, I tried setting up the current but got HD FAULT (or according to the manual this is what the head reports when the diode temperature is too high...)
    • Upon power cycling the controller, even the controller display stopped working... YAUX controller + head died? maybe just the diode? maybe just the controller?
      • I borrowed a spare LW125 controller from the PSL table (Yehonathan pointed me to it) and swapped it in.
      • Got YEND AUX to lase with this controller, so the old controller is busted but at least the laser head is fine.
      • Even saw SHG light. We switched the laser head to "STDBY" (so it remains warm) and took the faulty controller out of there.
  • Turn YEND AUX SHG oven on (setpoint --> 35.7 C)
  • Turn YEND AUX shutter controller on.

== YARM Electronic racks ==

== XARM Electronic racks ==

 


* Top priority, this needs to be fixed.

** Non-priority, but to be debugged

  16649   Mon Feb 7 15:32:48 2022   Yehonathan   Update   General   Y End laser controller

I went to the Y end. The AUX laser was on standby, so I pushed the Standby button. The laser turned on and there was some green light. However, the controller displayed the message "CABLE?", which according to the manual means that the laser head is powered but there is no control over the laser (e.g. the control cable is disconnected). I turned off the controller and disconnected both the power and control cables. I put them back and turned the controller back on.

I pushed the Standby button; the laser turned on and this time the controller displayed the laser head's state. I was able to change the current/temperature. The problem seems to be resolved.

  16650   Mon Feb 7 16:14:37 2022   Tega   Update   Computers   realtime system reboot problem

I was looking into plotting the temperature sensor data trend and into why we have had no frame data written to file (on /frames) since Friday, and I noticed that the FE models were not running. I spoke to Anchal about it, and he mentioned that we are currently unable to ssh into the FE machines, so we have been unable to start the models. I recalled that the last time we encountered this problem Koji resolved it on chiara, so I searched the elog for Koji's fix and found it here: https://nodus.ligo.caltech.edu:8081/40m/16310. I followed the procedure, restarted the c1sus and c1lsc machines, and we are now able to ssh into them. I also restarted the remaining FE machines and confirmed that we can ssh into them. Then, to start the models, I ssh'd into each FE machine (c1lsc, c1sus, c1ioo, c1iscex, c1iscey, c1sus2) and ran the command

rtcds start --all

to start all the models on the machine. This worked for all the FE machines but failed for c1lsc. For some reason, after starting the first few models - the IOP model c1x04, then c1lsc and c1ass - the ssh connection to the machine drops. When we try to ssh into c1lsc after this event, we get the following error: "ssh: connect to host c1lsc port 22: No route to host". I reset the c1lsc machine and decided to start only the IOP model for now. I'll wait for Anchal or Paco to resolve this issue.


[Anchal, Tega]

I informed Anchal of the problem and asked if he could take a look. It turns out 9 FE models across 3 FE machines (c1lsc, c1sus, c1ioo) have interdependencies that require careful consideration when starting the models. In a nutshell, we need to first start the IOP models on all three FE machines before we start the other models on those machines. So we turned off all the models and shut down the FE machines, mainly because of a DAQ issue: the DC (data concentrator) indicator was not initialized. Anchal looked around in fb1 to figure out why this was happening and eventually discovered that it was the same as the mx_stream issue encountered earlier on the fb1 clone (https://nodus.ligo.caltech.edu:8081/40m/16372). So we restarted fb1 to see if things would clear up, given that the chiara dhcp server is now working fine. Upon restart of fb1, we used the info in a previous elog that shows whether the DAQ network is working or not, i.e. we ran the command

$ /opt/mx/bin/mx_info
MX:fb1:mx_init:querying driver:error 5(errno=2):No MX device entry in /dev.

The output shows that the MX device was not initialized during the reboot, as can also be seen below.

$ sudo systemctl status daqd_dc -l

● daqd_dc.service - Advanced LIGO RTS daqd data concentrator
   Loaded: loaded (/etc/systemd/system/daqd_dc.service; enabled)
   Active: failed (Result: exit-code) since Mon 2022-02-07 18:02:02 PST; 12min ago
  Process: 606 ExecStart=/usr/bin/daqd_dc_mx -c /opt/rtcds/caltech/c1/target/daqd/daqdrc.dc (code=exited, status=1/FAILURE)
 Main PID: 606 (code=exited, status=1/FAILURE)

Feb 07 18:01:56 fb1 systemd[1]: Starting Advanced LIGO RTS daqd data concentrator...
Feb 07 18:01:56 fb1 systemd[1]: Started Advanced LIGO RTS daqd data concentrator.
Feb 07 18:02:00 fb1 daqd_dc_mx[606]: [Mon Feb  7 18:01:57 2022] Unable to set to nice = -20 -error Unknown error -1
Feb 07 18:02:00 fb1 daqd_dc_mx[606]: Failed to do mx_get_info: MX not initialized.
Feb 07 18:02:00 fb1 daqd_dc_mx[606]: 263596
Feb 07 18:02:02 fb1 systemd[1]: daqd_dc.service: main process exited, code=exited, status=1/FAILURE
Feb 07 18:02:02 fb1 systemd[1]: Unit daqd_dc.service entered failed state.


NOTE: We commented out the line

Restart=always

in the file "/etc/systemd/system/daqd_dc.service" in order to see the error, BUT MUST UNDO THIS AFTER THE PROBLEM IS FIXED!

  16651   Mon Feb 7 16:53:02 2022   Koji   Update   General   Scheduled power outage recovery

I went to the X end and found it was warm. Turned out that not all the A/Cs were on. They were turned on now.

  16652   Wed Feb 9 11:56:24 2022   Anchal   Update   General   Bringing back CDS

[Anchal, Paco]

Bringing back CDS took a lot of work yesterday. I'm gonna try to summarize the main points here.


mx_start_stop

For some reason, fb1 was not able to mount the mx devices automatically on system boot. This is an issue I faced earlier on the fb1 clone too. The fix is to run the script:

controls@fb1:/opt/mx/sbin/mx_start_stop start

To make this persistent, I've configured a daemon (/etc/systemd/system/mx_start_stop.service) on fb1 to run once on system boot and mount the mx devices as above. We did not see this issue in later reboots yesterday.


gpstime

Next was the issue of the gpstime module being out of date on fb1. This issue is also known from the past and requires us to do the following:

controls@fb1:~ 0$ sudo modprobe -r gpstime
controls@fb1:~ 1$ sudo modprobe gpstime

Again, to make this persistent, I've configured a daemon (/etc/systemd/system/re-add-gpstime.service) on fb1 to run the above commands once on system boot. This corrected gpstime automatically, and we did not face these problems again.


time synchronization

Later we found that NTP time synchronization between fb1 and the FE computers was not working, and the main reason was that fb1 was unable to access the internet. As a rule of thumb, it is always a good idea to try pinging www.google.com on fb1 to ensure that it is connected to the internet. The issue had to do with fb1 not being able to find any nameserver. We fixed this by reloading the bind9 service on chiara a couple of times. We're not really sure why it wasn't working.

~>sudo service bind9 stop
~>sudo service bind9 start
~>sudo service bind9 status
* bind9 is running

After the above, we saw that the fb1 ntp server is working fine. You see the following output on fb1 when that is the case:

controls@fb1:~ 0$ ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
-table-moral.bnr 110.142.180.39   2 u  399  512  377  195.034  -14.618   0.122
*server1.quickdr .GPS.            1 u   67   64  377  130.483   -1.621   1.077
+ntp2.tecnico.ul 56.99.239.27     2 u  473  512  377  184.648   -0.775   2.231
+schattenbahnhof 129.69.1.153     2 u  365  512  377  144.848    3.841   1.092
 192.168.123.255 .BCST.          16 u    -   64    0    0.000    0.000   0.000

On the FE machines, timedatectl should show that the "NTP synchronized" field is yes. That wasn't happening even after we restarted the systemd-timesyncd service. After this, I just tried restarting all the FE computers and it started working.


CDS

We had removed all db9 enabling plugs on the new SOSs beforehand to keep coils off just in case CDS does not come back online properly.

Everything in CDS loaded properly except the c1oaf model, which kept showing the 0x2bad status. This meant that some IPC flags were red on c1sus, c1mcs, and c1lsc as well, but everything else was green. See attachment 1. I then burt-restored everything from the /opt/rtcds/caltech/c1/burt/autoburt/snapshots/2022/Feb/4/12:19 directory. This includes the snapshot of c1vac, which I added to autoburt that day. All burt restore statuses were green OK. I think we are now in a good state to start the watchdogs on the new SOSs and put back the db9 enabling plugs.


Future work:

When somebody gets time, we should make the custom service files in fb1:/etc/systemd/system/ symbolic links to a repo directory and version control these important services. We should also make sure that their dependencies and startup order are correctly configured. I might have done a half-assed job there since I only recently learned how to make unit files. We should do the same on nodus and chiara too. Our hope is that on one glorious day, the lab can be restarted without spending more than 20 min on booting up the computers and network.

 

  16653   Wed Feb 9 13:55:05 2022   Koji   Update   General   Bringing back CDS

Great recovery work and cleaning of the rebooting process.

I'm just curious: did you observe that the c1sus2 cards came up in a different numbering order than before, along with the power outage/cycling?

  16655   Wed Feb 9 16:43:35 2022   Paco   Update   General   Scheduled power outage recovery - Locking mode cleaner(s)

[Paco, Anchal]

  • We went in and measured the power after the power splitting HWP at the PSL table. Almost right before the PSL shutter (which was closed), when the PMC was locked we saw ~ 598 mW (!!)
  • Checking back on the ESP300, it seems the channel was not enabled even though the right angle was punched in, so we enabled it.
    • No change.
  • The power adjustment MEDM screen is not really working...
  • Going back to the controller, we pressed HOME on Axis 1 (our HWP) and saw it go to zero...
    • Now the power measured is ~ 78 mW.
  • Not sure why the MEDM screen didn't really work (this needs to be fixed later)

We proceeded to align the MC optics because all the offsets in the MC_ALIGN screen were zeroed. After opening the PSL shutter, we used values from last year as a reference and tried to steadily recover the alignment. The IMC lock remains at large.

  16657   Thu Feb 10 15:41:00 2022   Anchal   Update   General   Scheduled power outage recovery - Locking mode cleaner(s)

I found out that the ESP300 service needs to run in root mode for it to be able to connect to the USB port of the HWP motor controller. While making this change, I noticed that the channels hosted by c1psl might have a duplication conflict with some other channel-hosting computer, because a lot of them show the warning "Identical process variable names on multiple servers", which is not good. Someone should look into this conflict.

I added instructions on the power control MEDM screen, as it was very non-trivial to use. I have set the power such that C1:IOO-MC_RFPD_DCMON reads 5.6, which happened at C1:IOO-HWP_POS_SET = 2.29.

  16658   Thu Feb 10 17:57:48 2022   Anchal   Update   General   Scheduled power outage recovery - Locking mode cleaner(s)

Something is wrong with the video MUX. The system did not come back with full functionality. Even though we see the screens as they were before the power shutdown, we have lost the ability to switch any of the videos. I checked the wiki page about the video MUX, which told me we should be able to see the configuration screen at this link, but the page wasn't opening. I went and removed the power cable and put it back in. That brought back the configuration page. Still, I could not change any of the video feeds, though this time I could see the EPICS channel values (like C1:VID-QUAD1_4) change. I tried to go to the configuration page and change the matrix values from the control tab there. I found that the matrix was mislabeled, and while making the changes I started seeing a blue screen on QUAD1_3 (where MC2T was set before). I set QUAD1_3 (output 23) to MC2T (input 16), but no change. The EPICS values are also set properly, so I don't understand the reason for the blue screen. The same happened when I tried to use:

~>/opt/rtcds/caltech/c1/scripts/general/videoscripts videoswitch3 QUAD1_3 MC2T

Weirdly, this caused the QUAD1_4 screen to go blue. Running the following had no effect:

~>/opt/rtcds/caltech/c1/scripts/general/videoscripts videoswitch3 QUAD1_4 MCR

So I'm not sure what to do. This really needs to be fixed! I wanted to see the MC2F camera so that I could align the IMC; that was the whole reason for going down this rabbit hole. Help needed.

  16659   Thu Feb 10 19:03:23 2022   Koji   Update   General   Scheduled power outage recovery - Locking mode cleaner(s)

I came back to the 40m and started the investigation.

If I ping 192.168.113.92, it responds, but telnet (port 23) is rejected. I somehow tried ssh and it responded! I could even log in to the host using the usual password. Here is the prompt.

controls@nodus|~> ssh 192.168.113.92
controls@192.168.113.92's password:

...
controls@c1sus2:~ 0$

Oh no...

Looks like c1sus2 and the videomux have an IP address conflict.

Here are the useful ELOG links:

https://nodus.ligo.caltech.edu:8081/40m/4498

https://nodus.ligo.caltech.edu:8081/40m/4529

  16660   Thu Feb 10 19:46:37 2022   Koji   Update   General   Scheduled power outage recovery - Locking mode cleaner(s)

== Assign new IP address to c1sus2 ==

cf: [40m ELOG 16398] [40m ELOG 16396]

- Shutdown c1sus2 (Oh, no. This killed c1lsc/c1sus/c1ioo... This should be taken care of later)

- Confirmed 192.168.113.87 is not alive

- Go to chiara
- Modify /diskless/root/etc/hosts

192.168.113.87  c1sus2 c1sus2.martian

- Modify /etc/dhcp/dhcpd.conf

host c1sus2 {
  hardware ethernet 00:25:90:06:69:C2;
  fixed-address 192.168.113.87;
}

- Modify /var/lib/bind/martian.hosts

c1sus2          A    192.168.113.87
videomux        A    192.168.113.92

- Modify /var/lib/bind/rev.113.168.192.in-addr.arpa

87            PTR    c1sus2.martian
92            PTR    videomux.martian

- Reload/restart bind9 / dhcpd. Run the following command

sudo service bind9 reload
sudo service isc-dhcp-server restart

- Restart c1sus2 and confirm that the IP address actually changed

controls@c1sus2:~ 0$ /sbin/ifconfig
eth0      Link encap:Ethernet  HWaddr 00:25:90:06:69:c2
          inet addr:192.168.113.87  Bcast:192.168.113.255  Mask:255.255.255.0
...

== Restart c1lsc / c1sus /c1ioo ==

- Reboot c1lsc/c1sus/c1ioo

- Go to scripts/cds

- Run startC1LSC.sh and follow the instructions

 

  16661   Thu Feb 10 21:10:43 2022   Koji   Update   General   Video Mux setting reset

Now the video matrix is responding correctly and the web interface shows up. (Attachment 1)

Also, the video buttons respond as usual. I pushed the Locking Template button to bring the settings back to nominal. (Attachment 2)

  16663   Thu Feb 10 21:51:02 2022   Koji   Update   CDS   [Solved] Huge random numbers flowing into ETMX/ETMY ASC PIT/YAW

Huge random numbers were flowing into the ETMX/ETMY ASC PIT/YAW inputs. Because of this, I could not damp the ETMX/ETMY suspensions at the beginning of the recovery from rebooting. (Attachment 1)
By turning off the outputs of the ASC filters, the mirrors were successfully damped.

Looking at the FE model view of the end RTSs, there were two possibilities: (Attachment 2)

- They are coming from RFM connection
- They are coming from ASXASY

ASX/ASY are not active, and I could not see anything producing these numbers. Burtrestore didn't help.

The remaining possibilities were something on the other side of the RFM, or corruption of the RFM signal.

- Looking at the RFM model (Attachment 3), the ASC signals come from ASS and IOO. The ASS path has filter modules (C1:RFM-ETMX_PIT etc.). These FMs are quiet and not guilty.

- Why do we have the RFM from IOO? I went to IOO and found the new ASC (WFS) model there. I hadn't realized the presence of this model. In fact, the ASC screen showed that these random numbers were flowing into the end SUSs.
So I did a burtrestore of c1iooepics. Alas! They are gone.

Now I can go home.
