Wed Aug 17 07:35:48 2022, yuta, Bureaucracy, General, My wish list for IFO commissioning

FPMI related
- Better suspension damping HIGH
- Investigate ITMX input matrix diagonalization (40m/16931)
- Output matrix diagonalization
* FPMI lock is not stable, only lasts a few minutes or so. MICH fringe is too fast; 5-10 fringes/sec in the evening.
- Noise budget HIGH
- Calibrate error signals (actually already done with sensing matrix measurement 40m/17069)
- Make a sensitivity curve using error and feedback signals (actuator calibration 40m/16978)
* See if optical gain and actuation efficiency make sense. REFL55 error signal amplitude is sensitive to cable connections.
- FPMI locking
- Use CARM/DARM filters, not XARM/YARM filters
- Remove FM4 belly
- Automate lock acquisition procedure
- Initial alignment scheme
- Investigate which suspensions drift the most
- Scheme compatible with BHD alignment
* These days, we have to align almost from scratch every morning. Empirically, TT2 seems to recover LO alignment and PR2/3 seem to recover Yarm alignment (40m/17056). Xarm seems to be stable.
- ALS
- Install alignment PZTs for Yarm
- Restore ALS CARM and DARM
* Green seems to be useful also for initial alignment of IR to see if arms drifted or not (40m/17056).
- ASS
- Suspension output matrix diagonalization to minimize pitch-yaw coupling (current output matrix is pitch-yaw coupled 40m/16915)
- Balance ITM and ETM actuation first so that ASS loops will be understandable (40m/17014)
- Suspension calibrations
- Calibrate oplevs
- Calibrate SUSPOS/PIT/YAW/SIDE signals (40m/16898)
* We need better understanding of suspension motions. Also good for A2L noise budgeting.
- CARM servo with Common Mode Board
- Do it with single arm first

BHD related
- Better suspension damping HIGH
- Investigate LO2 input matrix diagonalization (40m/16931)
- Output matrix diagonalization (almost all new suspensions 40m/17073)
* BHD fringe speed is too fast (~100 fringes/sec?), LO phase locking saturates (40m/17037).
- LO phase locking
- With better suspensions
- Measure open loop transfer function
- Try dither lock with dithering LO or AS with MICH offset (single modulation)
- Modify c1hpc/c1lsc so that it can modulate BS and do double demodulation, and try double demodulation
- Noise Budget HIGH
- Calibrate MICH error signal and AS-LO fringe
- Calibrate LO1, LO2, AS1, AS4 actuation using ITM single bounce - LO fringe
- Check BHD DCPD signal chain (DCPD gives negative output when fringes are too fast; 40m/17067)
- Make a sensitivity curve using error and feedback signals
- AS-LO mode-matching
- Model what could be causing funny LO shape
- Model if having low mode-matching is bad or not
* The measured mode-matching of 56% seems too low to be explained by errors in the mode-matching telescope (40m/16859, 40m/17067).

IMC related
- WFS loops too fast (40m/17061)
- Noise Budget
- Investigate MC3 damping (40m/17073)
- MC2 length control path

Fri Aug 19 14:46:32 2022, Anchal, Update, SUS, Open loop transfer function measurements for local damping loops of BHD optics

[Anchal, Tega]

As a first step in characterizing all the local damping loops, we ran an open loop transfer function measurement for all BHD optics, taking transfer functions using band-limited (0.3 Hz to 10 Hz) Gaussian noise injected at the error points of the different degrees of freedom. Plots are in the git repo. I'll make them lighter and post here.

We have also saved the coherence of the excitation with the IN1 test points of the different degrees of freedom, which may later be used to determine the cross-coupling in the system.

The test ran automatically using the measSUSOLTF.py script. In principle, the script can run the test on all suspensions in parallel, but not in practice, because cdsutils.getdata apparently has a limit on how many real-time channels one can read simultaneously (we think it is 8 maximum). We could get around this by defining these test points as DQ channels, but that would probably upset the rtcds model as well. Maybe the thing to do is to separate the c1su2 model into two models handling 3 and 4 suspensions. But we are not sure if the limitation is due to fb, due to the DAQ network (which would persist even if we reduce the number of testpoints on one model), or due to the load on a single core of the FE machines.

The data is measured and stored here. We can do periodic tests and update data here.
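As an illustration of the analysis (this is not the actual measSUSOLTF.py code), here is a minimal sketch of how an OLTF can be estimated from the injected noise and the error-point response. The sample rate, segment length, and loop topology (noise summed in at the error point, so that IN1/EXC = 1/(1+G)) are assumptions:

```python
import numpy as np
from scipy import signal
# in the lab the time series would come from e.g.
# cdsutils.getdata([exc_channel, in1_channel], duration)

def oltf_from_injection(exc, err, fs=2048, tseg=64):
    """Estimate the open-loop TF G from a noise injection `exc` at the
    error point and the in-loop error signal `err` (IN1)."""
    nseg = int(fs * tseg)
    f, Pee = signal.welch(exc, fs=fs, nperseg=nseg)
    _, Pxe = signal.csd(exc, err, fs=fs, nperseg=nseg)
    H = Pxe / Pee                  # closed-loop TF estimate, CSD/PSD
    _, coh = signal.coherence(exc, err, fs=fs, nperseg=nseg)
    G = 1.0 / H - 1.0              # err = exc/(1+G) for this topology
    return f, G, coh               # keep coherence to judge data quality
```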

### Next steps:

• Run the test for old optics as well.
• Fit the OLTF model to the measured data, and divide by the digital filter transfer function to obtain the plant transfer function for each loop.
• Set maximum noise allowed in the local damping loop for each degree of freedom, and criteria for Q of the loop.
• Adjust gains and/or loop shapes to reach the requirements on all the suspensions in a quantitative manner.
• (optional) Add a BLRMS calculation stream in SUS models for monitoring loop performance and in-loop noise levels in the suspensions.
• More frequency resolution, please. (KA)
Sat Aug 20 20:26:10 2022, Anchal, Update, SUS, Open loop transfer function measurements for local damping loops of Core optics

I measured the OLTFs of the old optics today. I have reduced the file sizes of the plots and data now. It is interesting that reading 9 channels simultaneously from the c1mcs or c1sus models is allowed, even from both together. The situation with c1su2 is a bit unclear: I was earlier able to take measurements of 6 channels at once from c1su2, but now I can't read more than 1 channel simultaneously. This suggests that the limit is dictated by how much a single model is loaded, not by how much we are reading simultaneously. So if we split c1su2 into two models, we might be able to read more optics simultaneously, saving time and giving us the ability to measure for longer.

Attached are the results for all the core optics. Inferences will be made later in the week.

Note: some measurements have very low coherence in the IN2 channels over most of the damping frequency region; these loops need to be excited harder (e.g. PIT, POS, and YAW on the ITMs and ETMs).

Mon Aug 22 14:36:49 2022, rana, Update, SUS, Open loop transfer function measurements for local damping loops of Core optics

For damping and OL loops, we typically don't measure the TF like this because it takes forever and we don't need that detailed info for anything. Just use the step responses in the way we discussed at the meeting 2 weeks ago. There are multiple elog entries from me and others illustrating this. The measurement time is then only ~30 sec per optic, and you also get the cross-coupling for free. No need for test-point channels and overloading; just use the existing DQ channels and read back the response from the frames after the excitations are completed.
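For reference, a minimal sketch of what this looks like in practice: kick one DOF, read the ringdown back from the frames, and fit a damped sinusoid to get f0 and Q. The channel name, DQ rate, and initial guesses below are illustrative:

```python
import numpy as np
from scipy.optimize import curve_fit

def ringdown(t, a, f0, Q, phi, c):
    """Free decay of one DOF after a step/kick: damped sinusoid + offset."""
    tau = Q / (np.pi * f0)                 # amplitude decay time constant
    return a * np.exp(-t / tau) * np.cos(2 * np.pi * f0 * t + phi) + c

# y: e.g. C1:SUS-ITMX_SUSPOS_IN1_DQ fetched from the frames after the kick;
# t: the matching time axis
def fit_ringdown(t, y):
    p0 = [np.ptp(y) / 2, 1.0, 5.0, 0.0, np.mean(y)]  # ~1 Hz pendulum guess
    popt, _ = curve_fit(ringdown, t, y, p0=p0)
    return popt[1], popt[2]                          # f0 [Hz], Q
```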

Tue Aug 23 14:59:15 2022, JC, Update, Tools, New Toolbox at Y-End

A new toolbox has been placed at the Y-end! Each drawer has its label, so PLEASE put the tools back in their correct locations. In addition, each tool has an assigned toolbox, so PLEASE RETURN all tools to their designated toolbox. The tools can be distinguished by writing or heat shrink which corresponds to the color of the tool chest or location. Photo #2 is an example of how the tools have been marked.

Each toolbox from now on will contain a drawer for the following: Measurements, Allen Keys, Pliers and Cutters, Screwdrivers, Zipties and Tapes, Allen Ball Drivers, Crescent Wrenches, Clamps, and Torque Wrenches/Ratchets.

Wed Aug 24 10:49:43 2022, Cici, Update, General, Measuring DFD output/X-arm laser PZT TF with Moku

We measured the TF of the X-arm laser PZT using the Moku so we can begin fitting that data and hopefully create a digital filter to cancel out the PZT resonances.

-------------------------------------------------------------

We calculated the DFD calibration (V/Hz) using:

Vrf = 0.158 V (-6 dBm), Km = 1 (K_phi = Km*Vrf), cable length = 45 m, tau = cable length/(0.67*3*10^8 m/s) ~ 220 ns.
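In other words, assuming the usual delay-line discriminator response V = K_phi*sin(2*pi*f*tau), whose slope at the zero crossing is K_phi*2*pi*tau, the numbers above work out roughly as:

```python
import numpy as np

Vrf = 0.158                 # V peak at the mixer RF port (-6 dBm into 50 Ohm)
Km = 1.0                    # mixer constant
K_phi = Km * Vrf            # V/rad
tau = 45.0 / (0.67 * 3e8)   # 45 m cable, 0.67c propagation -> ~224 ns
slope = K_phi * 2 * np.pi * tau
print(f"tau = {tau*1e9:.0f} ns, DFD calibration ~ {slope:.2e} V/Hz")
# -> roughly 2.2e-7 V/Hz with these numbers
```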

We've taken some preliminary data and can see the resonances around 200-300 kHz.

---------------------------------------------------------

Next steps are taking more data around the resonances specifically, calibrating the data using the DFD calibration we calculated, and adjusting parameters in our model so we can fit the TF.

Wed Aug 24 12:02:24 2022, Paco, Update, SUS, ITMX SUS is sus UL glitches?

[Yehonathan, Paco]

This morning, while attempting to align the IFO to continue with noise budgeting, we noted the XARM lock was not stable and showed glitches in C1:LSC-TRX_OUT (arm cavity transmission). Inspecting the SUS screens, we found the ULSEN rms ~ 6 times higher than the other coils, so we opened an ndscope with the four face OSEM signals and overlaid the XARM transmission. We immediately noticed the ULSEN input is noisy, jumping around randomly, with the bigger glitches correlated with the arm cavity transmission glitches. This can be seen in Attachment #1.

## Signal chain investigation

We'll do a full signal-chain investigation of the ITMX SUS electronics to try and narrow down the issue, but the glitches seem to come and go... Is this from the gold satamp box? ...

Wed Aug 24 16:37:52 2022, Cici, Update, General, More DFD/AUX PZT resonance measurements

Some more measurements of the PZT resonances (now zoomed in!). I'm adjusting parameters on our model to try and fit it by hand a bit; it definitely still needs improvement, but not bad for a 2-pole, 2-zero fit for now. I don't have a way to get coherence data from the Moku yet, but I've got a variety of measurements and will hopefully use the standard deviation to find a good error estimate...

Thu Aug 25 15:24:06 2022, Paco, HowTo, Electronics, RFSoC 2x2 board -- fandango

[Paco, Chris Stoughton, Leo -- remote]

This morning Chris came over to the 40m lab to help us get the RFSoC board going. After checking out our setup, we decided to do a very basic series of checks to see if we can at least get the ADCs to run coherently (independent of the DACs). For this I borrowed the Marconi 2023B from inside the lab and set its output to 1.137 GHz, 0 dBm. Then, I plugged it into ADC1 and just ran the usual spectrum analyzer notebook on the rfsoc jupyter lab server. Attachments #1-2 show the screen-captured PSDs for ADCs 0 and 1 respectively, with the 1137 MHz peaks right where they should be.

Before this simple test, we actually reached out to Leo over at Fermilab for some remote assistance on building up our minimal working firmware. For this, Chris started a new Vivado project on his laptop and realized the rfsoc 2x2 board files are not included in it by default. To add them, we had to go into Tools, Settings and add the 2020.1 Vivado Xilinx shop board repository path for the rfsoc2x2 v1.1 files. After a little bit of struggling, uninstalling, reinstalling, and restarting Vivado, we managed to get into the actual overlay design. There, with Leo's assistance, we dropped in the Zynq MPSoC core (this includes the main interface drivers for the rfsoc 2x2 board). We then dropped in an RF converter IP block, which we customized to use the right PLL settings. The settings, in the System Clocking tab, were changed to have a 409.6 MHz Reference Clock (default was 122.88 MHz). This was not straightforward, as the default sampling rate of 2.00 GSPS was not integer-related, so we also had to update that to 4.096 GSPS. Then we saw that the maximum available Clock Out option was 256 MHz (we need >= 409.6 MHz), so Leo suggested we drop in a Clocking Wizard block to provide a 512 MHz clock input for the rfdc. The final settings are captured in Attachment #3. The Clocking Wizard was added and configured on its Output Clocks tab to provide a Requested Output Freq of 512 MHz; the final settings of the Clocking Wizard are captured in Attachment #4. Finally, we connected the blocks as shown in Attachment #5.

We will continue with this design tomorrow.

Thu Aug 25 16:39:31 2022, Cici, Update, General, I have learned the absolute basics of github

I have now added code/data to my github repository. (it's the little victories)

Fri Aug 26 12:46:07 2022, Cici, Update, General, Progress on fitting PZT resonances

Here is an update on how fitting the resonances is going - I've been modifying parameters by hand and watching the effect on the fit. Still a work in progress. The magnitude is fitting pretty well; the phase is very confusing. I attempted vectfit again, but I can't constrain the number of poles and zeros with the code I have, and I still get a nonsensical output with 20 poles and 20 zeros. Here is a plot with my fit so far, and a zip file with my Moku data of the resonances and the code I'm using to plot.
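As a hedged alternative to tuning by hand, a 2-pole/2-zero model can be fit by least squares on the complex TF, fitting real and imaginary parts together so that magnitude and phase constrain the fit simultaneously. This is only a sketch; the parameter guesses and the data variables are assumptions:

```python
import numpy as np
from scipy.optimize import least_squares

def tf_model(f, f_z, Q_z, f_p, Q_p, k):
    """One complex-zero pair over one complex-pole pair, plus overall gain."""
    s = 2j * np.pi * f
    num = s**2 + s * 2 * np.pi * f_z / Q_z + (2 * np.pi * f_z) ** 2
    den = s**2 + s * 2 * np.pi * f_p / Q_p + (2 * np.pi * f_p) ** 2
    return k * num / den

def residuals(p, f, data):
    """data = mag * exp(1j*phase); fit both quadratures at once."""
    r = tf_model(f, *p) - data
    return np.concatenate([r.real, r.imag])

# f: frequency axis [Hz]; data: complex TF from the Moku sweep
p0 = [2.2e5, 20.0, 2.5e5, 20.0, 1.0]   # guesses near the 200-300 kHz features
fit = least_squares(residuals, p0, args=(f, data))
```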

Mon Aug 29 13:33:09 2022, JC, Update, General, Lab Cleanup

The machine shop looked a mess this morning, so I cleaned it up. All power tools are now placed in the drawers in the machine shop. Let me know if there are any questions of where anything here is placed.

Mon Aug 29 18:25:12 2022, Cici, Update, General, Taking finer measurements of the actuator transfer function

Took finer measurements of the x-arm aux laser actuator transfer function (10 kHz - 1 MHz, 1024 pts/decade) using the Moku.

--------------------------------------

I took finer measurements using the moku by splitting the measurement into 4 sections (10 - 32 (~10^4.5) kHz, 32 - 100 kHz, 100 - 320 kHz, 320 - 1000 kHz) and then grouping them together. I took 25 measurements of each ( + a bonus in case my counting was off), plotted them in the attached notebook, and calculated/plotted the standard deviation of the magnitude (normalized for DC offset). Could not upload to the ELOG as .pdf, but the pdf's are in the .zip file.
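The statistics step is simple enough to sketch; assuming the 25 sweeps of one band are stacked row-wise with a common frequency axis (variable names are illustrative):

```python
import numpy as np

# mags: (25, npts) array of magnitude sweeps for one band
mags = np.asarray(mag_list)
mags = mags / mags[:, :1]            # normalize each sweep to its first point
mag_mean = mags.mean(axis=0)
mag_std = mags.std(axis=0, ddof=1)   # sample std -> error bars on |TF|
```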

--------------------------------------

Next steps are to do the same stdev calculation for phase, which shouldn't take long, and to use the vectfit of this better data to create a PZT inversion filter.

Wed Aug 31 11:39:48 2022, Yehonathan, Update, LSC, Updated XARM noise budget

For educational purposes, we updated the XARM noise budget and added the POX11 calibrated dark noise contribution (attachment).

Wed Aug 31 12:57:07 2022, rana, Summary, ALS, control of ALS beat freq from command line -easy

The PZT sweeps that we've been making to characterize the ALS-X laser should probably be discarded - the DFD was not setup correctly for this during the past few months.

Since the DFD only had a peak-to-peak range of ~5 MHz, whenever the beat frequency drifted out of the linear range (~2-3 MHz), the data would have an arbitrary gain. Since the drift was actually more like 50 MHz, different parts of a single sweep could have some arbitrary gain and sign!!! This is not a good way to measure things.

I used an ezcaservo to keep the beat frequency fixed. The attached screenshot shows the command line. We read back the unwrapped beat frequency from the phase tracker and feed back on the PSL's NPRO temperature. During this, the lasers were not locked to any cavities (shutters closed, but servos not disabled).
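The exact command line is in the screenshot; as an illustration only, the same slow integrator could be written with the python ezca interface roughly like this (the channel names, gain, and update rate here are guesses, not the values actually used):

```python
import time
from ezca import Ezca                   # EPICS wrapper used by Guardian

ez = Ezca()                             # assumes IFO prefix from environment
ERR = 'C1:ALS-BEATX_FINE_PHASE_OUT16'   # unwrapped beat phase (assumed name)
CTRL = 'C1:PSL-FSS_SLOWDC'              # NPRO temperature offset

gain = -1e-6                            # integrator gain: sign/size guesses
setpoint = ez.read(ERR)                 # servo the beat to wherever it is now
while True:
    err = ez.read(ERR) - setpoint
    ez.write(CTRL, ez.read(CTRL) + gain * err)   # pure integrator
    time.sleep(1)                       # slow loop, well below any dynamics
```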

For the purposes of this measurement, I reduced the CAL factor in the phase tracker screen so that the reported FINE_PHASE_OUT is actually in kHz, rather than Hz, on this plot. So the green trace is moving by 10's of MHz. When the servo is engaged, you can see the SLOWDC doing some action. We think the calibration of that channel is ~1 GHz/V, so 0.1 SLOWDC Volts should be ~100 MHz. I think there's a factor of 2 missing here, but it's close.

As you can see in the top plot, even with the frequency stabilized by this slow feedback (-1000 to -600 seconds), the I & Q outputs are going through multiple cycles, and so they are unusable for even a non serious measurement.

The only way forward is to use less of a delay in the DFD. I think Anchal has been busily installing this shorter cable (hopefully it's ~3-5 m long so that the linear range is larger; I think a 10 m cable is too long), and the sweeps taken later today should be more useful.

Thu Sep 1 09:00:02 2022, JC, Configuration, Daily Progress, Locked both arms and aligned Op Levs

Each morning now, I am going to try to align both arms and lock. Along with that, sometime towards the end of each week, we should align the OpLevs. This is a good habit that should be practiced more often, not only by me. As for the Y arm, Yehonathan and I had to adjust the gain to 0.15 in order to stabilize the lock.

Mon Sep 20 15:42:44 2021, Ian MacMillan, Summary, Computers, Quantization Code Summary

This post serves as a summary and description of code to run to test the impacts of quantization noise on a state-space implementation of the suspension model.

Purpose: We want to use a state-space model in our suspension plant code. Before we can do this, we want to test whether the state-space model is prone to problems with quantization noise. We will build two models, one a standard direct form II filter and one a state-space model, and then compare the noise from both.

Signal Generation:

First I built a basic signal generator that can produce a sine wave for a specified amount of time and then a zero signal for a specified amount of time. This lets the model ring up with the sine wave and then decay away during the zero signal. The input signal is generated at a sample rate of 2^16 samples per second and stored in a numpy array. I later feed it into both models and record their results.
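A minimal sketch of such a generator (the ring-up frequency here is just the tone used in the later tests):

```python
import numpy as np

FS = 2**16                         # samples per second

def make_input(f0, t_on, t_off, amp=1.0, fs=FS):
    """Sine for t_on seconds (rings the model up), then zeros for t_off
    seconds (lets it decay away)."""
    t = np.arange(int(t_on * fs)) / fs
    drive = amp * np.sin(2 * np.pi * f0 * t)
    return np.concatenate([drive, np.zeros(int(t_off * fs))])

u = make_input(f0=3.4283, t_on=10, t_off=10)
```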

State-space Model:

The code can be seen here

The state-space model takes in the list of excitation values and feeds them through a loop that calculates the next value in the output.

Given that the state-space model follows the form

$\dot{x}(t)=\textbf{A}x(t)+ \textbf{B}u(t)$   and  $y(t)=\textbf{C}x(t)+ \textbf{D}u(t)$ ,

the model has three parts: the first equation, an integration, and the second equation.

1. The first equation takes the state x and the excitation u and generates the $\dot{x}$ vector shown on the left-hand side of the first state-space equation.
2. The second part integrates $\dot{x}$ to obtain the x found in the next equation. This uses the velocity and acceleration to integrate to the next x, which will be plugged into the second equation.
3. The second equation in the state-space representation takes the x vector we just calculated and multiplies it with the sensing matrix C. We don't have a D matrix, so this gives us the next output of our system.

This system is the coded form of the block diagram of the state-space representation shown in Attachment 1.
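The linked code is the reference; as a minimal sketch of the three parts just described (first equation, a simple forward-Euler integration step, second equation, no D matrix):

```python
import numpy as np

def ss_filter(A, B, C, u, dt):
    """Run the excitation u through xdot = A x + B u, y = C x."""
    x = np.zeros(A.shape[0])
    y = np.zeros(len(u))
    for n, un in enumerate(u):
        xdot = A @ x + B * un       # part 1: first state-space equation
        x = x + dt * xdot           # part 2: integrate to the next x
        y[n] = C @ x                # part 3: sensing matrix C (no D term)
    return y
```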

Direct-II Model:

The direct form II filter works in a much simpler way: because it involves no integration and follows the block diagram shown in Attachment 2, we can use a single difference equation to find the next output. The only complication is that we also have to keep track of the w[n] seen in the middle of the block diagram. We use these two equations to calculate the output value:

$y[n]=b_0 \omega [n]+b_1 \omega [n-1] +b_2 \omega [n-2]$,  where w[n] is  $\omega[n]=x[n] - a_1 \omega [n-1] -a_2 \omega[n-2]$
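These two difference equations translate directly into code; a minimal sketch of one direct form II second-order section:

```python
import numpy as np

def df2_biquad(x, b0, b1, b2, a1, a2):
    """Direct form II: one shared delay line w[n-1], w[n-2] feeds both the
    recursive (a) and feedforward (b) halves, per the equations above."""
    w1 = w2 = 0.0
    y = np.zeros(len(x))
    for n, xn in enumerate(x):
        w0 = xn - a1 * w1 - a2 * w2         # w[n]
        y[n] = b0 * w0 + b1 * w1 + b2 * w2  # y[n]
        w2, w1 = w1, w0                     # shift the delay line
    return y
```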

Bit length Control:

To control the bit length of each of the models, I typecast all the inputs using np.float followed by the bit length that I want (e.g. np.float32). This simulates the computer using only the specified bit length. I have to go through the code and force it to use 128-bit by default; currently the default is 64-bit, so at the moment I am limited to 64 bits as the highest bit length. I also need to go in and examine how numpy truncates floats to make sure it isn't doing anything unexpected.

Bode Plot:

The bode plot at the bottom shows the transfer function for both the IIR model and the state-space model. I generated about 100 seconds of white noise then computed the transfer function as

$G(f) = \frac{P_{csd}(f)}{P_{psd}(f)}$

which is the cross-spectral density divided by the power spectral density. We can see that they match pretty closely at 64 bits. The IIR direct form II model seems to have more noise on the surface, but we are going to examine that in the next elog.

Wed Sep 22 14:22:35 2021, Ian MacMillan, Summary, Computers, Quantization Noise Calculation Summary

Now that we have a model of how the SS and IIR filters work, we can get to the problem of how to measure the quantization noise in each of the systems. Denis Martynov's thesis talks a little about this. From my understanding: he measured quantization noise by having two filters using two types of variables with different numbers of bits. He had one filter with many more bits than the second one. He fed the same input signal to both filters, then recorded their outputs x_1 and x_2, where x_2 had the higher number of bits. He then took the difference x_1 - x_2. Since the CDS system uses the double format, he assumes that quantization noise scales with mantissa length. He can therefore extrapolate the quantization noise for any mantissa length.

Here is the code that follows the procedure below (as of today, at least).

This problem is a little harder than I had originally thought. I took Rana's advice and asked Aaron about how he had tackled a similar problem. We came up with a procedure explained below (though any mistakes are my own):

1. Feed different white noise data into three copies of the same filter. This should yield the following equation: $\textbf{S}_i^2 =\textbf{S}_{ni}^2+\textbf{S}_x^2$, where $\textbf{S}_i^2$ is the power spectrum of the output for the ith filter, $\textbf{S}_{ni}^2$ is the noise filtered through an "ideal" filter with no quantization noise, and $\textbf{S}_x^2$ is the power spectrum of the quantization noise. Since we are feeding random noise into the input, the power of the quantization noise should be the same for all three of our runs.
2. Next, we have our three outputs:  $\textbf{S}_1^2$,  $\textbf{S}_2^2$, and  $\textbf{S}_3^2$ that follow the equations:

$\textbf{S}_1^2 =\textbf{S}_{n1}^2+\textbf{S}_x^2$

$\textbf{S}_2^2 =\textbf{S}_{n2}^2+\textbf{S}_x^2$

$\textbf{S}_3^2 =\textbf{S}_{n3}^2+\textbf{S}_x^2$

From these three equations, we calculate the three quantities $\textbf{S}_{12}^2$, $\textbf{S}_{23}^2$, and $\textbf{S}_{13}^2$:

$\textbf{S}_{12}^2 =\textbf{S}_{1}^2-\textbf{S}_2^2\approx \textbf{S}_{n1}^2 -\textbf{S}_{n2}^2$

$\textbf{S}_{23}^2 =\textbf{S}_{2}^2-\textbf{S}_3^2\approx \textbf{S}_{n2}^2 -\textbf{S}_{n3}^2$

$\textbf{S}_{13}^2 =\textbf{S}_{1}^2-\textbf{S}_3^2\approx \textbf{S}_{n1}^2 -\textbf{S}_{n3}^2$

From these quantities, we can calculate the three values $\bar{\textbf{S}}_{n1}^2$, $\bar{\textbf{S}}_{n2}^2$, and $\bar{\textbf{S}}_{n3}^2$; since these are just estimates, we put a bar on top. They are calculated using:

$\bar{\textbf{S}}_{n1}^2\approx\frac{1}{2}(\textbf{S}_{12}^2+\textbf{S}_{13}^2+\textbf{S}_{23}^2)$

$\bar{\textbf{S}}_{n2}^2\approx\frac{1}{2}(-\textbf{S}_{12}^2+\textbf{S}_{13}^2+\textbf{S}_{23}^2)$

$\bar{\textbf{S}}_{n3}^2\approx\frac{1}{2}(\textbf{S}_{12}^2+\textbf{S}_{13}^2-\textbf{S}_{23}^2)$

using these estimates we can then estimate  $\textbf{S}_{x}^2$  using the formula:

$\textbf{S}_{x}^2 \approx \textbf{S}_{1}^2 - \bar{\textbf{S}}_{n1}^2 \approx \textbf{S}_{2}^2 - \bar{\textbf{S}}_{n2}^2 \approx \textbf{S}_{3}^2 - \bar{\textbf{S}}_{n3}^2$

We can average the three estimates for $\textbf{S}_{x}^2$ to come up with one estimate.
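A minimal sketch of this three-corner-hat estimate, implementing the equations above verbatim on a stand-in filter (note the paragraph below suspects an error in the procedure itself, so this only encodes the algebra as written):

```python
import numpy as np
from scipy import signal

fs = 2**16
rng = np.random.default_rng(0)
b, a = signal.butter(2, 10, btype='low', fs=fs)   # stand-in filter

# three different white-noise realizations through the same filter
y1, y2, y3 = (signal.lfilter(b, a, rng.standard_normal(30 * fs))
              for _ in range(3))

f, S1 = signal.welch(y1, fs=fs, nperseg=2**14)
_, S2 = signal.welch(y2, fs=fs, nperseg=2**14)
_, S3 = signal.welch(y3, fs=fs, nperseg=2**14)

S12, S23, S13 = S1 - S2, S2 - S3, S1 - S3
Sn1 = 0.5 * ( S12 + S13 + S23)
Sn2 = 0.5 * (-S12 + S13 + S23)
Sn3 = 0.5 * ( S12 + S13 - S23)

# three estimates of the quantization-noise spectrum, then their average
Sx = np.mean([S1 - Sn1, S2 - Sn2, S3 - Sn3], axis=0)
```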

This procedure should be able to give us a good estimate of the quantization noise. However, the graphs in the attachments below show that the computed noise follows the transfer function of the model to begin with. I would not expect this to be true, so I believe there is an error in the above procedure or in my code, which I am working on finding. I may have to rework this three-cornered-hat approach.

I would expect the quantization noise to be flatter and not follow the shape of the transfer function of the model. Instead, we have what looks like just the result of random noise being filtered through the model.

Next steps:

The first real step is being able to quantify the quantization noise, but after I fix the issues in my code I will be able to start looking at optimal model design for both the state-space model and the direct form II model. I have been looking through the book "Quantization Noise" by Bernard Widrow and Istvan Kollar, which offers some good insights on how to minimize quantization noise.

Mon Sep 27 12:12:15 2021, Ian MacMillan, Summary, Computers, Quantization Noise Calculation Summary

I have not been able to figure out a way to make the system that Aaron and I talked about work. I'm not even sure it is possible to pull the quantization noise out of the information I have in this way. Even the book uses a comparison to a high-precision filter as the way to calculate the quantization noise:

"Quantization noise in digital filters can be studied in simulation by comparing the behavior of the actual quantized digital filter with that of a refrence digital filter having the same structure but whose numerical calculations are done extremely accurately."
-Quantization Noise by Bernard Widrow and Istvan Kollar (pg. 416)

Thus I will use a technique closer to that used in Denis Martynov's thesis (see Appendix B starting on page 171). A summary of my understanding of his method is given here:

A filter is given raw unfiltered Gaussian data $f(t)$; it is filtered, and the result is the filtered data $x(t)$. Thus we get: $f(t)\rightarrow x(t)=x_N(t)+x_q(t)$, where $x_N(t)$ is the raw noise filtered through an ideal filter and $x_q(t)$ is the difference, which in this case is the quantization noise. I will input about 100-1000 seconds of the same white noise into a 32-bit and a 64-bit filter (hopefully I can increase the more precise one to 128-bit in the future), then record their outputs and subtract them from each other. This should give us the quantization error $e(t)$:
$e(t)=x_{32}(t)-x_{64}(t)=x_{N_{32}}(t)+x_{q_{32}}(t) - x_{N_{64}}(t)-x_{q_{64}}(t)$
and since $x_{N_{32}}(t)=x_{N_{64}}(t)$ because they are both running through ideal filters:
$e(t)=x_{N}(t)+x_{q_{32}}(t) - x_{N}(t)-x_{q_{64}}(t)$
$e(t)=x_{q_{32}}(t) -x_{q_{64}}(t)$
and since in this case we are assuming that the higher bit-rate process is essentially noiseless, we get the quantization noise $x_{q_{32}}(t)$.
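A minimal sketch of this subtraction method, using a DF2 section with every operand cast to the working dtype (the coefficients and rates here are placeholders, not the filter under study):

```python
import numpy as np
from scipy import signal

fs = 2**16
coeffs = (0.1, 0.2, 0.1, -1.5, 0.6)          # placeholder b0, b1, b2, a1, a2
rng = np.random.default_rng(0)
f_in = rng.standard_normal(100 * fs)          # 100 s of white noise

def df2(x, coeffs, dtype):
    """DF2 difference equation with every operand held at `dtype`."""
    b0, b1, b2, a1, a2 = (dtype(c) for c in coeffs)
    w1 = w2 = dtype(0)
    y = np.zeros(len(x), dtype=dtype)
    for n, xn in enumerate(x.astype(dtype)):
        w0 = xn - a1 * w1 - a2 * w2
        y[n] = b0 * w0 + b1 * w1 + b2 * w2
        w2, w1 = w1, w0
    return y

e = df2(f_in, coeffs, np.float32) - df2(f_in, coeffs, np.float64)
f, Se = signal.welch(e, fs=fs, nperseg=2**14)   # ~ PSD of x_q32
```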

If we make some assumptions, then we can actually calculate a more precise version of the quantization noise:

"Since aLIGO CDS system uses double precision format, quantization noise is extrapolated assuming that it scales with mantissa length"
-Denis Martynov's Thesis (pg. 173)

From this assumption, we can say that the noise difference between the 32-bit and 64-bit filter outputs, $x_{q_{32}}(t)-x_{q_{64}}(t)$, is proportional to the difference between their mantissa lengths. By averaging over many different bit lengths, we can estimate a better quantization noise number.

I am building the code to do this in this file.

Mon Sep 27 16:03:15 2021, Ian MacMillan, Summary, Computers, Quantization Noise Calculation Summary

I have coded up the procedure in the previous post. The result does not look like what I would expect.

As shown in Attachment 1, I have the power spectra of the 32-bit output and the 64-bit output, the power spectrum of the difference of the two time series, and the difference of the two power spectra. Unfortunately, all of them follow the same general shape as the raw output of the filter.

I would not expect quantization noise to follow the shape of the filter. I would instead expect it to be more uniform. If anything, I would expect the quantization noise to increase with frequency, since a high-frequency signal run through a filter with high quantization noise should degrade more severely.

This is one reason why I am confused by what I am seeing here. In all cases, including feeding the same and different white noise into both filters, I have found that the calculated quantization noise is proportional to the response of the filter. This seems wrong to me, so I will continue to play around with it to see if I can gain any intuition about what might be happening.

Mon Sep 27 17:04:43 2021, rana, Summary, Computers, Quantization Noise Calculation Summary

I suggest that you

1. change the corner frequency to 10 Hz as I suggested last week. This filter, as it is, is going to give you trouble.
2. Put in a sine wave at 3.4283 Hz with an amplitude of 1, rather than white noise. In this way, it's not necessary to do any subtraction. Just make a PSD of the output of each filter.
3. Be careful about window length and window function. If you don't do this carefully, your PSD will be polluted by window bleeding.

Thu Sep 30 11:46:33 2021, Ian MacMillan, Summary, Computers, Quantization Noise Calculation Summary

First and foremost, I have the updated Bode plot with the mode moved to 10 Hz. See Attachment 1. Note that the comparison measurement is a % difference, whereas in the previous Bode plot it was just the difference. I also wrapped the phase so that jumps from -180 to 180 are moved down. This eliminates massive jumps in the % difference.

Next, I have two comparison plots: 32-bit and 64-bit. As mentioned above, I moved the mode to 10 Hz and excited both systems at 3.4283 Hz with an amplitude of 1. As we can see on the plot, the two models are practically the same when using 64 bits. With the 32-bit system, we can see that the noise in the IIR filter is much greater than in the state-space model at frequencies above our mode.

Note about windowing and averaging: I used a Hanning window with averaging over 4 segments. I came to this number after looking at the results with less averaging and more averaging. In the code, this can be seen as nperseg=num_samples/4, which is then fed into signal.welch.

Wed Nov 24 11:02:23 2021, Ian MacMillan, Summary, Computers, Quantization Noise Calculation Summary

I added mpmath to the quantization noise code. mpmath allows me to specify the precision that I am using in calculations. I added this to both the IIR filters and the state-space models, although I am only looking at the IIR filters here. I hope to look at the state-space model soon.
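For reference, the mechanism is mpmath's binary precision setting; a minimal sketch of a DF2 filter carried at an arbitrary mantissa length (coefficients are placeholders):

```python
from mpmath import mp, mpf

def df2_mp(x, coeffs, prec_bits):
    """DF2 filter with every number carried at `prec_bits` of binary
    precision (mp.prec is mpmath's mantissa length, set globally here)."""
    mp.prec = prec_bits
    b0, b1, b2, a1, a2 = (mpf(c) for c in coeffs)
    w1 = w2 = mpf(0)
    out = []
    for xn in x:
        w0 = mpf(xn) - a1 * w1 - a2 * w2
        out.append(b0 * w0 + b1 * w1 + b2 * w2)
        w2, w1 = w1, w0
    return out   # keep as mpmath objects until the final subtraction
```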

Notebook Summary:

I also added a new notebook which you can find HERE. This notebook creates a signal by summing two sine waves and windowing them.

Then that signal is passed through our filter that has been limited to a specific precision. In our case, we pass the same signal through a number of filters at different precisions.

Next, we take the output from the filter with the highest precision, because this one should have the lowest quantization noise by a significant margin, and we subtract the outputs of the lower precision filters from it. In summary, we are subtracting a clean signal from a noisy signal; because the underlying signal is the same, when we subtract them the only thing left should be noise. And since this system is purely digital and theoretical, the limiting noise should be quantization noise.

Now we have a time series of the noise for each precision level (except for our highest precision level but that is because we are defining it as noiseless). From here we take a power spectrum of the result and plot it.

After this, we can calculate a frequency-dependent SNR and plot it. I also calculated values for the SNR at the frequencies of our two inputs.

This is the procedure taken in the notebook and the results are shown below.

Analysis of Results:

The first thing we can see is that the precision levels 256 and 128 bits are not shown on our graph. The 256-bit signal was our clean signal, so it was defined to have no noise and can't be plotted. The 128-bit signal should have some quantization noise, but I checked the output array and it contained all zeros. After further investigation, I found that the quantization noise was so small that when the result was being handed over from mpmath to the general python code, those numbers were rounded to zero. To overcome this issue I would have to keep the array as an mpmath object the entire time. I don't think this is useful because matplotlib probably couldn't handle it, and it would be easier to just rewrite the code in C.

The next thing to notice is a sanity check: in general, low-precision filters yield higher noise than high-precision ones. However, this does not hold true at the low end. We can see that 16-bit actually has the highest noise over most of the range. Chris pointed out that at low precision, quantization noise can become so large that it is no longer a linearly coupled noise source. He also noted that this is prone to happen for low-precision coefficients with features far below the Nyquist frequency, like I have here. This explanation seems to fit the data, especially because the ambiguity appears at 16-bit and lower, as he points out.

Another thing that I must mention, even if it is just a note to future readers, is that quantization noise is input-dependent: by changing the input signal, I see different degrees of quantization noise.

Analysis of SNR:

One of the things we hoped to accomplish in the original plan was to play around with the input and see how the results changed. I mainly looked at how the amplitude of the input signal scales the SNR of the output. Below I include a table of the results. These results were taken from the SNR calculated at the first peak (see the last code block in the notebook), with the amplitude of the input sine wave given at the top of each column. This amplitude was given to both of the two sine waves, even though only the first one is reported. Currently, the notebook is set up for the measurement with input amplitude 10.

Amplitude of input:   0.1        1          100        1000
4-bit SNR             5.06e5     5.07e5     5.07e5     5.07e5
8-bit SNR             5.08e5     5.08e5     5.08e5     5.08e5
16-bit SNR            2.57e6     8.39e6     3.94e6     1.27e6
32-bit SNR            7.20e17    6.31e17    1.311e18   1.86e18
64-bit SNR            6.0e32     1.28e32    1.06e32    2.42e32
128-bit SNR           unknown    unknown    unknown    unknown

As we can see from the table above, the SNR does not seem to correlate with the amplitude of the input. In multiple instances, the SNR dips or peaks in the middle of our amplitude range.

Wed Nov 24 13:44:19 2021, rana, Summary, Computers, Quantization Noise Calculation Summary

This looks great. I think what we want to see mainly is just the noise in the 32 bit IIR filtering subtracted from the 64 bit one.

It would be good if Tega can look through your code to make sure there's NO sneaky places where python is doing some funny casting of the numbers. I didn't see anything obvious, but as Chris points out, these things can be really sneaky so you have to be next level paranoid to really be sure. Fox Mulder level paranoia.

And, we want to see a comparison between what you get and what Denis Martynov put in an appendix of his thesis when comparing the Direct Form II with the low-noise form (also some slides from Matt Evans on this from about a decade ago). You should be able to reproduce his results. He used matlab + C, so I am curious to see if it can be done all in python, or if we really need to do it in C.

And then...we can make this a part of the IFOtest suite, so that we point it at any filter module anywhere in LIGO, and it downloads the data and gives us an estimate of the digital noise being generated.

Tue Dec 7 10:55:25 2021, Ian MacMillan, Summary, Computers, Quantization Noise Calculation Summary

[Ian, Tega]

Tega and I have gone through the IIR Filter code and optimized it to make sure there aren't any areas that force high precision to be down-converted to low precision.

For the new biquad filter we have run into the issue that the gain of the filter is much higher than it should be. Looking at Attachments 1 and 2, which are time-series comparisons of the inputs and outputs of the different filters, we see that the scale of the output of the direct form II filter, shown in Attachment 1 on the right, is on the order of 10^-5, whereas the magnitude of the response of the biquad filter is on the order of 10^2. Other than this gain, the responses look to be the same.

I am not entirely sure how this gain came into the system, because we copied the C code that actually runs on the CDS system into python. There is a gain that affects the input of the biquad filter, as shown on this slide of Matt Evans' slides. This gain, shown below as g, could decrease the input signal and thus fix the gain. However, I have not found any way to calculate this g.

With this gain problem we are left with the quantization noise shown in Attachment 4.

State Space:

I have controlled the state space filter to act with a given precision level. However, my code is not optimized. It works by putting the input state through the first state-space equation then integrating the result, which finally gets fed through the second state-space equation.

This is not optimized and gives us the resulting quantization noise shown in attachment 5.

However, the state-space filter also has a gain problem, where it is about 85 times the amplitude of the DF2 filter. Also, since the state-space code is not operating in the most efficient way possible, I decided to port the code Chris made to run the state-space model to python. This code has a problem where it seems to be unstable. I will see if I can fix it.

Fri Dec 10 13:02:47 2021, Ian MacMillan, Summary, Computers, Quantization Noise Calculation Summary

I am trying to replicate the simulation done by Matt Evans in his presentation  (see Attachment 1 for the slide in particular).

He defines his input as $x_{\mathrm{in}}=\sin(2\pi t)+10^{-9} \sin(2\pi t f_s/4)$, so he has two inputs: one of amplitude 1 at 1 Hz and one of amplitude 10^-9 at 1/4 of the sampling frequency, in this case 4096 Hz.

For his filter, he uses a fourth-order notch filter. To achieve this filter I cascaded two second-order notch filters (signal.iirnotch), both located at 1 Hz, with quality factors of 1 and 1e6, as specified in slide 13 of his presentation.

I used the same procedure outlined here. My results are posted below in attachment 2.
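A minimal sketch of that input and filter, under my reading of the slide (fs = 16384 Hz is an assumption chosen so that fs/4 = 4096 Hz; the two notches are cascaded as second-order sections):

```python
import numpy as np
from scipy import signal

fs = 16384                           # so fs/4 = 4096 Hz
t = np.arange(100 * fs) / fs
x_in = np.sin(2 * np.pi * t) + 1e-9 * np.sin(2 * np.pi * (fs / 4) * t)

# fourth-order notch at 1 Hz = cascade of two second-order notches
b1, a1 = signal.iirnotch(w0=1.0, Q=1.0, fs=fs)
b2, a2 = signal.iirnotch(w0=1.0, Q=1e6, fs=fs)
sos = np.vstack([signal.tf2sos(b1, a1), signal.tf2sos(b2, a2)])
y = signal.sosfilt(sos, x_in)
```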

Analysis of results:

As we can see from the results posted below, they don't match. There are a few problems that I noticed that may give us some idea of what went wrong.

First, there is a peak in the noise around 35 Hz. This peak is not shown at all in Matt's results and may indicate that something is inconsistent.

Second, there is no peak at 4096 Hz. This is clearly shown in Matt's slides, and it is present in the input spectrum, so it is strange that it does not appear in the output.

My first thought was that the 4 kHz signal was showing up at about 35 Hz, but even when you remove the 4 kHz signal from the input, the 35 Hz feature is still there. The spectrum of the input shown in Attachment 3 shows no features at ~35 Hz.

The filter, shown in Attachment 4, also has no features at ~35 Hz. Seeing how neither the input nor the filter has features at ~35 Hz, there must be either some sort of quantization noise feature there or, more likely, some sampling effect or artifact of the calculation.

To figure out what is causing this I will continue to change things in the model until I find what is controlling it.

I have included a Zip file that includes all the necessary files to recreate these plots and results.

Mon May 9 15:32:14 2022, Ian MacMillan, Summary, Computers, Quantization Noise Calculation Summary

I made a first pass at a tool to measure the quantization noise of specific filters in the 40m system; the code can be found here. It takes the input to the filter bank and the filter coefficients for all of the filters in the filter bank, then runs the input through all the filters and measures the quantization noise in each instance. It does this by subtracting the 64-bit output from the 32-bit output. Note: the actual system is 64-bit, so I need to update it to subtract the 64-bit output from the 128-bit output using the long double format. This means it must be run on a computer that supports the long double format, which I checked and Rossa does. The code outputs a number of plots that look like the one in Attachment 1. Koji suggested an automatically generated page for each of the filters showing the filter and the results as well as an SNR for the noise source. The code is formatted as a class so that it can be easily added to the IFOtest repo when it is ready.

I tracked down a filter realization that I thought might have lower quantization noise than the one that is currently used. The specifics will be in version 2 of the DCC document that I am updating, but a diagram is in Attachment 2. Preliminary calculations seemed to show that it had lower quantization noise than the current filter realization. I added this filter realization to the C code and ran a simple comparison between all of them. The results in Attachment 3 are not as good as I had hoped. The input was a two-tone sine wave. The low-level broadband signal between 10 Hz and 4 kHz is the quantization noise. The blue trace shows the current filter realization, and another trace shows the generic, most basic direct form II. The orange one is the new filter, which I personally call the Aircraft Biquad because I found it in this paper by the Hughes Aircraft Company (see Fig. 2 in the paper). They call it the "modified canonic form realization", but there are about 20 filters in the paper that share that name; in the DCC doc I have just given them numbers because it is easier.

What's next:

1) I need to review the qnoisetool code to make it compute the correct 64-bit noise.

a) I also want to add the new filter to the simulation to see how it does

2) Make the output into a summary page the way Koji suggested.

3) Complete the updated DCC document. I need to reconcile the differences between the calculation I made and the actual result of the simulation.

Fri Sep 2 13:30:25 2022, Ian MacMillan, Summary, Computers, Quantization Noise Calculation Summary

P. P. Vaidyanathan wrote a chapter in the book "Handbook of Digital Signal Processing: Engineering Applications" called "Low-Noise and Low-Sensitivity Digital Filters" (Chapter 5, pg. 359). I took a quick look at it and wanted to give some thoughts in case they are useful. The experts in the field would be Leland B. Jackson, P. P. Vaidyanathan, Bernard Widrow, and István Kollár. Widrow and Kollár wrote the book "Quantization Noise: Roundoff Error in Digital Computation, Signal Processing, Control, and Communications" (a copy of which is at the 40m). It is good that P. P. Vaidyanathan is at Caltech.

Vaidyanathan's chapter serves as a good introduction to the topic of quantization noise. He starts off with the basic theory, similar to my own document on the topic. From there, there are two main topics relevant to our goals.

The first interesting thing is the use of Error-Spectrum Shaping (pg. 387). I have never investigated this idea, but the general gist is that as poles and zeros move closer to the unit circle, the SNR deteriorates, and this is a way of implementing error feedback that should alleviate the problem. See Fig. 5.20 for a full realization of a second-order section with error feedback.

The second starts on page 402 and is an overview of state-space filters, with an example of a state-space realization (Fig. 5.26). I also tested this exact realization a while ago and found that it was better than the direct form II filter but not as good as the current low-noise implementation that LIGO uses. This realization is very close to the current realization except that it uses one less addition block.

Overall I think it is a useful chapter. I like the idea of using some sort of error correction and I'm sure his other work will talk more about this stuff. It would be useful to look into.

One thought that I had recently is that if the quantization noise is uncorrelated between two different realizations, then connecting them in parallel and averaging their results (as shown in Attachment 1) may actually yield lower quantization noise. It would require double the computation power for filtering, but it may work. For example, using the current LIGO realization and the realization given in this book might yield a lower quantization noise. This would only work with two similarly low-noise realizations: since we would be randomly sampling two uniform distributions and going from one sample to two samples, the variance would be cut in half, and the ASD would show a 1/√2 reduction for realizations with the same level of quantization noise. This is only beneficial if the noisier realization has less than about 1.7 times the noise of the quieter one. I included a simple simulation to show this in the zip file in Attachment 2 for my own reference.
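A minimal stand-in for that simulation, with two equal-level uniform noises as proxies for the two realizations' (assumed uncorrelated) quantization noise:

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(1)
fs, N = 16384, 2**20
q1 = rng.uniform(-0.5, 0.5, N)       # quantization-noise proxy, filter 1
q2 = rng.uniform(-0.5, 0.5, N)       # quantization-noise proxy, filter 2

f, S1 = signal.welch(q1, fs=fs, nperseg=2**14)
_, Savg = signal.welch(0.5 * (q1 + q2), fs=fs, nperseg=2**14)
print(np.sqrt(np.median(S1 / Savg)))  # -> ~1.41, the 1/sqrt(2) in ASD
```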

Another thought that I had is that the transpose of this low-noise state-space filter (Fig. 5.26) or even of LIGO's current filter realization would yield even lower quantization noise because both of their transposes require one less calculation.

Wed Aug 31 00:32:00 2022, Koji, Update, General, SOS and other stuff in the clean room 6x

Salvage these (and any other things). Wrap and double-pack nicely. Put the labels. Store them and record the location. Tell JC the location.

Fri Sep 2 15:26:42 2022, Yehonathan, Update, General, SOS and other stuff in the clean room

{Paco, Yehonathan}

BHD Optics box was put into the x-arm last clean cabinet. (attachment 5)

OSEMs were double bagged in a labeled box on the x-arm wire racks. (attachment 1)

SOS Parts (wire clamps, winches, suspension blocks, etc.) were put in a box on the x-arm wire rack. (attachment 3)

2"->3" optic adapter parts were put in a box and stored on the xarm wire rack. (attachment 3)

Magnet gluing parts box was labeled and stored on the xarm rack. (attachment 2)

TT SUS with the optics were stored on the flow bench at the x end. Note: one of the TT SUS was found unsuspended. (attachment 4)

InVac parts were double bagged and stored in a labeled box on the x arm wire rack. (attachment 2)

Wed Aug 31 00:46:56 2022, Koji, Update, General, Vertex Lab area to be cleaned 9x

As marked up in the photos.

Attachment 5: The electronics units removed. Cleaning half way down. (KA)

Attachment 6: Moved most of the units to 1X3B rack ELOG 17125 (KA)

Wed Aug 31 01:22:01 2022, Koji, Update, General, Along the X arm part 1 9x

Attachment 5: RF delay line was accommodated in 1X3B. (KA)

Wed Aug 31 01:24:48 2022, Koji, Update, General, Along the X arm part 2 9x

Wed Aug 31 01:25:37 2022, Koji, Update, General, Along the X arm part 3 9x

Wed Aug 31 01:30:53 2022, Koji, Update, General, Along the X arm part 4

Behind the X arm tube

Wed Aug 31 01:53:39 2022, Koji, Update, General, Along the Y arm part 1 9x

Wed Aug 31 01:54:45 2022, Koji, Update, General, Along the Y arm part 2 8x

Fri Sep 2 15:35:19 2022, Anchal, Update, General, Along the Y arm part 2

The cables in the open USPS box were important cables that are part of the new electronics architecture. These are 3 ft D2100103 DB15F to DB9M reducer cables that go between the coil driver output (DB15M on the back) and the satellite amplifier coil driver input (DB9F on the front). They have been placed in a separate plastic box, labeled, and kept with the rest of the D-sub cable plastic boxes that are part of the upgrade wiring, behind the tube on YARM across from 1Y2. I believe JC will eventually store these D-sub cable boxes together somewhere later.

Fri Sep 2 15:30:10 2022, Anchal, Update, General, Along the X arm part 1

Attachment 2: The custom cables which were part of the intermediate setup between the old and the new electronics architecture were found.
These include:

• 2 DB37 cables with custom wiring at their connectors to connect between vacuum flange and new Sat amp box, marked J4-J5 and J6-J7.
• 2 DB15 to dual head DB9 (like a Hydra) cables used to interface between old coil drivers and new sat amp box.

A copy of these cables is in use for MC1 right now. These are spare cables. We put them in a cardboard box and marked the box appropriately.
The box is under the vacuum tube along Yarm near the center.

Wed Aug 31 16:11:37 2022, Koji, Update, General, Vertex Lab area to be cleaned

The analog electronics units piled up along the wall were moved into the 1X3B rack, which was basically empty. (Attachments 1/2/4)

We had a couple of unused Sun Machines. I salvaged VMIC cards (RFM and Fast fiber networking? for DAQ???) and gave them to Tega.
Attachment 3 shows the eWastes collected this afternoon.

Fri Sep 2 15:40:25 2022, Anchal, Summary, ALS, DFD cable measurements

[Anchal, Yehonathan]

I laid down another temporary cable from the X end to 1Y2 (LSC rack) so we could also measure the Q output of the DFD box. Then, to get a quick measurement of these long cable delays, we used the Moku:GO in oscillator mode, sent 100 ns pulses at a 100 kHz rate from one end, and measured the timing of the reflected pulses to estimate the delay. The far end of each long cable was shorted, then left open, for the 2 sets of measurements.

I-Mon cable delay: (955 +/- 6) ns / 2 = 477 +/- 3 ns

Q-Mon Cable delay: (535 +/- 6) ns / 2 = 267 +/- 3 ns

Note: We were underestimating the delay in I-Mon cable by about a factor of 2.

I also took the opportunity to measure the delay time of the DFD delay line. Since both ends of the cable were present locally, it made more sense to simply take a transfer function to get a clean delay measurement. This measurement resulted in a value of 197.7 +/- 0.1 ns. See the attached plot. Data and analysis here.
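For a pure delay the TF phase is -2*pi*f*tau, so the delay falls out of a straight-line fit to the unwrapped phase; a minimal sketch of that reduction:

```python
import numpy as np

def delay_from_tf(f, H):
    """f: frequency axis [Hz]; H: measured complex TF of the delay line."""
    phase = np.unwrap(np.angle(H))
    slope = np.polyfit(f, phase, 1)[0]   # rad/Hz
    return -slope / (2 * np.pi)          # tau in seconds
```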

Tue Jul 6 17:40:32 2021, Koji, Summary, General, Lab cleaning 7x

We held the lab cleaning for the first time since the campus reopening (Attachment 1).
Now we can use some of the desks for the people to live! Thanks for the cooperation.

We relocated a lot of items into the lab.

• The entrance area was cleaned up. We believe that there is no 40m lab stuff left.
• BHD BS optics was moved to the south optics cabinet. (Attachment 2)
• DSUB feedthrough flanges were moved to the vacuum area (Attachment 3)
• Some instruments were moved into the lab.
• The Zurich instrument box
• KEPCO HV supplies
• We moved the large pile of SUPERMICROs in the lab. They are around MC2 while the PPE boxes there were moved behind the tube around MC2 area. (Attachment 4)
• We have moved PPE boxes behind the beam tube on XARM behind the SUPERMICRO computer boxes. (Attachment 7)
• ISC/WFS left over components were moved to the pile of the BHD electronics.
• Front panels (Attachment 5)
• Components in the boxes (Attachment 6)

We still want to make some more cleaning:

- Electronics workbenches
- Stray setup (cart/wagon in the lab)
- Some leftover on the desks
- Instruments scattered all over the lab
- Ewaste removal

Tue Sep 6 09:57:26 2022, JC, Summary, General, Lab cleaning

DB9 cables have been sorted and placed behind the Y arm. Long BNC cables and Ethernet cables have been stored under the Y arm.


Thu Aug 25 16:05:51 2022, Yehonathan, Update, SUS, Trying to fix some SUS

I tried to lock the Y/X arms to take some noise budget data. However, we noticed that TRX/Y were oscillating coherently together (by tens of percent), meaning some input optics, most likely PR2/3, are swinging. There was no way I could do noise budgeting in this situation.

I set out to debug these optics. First, I noticed the side motion of PR2 is very weakly damped.

The gain of the side damping loop (C1:SUS-PR2_SUSSIDE_GAIN) was increased from 10 to 150, which seems to have fixed the issue. Attachment 1 shows the current step response of the PR2 DOFs. The residual Qs look good, but there is still some cross-coupling, especially when kicking POS. Need to do some balancing there.

PR3 fixing was less successful in the beginning. I increased the following gains:

C1:SUS-PR3_SUSPOS_GAIN: 0.5 -> 30

C1:SUS-PR3_SUSPIT_GAIN: 3 -> 30

C1:SUS-PR3_SUSYAW_GAIN: 1 -> 30

C1:SUS-PR3_SUSSIDE_GAIN: 10 -> 50

But the residual Q was still > 10. Then I checked the input matrix and noticed that UL->PIT is -0.18 while UR->PIT is 0.39. I changed UL->PIT (C1:SUS-PR3_INMATRIX_2_1) to +0.18. Now the Q became 7. I continue optimizing the gains.

Was able to increase C1:SUS-PR3_SUSSIDE_GAIN: 50 -> 100.

Attachment 2 shows the step response of PR3. The change to the input matrix entry was very ad hoc; it would probably be good to run a systematic tuning. I have to leave now, but the IFO is in a very misaligned state. PR3/2 should be moved to bring it back.

Tue Sep 6 17:39:40 2022, Paco, Update, SUS, LO1 LO2 AS1 AS4 damping loop step responses

I tuned the local damping gains for LO1, LO2, AS1, and AS4 by looking at step responses in the DOF basis (i.e. POS, PIT, YAW, and SIDE). The procedure was:

1. Grab an ndscope with the error point signals in the DOF basis, e.g. C1:SUS-LO1_SUSPOS_IN1_DQ
2. Apply an offset to the relevant DOF using the alignment slider offset (or coil offset for the SIDE DOF) while being careful not to trip the watchdog. The nominal offsets found for this tuning are summarized below:

          POS    PIT    YAW    SIDE
   LO1    800    300    300    10000
   LO2    800    300    400   -10000
   AS1    800    500    500    20000
   AS4    800    400    400   -10000

3. Tune the damping gains until the DOF shows a residual Q with ~ 5 or more oscillations.
4. The new damping gains are below for all optics and their DOFs, and Attachments #1-4 summarize the tuned step responses as well as the other DOFs (cross-coupled).

          POS      PIT     YAW     SIDE
   LO1    10.000   5.000   3.000   40.000
   LO2    10.000   3.000   3.000   50.000
   AS1    14.000   2.500   3.000   85.000
   AS4    15.000   3.100   3.000   41.000

Note that during this test, FM5 has been populated for all these optics with a BounceRoll (notches at 16.6, 23.7 Hz) filter, apart from the Cheby (HF rolloff) and the 0.0:30 filters.

Fri Apr 22 07:01:58 2022, JC, Update, Coil Drivers, Adding Resistors and Reinstalling 8x

[Koji, JC]

Coil Drivers LO2, SR2, AS4, and AS1 have been updated and reinstalled into the system.

LO2 Coil Driver 1 (UL/LL/UR) now has R = 100 // 1.2k ~ 92 Ohm for CH1/2/3    Unit: S2100008

LO2 Coil Driver 2 (LR/SD) now has R = 100 // 1.2k ~ 92 Ohm for CH3           Unit: S2100530

SR2 Coil Driver 1 (UL/LL/UR) now has R = 100 // 1.2k ~ 92 Ohm for CH1/2/3    Unit: S2100614

SR2 Coil Driver 2 (LR/SD) now has R = 100 // 1.2k ~ 92 Ohm for CH3           Unit: S2100615

AS1 Coil Driver 1 (UL/LL/UR) now has R = 100 // 1.2k ~ 92 Ohm for CH1/2/3    Unit: S2100610

AS1 Coil Driver 2 (LR/SD) now has R = 100 // 1.2k ~ 92 Ohm for CH3           Unit: S2100611

AS4 Coil Driver 1 (UL/LL/UR) now has R = 100 // 1.2k ~ 92 Ohm for CH1/2/3    Unit: S2100612

AS4 Coil Driver 2 (LR/SD) now has R = 100 // 1.2k ~ 92 Ohm for CH3           Unit: S2100613

Fri Apr 22 12:09:51 2022, Anchal, Update, Coil Drivers, Please update DCC pages

Nice. Please put this information on the DCC pages of the coil driver units. You'll find links to all the units in the document tree LIGO-E2100447. For each page, click on "Change Metadata" in the left panel, describe the change made to the resistor (including the resistor name on the PCB and its previous and new values) under "Notes and Changes", add a link to your previous elog post, which has more details like photos, and upload an updated version of the circuit schematic by creating an annotation in the previous circuit schematic pdf. Every unit that has a serial number in the lab has a DCC page (if not, we should create one) where we should track all such hardware changes.

Wed Sep 7 14:32:15 2022, Anchal, Update, SUS, Updated SD coil gain to keep all coil actuation signals digitally same magnitude

The coil driver actuation strength for the face coils was increased by a factor of 13 in LO1, LO2, SR2, AS1, AS4, PR2, and PR3.

After the change, the damping gains for POS, PIT, and YAW were reduced accordingly, but not for SIDE, so the SIDE coil output filter module required a higher digital input to produce the same force at the output. This wouldn't be an issue until we try to correct for cross-coupling in the output matrix, where we would want the SIDE coil coefficients to have a similar order of magnitude as well. So I made the following changes so that the inputs to all coil output filters (face and side) are the same for these suspensions:

• SUSSIDE damping gains:
C1:SUS-LO1_SUSSIDE_GAIN: 40.0 -> 3.077
C1:SUS-AS1_SUSSIDE_GAIN: 85.0 -> 6.538
C1:SUS-AS4_SUSSIDE_GAIN: 41.0 -> 3.154
C1:SUS-PR2_SUSSIDE_GAIN: 150.0 -> 11.538
C1:SUS-PR3_SUSSIDE_GAIN: 100.0 -> 7.692
C1:SUS-LO2_SUSSIDE_GAIN: 50.0 -> 3.846
C1:SUS-SR2_SUSSIDE_GAIN: 140.0 -> 10.769
• SD coil gains:
C1:SUS-LO1_SDCOIL_GAIN: -1.0 -> -13.0
C1:SUS-AS1_SDCOIL_GAIN: 1.0 -> 13.0
C1:SUS-AS4_SDCOIL_GAIN: -1.0 -> -13.0
C1:SUS-PR2_SDCOIL_GAIN: -1.0 -> -13.0
C1:SUS-PR3_SDCOIL_GAIN: -1.0 -> -13.0
C1:SUS-LO2_SDCOIL_GAIN: -1.0 -> -13.0
C1:SUS-SR2_SDCOIL_GAIN: -1.0 -> -13.0

In short, 10 cts of input to C1:SUS-AS1_ULCOIL now creates the same actuation on UL as 10 cts of input to C1:SUS-AS1_SDCOIL does on SD.
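For reference, the rescaling above is the same factor of 13 applied in opposite directions, so the overall SIDE loop gain is unchanged; a minimal sketch of the arithmetic (values from the lists above):

old_gains = {  # optic: (SUSSIDE_GAIN, SDCOIL_GAIN) before the change
    "LO1": (40.0, -1.0), "AS1": (85.0, 1.0), "AS4": (41.0, -1.0),
    "PR2": (150.0, -1.0), "PR3": (100.0, -1.0), "LO2": (50.0, -1.0),
    "SR2": (140.0, -1.0),
}
for optic, (susside, sdcoil) in old_gains.items():
    # scale the SD coil gain up by 13 to match the stronger face-coil drivers,
    # and scale the damping gain down by 13 so the loop gain stays the same
    print("%s: SUSSIDE %.3f, SDCOIL %.1f" % (optic, susside / 13.0, sdcoil * 13.0))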


 Quote: http://nodus.ligo.caltech.edu:8080/40m/16791
[JC Koji]
To give more alignment range for the SUS alignment, we started updating the output resistors of the BHD SUS coil drivers. As Paco has already started working on LO1 alignment, we urgently updated the output Rs for the LO1 coil drivers. LO1 Coil Driver 1 now has R = 100 // 1.2k ~ 92 Ohm for CH1/2/3, and LO1 Coil Driver 2 has the same mod only for CH3. JC has taken the photos and will upload/update an elog/DCC. We are still working on the update for the SR2, LO2, AS1, and AS4 coil drivers. They are spread over the workbench right now; please leave them as they are for a while. JC is going to continue to work on them tomorrow, and then we'll bring them back to the rack.

 Quote: https://nodus.ligo.caltech.edu:8081/40m/16760
[Yuta Koji]
We took out the two coil driver units for PR3 and corrected the incorrect arrangement of the output Rs. The boxes were returned to the rack. To recover the alignment of the PR3 mirror, C1:SUS_PR3_SUSPOS_INMON / C1:SUS_PR3_SUSPIT_INMON / C1:SUS_PR3_SUSYAW_INMON were monitored. The previous values were {31150 / -31000 / -12800}. By moving the alignment sliders, the PIT and YAW values were adjusted to {-31100 / -12700}, while this change made the POS value 52340. The original gains for PR3 POS/PIT/YAW were {1, 0.52, 0.2}. They were supposed to be reduced to {0.077, 0.04, 0.015}; I ended up setting the gains to {0.15, 0.1, 0.05}. The side gain was also increased to 50.
----
Overall, the output R configuration for PR2/PR3 is summarized as follows. I'll update the DCC.
PR2 Coil Driver 1 (UL/LL/UR) / S2100616 / PCB S2100520 / R_OUT = (1.2K // 100) for CH1/2/3
PR2 Coil Driver 2 (LR/SD) / S2100617 / PCB S2100519 / R_OUT = (1.2K // 100) for CH3
PR3 Coil Driver 1 (UL/LL/UR) / S2100619 / PCB S2100516 / R_OUT = (1.2K // 100) for CH1/2/3
PR3 Coil Driver 2 (LR/SD) / S2100618 / PCB S2100518 / R_OUT = (1.2K // 100) for CH3

Thu Sep 8 11:54:37 2022, JC, Configuration, Lab Organization, Lab Organization

The arms in the 40m laboratory have now been sectioned off. Each arm has been divided into 15 sections. Along the Y arm, the sections are labelled "Section Y1" - "Section Y15"; for the X arm, they are labelled "Section X1" - "Section X15". Anything changed or moved will now be logged in the elog with its appropriate section.

Below is an example of Section X6.

Thu Sep 8 12:01:02 2022, JC, Configuration, Lab Organization, Lab Organization

The floor cable cover has been changed out for a new one. This is in Section X11.

Thu Sep 8 16:03:25 2022, Yehonathan, Update, LSC, Realignment, arm locking, gains adjustments

{Anchal, Yehonathan}

We came in this morning and found the IMC misaligned. The IMC was realigned and locked. This of course changed the input beam and sent us on a long alignment journey.

We first used the TTs to find the beam on the BHD DCPD/camera, since it is a single bounce off all optics.

Then, PR2/3 were used to find POP beam while keeping the BHD beam.

Unfortunately, that was not enough: the TTs and PRs have some degeneracy, which caused us to lose the REFL beam.

Realizing this, we went to the AS table to find the REFL beam. We found a ghost beam that deceived us for a while. Once we realized it was a ghost beam, we moved TT2 in pitch, while keeping the POP DCPD signal high with the PRs, until we saw a new beam on the viewing card.

We kept aligning TT1/2 and PR2/3 to maximize the REFL DCPD signal while keeping the POP DCPD signal high. We tried to look at the REFL camera, but soon realized that the REFL beam is too weak for the camera to see.

At that point we already had some flashing in the arms (we had centered the OpLevs in the beginning).

The arms were aligned and locked. We had some issues with the X arm not catching lock; we increased the gain and it locked almost immediately. To set the arm gains correctly, we took OLTFs (Attachment) and adjusted the XARM gain to 0.027 to put the UGF at 200 Hz.

Both arms locked with 200 Hz UGF:

From GPS: 1346713049
To GPS: 1346713300

From GPS: 1346713380
To GPS: 1346714166

HEPA turned off:
From GPS: 1346714298
To GPS: 1346714716

Wed Sep 14 15:40:51 2022, JC, Update, Treasure, Plastic Containers

There are brand new empty plastic containers located inside the shed that is outside in the cage. These can be used for organizing new equipment for CDS, or for cleanup after Wednesday meetings.

Thu Sep 15 11:13:32 2022, Anchal, Summary, ASS, YARM and XARM ASS restored

With limited proof so far (it has only worked a few times, but robustly), I'm happy to report that ASS on YARM and XARM is now working.

### What is wrong?

The issue is that PR3 is not placed at the correct position in the chamber. It is offset enough that, to send a beam through the center of ITMY to ETMY, the beam has to reflect off the edge of PR3, leading to some clipping. Hence our usual ASS takes us to this point and results in a loss of transmission due to clipping.

Solution: We cannot solve this issue without moving PR3 inside the chamber. Meanwhile, however, we can find new spot positions on ITMY and ETMY, off center in YAW only, which allow us to mode match properly without clipping. This means there will be some YAW suspension-noise-to-length coupling in this cavity, but thankfully the YAW degree of freedom stays relatively calm compared to PIT or POS for our suspensions. Similarly, we need to allow an offset in the ETMX beam spot position in YAW. We do not control the beam spot position on ITMX, due to the lack of enough actuators to control all 8 DOFs involved in matching the input beam to the cavity; instead, I found the right offset for the ITMX transmission error signal in YAW that works well.

I found these offsets empirically to be:

• C1:ASS-YARM_ETM_YAW_L_DEMOD_I_OFFSET: 22.6
• C1:ASS-YARM_ITM_YAW_L_DEMOD_I_OFFSET: 4.2
• C1:ASS-XARM_ETM_YAW_L_DEMOD_I_OFFSET: -7.6
• C1:ASS-XARM_ITM_YAW_T_DEMOD_I_OFFSET: 1

These offsets have been saved in the burt snap file used for running ASS.
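If the burt snapshot is ever unavailable, the same offsets can be restored by hand; a minimal sketch with pyepics (assumed installed), using the values listed above:

from epics import caput  # pyepics, assumed available

ass_offsets = {
    "C1:ASS-YARM_ETM_YAW_L_DEMOD_I_OFFSET": 22.6,
    "C1:ASS-YARM_ITM_YAW_L_DEMOD_I_OFFSET": 4.2,
    "C1:ASS-XARM_ETM_YAW_L_DEMOD_I_OFFSET": -7.6,
    "C1:ASS-XARM_ITM_YAW_T_DEMOD_I_OFFSET": 1.0,
}
for ch, val in ass_offsets.items():
    caput(ch, val)  # write each empirically found offset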

### Using ASS

I'll reiterate the procedure for running ASS here.

• Get YARM locked to the TEM00 mode with at least 0.4 transmission on C1:LSC-TRY_OUT
• Open sitemap->ASC->c1ass
• Click ! Scripts YARM -> Striptool to open a striptool monitor for ASS error signals.
• Click on ! Scripts YARM -> Dither ON to switch on the dither.
• Wait for all error signals to settle around zero (this should also maximize the transmission channel, currently reaching ~1.1).
• Click on ! Scripts YARM -> Freeze Offsets
• Click on ! Scripts YARM -> Offload Offsets
• Click on ! Scripts YARM -> Dither OFF.
• Then proceed to XARM. Get it locked to the TEM00 mode with at least 0.4 transmission on C1:LSC-TRX_OUT
• Open sitemap->ASC->c1ass
• Click ! Scripts XARM -> Striptool to open a striptool monitor for ASS error signals.
• Click on ! Scripts XARM -> Dither ON to switch on the dither.
• Wait for all error signals except C1:ASS-XARM_ITM_PIT_L_DEMOD_I_OUT16 and C1:ASS-XARM_ITM_YAW_L_DEMOD_I_OUT16 to settle around zero (this should also maximize the transmission channel, currently reaching ~1.1).
• Click on ! Scripts XARM -> Freeze Offsets
• Click on ! Scripts XARM -> Offload Offsets
• Click on ! Scripts XARM -> Dither OFF.

Thu Sep 15 16:19:33 2022, Yehonathan, Update, LSC, POX-POY noise budget

Doing a POX-POY noise measurement as a poor man's FPMI for diagnostic purposes. (Notebook in /opt/rtcds/caltech/c1/Git/40m/measurements/LSC/POX-POY/Noise_Budget.ipynb)

The arms were locked individually using POX11 and POY11. The optical gain was estimated by looking at the PDH signal of each arm: the slope was computed from the negative-peak-to-positive-peak counts, assuming that the arm length change between those peaks is lambda/(2*Finesse), where lambda = 1 um and the arm finesse is taken to be 450.

The Xarm peak-to-peak count is ~850, while the Yarm's is ~1100. This gives optical gains of 3.8e11 cts/m and 4.95e11 cts/m, respectively.

Next, the ETM actuation TFs were measured (attachments 1, 2) by exciting C1:LSC-ETMX/Y_EXC, measuring at C1:LSC-X/YARM_IN1_DQ, and calibrating with the optical gain.

Using these calibrations, I plot the POX-POY (attachment 3) and POX+POY (attachment 4) total noise measurements using two methods:

1. Plotting the calibrated IN and OUT channels of XARM-YARM (blue and orange). These two curves should cross at the UGF (200 Hz in this case).

2. Plotting the calibrated XARM-YARM IN channels times (1 - OLTF) (black); see the sketch below.
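A minimal numpy sketch of method 2, with placeholder spectra and OLTF (the real inputs are the calibrated IN-channel ASDs and the measured OLTF):

import numpy as np

f = np.logspace(1, 3, 200)                     # frequency vector [Hz]
oltf = -(200.0 / f) * np.exp(1j * np.pi / 4)   # toy 1/f OLTF with UGF ~ 200 Hz
asd_in = 1e-13 * (100.0 / f)                   # toy calibrated IN-channel ASD [m/rtHz]

# multiplying the in-loop error spectrum by |1 - G| undoes the loop
# suppression and recovers the total (free-running) noise estimate
asd_total = asd_in * np.abs(1.0 - oltf)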

The UGF bump can be clearly seen above the true noise in those plots.

However, the POX+POY OUT channel looks too high for some reason, pushing the crossing frequency between the IN and OUT channels to ~300 Hz. Not sure what is going on here.

Next, I will budget this noise with the individual noise contributions.

Mon Sep 19 17:02:57 2022, Paco, Summary, General, Power Outage 220916 -- restored all

## Restore lab

[Paco, Tega, JC, Yehonathan]

We followed the instructions here. There were no major issues, apart from the fb1 NTP server sync taking a long time after rebooting once.

## ETMY damping

[Yehonathan, Paco]

We noticed that ETMY had too much RMS motion when the OpLevs were off. We played with it a bit and noticed two things: the Cheby4 filter was on for SUS_POS, and the limiter on ULCOIL was on with a limit of 0. We turned both off.

We did some damping tests and observed that the PIT and YAW motions were overdamped. We tuned the filter gains as follows:

SUSSIDE_GAIN 1250->50

SUSPOS_GAIN 200->150

SUSYAW_GAIN 60->30

These actions seem to have made things better.

Tue Sep 20 07:03:04 2022, Paco, Summary, General, Power Outage 220916 -- restored all

[JC, Tega, Paco ]

I would like to mention that during the vacuum startup, after the AUX pump was turned on, Tega and I walked away while the pressure was decreasing. While we were away, valves opened on their own. Nobody was near the VAC desktop during this. I asked Koji if this may be an automatic startup, but he said the valves shouldn't open unless they are explicitly told to do so. Has anyone encountered this before?


Tue Sep 20 15:40:07 2022, yehonathan, Update, BHD, Trying AC lock

We resumed the LO phase locking work. MICH was locked with an offset of 80 cts. The LO and AS beams were aligned to maximize the BHD readout visibility on ndscope.

We locked the LO phase on a fringe (DC locking), actuating on LO1.

Attachment 1 shows the BHD readout (DCPD_A_ERR = DCPD_A - DCPD_B) spectrum with and without fringe locking while the LO2 line at 318 Hz is on. It can be seen that without the fringe locking the dither line is buried in the A-B noise floor, probably due to upconversion from multiple fringing. We figured that trying to directly dither-lock the LO phase might be too tricky, since we cannot resolve the dither line while the LO phase is unlocked.

We tried to hand off the lock from the fringe lock to the AC lock in the following way: since the AC error signal reads the derivative of the BHD readout, it is least sensitive to the LO phase when the LO phase is locked on a dark fringe; we therefore offset the LO phase to obtain a usable AC error signal. The LO phase offset was set to ~40 cts (the peak-to-peak count when the LO phase is uncontrolled is ~400 cts).

We looked at the "demodulated" signal of LO1, from which the fringe-locking error signal is derived (0 Hz modulation frequency, 0 amplitude), and the demodulated signal of LO2, where a ~700 Hz line is applied. We dithered the LO phase at ~50 Hz to create a clear signal in order to compare the two error signals. Although the 50 Hz signal was clearly seen in the fringe-lock error signal, it was completely unresolved in the LO2 demodulated signal, no matter how hard we drove the 700 Hz line and no matter what demodulation phase we chose. Interestingly, changing the demodulation phase shifted the noisy LO2 demodulated signal by some constant. Will post a picture later.

Could there be some problem with the modulation-demodulation model? We should check again, but I'm almost certain we saw the 700 Hz line with an SNR of ~100 in diaggui; even with the small LO offset, changes in the 700 Hz signal phase should have been clearly seen in the demodulated signal. Maybe we should also check that we see the 50 Hz sidebands around the 700 Hz line in diaggui, to be sure.
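To sanity-check the modulation-demodulation model offline, here is a minimal sketch of digital lock-in demodulation of a dither line (toy data, not the actual c1hpc implementation):

import numpy as np

fs, f_dither, T = 16384, 700.0, 10.0       # sample rate [Hz], line freq [Hz], duration [s]
t = np.arange(0, T, 1.0 / fs)
x = np.sin(2 * np.pi * f_dither * t + 0.3) + np.random.randn(t.size)  # toy readout

# demodulate against quadrature local oscillators and average (crude low-pass)
i_out = 2 * np.mean(x * np.cos(2 * np.pi * f_dither * t))
q_out = 2 * np.mean(x * np.sin(2 * np.pi * f_dither * t))
amp, phase = np.hypot(i_out, q_out), np.arctan2(i_out, q_out)  # recovers ~1 and ~0.3 rad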

Tue Sep 20 18:18:07 2022, Anchal, Summary, SUS, ETMX, ETMY damping loop gain tuning

[Paco, Anchal]

We performed local damping loop tuning for the ETMs today, to ensure all damping loops show a residual Q of 3-5. To do so, we applied a step on C1:SUS-ETMX/Y_POS/PIT/YAW_OFFSET, looked for oscillations at the error point of the damping loops (C1:SUS-ETMX/Y_SUSPOS/PIT/YAW_IN1), and increased or decreased the gains until we saw 3-5 oscillations before the error signal settled to a new value. For the side loop, we applied an offset at C1:SUS-ETMX/Y_SDCOIL_OFFSET and looked at C1:SUS-ETMX/Y_SUSSIDE_IN1. The changes in damping gains are:

C1:SUS-ETMX_SUSPOS_GAIN  : 61.939 -> 61.939 (unchanged)
C1:SUS-ETMX_SUSPIT_GAIN  : 4.129  -> 4.129 (unchanged)
C1:SUS-ETMX_SUSYAW_GAIN  : 2.065  -> 7.0
C1:SUS-ETMX_SUSSIDE_GAIN : 37.953 -> 50

C1:SUS-ETMY_SUSPOS_GAIN  : 150.0  -> 41.0
C1:SUS-ETMY_SUSPIT_GAIN  : 30.0   -> 6.0
C1:SUS-ETMY_SUSYAW_GAIN  : 30.0   -> 6.0
C1:SUS-ETMY_SUSSIDE_GAIN : 50.0   -> 300.0

Wed Sep 21 10:10:22 2022, Yehonathan, Update, CDS, C1SU2 crashed

{Yehonathan, Anchal}

Came in this morning to find that all the BHD optics watchdogs had tripped. The watchdogs were restored, but all the ADCs were stuck.

We realized that the C1SU2 model had crashed.

We ran /scripts/cds/restartAllModels.sh to reboot all the machines. All the machines rebooted and were burt-restored to yesterday's state at around 5 PM.

The optics seem to have returned to good alignment and are damped.

Fri Sep 23 10:05:38 2022, JC, Update, VAC, N2 Interlocks triggered

[Chub, Anchal, Tega, JC]

After replacing an empty tank this morning, I heard a hissing sound coming from the nozzle area. It turned out to be coming from the copper tubing, which was slightly broken; this was confirmed with soapy-water bubbles. This caused the N2 pressure to drop and the vacuum interlocks to be triggered. Chub and I went ahead and replaced the fitting and connected this back to normal. Anchal and Tega used the vacuum startup procedure to restore the vacuum to normal operation.

Adding a screenshot as the pressure is decreasing now.

Wed Jul 20 11:58:45 2022, Paco, Summary, General, Jenne laser kaput?

[Paco, Yehonathan, JC]

We were trying to set up the Jenne laser to characterize the response of three 1811s that Yehonathan is using for his WOPA experiment (in QIL). We hooked up a ~5 VDC power supply to the bias tee and looked to see if there was any DC response in the REF PD. We used a DB9 breakout board and a DB9 cable, and saw some current being drawn. The DC current was a bit too high (500 mA), so we turned the DC voltage off, and realized the VDC power was reversed, probably along the DB9 cable, which we didn't check before. As we flipped the power supply leads and turned power back on, we could no longer see any current, even though the voltage was now right (or was it???). We would like to debug this laser, and continue using it if it still works (!), but there is negligible documentation either here or in the wiki, so if there are any known places to look it would be helpful to know them.

Wed Jul 20 14:12:07 2022, Paco, Summary, General, Jenne laser kaput!

[Koji, Yehonathan, Paco]

Koji pointed out that this laser was always driven with a current driver (which was not nearby), and after finding it on one of the rolling carts, we hooked up the system but found that the laser driver displayed open circuit near the usual 20mA operating point. We therefore have to conclude that this laser is no more. We will look for a reasonable replacement.


Wed Jul 20 15:58:52 2022, Koji, Summary, General, Jenne laser kaput!

For troubleshooting, the proper laser driver (found beneath the AG network analyzer) was connected.
A current of ~1 mA was provided, and the driver detected an "open circuit", which means the laser diode was busted.

https://dcc.ligo.org/LIGO-T060240

The laser diode in the parts list is: "GTRAN GaAs Strained QW Laser Diode, Part # LD-1060".

Mon Jul 25 09:05:50 2022, Paco, Summary, General, Testing 950nm laser found in trash pile

[Paco, Yehonathan]

==== Late elog from Friday ====

Koji provided us with a QFLD-950-3S (QPHOTONICS) salvaged from Aidan's junk pile (LD is alive according to him). We tested the Jenne laser setup with this just to decide if we should order another one, and it worked.

The laser driver anode and cathode pins (8/9 and 4/5, respectively) on the rear DB9 port of the ILX Lightwave LDX-3412 driver were connected to the corresponding anode and cathode pins in the laser package (5 and 9; note the numbers are reversed between driver and laser). Then, interlock pins 1 and 2 on the driver were shorted to enable operation. This is all illustrated in Attachments #1-2.

After setting a current limit of 27.6 mA in the driver, we slowly increased the actual current to ~19 mA, until we could see light on a beam card. We can go ahead and get a 1060 nm replacement.

Fri Aug 5 17:03:31 2022, Yehonathan, Summary, General, Testing 950nm laser found in trash pile

I set out to test the actuation bandwidth of the 950 nm laser. I hooked the laser to the output of the bias tee of the PD testing setup, and connected the fiber coming out of the laser to the fiber port of the 1611 REF PD.

The current source was connected to the DB9 input of the PD testing setup. I turned on the current source and set the current to 20 mA. I measured ~2 V at the REF PD DC port with a Fluke.

I connected the AC port of the bias tee to the RF source of the network analyzer and the AC port of the REF PD to the B port of the network analyzer. Attachment 2 shows the setup.

I took a swept sine measurement (attachment) from 100kHz to 500MHz.

It seems like the bandwidth is ~1 MHz, which is weird considering that the spec sheet says the pulse rise time is 0.5 ns (a 0.5 ns rise time should correspond to a bandwidth of roughly 0.35/tr ~ 700 MHz). To make sure we were not limited by the bandwidth of the cables, I looped the source and the input of the network analyzer using the cables from the previous measurement and observed that their bandwidth is a few hundred MHz.

Wed Sep 28 19:15:56 2022, Koji, Update, General, Testing 950nm laser found in trash pile

I don't know what was wrong with the past setup, but the 950 nm laser (QPHOTONICS QFLD-950-3S) just worked fine up to ~300 MHz with basically the same setup.

A 20 dB coupler picks off a small amount of the driving signal from the source signal of the network analyzer; this was fed to CHR. The fiber-coupled New Focus PD RF output was connected to CHA.
The calibration of the response was done with the thru response (connecting the source signal to CHA via all the long cables).

Attachment 1 shows the response CHA/CHR. The output is somewhat flat up to 20 MHz and rolls off towards 100 MHz, but is still active up to 500 MHz, as long as the normalization with the New Focus PD holds.
The structure around 200-300 MHz changes with how the wires of the clips are arranged. I twisted and coiled them as shown and the notch disappeared. For the permanent setup we should keep the lines as short as possible and take care of stray capacitance and inductance.

Attachment 2 shows the setup at the network analyzer side. Nothing special.

Attachment 3 shows the setup at the laser side. The DB9 connector on the Jenne laser has the negative output of the LD driver connected to the coax core and the positive output connected to the shield of the coax. Therefore the coax core (red clip) has to be connected to Pin 9 and the coax shield (black clip) to Pin 5.

Sat Oct 1 13:09:49 2022, Anchal, Update, IMC, WFS turned on

I turned on WFS on IMC at:

PDT: 2022-10-01 13:09:18.378404 PDT
UTC: 2022-10-01 20:09:18.378404 UTC
GPS: 1348690176.378404

The following channels are being saved in frames at 1024 Hz rate:

• C1:IOO-MC_TRANS_PIT_ERR (Same as C1:IOO-MC_TRANS_PIT_OUT)
• C1:IOO-MC_TRANS_YAW_ERR (Same as C1:IOO-MC_TRANS_YAW_OUT)
• C1:IOO-MC_TRANS_SUM_ERR (Same as C1:IOO-MC_TRANS_SUMFILT_OUT)

We can keep it running over the weekend, as we will not use the interferometer. I'll keep an eye on it with occasional log-ins. We'll post the time when we switch it off again.

The IMC lost lock at:

UTC    Oct 03, 2022    01:04:16    UTC
Central    Oct 02, 2022    20:04:16    CDT
Pacific    Oct 02, 2022    18:04:16    PDT

GPS Time = 1348794274

The WFS loops kept running and thus took the IMC to a misaligned state. Between the above two times, the IMC was locked continuously, with only very brief lock-loss events, and had all WFS loops running.

Mon Oct 3 08:35:59 2022, Tega, Update, IMC, Adding IMC channels to frames for NN test

[Rana]

For the upcoming NN test on the IMC, we need to add some more channels to the frames. Can someone please add the MC2 TRANS SUM, PIT, and YAW at 256 Hz, and then make sure they're in the frames?

And even though it's not working correctly, it would be good if someone could turn the MC WFS on for a little while. I'd just like to get some data to test some code. If it's easy to roughly close the loops, that would be helpful too.

[Anchal]

Currently, none of these channels are being written to frames. From the simulink model, it seems the channels:

• C1:IOO-MC_TRANS_SUMFILT_OUT_DQ

• C1:IOO-MC_TRANS_PIT_OUT_DQ

• C1:IOO-MC_TRANS_YAW_OUT_DQ

are supposed to be DQed, but are not present in the /opt/rtcds/caltech/c1/chans/daq/C1MCS.ini file. I tried simply adding these channels to the file and rerunning the daqd_ services, but that caused a 0x2000 error on the c1mcs model. In my attempt, I did not know what chnnum to give these channels, so I omitted it; maybe that is the issue.

The only way I know to fix this is to make and install the c1mcs model again, which would bring these channels into the C1MCS.ini file. But we'll have to run activateDQ.py if we do that, and I am not totally sure it is in running condition right now. @Christopher Wipf do you have any suggestions?

[Rana]

Aren't they all filtered? If so, perhaps we can choose whatever is the equivalent naming at the LIGO sites, rather than roll our own again.

@Tega Edo can we run activateDQ.py or will that break everything now?

[Tega]

@Rana Adhikari Looking into this now.

@Anchal Gupta The only problem I see with activateDQ.py is the use of the deprecated print statement, i.e. print var instead of print(var). After fixing that, it runs OK and does not change the input INI files, as they already have the required channel names. I have created a temporary folder, /opt/rtcds/caltech/c1/chans/daq/activateDQtests/, populated with copies of the original INI files, a modified version of activateDQ.py that does not overwrite the original input files, and a script file difftest.sh that compares the input and output files, so we can test the functionality of activateDQ.py in isolation. Furthermore, looking through the code suggests that all is well. Can you look at what I have done to check that this is indeed the case? If so, your suggestion of rebuilding and installing the updated c1mcs model and running activateDQ.py afterward should do the trick.

I tested the code with:

cd /opt/rtcds/caltech/c1/chans/daq/activateDQtests/

./activateDQ.py

which creates output files with an _ prefix (for example, _C1MCS.ini is the output file for C1MCS.ini). Then I ran

./difftest.sh

to compare all the input files with their corresponding output files.

Note that the channel names you are proposing would be changed by activateDQ.py, i.e.

C1:IOO-MC_TRANS_SUMFILT_OUT_DQ -> C1:IOO-MC_TRANS_SUM_ERR

C1:IOO-MC_TRANS_PIT_OUT_DQ -> C1:IOO-MC_TRANS_PIT_ERR

C1:IOO-MC_TRANS_YAW_OUT_DQ -> C1:IOO-MC_TRANS_YAW_ERR

My question is this: why aren't we using the correct channel names in the first place, so that we have less work to do later on when we finally decide to stop using this post-build script?
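For reference, a hypothetical sketch of the kind of renaming being discussed, based only on the three examples above (the actual activateDQ.py does more than this):

import re

def rename_dq(chan):
    # map e.g. C1:IOO-MC_TRANS_PIT_OUT_DQ -> C1:IOO-MC_TRANS_PIT_ERR
    m = re.match(r"(C1:IOO-MC_TRANS_)(SUMFILT|PIT|YAW)_OUT_DQ$", chan)
    if m is None:
        return chan
    dof = "SUM" if m.group(2) == "SUMFILT" else m.group(2)
    return m.group(1) + dof + "_ERR"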

[Anchal]

Yeah, I found that these ERR channels are acquired and stored. I don't think we should do this either; I'm not sure what the original motivation for this change was. I tried commenting out this part of activateDQ.py and remaking and reinstalling c1mcs, but it seems that activateDQ.py is called automatically as a post-build script on install, and it uses some other copy of this file, as my changes did not take effect and the DQ name change still happened.

[Tega]

Ah, we encountered the same puzzle as well. Chris found out that our models have SCRIPT=activateDQ.py embedded in the cds parameter block description; see the attached image. We believe this is what triggers the post-build call to activateDQ.py. As for the file location, modern rtcds would have it in /opt/rtcds/caltech/c1/post_build, but I am not sure where ours is located. I did a quick search but could not find it in the usual place, so I looked around for a bit and found this:

controls@rossa> find /opt/rtcds/userapps/ -name "activateDQ.py"

/opt/rtcds/userapps/trunk.bak/cds/c1/scripts/activateDQ.py

/opt/rtcds/userapps/tags/H2OAT_RCG2.5.1/cds/c1/scripts/activateDQ.py

/opt/rtcds/userapps/branches/StanfordGuardianDev/cds/c1/scripts/activateDQ.py

/opt/rtcds/userapps/branches/branch-2.3/cds/c1/scripts/activateDQ.py

/opt/rtcds/userapps/branches/branch-2.4/cds/c1/scripts/activateDQ.py

/opt/rtcds/userapps/trunk/cds/c1/scripts/activateDQ.py

My guess is the last one /opt/rtcds/userapps/trunk/cds/c1/scripts/activateDQ.py.

Maybe we can ask @Yuta Michimura since he wrote this script?

Anyway, we could also try removing SCRIPT=activateDQ.py from the cds parameter block description to see if that stops the post-build script call, but keep in mind that doing so would also stop the update of the OSEM and oplev channel names. That way we would at least know which script is being used, since we would have to run it after every install (this is a bad idea).

controls@c1sus:~ 0$ env | grep script
CDS_SCRIPTS_PATH=:/opt/rtcds/userapps/release/cds/c1/scripts:/opt/rtcds/userapps/release/cds/common/scripts:/opt/rtcds/userapps/release/isc/c1/scripts:/opt/rtcds/userapps/release/isc/common/scripts:/opt/rtcds/userapps/release/sus/c1/scripts:/opt/rtcds/userapps/release/sus/common/scripts

It looks like the guess was correct. Note that in the newer version of rtcds, we can use rtcds env instead of env to see what is going on.

Mon Oct 3 13:11:22 2022, Yehonathan, Update, BHD, Some comparison of LO phase lock schemes

I pushed a notebook and a Finesse model for comparing different LO phase locking schemes. The notebook is at https://git.ligo.org/40m/bhd/-/blob/master/controls/compare_LO_phase_locking_schemes.ipynb.

Here's a description of the Finesse modeling: I use a 40m kat model, https://git.ligo.org/40m/bhd/-/blob/master/finesse/C1_w_initial_BHD_with_BHD55.kat, derived from the usual 40m kat file. There I added EOMs (in the spaces between the BS and ITMs, and in front of LO2) to simulate audio dithering. A PD was added at a 5% pickoff from one of the BHD ports to simulate the RFPD recently installed on the ITMY table.

First I find the nominal LO phase by shaking MICH and maximizing the BHD response as a function of the LO phase (attachment 1). Then, I run another simulation where I shake the LO phase at some arbitrary frequency and measure the response of the different demodulation schemes at the RFPD and at the BHD readout. The optimal responses are found by using the 'max' keyword instead of specifying the demodulation phase; this uses the demodulation phase that maximizes the signal. For example, to extract the signal in the 2 RF sideband scheme I use:

pd3 BHD55_2RF_SB $f1 max $f2 max $fs max nPickoffPDs

I plot these responses as a function of LO phase relative to the nominal phase, divided by 90 degrees (attachment 2). The schemes are:

1. Two RF sidebands, where the 11 MHz and 55 MHz sidebands at the LO and AS ports are used.

2. A single RF sideband (11/55 MHz) together with the LO carrier. As expected, this scheme is useful only when trying to detect the amplitude quadrature.

3. Audio dithering of MICH, used together with one of the LO RF sidebands. The actuation strength is chosen by taking the BS actuation TF, 1e-11 m/cts * (50/f)**2, and using 10000 cts, giving an amplitude of ~3 nm for the ITMs (see the sketch after this list).

For LO actuation I can use 13 times more actuation strength, because its coil drivers' output current is 13 times more than the old ones.

4. Double audio dithering of LO2+MICH, detected directly at the BHD readout (attachment 3).
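A minimal sketch of the dither-amplitude arithmetic quoted in item 3 (the dither frequency here is an assumed example):

f_dither = 318.75                        # Hz; assumed example frequency
tf_bs = 1e-11 * (50.0 / f_dither) ** 2   # BS actuation TF [m/cts], as quoted above
amp_cts = 10000.0
amp_m = tf_bs * amp_cts                  # ~2.5e-9 m, i.e. a few nm of dither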

Without noise considerations, it seems like double audio dithering is by far the best option and audio+RF is the next best thing.

The next thing to do is to make some noise models in order to make the comparison more concrete.

This noise model will include input noises, residual MICH motion, and laser noise. Displacement noise will not be included, since it is the thing we want to detect.

Tue Sep 13 14:12:03 2022, Yehonathan, Update, BHD, Trying LO phase locking again

[Paco, Yehonathan]

Summary: We locked the LO phase using the DC PD (A - B) error point without saturating the control point, i.e. not a "bang-bang" control.

Some suspensions were improved, so we figured we should go back to trying to lock the LO phase.

We misaligned the ETMs and locked MICH using AS55. We put in a small MICH offset by setting C1:LSC-MICH_OFFSET = -80.

AS and LO beams were aligned to overlap by maximizing the BHD signal visibility.

BHD DCPDs were balanced by misaligning the AS beam and using the LO beam only.

We measured the transfer function between the DCPDs and found the coherence is 1 at 1 Hz (because of seismic motion), so we measured the ratio between them there, which is 0.3 dB.
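The balancing arithmetic, as a one-line sketch:

balance_gain = 10 ** (0.3 / 20.0)   # 0.3 dB ratio -> ~1.035 gain on the weaker DCPD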

The AS beam was aligned again to overlap with the LO beam. For the work below, we used the largest MICH offset we could apply before losing the lock, +90. This has the effect of increasing our optical gain.

We started using the HPC LOCK IN screen to dither POS on the different BHD SUS. We first started with AS1 (freq = 137.137 Hz, gain = 1000). The sensing matrix element was chosen accordingly (from the demodulated output) and fed to LO_PHASE; because this affected the AS port alignment, it was of course not the best choice. We moved over to LO2 (freq = 318.75 Hz, gain = 1000), but for the longest time couldn't see the dither line at the error point (A-B).

After this we added comb60 notch filters to the DCPD_A and DCPD_B input signals. We ended up just feeding the (A-B) error point to LO1, and trying to lock mid-fringe, which succeeded without saturation. The gain of the LO_PHASE filter was set to 0.2 (previously 20; attributable to the newly unclipped LO beam intensity?), and again we only enabled FM4 and FM5. After this, a dither line at 318.75 Hz finally appeared in the A-B error point! To be continued...

Thu Sep 15 21:12:53 2022, Paco, Update, BHD, LO phase "dc" control

Locked the LO phase with a MICH offset of +91. The LO is mid-fringe (locked using the A-B zero crossing), so it's far from being "useful" for any readout, but we can at least look at residual noise spectra.

I spent some time playing with the loop gains, filters, and overall lock acquisition, and established a quick TF template at Git/40m/measurements/BHD/HPC_LO_PHASE_TF.xml

So far, it seems that actuating on the LO phase through LO2 POS requires 1.9 times more strength (with the same "A-B" DC sensing). With FM4 and FM5 engaged, actuating on LO2 with a filter gain of 0.4 closes the loop robustly. Then FM3 and FM6 can be enabled and the gain stepped up to 0.5 without problems. The measured UGF (Attachment #1) here was ~20 Hz. It can be increased to 55 Hz, but then the loop quickly becomes unstable. I added FM1 (boost) to the HPC_LO_PHASE bank but didn't get to try it.

The noise spectra (Attachment #2) are still uncalibrated... but have been saved under Git/40m/measurements/BHD/HPC_residual_noise_spectra.xml

Tue Sep 27 10:50:11 2022, Paco, Update, BHD, calibrated LO phase noise

Locked the LO phase to the ITMX single-bounce beam at the AS port, using the DCPD (A-B) error point and actuating on LO1 POS. For this the gain was tuned from 0.6 to 4.0. A rough Michelson fringe calibration gives a counts-to-meters conversion of ~0.212 nm/count, and the OLTF looks qualitatively like the one in a previous measurement (~20 dB at 1 Hz, UGF = 30 Hz). The displacement was then converted to phase using lambda = 1e-6 m. I'm not sure what the requirement is on the LO phase (G1802014 says 1e-4 rad/rtHz at 1 Hz, but our requirement doc says 1 to 20 nrad/rtHz (rms?))... anyways, with this rough calibration we are still off in either case.

The balancing gain is obvious at DC in the individual DCPD spectra, and the common-mode rejection in the (A-B) signal is also appreciable. I'll keep working on refining this, and on implementing a different control scheme.
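A rough sketch of the calibration chain, counts -> displacement -> LO phase; the 2*pi/lambda conversion is my assumption about the convention (the exact geometric factor depends on the beam path):

import numpy as np

cal_m_per_cts = 0.212e-9   # m/count, from the Michelson fringe calibration above
lam = 1e-6                 # m, as quoted above
rad_per_cts = 2 * np.pi * cal_m_per_cts / lam   # ~1.3e-3 rad/count (assumed convention)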

Wed Sep 28 16:37:26 2022, Paco, Update, BHD, calibrated LO phase noise; update

[yuta, paco]

Update: the high-frequency (>100 Hz) drop is of course not real; it comes from a 4th-order LP filter in the HPC demod I filter, which I hadn't accounted for. Furthermore, we went through the calibration factors and corrected a factor of 2 in the optical gain. I also added the CLTF to show the in-loop and out-of-loop error, respectively. The updated plot, though not final, is in Attachment #1.

Wed Sep 28 21:54:08 2022, Paco, Update, BHD, calibrated LO phase noise; update

Repeated the LO phase noise measurement, this time with the LO - ITMY single bounce, and with a couple of fixes Koji hinted at, including:

1. The DEMOD angle was the missing piece! The previous error point showed lower noise than the individual DCPDs because the demodulation angle had not been checked. I corrected it so that the error point in LO_PHASE control corresponds exactly to the LO-ITMY single-bounce fringe. With this, the gain on the servo had to be adjusted from 4.00 to 0.12, still using FM4 and FM5, and this time also FM8 (BLP600).
2. Turned off the 60 Hz harmonic comb notches on the DCPDs; they are unnecessary.
3. Acquired noise spectra down to 0.1 Hz, with 0.03 Hz bin width, to increase the resolution and identify resonant SUS noise near 1 Hz.

This time, after alignment, the fringe amplitude was 500 counts. Attachment #1 shows the updated plot with the calibrated noise spectra for the individual DCPD signals A and B, as well as their rms values. Attachment #2 shows the error point, the in-loop, and the estimated out-of-loop spectra, with their rms as well. The peak at ~240 Hz is quite noticeable in the error-point time series and dominates the high-frequency rms noise. The estimated rms out-of-loop noise is ~9.2 rad, down to 100 mHz.

Fri Sep 30 20:18:55 2022, Paco, Update, BHD, LO phase noise with different actuation points

[Paco, Koji]

We took LO phase noise spectra actuating on four different optics: LO1, LO2, AS1, and AS4. The servo was not changed during this time (gain 0.2), and we also took a noise spectrum without any light on the DCPDs. The plot is shown in Attachment #1, calibrated in rad/rtHz, together with the rms values for the different suspension actuation points. The best one appears to be AS1 from this measurement, and all the optics seem to show the same 270 Hz (actually 268 Hz) resonant peak.

## 268 Hz noise investigation

Koji suspected the observed noise peak belongs to some servo oscillation, perhaps of mechanical origin, so we first monitored the amplitude in an exponentially averaging spectrum. The noise didn't really seem to change much, so we decided to try adding a bandstop filter around 268 Hz. After the filter was added in FM6, we turned it on and monitored the peak height as it began to fall slowly. We measured the half-decay time to be 264 seconds, which implies an oscillation with Q = 4.53 * f0 * tau ~ 3.2e5 (see the sketch below). This may or may not be mechanical, and further investigation might be needed; but if it is mechanical, it might explain why the peak persisted in Attachment #1 even when we changed the actuation point. Anyway, we saw the peak drop ~20 dB after more than half an hour... After a while, we noticed that the 536 Hz peak, its second harmonic, was persisting; even the third harmonic was visible.
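Where the 4.53 comes from: the amplitude of a mode with quality factor Q decays as exp(-pi*f0*t/Q), so a measured half-decay time tau gives Q = pi*f0*tau/ln(2) ~ 4.53*f0*tau. A one-line check with the numbers above:

import numpy as np

f0, tau_half = 268.0, 264.0                # Hz, s (measured above)
Q = np.pi * f0 * tau_half / np.log(2)      # ~3.2e5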

So this may be the LO1 violin mode and friends.

We should try to repeat this measurement after the oscillation has stopped, maybe looking at the spectra before we close the LO_PHASE control loop, then closing it carefully with our violin output filter on, and then move on to other optics to see if they also show this noise.

Mon Oct 3 15:19:05 2022, Paco, Update, BHD, LO phase noise and control after violin mode filters

[Anchal, Paco]

We started the day by taking a spectrum of C1:HPC-LO_PHASE_IN1, the BHD error point, and confirming the absence of the 268 Hz peaks believed to be violin modes of LO1. We then locked the LO phase by actuating on LO2, and then on AS1; we couldn't get a stable loop with AS4 this morning. In all of these trials we checked whether the noise increased at 268 Hz or its harmonics, and luckily it didn't. We then decided to add the necessary output filters to avoid exciting these violin modes. The added filters are in the C1:SUS-LO1_LSC bank, slots FM1-3, and comprise bandstop filters at the peaks observed previously (268, 536, and 1072 Hz); bode plots of the foton transfer functions are shown in Attachment #1. We made sure we weren't adding too much phase lag near the UGF (~1 degree @ 30 Hz).
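For illustration, bandstops like these can be prototyped with scipy before committing them to foton (the Q here is an arbitrary assumption; the real filters were designed in foton):

from scipy import signal

fs = 16384  # model rate [Hz], assumed
notches = [signal.iirnotch(f0, Q=30.0, fs=fs) for f0 in (268.0, 536.0, 1072.0)]
# each entry is (b, a) coefficients for a notch at f0; cascade them in series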

We repeated the LO phase noise measurement actuating on LO1, LO2, and AS1, and observed no noise peaks related to 268 Hz this time. The calibrated spectra are in Attachment #2. Now the spectra look very similar to one another, which is nice. The rms is still best when actuating with AS1.

[Paco]

After the above work ended, I tried enabling FM1-3 in the C1:HPC-LO_PHASE control filters. These filters boost the gain to suppress noise at low frequencies. I carefully enabled them while actuating on LO1, and managed to suppress the noise by another factor of 20 below the UGF of ~30 Hz. Attachment #3 shows a screenshot of the uncalibrated noise spectra for (1) unsuppressed (black, dashed), (2) suppressed with FM4-5 (blue, solid), and (3) boosted FM1-5 suppression (red).

Next steps:

• Compare LO-ITMY and LO-ITMX single bounce noise spectra and MICH.
• Compare DC locking scheme versus BH55 once it's working.

Thu Aug 4 19:01:59 2022, Tega, Update, Computers, Front-end machine in supermicro boxes

Koji and JC looked around the lab today and found some supermicro boxes, which I was told to look into to see if they contain any useful computers.

Boxes next to Y-arm cabinets (3 boxes: one empty)

We were expecting to see a smaller machine in the first box (like the top machine in attachment 1), but it turns out to contain the front-end we need; see the bottom machine in attachment 1. This is the same machine as c1bhd, currently on the teststand. Attachment 2 is an image of the machine in the second box (maybe a new machine for the framebuilder?). The third box is empty.

Boxes next to X-arm cabinets (3 boxes)

Attachment 3 shows the 3 boxes, each of which contains the same FE machine we saw earlier at the bottom of attachment 1. The middle box contains the note shown in attachment 4.

Box opposite Y-arm cabinets (1 empty box)

In summary, it looks like we have 3 new front-ends, 1 new front-end with a networking issue, and 1 new tower machine (possibly a framebuilder replacement).

Mon Aug 8 17:16:51 2022, Tega, Update, Computers, Front-end machine setup

Added 3 FE machines (c1ioo, c1lsc, c1sus) to the teststand, following the instructions in elog 15947. Note that we also updated /etc/hosts on chiara by adding the names and IPs of the new FEs, since we wish to ssh from there, given that chiara is where we land when we connect to c1teststand.

Two of the FE machines (c1lsc & c1ioo) have the 6-core X5680 @ 3.3 GHz processor, and their BIOS were already mostly configured, because they came from LLO I believe. The third machine (c1sus) has the 6-core X5650 @ 2.67 GHz processor and required a complete BIOS configuration according to the doc.

Next step: I think the next step is to get the latest RTS working on the new fb1 (tower machine), then boot the front-ends from there.

KVM switch note:

All the current front-ends have PS/2 keyboard and mouse connectors, except for fb1, which only has USB ports. So we may not be able to connect to fb1 using a PS/2 KVM switch that works for all the current front-ends. The new tower machine does have a PS/2 connector, so if we decide to use it as the boot server and framebuilder, we should be fine.

Wed Aug 10 20:51:14 2022, Tega, Update, Computers, CDS upgrade Front-end machine setup

Here is a summary of what needs doing following the chat with Jamie today.

Jamie brought over the KVM switch shown in the attachment and I tested all 16 ports and 7 cables and can confirm that they all work as expected.

TODO

1. Do a rack space budget to get a clear picture of how many front-ends we can fit into the new rack

2. Look into what needs doing, and how much effort would be needed, to clear rack 1X7 and use that instead of the new rack. The power-down on Friday presents a good opportunity to do this work on Monday, so get the info ready before then.

3. Start mounting front-ends, KVM and dolphin network switch

Tue Aug 16 18:22:59 2022, Tega, Update, Computers, c1teststand rack mounting for CDS upgrade

[Tega, Yuta]

I keep getting confused about the purpose of the teststand. The view I am adopting going forward is to use it as a platform for testing the compatibility of the new hardware upgrade, instead of thinking of it as an independent system that works with the old hardware.

The initial idea of clearing 1X7 cannot be done for now, because I missed the deadline for providing a detailed enough plan before Monday's power-up of the lab. So we are just going to go ahead and use the new rack as initially intended, and get the latest hardware and software tested there.

We mounted the DAQ, subnet and dolphin IX switches; see attachment 1. The mounting ears that came with the dolphin switch did not fit and so could not be used for mounting. We looked around the lab and decided to use one of the NavePoint mounting brackets we found next to the teststand; see attachment 2.

We plan to move the new rack to the current location of the teststand and use the power connection from there. It is also closer to 1X7, so moving the front-ends and switches to 1X7 should be straightforward after we complete all CDS upgrade testing.

Wed Aug 17 11:10:51 2022, rana, Update, Computers, c1teststand rack mounting for CDS upgrade

We want to be able to run SimPlant on the teststand, test our new controls algorithms, test watchdogs, and any other software upgrades. Ideally, in the steady state it will run some plants with suspensions and cavities, and we will develop our measurement scripts there as well (e.g. IFOtest).

 Quote: [Tega, Yuta] I keep getting confused about the purpose of the teststand. The view I am adopting going forward is its use as a platform for testing the compatibility of new hardware upgrade, instead of thinking of it as an independent system that works with old hardware.

Mon Aug 22 19:02:15 2022, Tega, Update, Computers, c1teststand rack mounting for CDS upgrade II

[Tega, JC]

Moved the rack to the location of the teststand, just behind 1X7; we plan to remove the other two small teststand racks to create some space there. We then mounted the c1bhd I/O chassis and 4 front-end machines on the teststand (see attachment 1).

Installed the dolphin IX cards on all 4 front-end machines: c1bhd, c1ioo, c1sus, c1lsc. I also removed the dolphin DX card that was previously installed on c1bhd.

Found a single OneStop host card with a mini PCI slot mounting plate in a storage box (see attachment 2). Since this only fits into the dual PCI riser card slot on c1bhd, I swapped out the full-length PCI slot OneStop host card on c1bhd and installed it on c1lsc, (see attachments 3 & 4).

Tue Aug 23 22:30:24 2022, Tega, Update, Computers, c1teststand OS upgrade - I

[JC, Tega, Chris]

After moving the teststand front-ends, chiara (name server) and fb1 (boot server) to the new rack behind 1X7, we powered everything up and checked that we can reach c1teststand via pianosa, and that the front-ends are still able to boot from fb1. After confirming these tests, we decided to start the software upgrade to Debian 10. We installed buster on fb1 and are now in the process of setting up diskless boot. I have been looking around for CDS instructions on how to do this, and found the CdsFrontEndDebian10 page, which contains most of the info we require. The page suggests that it may be cleaner to start the Debian 10 installation on a front-end that is connected to an I/O chassis with at least 1 ADC and 1 DAC card, and then move the installation disk to the boot server and continue from there. So I moved the disk from fb1 to one of the front-ends, but I had trouble getting it to boot. I decided to do a clean install on another disk on the c1lsc front-end, which has a host adapter card that can be connected to the c1bhd I/O chassis. We can then mount this disk on fb1 and use it to set up the diskless boot OS.

Fri Aug 26 14:05:09 2022, Tega, Update, Computers, rack reshuffle proposal for CDS upgrade 6x

[Tega, Jamie]

Here is a proposal for what we would like to do in terms of reshuffling a few pieces of rack-mounted equipment for the CDS upgrade.

• Frequency Distribution Amp - Move the unit from 1X7 to 1X6 without disconnecting the attached cables. Then disconnect power and signal cables one at a time to enable optimum rerouting for strain relief and declutter.

• GPS Receiver Tempus LX - Move the unit from 1X7 to 1X6 without disconnecting the attached cables. Then disconnect power and signal cables one at a time to enable optimum rerouting for strain relief and declutter.

• PEM & ADC Adapter - Move the unit from 1X7 to 1X6 without disconnecting the attached cables. Disconnect the single signal cable from the rear of the ADC adapter to allow for optimum rerouting for strain relief.

• Martian Network Switch - Make a note of all connections, disconnect them, move the switch to 1X7 and reconnect ethernet cables.

• MARTIAN NETWORK SWITCH CONNECTIONS

 #   LABEL                            #   LABEL
 1   Tempus LX (yellow, unlabeled)    13  FB1
 2   1Y6 HUB                          14  FB
 3   C0DCU1                           15  NODUS
 4   C1PEM1                           16
 5   RFM-BYPASS                       17  CHIARA
 6   MEGATRON/PROCYON                 18
 7   MEGATRON                         19  CISC/C1SOSVME
 8   BR40M                            20  C1TESTSTAND [blue/unlabelled]
 9   C1DSCL1EPICS0                    21  JetStar [blue/unlabelled]
 10  OP340M                           22  C1SUS [purple]
 11  C1DCUEPICS                       23  unknown [88/purple/goes to top-back rail]
 12  C1ASS                            24  unknown [stonewall/yellow/goes to top-front rail]

I believe all of this can be done in one go, followed by CDS validation. Please comment so we can improve the plan. Should we move FB1 to 1X7 and remove the old FB & JetStor during this work?

Attachment 1: Reshuffling proposal

Attachment 2: Front of 1X7 Rack

Attachment 3: Rear of 1X7 Rack

Attachment 4: Front of 1X6 Rack

Attachment 5: Rear of 1X6 Rack

Attachment 6: Martian switch connections

Sun Aug 28 23:14:22 2022, Jamie, Update, Computers, rack reshuffle proposal for CDS upgrade

@tega This looks great, thank you for putting this together.  The rack drawing in particular is great.  Two notes:

1. In "1X6 - proposed" I would move the "PEM AA + ADC Adapter" lower in the rack, maybe where "Old FB + JetStor" are, after removing those units since they're no longer needed. That would keep all the timing stuff together at the top, without any other random stuff in between. If we can't yet remove the old FB and the JetStor, then I would move the VME GPS/Timing chassis up a couple of units to make room for the PEM module between the VME chassis and FB1.
2. We'll eventually want to move FB1 and Megatron into 1X7, since it seems like there will be room there. That will put all the computers into one rack, which will be very nice. FB1 should be on the KVM switch as well.

I think most of this work can be done with very little downtime.

Mon Aug 29 15:15:46 2022, Tega, Update, Computers, 3 FEs from LLO got delivered today

[JC, Tega]

We got the 3 front-ends from LLO today. The contents of each box are:

1. FE machine
2. OSS adapter card for connecting to I/O chassis
3. PCI riser cards (x2)
4. Timing Card and cable
5. Power cables, mounting brackets and accompanying screws

Tue Aug 30 15:21:27 2022, Tega, Update, Computers, 3 FEs from LHO got delivered today

[Tega, JC]

We received the remaining 3 front-ends from LHO today. They each have a timing card and an OSS host adapter card installed. We also received 3 dolphin DX cards. As with the previous packages from LLO, each box contains a rack-mounting kit for the supermicro machine.

Mon Sep 19 20:21:06 2022, Tega, Update, Computers, 1X7 and 1X6 work

[Tega, Paco, JC]

We moved the GPS network time server and the frequency distribution amplifier from 1X7 to 1X6, and the PEM AA, ADC adapter and Martian network switch from 1X6 to 1X7. We also mounted the dolphin IX switch at the rear of 1X7, together with the DAQ and martian switches. This cleared up enough space to mount all the front-ends; however, we found that the mounting brackets for the front-ends do not fit in the 1X7 rack, so I have decided to mount them on the upper part of the teststand for now while we come up with a fix for this problem. Attachments 1 to 3 show the current state of racks 1X6, 1X7 and the teststand.

Attachment 1: Front of racks 1X6 and 1X7

Attachment 2: Rear of rack 1X7

Attachment 3: Front of teststand rack

## Plan for the remainder of the week

Tuesday

• Setup the 6 new front-ends to boot off the FB1 clone.
• Test PCIe I/O cables by connecting them btw the front-ends and teststand I/O chassis one at a time to ensure they work
• Then lay the fiber cables to the various I/O chassis.

Wednesday

• Migrate the current models on the 6 front-ends to the new system.
• Replace RFM IPC parts with dolphin IPC parts in the c1rfm model running on the c1sus machine
• Replace the RFM parts in c1iscex and c1iscey models
• Drop the c1daf and c1oaf models from the c1lsc machine, since the front-ends only have 6 cores
• Build and install models

Thursday [CAN I GET THE IFO ON THIS DAY PLEASE?]

• Complete any remaining model work
• Connect all I/O chassis to their respective (new) front-ends and see if we can start the models (need to think of a safe way to do this; should we disconnect the coil drivers before starting the models?)

Friday

• Tie up any loose ends

Tue Sep 20 23:06:23 2022, Tega, Update, Computers, Setup the 6 new front-ends to boot off the FB1 clone

We wired the front-ends for power, DAQ and martian network connections. Then we moved the I/O chassis from the bottom of the rack to the middle, just above the KVM switch, so we can leave the top of the I/O chassis open for access to the ports of the OSS target adapter card, for testing the extension fiber cables.

Attachment 1 (top -> bottom)

c1sus2

c1iscey

c1iscex

c1ioo

c1sus

c1lsc

When I turned on the teststand with the new front-ends, after a few minutes the power to 1X7 was cut off, due to overloading I assume. This brought down nodus, chiara and FB1. After Paco reset the tripped switch, everything came back without us actually doing anything, which is an interesting observation.

After this event, I moved the teststand power plug to the side-wall rail socket. This seems fine so far. I then brought chiara (clone) and FB1 (clone) online. Here are some changes I made to get things going:

## Chiara (clone)

• Edited '/etc/dhcp/dhcpd.conf' to update the MAC address of the front-ends to match the new machines, then run
• sudo service isc-dhcp-server restart
• then restart front-ends
• Edited '/etc/hosts' on chiara to include c1iscex and c1iscey as these were missing

## FB1 (clone)

Getting the new front-ends booting off FB1 clone:

1. I found that the KVM screen was flooded with setup info about the dolphin cards on the LLO machines. This actually prevented login via the KVM switch on two of these machines. Strangely, one of them, c1sus, seemed to be fine (see attachment 2), so I guessed this was because the dolphin network had already been configured earlier, when we were testing the dolphin communications. So I decided to configure the remaining dolphin cards. To do so, we do the following:

Dolphin Configuration:

1a. Ideally running

sudo /opt/DIS/sbin/dis_mkconf -fabrics 1 -sclw 8 -stt 1 -nodes c1lsc c1sus c1ioo c1iscex c1iscey c1sus2 -nosessions

should set up all the nodes, but this did not happen. In fact, I could no longer use the '/opt/DIS/sbin/dis_admin' GUI after running this operation and restarting the 'dis_networkmgr.service' via

sudo systemctl restart dis_networkmgr.service

so  I logged into each front-end and configured the dolphin adapter there using

sudo /opt/DIS/sbin/dis_config

After that, I shut down FB1 (clone), because restarting it earlier didn't work; I waited a few minutes and then started it. Everything was fine afterward, although I am not quite sure what solved the issue, as I tried a few things, and I was glad to see the problem go!

1b. I later found, after configuring all the dolphin nodes, that 2 of them failed the '/opt/DIS/sbin/dis_diag' test with an error message suggesting three possible issues, one of which was 'faulty cable'. I looked at the units in question and found that swapping both cables with the remaining spares solved the problem, so it seems these cables are faulty (need to double-check this). Attachment 3 shows the current state of the dolphin nodes on the front-ends and the dolphin switch.
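
To re-check all the nodes after a cable swap, dis_diag can be looped over the front-ends; a sketch, assuming passwordless ssh and sudo as controls on each machine:

for fe in c1lsc c1sus c1ioo c1iscex c1iscey c1sus2; do
  echo "=== $fe ==="
  ssh controls@$fe sudo /opt/DIS/sbin/dis_diag   # look for warnings/failures, e.g. 'faulty cable'
done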

2. I noticed that the NFS mount service for the mount points '/opt/rtcds' and '/opt/rtapps' in /etc/fstab exited with an error, so I ran

sudo mount -a

3. Edited '/etc/hosts' to include c1iscex and c1iscey, as these were missing
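
For the record, the mounts and host entries from items 2 and 3 look roughly like the sketch below; the NFS server name, export paths, mount options, and addresses are assumptions from memory, so check against the live system before copying:

# /etc/fstab (illustrative NFS entries)
chiara:/home/cds/rtcds    /opt/rtcds    nfs    rw,bg    0 0
chiara:/home/cds/rtapps   /opt/rtapps   nfs    rw,bg    0 0

# /etc/hosts additions (placeholder addresses)
192.168.113.80    c1iscex
192.168.113.81    c1iscey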

## Front-ends

To test the PCIe extension fiber cables that connect the front-ends to their respective I/O chassis, we run the following command (after booting the machine with the cable connected):

controls@c1lsc:~$ lspci -vn | grep 10b5:3
Subsystem: 10b5:3120
Subsystem: 10b5:3101

If we see the output above, then both the cable and the OSS card are fine (we know from previous tests that the OSS card on the I/O chassis is good). Since we only have one I/O chassis, we repeat the step above 8 times, cycling through the six new front-ends as we go so that we also test the installed OSS host adapter cards. I was able to test 4 cables and 4 OSS host cards (c1lsc, c1sus, c1ioo, c1sus2), but the remaining results were inconclusive: they seem to suggest that 3 of the remaining 5 fiber cables are faulty, which in itself would be unfortunate, but I found the reliability of the test to be in question when I went back to test the 2 remaining OSS host cards using a cable that had passed earlier and it did not pass. After a few retries, I decided to call it a day before I lost my mind. These tests need to be redone tomorrow.
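
To make the retest less error-prone, the result for each cable/host pairing could be logged with something like this sketch (run on the front-end after booting with the cable under test connected; the log file name is arbitrary):

lspci -vn | grep 10b5:3 \
  && echo "PASS $(hostname) $(date)" >> ~/oss_cable_test.log \
  || echo "FAIL $(hostname) $(date)" >> ~/oss_cable_test.log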

Note: We were unable to lay the cables today because these tests were not complete, so we are a bit behind the plan. We'll see if we can catch up tomorrow.

Quote:

## Plan for the remainder of the week

Tuesday

• Setup the 6 new front-ends to boot off the FB1 clone.
• Test PCIe I/O cables by connecting them btw the front-ends and teststand I/O chassis one at a time to ensure they work
• Then lay the fiber cables to the various I/O chassis.

Wed Sep 21 17:16:14 2022, Tega, Update, Computers, Setup the 6 new front-ends to boot off the FB1 clone

[Tega, JC]

We laid 4 out of 6 fiber cables today. The remaining 2 cables are for the I/O chassis on the vertex, so we will test those cables and lay them tomorrow. We were also able to identify the problems with the 2 supposedly faulty cables, which are not faulty. One of them had a small bend in the connector that I was able to straighten out with small pliers, and the other was a loose connection in the switch part. So there was no faulty cable, which is great! Chris wrote a MATLAB script that does the migration of all the model files. I am going through them, i.e. looking at the CDS parameter block to check that all is well. The next task is to build and install the updated models. We also need to update the '/opt/rtcds' and '/opt/rtapps' directories to the latest in the 40m chiara system.

Thu Sep 22 20:57:16 2022, Tega, Update, Computers, build, install and start 40m models on teststand

[Tega, Chris]

We built, installed, and started all the 40m models on the teststand today. The configuration we employ is to connect c1sus to the teststand I/O chassis and use dolphin to send the timing to the other front-ends. To get this far, we encountered a few problems, which we solved as follows (a consolidated shell sketch of the main fixes appears after the list):

0. Fixed front-end timing sync to FB1 via ntp

1. Set the rtcds environment variable CDS_SRC=/opt/rtcds/userapps/trunk/cds/common/src in the file '/etc/advligorts/env'

2. Resolved a chgrp error during model installation by setting the setgid bit on chiara, i.e. sudo chmod g+s -R /opt/rtcds/caltech/c1/target

3. Replaced sqrt with lsqrt in RMSeval.c to eliminate compilation error for c1ioo

4. Created a symlink for 'activateDQ.py' and 'generate_KisselButton.py' in '/opt/rtcds/caltech/c1/post_build'

5. Installed and configured dolphin for the new front-end 'c1shimmer'

6. Replaced 'RFM0' with 'PCIE' in the ipc file, '/opt/rtcds/caltech/c1/chans/ipc/C1.ipc'
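
A consolidated shell sketch of fixes 1, 2, and 4 above (the symlink source paths were not recorded here, so they are placeholders):

# 1. rtcds environment (line in /etc/advligorts/env)
CDS_SRC=/opt/rtcds/userapps/trunk/cds/common/src

# 2. setgid on the target tree (run on chiara)
sudo chmod g+s -R /opt/rtcds/caltech/c1/target

# 4. post-build script symlinks (source paths are placeholders)
cd /opt/rtcds/caltech/c1/post_build
ln -s /path/to/activateDQ.py .
ln -s /path/to/generate_KisselButton.py .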

We still have a few issues namely:

1. The user models are not running smoothly; the CPU usage jumps to its maximum value every second or so.

2. c1omc seems to be unable to get its timing from its IOP model (issue resolved by changing the CDS block parameter 'specific_cpu' from 6 to 4, because the new FEs only have 6 cores, 0-5)

3. We need to load the dolphin-proxy-km library and start the rts-dolphin_daemon service whenever we reboot a front-end

Fri Sep 23 19:07:03 2022, Tega, Update, Computers, Work to improve stability of 40m models running on teststand

[Chris, Tega]

## Timing glitch investigation:

• Moved the dolphin transmit node from c1sus to c1lsc because we suspect the glitch might be coming from the c1sus machine (earlier, c1pem on c1sus was running faster than realtime).
• Installed and started c1oaf to remove the shared memory IPC error to/from c1lsc model
• /opt/DIS/sbin/dis_diag gives two warnings on c1sus2
• [WARN] IXH Adapter 0 - PCIe slot link speed is only Gen1
• [WARN] Node 28 not reachable, but is an entry in the dishosts.conf file - c1shimmer is currently off, so this is fine.

## DAQ network setup:

• Added the fixed DAQ IPv4 address and port for all the front-ends to '/etc/advligorts/subscriptions.txt' for the cps_recv service
• Edited '/etc/advligorts/master' to include all the IOP and user model '.ini' files in '/opt/rtcds/caltech/c1/chans/daq/' (containing the channel info) and the corresponding testpoint files in '/opt/rtcds/caltech/c1/target/gds/param/'
• Created a systemd environment file for each front-end in '/diskless/root/etc/advligorts/' containing the arguments for the local data concentrator and DAQ data transmitter (local_dc_args and cps_xmit_args). While we were facing timing glitch issues, we staggered the delay (-D waitValue) times of the front-ends by setting each to the last number of its DAQ IP address, but we should probably set them back to zero to see if that has any effect. A sketch of such an environment file follows below.
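
A minimal sketch of one such environment file (values are illustrative; only the -D delay flag is the one discussed above, the rest is a placeholder):

# environment file for one front-end in /diskless/root/etc/advligorts/ (illustrative)
local_dc_args="..."      # local data concentrator options (placeholder)
cps_xmit_args="-D 62"    # transmit delay, staggered per the DAQ IP's last octet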

## Other:

• Edited /etc/resolv.conf on fb1 and 'diskless/root' to enable name resolution (e.g. 'host c1shimmer'), but the file gets overwritten from chiara for some reason

## Issues:

1. Frame writing is not working at the moment. It worked for a couple of days in the past, but it stopped working earlier today and we can't quite figure out why.
2. We can't get data via diaggui or ndscope either. Again, we recall this working in the past, but we are not sure why it has stopped.
3. The CPU load on c1su2 is too high, so we should split it into two models.
4. We still get the occasional IPC glitch, both for shared memory and dolphin; see attachments.
Thu Sep 29 15:12:02 2022, JC, Update, Computers, Setup the 6 new front-ends to boot off the FB1 clone

[Jamie, Christopher, JC]

This morning we decided to label the fiber optic cables. While doing this, we noticed that the ends had different labels, 'Host' and 'Target'. Come to find out, the fiber optic cables are directional. Four of the six cables were reversed. Luckily, the 1Y3 I/O chassis already has a spare cable laid (the cable we are currently using). Chris, Jamie, and I have begun reversing these cables to their correct orientation.

 Quote: [Tega, JC] We were also able to identify the problems with the 2 supposedly faulty cables, which are not faulty. One of them had a small bend in the connector that I was able to straighten out with small pliers, and the other was a loose connection in the switch part. So there was no faulty cable, which is great! [...]

Tue Oct 4 21:00:49 2022, Chris, Update, Computers, Failed takeover attempt with the new front ends

[Jamie, JC, Chris]

Today we made a failed attempt to take over the 40m hardware with the new front ends on the test stand.

As an initial test, we connected the new c1iscey to its I/O chassis using the OneStop fiber link. This went smoothly, so we tried to proceed with the rest of the system, which uncovered several problems. Consequently, we’ve reverted control back to the old front ends tonight, and will regroup and make another attempt tomorrow.

Status summary:

• c1iscey worked on the first try
• c1lsc worked, after we sorted out which of the two OneStop cables running to its rack we needed to use
• c1sus2 sort of worked (its models have been crashing sporadically)
• c1ioo had a busted OneStop cable, and worked after that was replaced
• c1sus refused to work with the fiber OneStop cables (we tried several, including the known-working one from c1ioo), but we jury-rigged it to run over a copper cable after nudging the teststand rack a bit closer to the chassis
• c1iscex refused to work with the fiber OneStop cables, and substituting copper was not an option, so we were stuck

There are various pathologies that we've seen with the OneStop interface cards in the I/O chassis. We don't seem to have the documentation for these cards, but our interpretive guesses are as follows:

• When working, it is supposed to illuminate all the green LEDs along the top of the card, and the four next to the connector. In this state, you can run lspci -vt on the host, and see the various PLX/Contec/etc devices that populate the chassis.
• When the cable is unplugged or bad, only four green LEDs illuminate on the card, and none by the connector. No devices from the chassis can be seen from the host.
• On c1iscex and c1sus, when a fiber link is plugged in, it turns on all the LEDs along the top of the card, but the four next to the connector remain dark. We’re not sure yet what this is trying to tell us, but lspci finds no devices from the chassis, same as if it is unplugged.
• Also, sometimes on c1iscex, no LEDs would illuminate at all (possibly the card was not seated properly).

Tomorrow, we plan to swap out the c1iscex I/O chassis for the one in the test stand, and see if that lets us get the full system up and running.

Thu Oct 6 07:29:30 2022, Chris, Update, Computers, Successful takeover attempt with the new front ends

[JC, Chris]

Last night’s CDS upgrade attempt succeeded in taking over the IFO. If the IFO users are willing, let’s try to run with it today.

The new system was left in control of the IFO hardware overnight, to check its stability. All looks OK so far.

The next step will be to connect the new FEs, fb1, and chiara to the martian network, so they’re directly accessible from the control room workstations (currently the system remains quarantined on the teststand network). We’ll also need to bring over the changes to models, scripts, etc that have been made since Tega’s last sync of the filesystem on chiara.

The previous elog noted a mysterious broken state of the OneStop link between FE and IO chassis, where all green LEDs light up on the OneStop board in the IO chassis, except the four next to the fiber link connector. This was seen on c1sus and c1iscex. It was recoverable last night on c1iscex, by fully powering down both FE computer and chassis, waiting a bit, and then powering up chassis and computer again. Currently c1sus is running with a copper OneStop cable because of the fiber link troubles we had, but this procedure should be tried to see if one of the fiber links can be made to work after all.

In order to string the short copper OneStop cable for c1sus, we had to move the teststand rack closer to the IO chassis, up against the back of 1X6/1X7. This is a temporary state while we prepare to move the FEs to their new rack. It hopefully also allows sufficient clearance to the exit door to pass the upcoming fire inspection.

At first, we connected the teststand rack’s power cables to the receptacle in 1X7, but this eventually tripped 1X7’s circuit breaker in the wall panel. Now, half of the teststand rack is on the receptacle in 1X6, and the other half is on 1X7 (these are separate circuits).

After the breaker trip, daqd couldn’t start. It turned out that no data was flowing to it, because the power cycle caused the DAQ network switch to forget a setting I had applied to enable jumbo frames on the network. The configuration has now been saved so that it should apply automatically on future restarts. For future reference, the web interface of this switch is available by running firefox on fb1 and navigating to 10.0.113.254.

When the FE machines are restarted, a GPS timing offset in /sys/kernel/gpstime/offset sometimes fails to initialize. It shows up as an incorrect GPS time in /proc/gps and on the GDS_TP MEDM screens, and prevents the data from getting timestamped properly for the DAQ. This needs to be looked at and fixed soon. In the meantime, it can be worked around by setting the offset manually: look at the value on one of the FEs that got it right, and apply it using sudo sh -c "echo CORRECT_OFFSET >/sys/kernel/gpstime/offset".
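
In shell terms, the workaround is just the following (using the two files named above; CORRECT_OFFSET stands for the value read from a healthy FE):

# on a front-end that booted with the correct offset:
cat /sys/kernel/gpstime/offset
# on the misbehaving front-end, substituting the value read above:
sudo sh -c "echo CORRECT_OFFSET > /sys/kernel/gpstime/offset"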

In the first ~30 minutes after the system came up last night, there were transient IPC errors, caused by drifting timestamps while the GPS cards in the FEs got themselves resynced to the satellites. Since then, timing has remained stable, and no further errors occurred overnight. However, the timing status is still reported as red in the IOP state vectors. This doesn’t seem to be an operational problem and perhaps can be ignored, but we should check it out later to make sure.

Also, the DAC cards in c1ioo and c1iscey reported FIFO EMPTY errors, triggering their DACKILL watchdogs. This situation may have existed in the old system and gone undetected. To bypass the watchdog, I’ve added the optimizeIO=1 flag to the IOP models on those systems, which makes them skip the empty FIFO check. This too should be further investigated when we get a chance.

Wed Sep 21 17:01:59 2022, Paco, Update, BHD, BH55 RFPD installed - part I

## Optical path setup

We realized the DCPD-B beam path was already using a 95:5 beamsplitter to steer the beam, so we are repurposing the 5% pickoff for a 55 MHz RFPD. For the RFPD we are using a gold RFPD labeled "POP55 (POY55)" which was on the large optical table near the vertex. We have decided to test this in situ because the PD test setup is currently offline.

Radhika used a Y1-1025-45S mirror to steer the B-beam path into the RFPD, but a lens should be added next in the path to focus the beam spot into the PD sensitive area. The current path is illustrated by Attachment #1.

We removed some unused OPLEV optics to make room for the RFPD box, and these were moved to the optics cabinet along Y-arm [Attachment #2].

[Anchal, Yehonathan]

## PD interfacing and connections

In parallel with setting up the optical path configuration on the ITMY table, we repurposed a DB15 cable from a PD interface board in the LSC rack to the RFPD in question. Then, an SMA cable was routed from the RFPD RF output to an "UNUSED" I&Q demod board on the LSC rack. Luckily for us, we also found a terminated REFL55 LO port, so we can draw our demod LO from there. There are a few free ADC inputs (14, 15, 20, 21) after the WF2 and WF3 whitening filter interfaces.

## Next steps

• Finish alignment of BH55 beam to RFPD
• Test RF output of RFPD once powered
• Modify LSC model, rebuild and restart
Thu Sep 22 19:51:58 2022, Anchal, Update, BHD, BH55 LSC Model Updates - part II

I updated the following in the rtcds models and medm screens:

• c1lsc
• Connected BH55_I and BH55_Q to phase rotation and creation of output channels.
• Replaced POP55 with BH55 in the RFPD input matrix.
• Sent BH55_I and BH55_Q over IPC to c1hpc
• Added the BH55 RFPD to the LSC screen, the RFPD input matrix, and the whitening box. Some work still remains.
• c1hpc
• Added receiving of BH55_I and BH55_Q.
• Added BH55_I and BH55_Q to sensing matrix through filter modules. Now these can be used to control LO phase.
• Added BH55 signals to the medm screen.
• c1scy
• Updated the SUS model to the new suspension model, which takes care of data acquisition rates and adds BIASPOS, BIASPIT, and BIASYAW filter modules at the alignment sliders.

### Current state:

• All models built and installed without any issue or error.
• On restarting all models, I first noticed a 0x2000 error on c1lsc, c1scy, and c1hpc, but these errors went away after restarting daqd on fb1.
• The BH55 FM buttons are not yet connected to the anti-aliasing analog filter. Need to do this and update the medm screen accordingly.
• The IPC from c1lsc to c1hpc is not working. On the sender side, it does not show any signal; this needs to be resolved.
Fri Sep 23 19:04:12 2022, Anchal, Update, BHD, BH55 LSC Model Updates - part III

### BH55

I further updated the LSC model today with the following changes:

• The BH55 whitening switch binary output signal is now routed to the correct place.
• Switching FM1, which carries the digital dewhitening filter, will now always switch on the corresponding analog whitening before the ADC input.
• The whitening can be triggered using the LSC trigger matrix as well.
• The ADC_0 input to the LSC subsystem is now a single input, and the channels are separated inside the subsystem.

The model built and installed with no issues.

Further, the slow EPICS channels for the BH55 anti-aliasing switch and whitening switch were added in /cvs/cds/caltech/target/c1iscaux/C1_ISC-AUX_LSCPDs.db
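
A minimal sketch of what such entries look like in the .db file (the record name, DESC, and output link here are illustrative placeholders, not the exact entries added):

record(bo, "C1:LSC-BH55_AA_SW")
{
    field(DESC, "BH55 anti-aliasing switch")
    field(OUT,  "@placeholder-acromag-output-link")
}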

### IPC issue resolved

The IPC issue that we were facing earlier is resolved now. The BH55_I and BH55_Q signals after phase rotation are successfully reaching the c1hpc model, where they can be used to lock the LO phase. To resolve this issue, I had to restart all the models. I also power cycled the LSC I/O chassis during this restart, as Tega suspected that such a power cycle is required when adding new dolphin channels; there is no way to tell whether it was actually required. The good news is that with the new CDS upgrade, restarting rtcds models will be much easier and more modular.

### ETMY Watchdog Updated

[Anchal, Tega]

Since ETMY does not use the HV coil driver anymore, the watchdog on ETMY needs to be similar to the other new optics. We made these updates today. Now the ETMY watchdog slowly ramps down the alignment offsets when it is tripped.

Thu Sep 29 18:01:14 2022, Anchal, Update, BHD, BH55 LSC Model Updates - part IV

### More model changes

c1lsc:

• The whitening switch controls were also shifted accordingly.
• The slow EPICS channels for the BH55 anti-aliasing switch and whitening switch were added in /cvs/cds/caltech/target/c1iscaux/C1_ISC-AUX_LSCPDs.db

c1mcs:

• MC1, MC2, and MC3 are running on new suspension models now.

c1hpc:

• DCPD_A and DCPD_B have been renamed to BHDC_A and BHDC_B following naming convention at other ports.
• After the input summing matrix, the signals are called BHDC_SUM and BHDC_DIFF now.
• BHDC_SUM and BHDC_DIFF can be used directly in the sensing matrix, bypassing the dither demodulation (to be used for DC locking).
• BH55_I and BH55_Q are also sent for dither demodulation now (to be used in the double dither method, RF and audio).
• SHMEM channel names to c1bac were changed.

c1bac:

• Conformed with new SHMEM channel names from c1hpc
Fri Sep 30 18:30:12 2022, Anchal, Update, ASS, Model Changes

I updated the c1ass model today to use PR2/PR3 instead of TT1/TT2 for YARM ASS. This required changes in c1su2 too: I have split c1su2 into c1su2 (LO1, LO2, AS1, AS4) and c1su3 (SR2, PR2, PR3). The two models now use 31 and 21 of the 60 µs CPU cycle budget, where the single model earlier used 55/60. All changes compiled correctly and have been installed. The models have been restarted and the medm screens have been updated.

### Model changes

#### c1su2:

• Everything related to SR2, PR2, and PR3 has been moved to c1su3.
• The extra binary output channels are also distributed between c1su2 and c1su3; BO_4 and BO_5 have been moved to c1su3.

#### c1su3:

• Added IPC receiving from ASS for PR2 and PR3

#### c1ass:

• Inputs to TT1 and TT2 PIT and YAW filter modules have been terminated to ground.
• The ASS outputs for YARM have been renamed to PR2 and PR3 from TT1 and TT2.
• IPC sending blocks added to send PR2 and PR3 ASC signals to c1su3.

### To do:

• Update the YARM ASS output matrix to handle the change in coil driver actuation on PR2 and PR3 compared to TT1 and TT2.
• Yuta suggested dithering PR2 and PR3 for input beam pointing for YARM alignment.
Fri Sep 23 14:10:19 2022, Radhika, Update, BHD, BH55 RFPD installed - part I

I placed a lens in the B-beam path to focus the beam spot onto the RFPD [Attachment 1]. To align the beam spot onto the RFPD, Anchal misaligned both ETMs and ITMY so that the AS and LO beams would not interfere, and the PD output would remain at some DC level (not fringing). The RFPD response was then maximized by scanning over pitch and yaw of the final mirror in the beam path (attached to the RFPD).

Later Anchal noticed that there was no RFPD output (C1:LSC-BH55_I_ERR, C1:LSC-BH55_Q_ERR). I took out the RFPD and opened it up, and the RF OUT SMA to PCB connection wire was broken [Attachment 2]. I re-soldered the wire and closed up the box [Attachment 3]. After placing the RFPD back, we noticed spikes in C1:LSC-BH55_I_ERR and C1:LSC-BH55_Q_ERR channels on ndscope. We suspect there is still a loose connection, so I will revisit the RFPD circuit on Monday.

Fri Sep 23 18:31:46 2022, rana, Update, BHD, BH55 RFPD installed - part I

A design flaw in these initial LIGO RFPDs is that the SMA connector is not strain-relieved by mounting to the case. Since it is only mounted to the tin can, attaching/removing cables bends the connector, causing stress on the joint.

To get around this, for this gold box RFPD, connect the SMA connector to the PCB using an S-shaped squiggly wire. Don't use multi-strand wire: it's usually good, since it's more flexible, but in this case it affects the TF too much. Really, it would be best to use a coax cable, but a few-turn corkscrew or pigtail of single-core wire should be fine to reduce the stress on the solder joint.

 Quote: Later Anchal noticed that there was no RFPD output (C1:LSC-BH55_I_ERR, C1:LSC-BH55_Q_ERR). I took out the RFPD and opened it up, and the RF OUT SMA to PCB connection wire was broken [Attachment 2]. [...]

Mon Sep 26 11:39:37 2022, Paco, Update, BHD, BH55 RFPD installed - part II

[Paco, Anchal]

We followed rana's suggestion for stress relief on the SMA joint in the BH55 RFPD that Radhika resoldered. We used a single core, pigtailed wire segment after cleaning up the solder joint on J7 (RF Out) and also soldered the SMA shield to the RF cage (see Attachment #1). This had a really good effect on the rigidity of the connection, so we moved back to the ITMY table.

We measured the TEST in to RF Out transfer function using the Agilent network analyzer, just to see the qualitative features (resonant gain around 55 MHz and second-harmonic suppression around 110 MHz) shown in Attachment #2. We used a 10 kOhm series resistance in the test input path to calibrate the measured transimpedance in V/A. The RFPD has been installed on the ITMY table and connected to the PD interface box and IQ demod boards in the LSC rack as before.
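
For the record, the conversion is the standard series-resistor one (assuming the 10 kOhm resistor dominates the impedance at the injection point, so the test voltage maps to a known input current):

Z(f) = [V_RFout(f) / V_test(f)] × 10 kΩ   [V/A]

i.e. the measured voltage transfer function scaled by the series resistance.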

Measurement files

Thu Oct 6 11:12:14 2022, Anchal, Update, BHD, BH55 RFPD installation complete

[Yuta, Paco, Anchal]

BH55 RFPD installation was still not complete until yesterday because of a peculiar issue: as soon as we increased the whitening gain on this photodiode, we saw spikes coming in at around 10 Hz. The following events took place while debugging this issue:

• We first thought the RFPD might be bad, as we had just picked it up from what we call the graveyard table.
• Paco fixed the bad connection issue at RF out and we confirmed the RFPD transimpedance by testing it. See 40m/17159.
• We tried changing the whitening filter board, but that did not help.
• We used the BH55 RFPD to lock MICH by routing the demodulation board outputs to the AS55 channels on the WF2 board. We were able to lock MICH and increase the whitening gain without any spikes appearing. This ruled out any issue with the RFPD.
• Yuta and I tried swapping the whitening filter board, but the problem persisted, which made us realize that the issue could be in the acromag that writes the whitening gain for the BH55 RFPD.
• We combed through the /cvs/cds/caltech/target/c1iscaux/C1_ISC-AUX_LSCPDs.db file to check whether the whitening gain DAC channels are written twice, but that was not the case. However, changing the scan rate of the whitening gain output channel did change the rate at which the spikes were coming.
• This proved that some other process was constantly writing zero to these outputs.
• It turned out that all unused acromag channels for c1iscaux are still defined and made to write 0 through the /cvs/cds/caltech/target/c1iscaux/C1_ISC-AUX_SPARE.db file. I don't think we need this spare file. If someone wants to use spare channels, they can quickly add them to the db file and restart the modbusIOC service on c1iscaux; it takes less than 2 minutes. I vote to completely get rid of this file, or at least not use it in the cmd file.
• After removing the violating channels, the problem with the BH55 RFPD is resolved.
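
A quick way to catch this kind of conflict in the future might be to look for output links claimed by more than one db file loaded by c1iscaux; a sketch, assuming the OUT fields are written identically in the conflicting files:

grep -h 'field(OUT' /cvs/cds/caltech/target/c1iscaux/*.db | sort | uniq -d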

The installation of BH55 RFPD is complete now.

Thu Oct 6 12:02:21 2022, Anchal, Update, CDS, CDS Upgrade Plan

[Chris, Anchal]

Chris and I discussed our plan for the CDS upgrade, which amounts to moving the new FEs, new chiara, and new FB1 OS system to the martian network.

### Preparation:

• Chiara (clone) (will be called "New Chiara" henceforth) will be resynced to existing chiara to get all model and medm changes.
• All models on New Chiara will be rebuilt, and reinstalled.
• All running services on the existing chiara will be printed and stored for comparison later.
• New Chiara's OS drive will be upgraded to Debian 10 and all services will be restored:
• DHCP
• DNS
• NFS
• rsync
• The existing fb1 DAQ network card (10 Gbps ethernet card) will be verified.
• Make a list of all fb1 file system mounts and their UUIDs.

Date: Fri Oct 7, 2022
Time: 11:00 am (After coffee and donuts)
Minimum required people: Chris, Anchal, JC (the more the merrier)

Steps:

1. Ensure a snapshot of all channels from Oct 6th is present on New Chiara.
2. Shut down all machines:
   1. All slow computers (except c1vac): ssh into each and run
      sudo systemctl stop modbusIOC.service
      sudo shutdown -h now
      Computer list: c1susaux, c1susaux2, c1auxex, c1auxey1, c1psl, c1iscaux
   2. All fast computers. Run on rossa:
      /cvs/cds/rtcds/caltech/c1/Git/40m/scripts/cds/stopAllModels.sh
      Disconnect the left ethernet cables from the back of these computers.
   3. Power off all I/O chassis.
   4. Swap the OneStop cables on all I/O chassis to fiber cables. On c1sus, connect the copper OneStop cable to the teststand c1sus FE.
   5. Turn on all I/O chassis.
3. Exchange chiaras:
   1. Connect old chiara to the teststand network.
   2. Connect New Chiara to the martian network.
   3. Turn on both old and new chiara.
   4. Ensure all services are running on New Chiara by comparing against the list made earlier during preparation.
4. fb1:
   1. Move fb1 (clone)'s OS drive into the existing fb1 (on 1X6).
   2. Turn on fb1 (on 1X6).
   3. Ensure fb1 is mounting all its file systems correctly.
5. New FEs:
   1. Connect the network switch for the new FEs to the martian network. Make sure that old chiara is not connected to this same switch.
   2. Turn on the new FEs. All models should start on boot in sequence.
   3. Check that all models have green lights.
6. Burt restore using the latest snapshot available.
7. Perform tests:
   1. Check that all local damping loops are working as before.
   2. Check that all IPC channels are transmitting and receiving correctly.
   3. Check that the IMC is able to lock.
   4. Try single arm locking.
   5. Try MICH locking.
8. Make a contingency plan for reverting to the old system if something fails.
Thu Oct 6 18:50:57 2022, Anchal, Summary, BHD, BH55 meas diff angle estimation and LO phase lock attempts

[Yuta, Paco, Anchal]

### BH55 meas diff

We estimated the meas diff angle for BH55 today by following this elog post. We used Moku:Lab Moku01 to send a 55 MHz tone to the PD input port of the BH55 demodulation board. Then we looked at the I_ERR and Q_ERR signals. We balanced the gain on the I channel to 1.16 to get the two signals to the same peak-to-peak height. Then we changed the meas diff angle to 91.97 to make the "bounding box" zero. Our understanding is that we just want the ellipse to lie along the x-axis.

We also aligned the beam input to BH55 a bit better. We used the single bounce beam from the aligned ITMY as the reference.

### LO phase lock with single RF demodulation

We attempted to lock the LO phase using just the BH55 demodulated output.

Configuration:

• ITMX, ETMs were significantly misaligned.
• At BH port, overlapping beams are single bounce back from ITMY and LO beam.

We expected that we would be able to lock to 90-degree LO phase, just like DC locking. But now we understand that we can't beat the light with its own phase-modulated sidebands.

The confusion happened because this scheme would work with a Michelson: at the dark port of a Michelson, amplitude modulation is generated at 55 MHz. We tried to do the same thing as was done for DC locking, first with a single bounce and then a Michelson, but we should have seen this beforehand. Lesson: always write down the expectation before attempting the lock.
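
In hindsight, a minimal sketch of the math (our reading, to first order in the modulation depth Γ): pure phase-modulated light is

E(t) = E_0 e^{iΓ sin(ωt)} ≈ E_0 [ J_0(Γ) + J_1(Γ) ( e^{iωt} − e^{−iωt} ) ]

so the photocurrent is

|E|² ≈ E_0² [ J_0² + 2J_1² − 2J_1² cos(2ωt) ]

The carrier–upper-sideband and carrier–lower-sideband beats cancel exactly, leaving no component at ω = 2π × 55 MHz. To the extent that the LO and the single-bounce ITMY beam are just copies of the same PM light, their sum still factorizes into a common PM field times a complex amplitude, so the 55 MHz demodulation sees nothing at any LO phase; something like a Michelson with the right asymmetry has to convert the PM to AM first.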
