40m Log, Page 13 of 330
ID Date Author Type Category Subject
  16005   Wed Apr 7 17:38:51 2021   Anchal   Update   SUS   Trying to uncouple only PIT and YAW first

To test whether our method works at all, we went for the simpler case of just uncoupling PIT and YAW. This is also because both of these degrees of freedom are sensed by the same device (the MC Trans WFS).

We saw a successful decrease in cross-coupling between PIT and YAW over the first 50 iterations that we tried. Here are some results:


Final output matrix:

Output matrix for uncoupling PIT and YAW from each other
PIT YAW COILS
1.01858 1.16820 UL
0.98107 -0.79706 UR
-0.98107 0.79706 LL
-1.01858 -1.16820 LR

Plots:

  • Attachment 1 shows the distance of the sensing matrix from identity as the iterations progress.
  • Attachment 2 shows the off-diagonal elements of the sensing matrix as the iterations increase.
    • It is worth noting that the PIT -> YAW coupling was the main element that was reduced successfully, while the YAW -> PIT element was also decreasing, but much more slowly.
    • Most of the remaining cross coupling at the end was from YAW -> PIT.
  • Attachment 3 shows the first 10 oscillations of the time series data during excitation for some of the iterations.
  • Attachment 4 shows the cross spectral densities of the sensed data channels with each other during excitation. These have been normalized by reference PSD data (taken with no excitation) of the sensed DOFs involved in the CSD calculation.
  • Attachment 5 shows the TF estimate made by normalizing the CSD data column-wise by the diagonal elements. The value at the excitation frequency in these plots becomes the sensing matrix used in the calculation (a rough sketch of this step is given after this list).
    • One can notice how the PIT -> YAW element is going down in these plots.
    • Even though we are using only the real value of the sensing matrix, the imaginary values are also going down.
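For concreteness, here is a rough Python/scipy sketch of the kind of CSD -> TF-estimate -> sensing-matrix calculation described above. It is only an illustration of the idea; the channel handling and the exact normalization in our actual script may differ.

import numpy as np
import scipy.signal as sig

def sensing_matrix(sensed_exc, sensed_ref, fs, f_exc, binwidth=0.25):
    # Sketch, not the lab script.
    # sensed_exc[j][i]: time series of sensed DOF i recorded while DOF j was excited.
    # sensed_ref[i]:    no-excitation reference time series for sensed DOF i.
    # f_exc:            excitation frequency in Hz.
    n = len(sensed_exc)
    nperseg = int(fs / binwidth)
    S = np.zeros((n, n), dtype=complex)
    for j in range(n):                       # column j: excitation of DOF j
        for i in range(n):                   # row i: sensed DOF i
            f, Pij = sig.csd(sensed_exc[j][i], sensed_exc[j][j], fs=fs, nperseg=nperseg)
            _, Pref = sig.welch(sensed_ref[i], fs=fs, nperseg=nperseg)
            k = np.argmin(np.abs(f - f_exc))
            S[i, j] = Pij[k] / np.sqrt(Pref[k])   # normalize by the reference PSD
    S = S / np.diag(S)                       # column-wise division by the diagonal
    return np.real(S)                        # only the real part is used downstream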

Next, tried uncoupling POS and PIT:

  • Next, we tried to uncouple POS and PIT. We expect them to be more coupled than with YAW.
  • At the time of writing this post, 15 iterations of this attempt have been completed and it is not looking good.
  • The distance of the sensing matrix from identity is growing at an accelerated rate.
  • The POS output matrix column seems to be trying to go towards the negative of PIT output matrix column! Why? We don't know.
  • We have seen in the past that once POS transforms into PIT or YAW, it just makes the output matrix worse as no feedback actually goes into the POS column. Eventually, the IMC will cease to remain locked.
  • So, I'm cancelling this attempt for now. Will consider more alternatives later.
  16004   Wed Apr 7 13:07:03 2021   Jordan   Update   SUS   CoM on 3"->2" Adapter Ring for SOS

Adding the chamfer around the edge of the optic ring did not change the center of mass relative to the plane defined by the suspension wires.

The CoM was .0003" away from the plane. Adding the chamfer moved it closer by .0001". See the attached photo.

I've also attached the list of the Moments of Inertia of the SOS Assembly.

  16003   Wed Apr 7 02:50:49 2021   Koji   Update   SUS   Flange Inspections

Basically, I went around all the chambers and all the DB25 flanges to check the in-vacuum cable configurations. I also took more time to check the coil Rs and Ls.

Exceptions are the TTs. To avoid unexpected misalignment of the TTs, I didn't try to disconnect the TT cables from the flanges.

Before disconnecting the SOS cables, the following steps were taken to avoid a large impact to the SOSs:

  • The alignment biases were saved or recorded.
  • Gradually moved the biases to 0
  • Turned off the watchdogs (thus damping)

After the measurement, the IMC was locked and aligned. The two arms were locked and aligned with ASS. And the PRM alignment (when "misalign" was disengaged) was checked with the REFL CCD.
So I believe the SOSs are functioning as before, but if you find anything, please let me know.
 

  16002   Tue Apr 6 21:17:04 2021   Koji   Summary   General   PSL HEPA investigation

- Last week we found both of the PSL HEPA units were not running.

- I replaced the capacitor of the north unit, but it did not solve the issue. (Note: I reverted the cap back later)
- It was found that the fans ran if the variac was removed from the chain.
- But I'm not certain that we can run the fans unattended in this configuration, considering the fire hazard.

@3AM: UPON LEAVING the lab, I turned off the HEPA. The AC cable was not warm, so it's probably OK, but we should hold off on continuous operation until we replace the scorched AC cable.


The capacitor replacement was not successful, so the voltages on the fan were checked more carefully. The fan has three switch states (HIGH/OFF/LOW). With no load (SW: OFF), the variac output was as expected. When the load was LOW or HIGH, it looked as if the motor were shorted (i.e. no voltage difference between the two wires).

I thought the motors may have been shorted. But when the load resistance was measured with the Fluke meter, it showed some resistance:

- North Unit: SW LOW 4.6Ohm / HIGH 6.0Ohm
- South Unit: SW LOW 6.0Ohm / HIGH 4.6Ohm (I believe the internal connection is incorrect here)

This convinced me that the motors are alive! Then the fans were switched on with the variac removed... they ran. So I set the switch to LOW for the north unit and HIGH for the south unit.

Then I inspected the variac:

  • The AC output has some liquid leaking (oil?) (Attachment 1)
  • The AC plug on the variac out has a scorch mark (Attachments 2/3)

So this scorched AC plug/cable is connected directly to the AC right now. I'm not 100% confident about the safety of this configuration.
Also I am not sure what was wrong with the system.

  • Did the variac fail first, because of heat? I believe the HEPA was running at 30% most of the time. Maybe the damage was already there from the failure in Nov 2020?
  • Or did a motor stop at some point, causing the variac to fail?
  • Or was the cable bad, and the heat made the variac fail (in which case the problem is still there)?

So, while I'm in the lab today, I'll keep the HEPA running, but when I leave, I'll turn it off. We'll discuss what to do in the meeting tomorrow.

 

  16001   Tue Apr 6 18:46:36 2021   Anchal, Paco   Update   SUS   Updates on recent efforts

As mentioned in the last post, we had earlier made an error in ensuring that all time series arrays enter the CSD calculation with the same sampling rate. Since fixing that, our recursive method has simply blown up in every attempt.


We suspect a major issue is that our measured sensing matrix (the cross-coupling matrix between the different degrees of freedom on excitation) has significant imaginary parts. We discard the imaginary values and use only the real parts in the iterative method, but we think this is not the right solution.

Here we present the cross-spectral densities of the channels representing the three sensed DOFs (normalized by the ASD of no-excitation data for each involved channel) and the sensing matrix (TF estimate) calculated by normalizing the cross-spectral-density plots column-wise by the diagonal values. These were measured with the existing ideal output matrix but with the new input matrix, to get an idea of how these elements look when we use them.

Note that we used only 10 seconds of data in this run, with a binwidth of 0.25 Hz. When we used a binwidth of 0.1 Hz, we found that the peaks were broad and highest at 13.1 Hz instead of 13 Hz, which is the excitation frequency used in these measurements.


How should we proceed?

  • We feel that we should figure out a way to use the imaginary value of the sensing matrix, either directly or as weights representing noise in that particular data point.
  • Should we increase the excitation amplitude? We are currently using 500 counts of excitation on coil output.
  • Are there any other iterative methods for finding the inverse of the matrix that we should be aware of? Our current method is rudimentary and converges linearly (one quadratically convergent alternative is sketched after this list).
  • Should we use the absolute value of the sensing matrix instead? In our experience, that is equivalent to simply taking ratios of the PSD of each channel and does not work as well as the TF estimate method.
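One quadratically convergent alternative worth considering is the standard Newton-Schulz iteration. The sketch below is not from our existing code and is offered only as a possibility; the example sensing matrix is made up.

import numpy as np

def newton_schulz_inverse(S, iters=20):
    # Newton-Schulz iteration for S^-1; converges quadratically once started
    # from the safe initial guess below, unlike our current linear update.
    X = S.T / (np.linalg.norm(S, 1) * np.linalg.norm(S, np.inf))
    I = np.eye(S.shape[0])
    for _ in range(iters):
        X = X @ (2 * I - S @ X)
    return X

# Example with a made-up sensing matrix close to identity:
S = np.array([[1.00,  0.02, -0.01],
              [0.00,  1.00, -0.07],
              [0.00, -0.02,  1.00]])
print(np.round(newton_schulz_inverse(S) @ S, 6))   # ~identity
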
  15999   Tue Apr 6 15:42:57 2021   Yehonathan   Update   BHD   SOS assembly

We got some dumbbells from Re-Source Manufacturing (see attached). I picked 3 at random and measured their dimensions:

1. 0.0760" in diameter, 0.0860" in length

2. 0.0760" in diameter, 0.0860" in length

3. 0.0760" in diameter, 0.0865" in length

These are in accordance with the schematics.

  15998   Tue Apr 6 11:13:01 2021   Jon   Update   CDS   FE testing

I/O chassis assembly

Yesterday I installed all the available ADC/DAC/BIO modules and adapter boards into the new I/O chassis (c1bhd, c1sus2). We are still missing three ADC adapter boards and six 18-bit DACs. A thorough search of the FE cabinet turned up several 16-bit DACs, but only one adapter board. Since one 16-bit DAC is required anyway for c1sus2, I installed the one complete set in that chassis.

Below is the current state of each chassis. Missing components are highlighted in yellow. We cannot proceed to loopback testing until at least some of the missing hardware is in hand.

C1BHD

Component Qty Required Qty Installed
16-bit ADC 1 1
16-bit ADC adapter 1 0
18-bit DAC 1 0
18-bit DAC adapter 1 1
16-ch DIO 1 1

C1SUS2

Component Qty required Qty Installed
16-bit ADC 2 2
16-bit ADC adapter 2 0
16-bit DAC 1 1
16-bit DAC adapter 1 1
18-bit DAC 5 0
18-bit DAC adapter 5 5
32-ch DO 6 6
16-ch DIO 1 1

Gateway for remote access

To enable remote access to the machines on the test stand subnet, one machine must function as a gateway server. Initially, I tried to set this up using the second network interface of the chiara clone. However, having two active interfaces caused problems for the DHCP and FTS servers and broke the diskless FE booting. Debugging this would have required making changes to the network configuration that would have to be remembered and reverted, were the chiara disk ever to be used in the original machine.

So instead, I simply grabbed another of the (unused) 1U Supermicro servers from the 1Y1 rack and set it up on the subnet as a standalone gateway server. The machine is named c1teststand. Its first network interface is connected to the general computing network (ligo.caltech.edu) and the second to the test-stand subnet. It has no connection to the Martian subnet. I installed Debian 10.9 anticipating that, when the machine is no longer needed in the test stand, it can be converted into another docker-cymac to run additional sim models.

Currently, the outside-facing IP address is assigned via DHCP and so periodically changes. I've asked Larry to assign it a static IP on the ligo.caltech.edu domain, so that it can be accessed analogously to nodus.

  15997   Tue Apr 6 07:19:11 2021   Jon   Update   CDS   New SimPlant cymac

Yesterday Chris and I completed setup of the Supermicro machine that will serve as a dedicated host for developing and testing RTCDS sim models. It is currently sitting in the stack of machines in the FE test stand, though it should eventually be moved into a permanent rack.

It turns out the machine cannot run 10 user models, only 4. Hyperthreading was enabled in the BIOS settings, which created the illusion of there being 12 rather than 6 physical cores. Between Chris and Ian's sim models, we already have a fully-loaded machine. There are several more of these spare 6-core machines that could be set up to run additional models. But in the long term, and especially in Ian's case where the IFO sim models will all need to communicate with one another (this is a self-contained cymac, not a distributed FE system), we may need to buy a larger machine with 16 or 32 cores.

IPMI was set up for the c1sim cymac. I assigned the IPMI interface a static IP address on the Martian network (192.168.113.45) and registered it in the usual way with the domain name server on chiara. After updating the BIOS settings and rebooting, I was able to remotely power off and back on the machine following these instructions.

Set up of dedicated SimPlant host

Although not directly related to the FE testing, today I added a new machine to the test stand which will be dedicated to running sim models. Chris has developed a virtual cymac which we plan to run on this machine. It will provide a dedicated testbed for SimPlant and other development, and can host up to 10 user models.

I used one of the spare 12-core Supermicro servers from LLO, which I have named c1sim. I assigned it the IP address 192.168.113.93 on the Martian network. This machine will run in a self-contained way that will not depend on any 40m CDS services and also should not interfere with them.

  15996   Mon Apr 5 22:26:01 2021   gautam   Update   LSC   PRMI 1f locking (no ETMs) recovered

Since it seems like the entire electronics chain has no obvious failure, I decided to compensate for the apparent increased optical gain by turning the flat whitening gain down from +18dB to 0dB. Then, after some fiddling around with alignment, settings etc, I was able to lock the PRMI once again, with the ETMs misaligned, using REFL55_I to sense PRCL, and REFL55_Q to sense MICH. Some sensing matrices attached. Some notes:

  1. I made some changes to the presentation so that the units of the sensing matrix are now in [W/m] sensed on a photodiode. 
    • The info printed on the plot is also more verbose, I now indicate explicitly the actuator driven to make the measurement, and also the drive frequency. The various gains used to convert counts to Watts on a photodiode are also indicated.
    • I thought about printing the actual matrix too but haven't arrived at a clean prez style yet.
    • This is to facilitate easier comparison to Finesse models / analytic calcs.
    • I take into account all the gains from the photodetector to the servo error point where the measurement is made. However, the splitting fractions between the various photodiodes are not included, so you will have to account for that yourself when comparing to a Finesse model.
    • For example, in pg 2 of Attachment #1, the measured gain of PRCL sensed in REFL55_I is ~2e6 W/m. But only ~4% of the IFO REFL light ends up on the REFL55 photodiode. So this measurement would be consistent with a Finesse-simulated optical gain of ~50 MW/m, which is in the ballpark of what I actually see (a quick check of this scaling is given after this list).
  2. There seems to be reasonable agreement between Finesse and these measurements. But why should the old settings / locking have worked at all then?
  3. I tried two schemes for MICH actuation today.
    • The first was the usual BS + PRM combo, and I got the sensing matrix on pg 1 of Attachment #1. With this scheme, the MICH/PRCL orthogonality is a joke.
    • Then I changed the MICH actuator to the ITMs, and got the sensing matrix on pg 2 of Attachment #1. With this scheme, the orthogonality looks much better. I think the slight non-orthogonality in the 11/33 MHz photodiodes may even be reasonable, since the 11 MHz field isn't a good sensor of the anti-symmetric modes, but have to confirm by calculation/simulation. But certainly the separation of signals is much cleaner when the ITMs are used to control MICH.
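A quick sanity check of the scaling in the example above (numbers taken from the text; this is just arithmetic, not a new measurement):

# Implied full-IFO optical gain from the REFL55_I measurement quoted above.
measured_gain = 2e6        # PRCL sensed in REFL55_I [W/m on the photodiode]
pickoff_fraction = 0.04    # ~4% of the IFO REFL light reaches REFL55
print(measured_gain / pickoff_fraction)   # 5e7 W/m, i.e. ~50 MW/m at the REFL port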

So there is clearly something funky with the nominal MICH actuation scheme (MICH suspension, PRM suspension or both), which we should get to the bottom of before trying any low noise locking. I think using the ITMs as the MICH actuator in the full lock will not be a good low noise strategy, as we would then be "polluting" all our suspended optics with our control loops, which seems highly suboptimal for technical noise sources like coil driver noise etc.

  15995   Mon Apr 5 08:25:59 2021   Anchal, Paco   Update   General   Restore MC from early quakes

[Paco, Anchal]

Came in a little bit after 8 and found the MC unlocked and struggling to lock for the past 3 hours. Looking at the SUS overview, both MC1 and ITMX Watchdogs had tripped so we damped the suspensions and brought them back to a good state. The autolocker was still not able to catch lock, so we cleared the WFS filter history to remove large angular offsets in MC1 and after this the MC caught its lock again.

Looks like two EQs came in at around 4:45 AM (Pacific) suggested by a couple of spikes in the seismic rainbow, and this.

  15994   Sat Apr 3 00:42:40 2021   gautam   Update   LSC   PRFPMI locking with half input power

Summary:

I wanted to put my optomechanical instability hypothesis to the test. So I decided to cut the input power to the IMC by ~half and try locking the PRFPMI. However, this did not improve the stability of the buildup in the arm cavities while the control was solely on the ALS error signal.

Details:

  1. The waveplate I installed for this purpose was rotated until the MC RFPD DCMON channel reported ~half its nominal value.
  2. I adjusted the IMC servo gains appropriately to compensate. IMC lock was readily realized.
  3. I increased the whitening gains on the POX, POY and REFL165 photodiodes by 6dB, to compensate for the reduced light levels.
    • One day soon, we will have remote power control, and it'd be nice to have this process be automated.
    • Really, we should have de-whitening filters that undo these flat gains in addition to undoing the frequency dependent whitening.
    • I'm not sure the quality of the electronics is good enough though, for the changing electronics offsets to not be a problem.
    • One possibility is that we can normalize some signals by the DC light level at that port, but I still think compensating the changing optical gain as far upstream as possible is best, and the whitening gain is the convenient stage to do this.
  4. Recovered single arm POX/POY locking. 
  5. Then I decided to try and lock the PRFPMI with the reduced input power.

Basically, with some tweaks to loop gains, it worked, see Attachment #1. Note that the lower right axis shows the IMC transmission and is ~7500 cts, vs the nominal ~15,000 cts.

Discussion:

Cutting the input power did not have the effect I hoped it would. Basically, I was hoping to zero the optical CARM offset while the IFO was entirely under ALS control, and have the arm transmission be stable (or at least, stay in the linear regime of REFL11). However, the observation was that the IFO did the usual "buzzing" in and out of the linear regime. Right now, this is not at all a problem - once the IR error signal is blended in, and DC control authority is transferred to that signal, the lock acquisition can proceed just fine. And I guess it is cool that we can lock the IFO at ~half the input power, something to keep in mind when we have the remote controlled waveplate, maybe we always want to lock at the lowest power possible such that optomechanical transients are not a problem. 

I also don't think this test directly disputes my claim that the residual CARM noise when the arm cavities are under purely ALS control is smaller than the CARM linewidth.

What does this mean for my hypothesis? I still think it is valid, maybe the power has to be cut even further for the optomechanics to not be a problem. In Finesse (see Attachment #2), with 0.3 W input power to the back of the PRM, and with best guesses for the 40m optical losses in the PRC and arms, I still see that considerable phase can be eaten up due to the optomechanical resonance around ~100 Hz, which is where the digital CARM loop UGF is. So I guess it isn't entirely unreasonable that the instability didn't go away?


After this work, I undid all the changes I made for the low power lock test. I confirmed that IMC locking, POX/POY locking, and the dither alignment systems all function as expected after I reverted the system.

  15993   Fri Apr 2 15:22:54 2021   gautam   Update   SUS   Matrix results, new measurement set to trigger

How should I try to understand why PIT and YAW are so different? 

Quote:
New output matrix for MC2 (C1:SUS-MC2_TO_COIL_ii_jj_GAIN)
  POS PIT YAW
UL 1 1.022 0.6554
UR 1 0.9776 -1.2532
LL 1 -0.9775 1.2532
LR 1 -1.0219 -0.6554
  15992   Fri Apr 2 15:17:23 2021   gautam   Summary   General   HEPA AC cord replacement

From the last failure, I had ordered 2 extra capacitors (they are placed on top of the PSL enclosure, above where the capacitors would normally be installed). If the new capacitors lasted < 6 months, that may be symptomatic of some deeper problem though, e.g. the HEPA fans themselves needing replacement. We don't really have a good diagnostic of when the failure happened, I guess, as we don't have any channel recording the state of the fans.

Quote:

I think the PSL HEPA (both units) are not running. The switches were on, and the variac was swept from 60% through 0%-100% a few times, but with no success.
I have no more troubleshooting energy today. The main HEPA switch was turned off.

  15991   Fri Apr 2 14:51:20 2021   Anchal   Update   SUS   Bug found, need to redo the balancing

The last run gave results similar to the quick run we did earlier: the code has been unable to remove couplings with POS. We found the bug causing this. The sampling rate of the MC_F channel is different from that of the test-point channels used for PIT and YAW. Even though we were aware of this, we made an error in handling it while calculating the CSD. Because of this, the CSD calculation with POS data was performed with zero padding, which made the code think that no PIT/YAW <-> POS coupling exists. Hence our code was only able to fix the PIT <-> YAW couplings.
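As an illustration of the fix, one can decimate the faster channels to a common rate before forming the CSD, instead of zero padding. The rates, data, and 13 Hz line below are made up for the example and are not taken from the actual script.

import numpy as np
import scipy.signal as sig

fs_fast, fs_slow = 16384, 2048            # assumed rates: ASC test points vs. MC_F
t_fast = np.arange(0, 120, 1 / fs_fast)
t_slow = np.arange(0, 120, 1 / fs_slow)
pit = np.sin(2 * np.pi * 13 * t_fast) + 0.1 * np.random.randn(t_fast.size)
mcf = 0.3 * np.sin(2 * np.pi * 13 * t_slow) + 0.1 * np.random.randn(t_slow.size)

# Anti-aliased downsampling of the fast channel to the common (slow) rate.
pit_ds = sig.decimate(pit, fs_fast // fs_slow, ftype='fir')

# CSD with 0.1 Hz binwidth, so the 13 Hz excitation line falls on a bin.
f, Pxy = sig.csd(mcf, pit_ds, fs=fs_slow, nperseg=int(fs_slow / 0.1))
print(f[np.argmax(np.abs(Pxy))])          # ~13 Hz, with no zero padding involved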

We'll need to do another run with this bug fixed. I'll update this post with details of the new measurement.

 

  15990   Fri Apr 2 01:26:41 2021   gautam   Update   Electronics   REFL55 chain checkout again, seems fine

[koji, gautam]

Summary:

We could not find problems with any individual piece of the REFL55 electronics chain, from photodiode to ADC. Nevertheless, the PRMI fringes witnessed by REFL55 are ~10x higher than ~two weeks ago, when the PRMI could be repeatably and reliably locked using REFL55 signals (ETMs misaligned).

Details:

  1. Koji prepared a spare whitening board. However, before he swapped it in, he checked the existing board and found no problems with it.
    • 20mV input signal, 100 Hz was injected into the two REFL55 channels on the whitening board.
    • The flat whitening gain was set to +45 dB.
    • The signal levels seen in CDS was consistent with what is expected given an ADC conversion factor of 3276.8 cts/V.
  2. Tried putting the REFL55 demodulated outputs into the next two channels, 5&6, (currently unused) on the same whitening board.
    • After setting the whitening gains of these two channels also to +18dB, the saturation of the ADCs when the PRMI was fringing persisted.
  3. With only the dark noise of the whitening filter at its input, we enabled/disabled the on-board frequency-dependent whitening and judged that the time-domain increase in RMS seemed reasonable. So we decided to investigate parts of the electronics chain upstream of the whitening board, since we couldn't find anything obviously wrong with it.
  4. Injected -10dBm RF signal (=0.2 Vpp) into the RF input on the REFL55 demod board, and saw ~3500 cts-pp signal in CDS. This is totally consistent with my recent characterization of 16,000 cts/V for this demod board at the "nominal" + 18dB whitening gain setting. So the demodulator seems to function as advertised.
  5. Decided to repeat my test of using the Jenne laser to test the whole chain end-to-end.
    • In summary, we recovered the results (RF transimpedance of the PD, and signal levels in CDS for a known AM determined by the reference NF1611 PD) I reported there.
    • So it would seem that the entire REFL55 electronics chain performs as expected.
    • The only remaining explanation is that the optical gain of the PRMI has increased - but how?? 
    • Similar jumps in the REFL55 signal levels have occurred multiple times in the past, and each time, I was able to recover the "nominal" performance by this procedure (though I have no idea why that should work at all).
    • So I am highly skeptical that this has anything to do with the IFO optical gain, but that is the only difference between our AM laser based test and the "live" operating conditions when the signals are saturated.

Discussion and next steps:

Q: Koji asked me what is the problem with this apparent increased optical gain - can't we just compensate by decreasing the whitening gain?
A: I am unable to transition control of the PRMI (no ETMs) from 3f to 1f, even after reducing the whitening gain on the REFL55 channels to prevent the saturation. So I think we need to get to the bottom of whatever the problem is here.

Q: Why do we need to transfer the control of the vertex to the 1f signals at all?
A: I haven't got a plot in the elog, but from when I had the PRFPMI locked last year, the DARM noise between 100-1kHz had high coherence with the MICH control signal. I tried some feedforward to try and cancel it but never got anywhere. It isn't a quantitative statement but the 1f signals are expected to be cleaner?

Koji pointed out that the MICH signal is visible in the REFL55 channels even when the PRM is misaligned, so I'm gonna look back at the trend data to see if I can identify when this apparent increase in the signal levels occurred and if I can identify some event in the lab that caused it. We also discussed using the ratio of MICH signals in REFL and AS to better estimate the losses in the REFL path - the Faraday losses in particular are a total unknown, but in the AS path, there is less uncertainty since we know the SRM transmission quite precisely, and I guess the 6 output steering mirrors can be assumed to be R=99%. 

  15989   Thu Apr 1 23:55:33 2021   Koji   Summary   General   HEPA AC cord replacement

I think the PSL HEPA (both units) are not running. The switches were on, and the variac was swept from 60% through 0%-100% a few times, but with no success.
I have no more troubleshooting energy today. The main HEPA switch was turned off.

  15988   Thu Apr 1 21:13:54 2021   Anchal   Update   SUS   Matrix results, new measurement set to trigger
New input matrix used for MC2 (C1:SUS-MC2_INMATRIX_ii_jj)
  UL UR LR LL SIDE
POS 0.2464 0.2591 0.2676 0.2548 -0.1312
PIT 1.7342 0.7594 -2.494 -1.5192 -0.0905
YAW 1.2672 -2.0309 -0.9625 2.3356 -0.2926
SIDE 0.1243 -0.1512 -0.1691 0.1064 0.9962

New output matrix for MC2 (C1:SUS-MC2_TO_COIL_ii_jj_GAIN)
  POS PIT YAW
UL 1 1.022 0.6554
UR 1 0.9776 -1.2532
LL 1 -0.9775 1.2532
LR 1 -1.0219 -0.6554

Measured Sensing Matrix (Cross Coupling) (Sensed DOF x Excited DOF)
  Excited POS Excited PIT Excited YAW
Sensed POS 1 1.9750e-5 -3.5615e-6
Sensed PIT 0 1 -6.93550e-2
Sensed YAW 0 -2.4429e-4 1

A longer measurement is set to trigger at 5:00 tomorrow on April 2nd, 2021. This measurement will run for 35 iterations with an excitation duration of 120s and bandwidth for CSD measurement set to 0.1 Hz. The script is set to trigger in a tmux session named 'cB' on pianosa.

  15987   Thu Apr 1 18:48:45 2021   gautam   Update   SUS   MC2 Coil Balancing Test Results Success??

In these results, can you also include the new matrix and what the relative imbalances were?

  15986   Thu Apr 1 18:16:28 2021   Koji   Update   Electronics   Electronics Packaging for assembly work

All small components are packed in the boxes. They are ready to ship.

 

  15985   Thu Apr 1 18:01:06 2021   Anchal, Paco   Update   SUS   MC2 Coil Balancing Test Results Success??

After fixing a few things we felt were wrong in our implementation of the algorithm, we ran the coil balancing for 12 iterations with just 11s per excitation and still taking CSD with 0.1 Hz bandwidth. This time we saw the distance of sensing matrix from identity going down.


Performance Analysis

  • Attachment 1 shows the trend of the distance of the sensing matrix from the identity matrix over iterations (a sketch of one plausible definition of this metric is given after this list).
  • Attachment 2 shows the trend of the off-diagonal terms of the sensing matrix over iterations.
  • Attachment 3 shows the ASDs of the different sensed DOFs when excited in the different DOFs with the new output matrix. This is a better ground truth of what happened by the end; the true sensing matrix is proportional to the peak heights in this plot. Rows are the different sensed DOFs (POS, PIT, YAW) and columns are the excited DOFs (POS, PIT, YAW). The black dotted curves are the ASDs when no excitation was present.
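For reference, a minimal sketch of the kind of metric meant by "distance from identity" (assuming a Frobenius norm; the actual script may define it differently):

import numpy as np

def distance_from_identity(S):
    # Frobenius-norm distance of the measured sensing matrix from the identity.
    S = np.asarray(S, dtype=float)
    return np.linalg.norm(S - np.eye(S.shape[0]))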

Next step

  • We want to run it longer, with more iterations and longer excitation durations, to get better averaging. Hopefully this will do a better job. We'll try running this new code tomorrow at 5:00 am.
  • We'll work on using uncertainties of measured data.
  • Use awg to excite all DOF together at different frequencies and make the code faster.
  15984   Thu Apr 1 13:56:49 2021   Anchal, Paco   Update   SUS   MC2 Coil Balancing Test Results

The coil balancing attempt failed. The off-diagonal values in the measured sensing matrices either remained the same or increased.


The attempt in the morning was too slow. By the time we arrived, it had only reached iteration 7 and was still nowhere near the optimal sensing matrix. We still needed to see whether the optimum would eventually be reached with more iterations.

<Radhika came to shadow us and learn about the 40m>

So we worked a bit on speeding up the data loading process and then ran the code again, which now ran much faster. Still, within 1 hr or so we saw it had reached iteration 7 with no sign of the sensing matrix getting any better.

<Paco left for vaccination>

To determine if the method would work in principle, I decided to stop the current run and start a 0.5 Hz bandwidth run (so about 7 averages with 8 s duration data and the Welch method). This completed 20 iterations before Gautam came. But it was clear by then that the method was not converging to a better solution. We need to find a bug in the implementation of the algorithm mentioned in the last post, or find a better algorithm.


Attachment 1 is the plot of how the sensing matrix's distance from the identity matrix increased over iterations in the last run.

Attachment 2 is the plot for different off-diagonal terms in the sensing matrix. It is clear that POS->PIT,YAW coupling is not being measured properly as it remains constant.

Attachment 3: Gautam told us that there is some naming error in nds and that MC_TRANS_PIT/YAW can instead be read through the C1:IOO-MC_TRANS_PIT_ERR and C1:IOO-MC_TRANS_YAW_ERR channels. To test whether they indeed point to the same values, we excited the YAW degree of freedom through LOCKIN1 and checked whether the peaks were visible in these channels. This was also done to give Radhika an opportunity to do something I could confidently mentor on, and to get experience using diaggui.

  15983   Thu Apr 1 00:50:06 2021   gautam   Update   SUS   PRMdiag

I spent some time investigating the PRM this evening, trying out some of the stuff we discussed in the meeting.

  1. I used one of the SUS lockin oscillators to drive the butterfly mode (UL=+1, UR=-1, LL=-1, LR=+1) of the optic, @4.45 Hz, Amplitude=450 cts (Oplev loops were engaged during the measurement).
  2. Used the sensing matrix infrastructure to demodulate the Oplev PIT/YAW error signals, using the other lockin channel (so that oscillator is just for demodulation, it doesn't actuate on the suspension).
  3. The digital demod phase was adjusted to put all of the demodulated signal in one ("I") quadrature.
  4. The balancing of the coils was perturbed by adding small amounts of the naive PIT (YAW) DoF to the pringle-mode actuation, while simultaneously looking to minimize the demodulated signal in YAW (PIT). 

Basically, my finding tonight was that I could not improve things (i.e. make the pringle-mode actuation witnessed by the Oplev QPD smaller) by perturbing the butterfly actuation with +/- 0.05%, 0.5% and 1% of PIT (I didn't try YAW, or other values of PIT, as none of these seemed to do any good). It seems highly unlikely that the existing coil gains (these come after the output matrix) and the actual coil/magnet pairs are so perfectly tuned, so there must be something wrong with my method. I'll try more combos tomorrow. Separately, I verified that the naive PIT (YAW) moves the optic mainly (i.e. to the eye) in PIT (YAW), as judged by the REFL spot on the camera and the readback of the Oplev QPD.

For this work, I made a few changes to filter banks:

  1. Added 2Hz wide BPs centered around 4.45 Hz to the "SIGNAL" FM of the BS and PRM suspension lockins.
  2. Added 100mHz LPFs to the I and Q demodulated outputs.
  3. Made a copy of Kiwamu's perturbcoilbalance_fourosem.py (in scripts/SUS) to add little bit of PIT/YAW to the pringle actuation.

I noticed that the filters/switch states/gains for LOCKIN1 and LOCKIN2 are not consistent within either PRM or BS suspension, or across suspensions. Several filter INs/OUTs were also disabled - something for the SUSdiag team to note, whenever this is scripted, the script should check that the signal is indeed making it end-to-end.

  15982   Wed Mar 31 22:58:32 2021   Anchal, Paco   Update   SUS   MC2 Coil Balancing Test

A cross-coupling test has been set to trigger at 05:00 am on April 1st, 2021. The script is waiting in tmux session 'cB' on pianosa. /scripts/SUS/OutMatCalc/MC2crossCoupleTest.py is being used here. The script will switch on the oscillator in LOCKIN1 of MC2 at 13 Hz and 200 counts and send it along the POS, PIT and YAW vectors of the output matrix one by one, each for 2 minutes. It will take data from C1:IOO-MC_F_DQ, C1:IOO-MC_TRANS_PIT_ERR and C1:IOO-MC_TRANS_YAW_ERR and use it to measure the 'sensing matrix' S. The sensing matrix S is defined as the cross-coupling between excited and sensed DOFs, and we ideally want it to be an identity matrix. The code will use the measured S to create a guess matrix A which, when multiplied into the ideal coil output matrix, gives a rotated coil output matrix O. This guess O will be applied and the measurement repeated. On each iteration, the next A matrix is defined by:

A_{k+1} = (1 + β) A_k − β S_k A_k

This recursive algorithm converges A to the inverse of the initial S. The above relation is derived by noticing that in steady state A S = I ⇒ A = A S A ⇒ A = A − β(A S A − A). I've taken this idea from a mathematics paper I found on some more complex stuff (c.f. https://doi.org/10.31219/osf.io/yrvck).

At each iteration, all three matrices A, O and S will be stored in a text file for analysis later.

The code has error-catching capability and will restore the optic to the status quo if an error occurs or the watchdogs trip due to an earthquake.
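A minimal numerical sketch of this recursion is below. The value of β, the ideal output matrix, and the "true" cross-coupling are illustrative assumptions; in the real measurement S is re-measured after each new output matrix is applied, rather than simulated as it is here.

import numpy as np

def update_guess(A, S, beta=0.5):
    # One step of A_{k+1} = (1 + beta) A_k - beta S_k A_k.
    return (1 + beta) * A - beta * S @ A

ideal_outmat = np.array([[1.0,  1.0,  1.0],
                         [1.0,  1.0, -1.0],
                         [1.0, -1.0,  1.0],
                         [1.0, -1.0, -1.0]])    # coils x (POS, PIT, YAW)
S_true = np.array([[1.00,  0.02, -0.01],
                   [0.00,  1.00, -0.07],
                   [0.00, -0.02,  1.00]])       # made-up plant cross-coupling
A = np.eye(3)
for _ in range(20):
    S_k = S_true @ A                  # stand-in for the re-measured sensing matrix
    A = update_guess(A, S_k)
O = ideal_outmat @ A                  # rotated coil output matrix to apply
print(np.round(S_true @ A, 4))        # approaches the identity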

  15981   Wed Mar 31 03:56:37 2021   Koji   Summary   Electronics   A bunch of electronics received

We have received 9x 18bit DAC adapter boards (D1000654)

  15980   Wed Mar 31 00:40:32 2021   Koji   Update   Electronics   Electronics Packaging for assembly work

I've worked on packing the components for the following chassis
- 5 16bit AI chassis
- 4 18bit AI chassis
- 7 16bit AA chassis
- 8 HAM-A coil driver chassis
They are "almost" ready for shipment. Almost means some small parts are missing. We can ship the boxes to the company while we wait for these small parts.

  • DB9 Female Ribbon Receptacle AFL09B-ND Qty100 (We have 10) -> Received 90 on Apr 1st
  • DB9 Male Ribbon Receptacle CMM09-G Qty100 (We have 10) -> Received 88 on Apr 1st
  • 4-40 Pan Flat Head Screw (round head, Phillips) 1/2" long Qty 50 -> Found 4-40 3/8" Qty50 @WB EE on Apr 1st (Digikey H782-ND)
  • Keystone Chassis Handle 9106 36-9106-ND Qty 50 -> Received 110 on Apr 1st
  • Keystone Chassis Ferrule 9121 NKL PL 36-9121-ND Qty 100 -> Received 55 on Apr 1st
  • Chassis Screws 4-40 3/16" Qty 1100 -> Received 1100 on Apr 1st
  • Chassis Ear Screws 6-32 1/2" 91099A220 Qty 150 -> Received 400 of 3/8" on Apr 1st
  • Chassis Handle Screws 6-32 1/4" 91099A205 Qty 100 -> included in the above
  • Powerboard mounting screw 4-40 Pan Flat Head Screw (round head, Phillips) 1/4" long Qty 125 -> Received 100 on Apr 1st

And some additional items to replenish the stock, which is running low.

  • 18AWG wires (we have orange/blue/black 1000ft, I'm sending ~1000ft black/green/white)
  • Already consumed 80% of 100ft 9pin ribbon cable (=only 20ft left in the stock)
  15979   Tue Mar 30 18:21:34 2021   Jon   Update   CDS   Front-end testing

Progress today:

Outside Internet access for FE test stand

This morning Jordan and I ran an 85-foot Cat 6 Ethernet cable from the campus network switch in the office area (on the ligo.caltech.edu domain) to the FE test stand near 1X6. This is to allow the test-stand subnet to be accessed for remote testing, while keeping it invisible to the parallel Martian subnet.

Successful RTCDS model compilation on new FEs

The clone of the chiara:/home/cds disk completed overnight. Today I installed the disk in the chiara clone. The NFS mounts (/opt/rtcds, /opt/rtapps) shared with the other test-stand machines mounted without issue.

Next, I attempted to open the shared Matlab executable (/cvs/cds/caltech/apps/linux64/matlab/bin/matlab) and launch Simulink. The existing Matlab license (/cvs/cds/caltech/apps/linux64/matlab/licenses/license_chiara_865865_R2015b.lic) did not work on this new machine, as they are machine-specific, so I updated the license file. I linked this license to my personal license, so that the machine license for the real chiara would not get replaced. The original license file is saved in the same directory with a *.bak postfix. If this disk is ever used in the real chiara machine, this file should be restored. After the machine license was updated, Matlab and Simulink loaded and allowed model editing.

Finally, I tested RTCDS model compilation on the new FEs using the c1lsc model as a trial case. It encountered one path issue due to the model being located at /opt/rtcds/userapps/release/isc/c1/models/isc/ instead of /opt/rtcds/userapps/release/isc/c1/models/. This seems to be a relic of the migration of the 40m models from the SVN to a standalone git repo. This was resolved by simply symlinking to the expected location:

$ sudo ln -s /opt/rtcds/userapps/release/isc/c1/models/isc/c1lsc.mdl /opt/rtcds/userapps/release/isc/c1/models/c1lsc.mdl

The model compilation then succeeded:

controls@c1bhd$ cd /opt/rtcds/caltech/c1/rtbuild/release

controls@c1bhd$ make clean-c1lsc
Cleaning c1lsc...
Done

controls@c1bhd$ make c1lsc
Cleaning c1lsc...
Done
Parsing the model c1lsc...
Done
Building EPICS sequencers...
Done
Building front-end Linux kernel module c1lsc...
make[1]: Warning: File 'GNUmakefile' has modification time 28830 s in the
future
make[1]: warning:  Clock skew detected.  Your build may be incomplete.
Done
RCG source code directory:
/opt/rtcds/rtscore/branches/branch-3.4
The following files were used for this build:
/opt/rtcds/caltech/c1/userapps/release/cds/common/src/cdsToggle.c
/opt/rtcds/userapps/release/cds/c1/src/inmtrxparse.c
/opt/rtcds/userapps/release/cds/common/models/FILTBANK_MASK.mdl
/opt/rtcds/userapps/release/cds/common/models/rtbitget.mdl
/opt/rtcds/userapps/release/cds/common/models/SCHMITTTRIGGER.mdl
/opt/rtcds/userapps/release/cds/common/models/SQRT_SWITCH.mdl
/opt/rtcds/userapps/release/cds/common/src/DB2MAG.c
/opt/rtcds/userapps/release/cds/common/src/OSC_WITH_CONTROL.c
/opt/rtcds/userapps/release/cds/common/src/wait.c
/opt/rtcds/userapps/release/isc/c1/models/c1lsc.mdl
/opt/rtcds/userapps/release/isc/c1/models/IQLOCK_WHITENING_TRIGGERING.mdl
/opt/rtcds/userapps/release/isc/c1/models/PHASEROT.mdl
/opt/rtcds/userapps/release/isc/c1/models/RF_PD_WITH_WHITENING_TRIGGERING.mdl
/opt/rtcds/userapps/release/isc/c1/models/UGF_SERVO_40m.mdl
/opt/rtcds/userapps/release/isc/common/models/FILTBANK_TRIGGER.mdl
/opt/rtcds/userapps/release/isc/common/models/LSC_TRIGGER.mdl

Successfully compiled c1lsc
***********************************************
Compile Warnings, found in c1lsc_warnings.log:
***********************************************
[warnings suppressed]

As did the installation:

controls@c1bhd$ make install-c1lsc
Installing system=c1lsc site=caltech ifo=C1,c1
Installing /opt/rtcds/caltech/c1/chans/C1LSC.txt
Installing /opt/rtcds/caltech/c1/target/c1lsc/c1lscepics
Installing /opt/rtcds/caltech/c1/target/c1lsc
Installing start and stop scripts
/opt/rtcds/caltech/c1/scripts/killc1lsc
/opt/rtcds/caltech/c1/scripts/startc1lsc
Performing install-daq
Updating testpoint.par config file
/opt/rtcds/caltech/c1/target/gds/param/testpoint.par
/opt/rtcds/rtscore/branches/branch-3.4/src/epics/util/updateTestpointPar.pl
-par_file=/opt/rtcds/caltech/c1/target/gds/param/archive/testpoint_210330_170634.par
-gds_node=42 -site_letter=C -system=c1lsc -host=c1lsc
Installing GDS node 42 configuration file
/opt/rtcds/caltech/c1/target/gds/param/tpchn_c1lsc.par
Installing auto-generated DAQ configuration file
/opt/rtcds/caltech/c1/chans/daq/C1LSC.ini
Installing Epics MEDM screens
Running post-build script

safe.snap exists

We are ready to start building and testing models.

  15978   Tue Mar 30 17:27:04 2021   Yehonathan   Update   CDS   c1auxey assembly

{Yehonathan, Jon}

We poked around the c1auxex chassis (looked in situ with a flashlight, not disturbing any connections) to better understand the wiring scheme.

To our surprise, we found that nothing was connected to the RTNs of the analog input Acromag modules. From previous experience and the Acromag manual, there can't be any meaningful voltage measurement without it.

I also did some rewiring in the Acromag chassis to improve its reliability. In particular, I removed the ground wires from the DIN rail and connected them using crimp-on butt splices.

 

  15977   Mon Mar 29 19:32:46 2021   gautam   Update   LSC   REFL55 whitening checkout

I repeated the usual whitening board characterization test of:

  • driving a signal (using awggui) into the two inputs of the whitening board using a spare DAC channel available in 1Y2
  • demodulating the response using the LSC sensing matrix infrastructure
  • Stepping the whitening gain, incrementing it in 3dB steps, and checking if the demodulated lock-in outputs increase in the expected 3dB steps.

Attachment #1 suggests that the steps are equal (3dB) in size, but note that the "Q" channel shows only ~half the response of the I channel. The drive is derived from a channel of an unused AI+dewhite board in 1Y2, split with a BNC Tee, and fed to the two inputs on the whitening filter. The impedance is expected to be the same on each channel, and so each channel should see the same signal, but I see a large asymmetry. All of this checked out a couple of weeks ago (since we saw ellipses and not circles) so not sure what changed in the meantime, or if this is symptomatic of some deeper problem.

Usually, doing this and then restoring the cabling returns the signal levels of REFL55 to nominal levels. Today it did not - at the nominal whitening gain setting of +18dB flat gain, when the PRMI is fringing, the REFL55 inputs are frequently reporting ADC overflows. Needless to say, all my attempts today evening to transition the length control of the vertex from REFL165 to REFL55 failed.

I suppose we could try shifting the channels to (physical) Ch5 and Ch6 which were formerly used to digitize the ALS DFD outputs and are currently unused (from Ch3, Ch4) on this whitening filter and see if that improves the situation, but this will require a recompile of the RTCDS model and consequent CDS bootfest, which I'm not willing to undertake today. If anyone decides to do this test, let's also take the opportunity to debug the BIO switching for the delay line.

  15976   Mon Mar 29 17:55:50 2021   Jon   Update   CDS   Front-end testing

Cloning of chiara:/home/cvs underway

I returned today with a beefier USB-SATA adapter, which has an integrated 12 V supply for powering 3.5" disks. I used this to interface a new 6 TB 3.5" disk found in the FE supplies cabinet.

I decided to go with a larger disk and copy the full contents of chiara:/home/cds. Strictly speaking, the FEs only need the RTS executables in /home/cvs/rtcds and /home/cvs/rtapps. However, to independently develop models, the shared matlab binaries in /home/cvs/caltech/... also need to be exposed. And there may be others I've missed.

I began the clone around 12:30 pm today. To preserve bandwidth to the main disk, I am copying not the /home/cds disk directly, but rather its backup image at /media/40mBackup.

Set up of dedicated SimPlant host

Although not directly related to the FE testing, today I added a new machine to the test stand which will be dedicated to running sim models. Chris has developed a virtual cymac which we plan to run on this machine. It will provide a dedicated testbed for SimPlant and other development, and can host up to 10 user models.

I used one of the spare 12-core Supermicro servers from LLO, which I have named c1sim. I assigned it the IP address 192.168.113.93 on the Martian network. This machine will run in a self-contained way that will not depend on any 40m CDS services and also should not interfere with them. However, if there are concerns about having it present on the network, it can be moved to the outside-facing switch in the office area. It is not currently running any RTCDS processes.

Set-up was carried out via the following procedure:

  • Installed Debian 10.9 on an internal 480 GB SSD.
  • Installed cdssoft repos following Jamie's instructions.
  • Installed RTS and Docker dependencies:
    $ sudo apt install cpuset advligorts-mbuf-dkms advligorts-gpstime-dkms docker.io docker-compose
  • Configured scheduler for real-time operation:
    $ sudo /sbin/sysctl -w kernel.sched_rt_runtime_us=-1
  • Reserved 10 cores for RTS user models (plus one for IOP model) by adding the following line to /etc/default/grub:
    GRUB_CMDLINE_LINUX_DEFAULT="isolcpus=nohz,domain,1-11 nohz_full=1-11 tsc=reliable mce=off"
    followed by the commands:
    $ sudo update-grub
    $ sudo reboot now
  • Downloaded virtual cymac repo to /home/controls/docker-cymac.

I need to talk to Chris before I can take the setup further.

  15975   Mon Mar 29 17:34:52 2021   rana   Update   SUS   MC2 Coil Balancing updates

I think there's been some mis-communication. There's no updated Hang procedure, but there is the one that Anchal, Paco and I discussed, which is different from what is in the elog.

We'll discuss again, and try to get it right, but no need to make multiple forks yet.

  15974   Mon Mar 29 17:11:54 2021   gautam   Update   SUS   MC2 Coil Balancing updates

For this technique to work, (i) the WFS loops must be well tuned and (ii) the beam must be well centered on MC2. I am reasonably certain neither is true. For MC2 coil balancing, you can use a HeNe, there is already one on the table (not powered), and I guess you can use the MC2 trans QPD as a sensor, MC won't need to be locked so you can temporarily hijack that QPD (please don't move anything on the table unless you're confident of recovering everything, it should be possible to do all of this with an additional steering mirror you can install and then remove once your test is done). Then you can do any variant of the techniques available once you have an optical lever, e.g. single coil drive, pringle mode drive etc to do the balancing.

I think Hang had some technique he tried recently as well, maybe that is an improvement.

  15973   Mon Mar 29 17:07:17 2021   gautam   Summary   SUS   MC3 new Input Matrix not providing stable loop

I suppose you've tried doing the submatrix approach, where SIDE is excluded for the face DoFs? Does that give a better matrix? To me, it's unreasonable that the side OSEM senses POS motion more than any single face OSEM, as your matrix suggests (indeed the old one does too). If/when we vent, we can try positioning the OSEMs better.

  15972   Mon Mar 29 10:44:51 2021   Paco, Anchal   Update   SUS   MC2 Coil Balancing updates

We ran the coil balancing procedure 4 times while iterating through the output matrix optimization.

Attachment 1, pages 1 to 4, shows the progression of the cross coupling from the current output matrix (which is the theoretical ideal) to the latest iteration. We plot the sensed-DOF ASDs, which we used to determine the cross coupling when different excitations are applied with LOCKIN1 feeding a 13 Hz oscillation of 200 counts amplitude along the vector defined in the output matrix. That means that when we change the output matrix in subsequent tests, we also change the excitation direction along with it.

Unfortunately, we don't see very good optimization over the iterations. While we see some peaks going down in sensed PIT and sensed POS (through MC_F), we instead see an increase in cross coupling in the sensed YAW.


Scripts:

  • For running the tests, we used the script scripts/SUS/OutMatCalc/crossCoupleTest.py and wrote commanding scripts in /users/anchal/20210329_MC2_TestingNewOutMat.
  • The optimization code is in scripts/SUS/OutMatCalc/outMatOptimize.py.
  • The code reads the sensed DOF data using nds2 and calculates the cross spectral density among the sensed DOFs at the excitation frequencies.
  • This is normalized by the power spectral density of the reference data (no excitation) and the power spectral density of the position data to create a TF estimate.
  • The real part of the sensing matrix thus created is used to compute the inverse matrix.
  • The inverse matrix is first normalized along each row by its diagonal element (to get 1 there) and then multiplied into the previous output matrix to create the new output matrix.
  • I guess reading the code is a better way of understanding this algorithm; a condensed sketch of the last two steps is given below.
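A condensed sketch of those last two steps (an assumed reading of outMatOptimize.py, not a verbatim excerpt):

import numpy as np

def new_output_matrix(S, prev_outmat):
    # Invert the real part of the sensing matrix, normalize each row by its
    # diagonal element to put 1 on the diagonal, then fold the result into the
    # previous output matrix.
    inv = np.linalg.inv(np.real(np.asarray(S)))
    inv = inv / np.diag(inv)[:, None]
    return np.asarray(prev_outmat) @ inv
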
  15971   Sun Mar 28 14:16:25 2021   Anchal   Summary   SUS   MC3 new Input Matrix not providing stable loop

Rana asked us to write out here the new MC3 input matrix we have calculated along with the old one. The new matrix is not working out for us as it can't keep the suspension loops stable.


Matrices:

Old (Current) MC3 Input Matrix (C1:SUS-MC3_INMATRIX_ii_jj)
  UL UR LR LL SD
POS 0.288 0.284 0.212 0.216 -0.406
PIT 2.658 0.041 -3.291 -0.674 -0.721
YAW 0.605 -2.714 0.014 3.332 0.666
SIDE 0.166 0.197 0.105 0.074 1

 

New MC3 Input Matrix (C1:SUS-MC3_INMATRIX_ii_jj)
  UL UR LR LL SIDE
POS 0.144 0.182 0.124 0.086 0.586
PIT 2.328 0.059 -3.399 -1.13 -0.786
YAW 0.552 -2.591 0.263 3.406 0.768
SIDE -0.287 -0.304 -0.282 -0.265 0.871

Note that the new matrix has been made so that the norm of each row is the same as the norm of the corresponding row in the old (current) input matrix.
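A minimal sketch of the row-norm matching described above (the helper name is made up for illustration; the actual normalization lives in our scripts):

import numpy as np

def match_row_norms(new_inmat, old_inmat):
    # Rescale each row of the new input matrix so its Euclidean norm equals
    # that of the corresponding row of the old input matrix.
    new_inmat = np.asarray(new_inmat, dtype=float)
    old_inmat = np.asarray(old_inmat, dtype=float)
    scale = np.linalg.norm(old_inmat, axis=1) / np.linalg.norm(new_inmat, axis=1)
    return new_inmat * scale[:, None]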


Peak finding results:

  Guess Values   Fitted Values
PIT Resonant Freq. (Hz) 0.771 0.771
YAW Resonant Freq. (Hz) 0.841 0.846
POS Resonant Freq. (Hz) 0.969 0.969
SIDE Resonant Freq. (Hz) 0.978 0.978
PIT Resonance Q 600 345
YAW Resonance Q 230 120
POS Resonance Q 200 436
SIDE Resonance Q 460 282
PIT Resonance Amplitude 500 750
YAW Resonance Amplitude 1500 3872
POS Resonance Amplitude 3800 363
SIDE Resonance Amplitude 170 282

Note: The highest peak in the SIDE OSEM sensor free-swinging data, as well as in the DOF-basis data created using the existing input matrix, comes at 0.978 Hz. Ideally this should be 1 Hz, and in MC1 and MC2 we see the SIDE DOF resonance near 0.99 Hz. If you look closely, there is actually a small peak near 1 Hz, but it is too small to be the SIDE DOF eigenfrequency. And if it is indeed that, then which of the other 4 peaks is not a DOF we are interested in?

One possibility is that the POS eigenfrequency, which is supposed to be around 0.97 Hz, is split into two peaks due to some sideways vibration, and hence these peaks couple strongly to the SIDE OSEM as well.

P.S. I think something is wrong and our limited experience is not enough to pinpoint it. I can put up more data or plots if required to understand this issue. Let us know what you all think.

  15970   Fri Mar 26 11:54:37 2021   Paco, Anchal   Update   SUS   MC2 Coil Balancing updates

[Paco, Anchal]

  • Today we spent the morning testing the scripts under ~/c1/scripts/SUS/OutMatCalc/ that automate the procedure (which we have been doing by hand) and catch any "bad" behavior instances that we have identified. In such instances, the script sets up to restore the IMC state smoothly.
  • After some testing and debugging, we managed to get some data for MC2 using ~/c1/scripts/SUS/OutMatCalc/getCrossCouplingData.py
  15969   Fri Mar 26 10:25:37 2021   Yehonathan   Update   BHD   SOS assembly

I measured some of the dowel pins we got from McMaster with a caliper.

One small pin is 0.093" in diameter and 0.376" in length. The other sampled small pin has the same dimensions.

One big pin is 0.187" in diameter and 0.505" in length. The other is 0.187" in diameter and 0.506" in length.

The dowels meet our requirements.

  15968   Thu Mar 25 18:05:04 2021   gautam   Update   Electronics   Stuffed HV coil drivers received from Screaming Circuits

I think the only parts missing for assembly now are four 2U chassis. The PA95s need to be soldered on as well (they didn't arrive in time to send to SC). The stuffed boards are stored under my desk. I inspected one board and it looks fine, but of course we will need to run some actual bench tests to be sure.

  15967   Thu Mar 25 17:39:28 2021   gautam   Update   Computer Scripts / Programs   Spot position measurement scripts "modernized"

I want to measure the spot positions on the IMC mirrors. We know that they can't be too far off center. Basically, I did the bare minimum to get these scripts in /opt/rtcds/caltech/c1/scripts/ASS/MC/ running on rossa (python3 mainly). I confirmed that I get some kind of spot measurement from this, but I am not sure of the data quality / calibration to convert the demodulated response into mm of decentering on the MC mirrors. Perhaps it's something the MC suspension team can look into - it seems implausible to me that we are off by 5 mm in PIT and YAW on MC2? The spot positions I get are (in mm from the center):

MC1P          MC2P          MC3P          MC1Y          MC2Y          MC3Y
0.640515     -5.149050     0.476649     -0.279035     5.715120     -2.901459

A future iteration of the script should also truncate the number of significant figures per a reasonable statistical error estimation.

  15966   Thu Mar 25 16:02:15 2021   gautam   Summary   SUS   Repeated measurement of coil Rs & Ls for PRM/BS

Method

Since I am mainly concerned with the actuator part of the OSEM, I chose to do this measurement at the output cables for the coil drivers in 1X4. See schematic for pin-mapping. There are several parts in between my measurement point and the actual coils but I figured it's a good check to figure out if measurements made from this point yield sensible results. The slow bias voltages were ramped off under damping (to avoid un-necessarily kicking the optics when disconnecting cables) and then the suspension watchdogs were shutdown for the duration of the measurement.

I used an LCR meter to measure R and L - as prescribed by Koji, the probe leads were shorted and the readback nulled to return 0. Then for R, I corroborated the values measured with the LCR meter against a Fluke DMM (they turned out to be within +/- 0.5 ohms of the value reported by the BK Precision LCR meter which I think is reasonable).

Result

                   PRM
Pin1-9 (UL)   / R = 30.6Ω / L=3.23mH  
Pin2-10 (LL)  / R = 30.3Ω / L=3.24mH
Pin3-11 (UR) / R = 30.6Ω / L=3.25mH
Pin4-12 (LR) / R = 31.8Ω / L=3.22mH
Pin5-13 (SD) / R = 30.0Ω / L=3.25mH

                       BS
Pin1-9 (UL)   / R = 31.7Ω / L=3.29mH
Pin2-10 (LL)  / R = 29.7Ω / L=3.26mH
Pin3-11 (UR) / R = 29.8Ω / L=3.30mH
Pin4-12 (LR) / R = 29.7Ω / L=3.27mH
Pin5-13 (SD) / R = 29.0Ω / L=3.24mH

Conclusions

On the basis of this measurement, I see no problems with the OSEM actuators - the wire resistances to the flange seem comparable to the nominal OSEM resistance of ~13 ohms, but this isn't outrageous I guess. But I don't know how to reconcile this with Koji's measurement at the flange - I guess I can't definitively rule out the wire resistance being 30 ohms and the OSEMs being ~1 ohm as Koji measured. How to reconcile this with the funky PRM actuator measurement? Possibilities, the way I see it, are:

  1. Magnets on PRM are weird in some way. Note that the free-swinging measurement for the PRM showed some unexpected features.
  2. The imbalance is coming from one part of the drive chain - it could be a busted current buffer, for example.
  3. The measurement technique was wrong.
  15965   Thu Mar 25 15:31:24 2021   gautam   Update   IOO   WFS servos

The servos are almost certainly not optimal - but we have the IFO sort of working, so before we make any changes, let's make a strong case for it. Once the loop TFs and noises (e.g. the sensing noise reinjection you maybe saw) are fully characterized and a new loop is shown to perform better, then we can make the changes, but until then, let's continue using the "nominal" configuration and keep all the WFS loops on. I turned everything back on.

BTW, MC2_ASCPIT_IN1 isn't the correct channel to measure the sensing noise re-injection; you need some other sensor, e.g. whether the MC transmission is (de)stabilized. 0-20 Hz is where I expect the WFS are actually measuring above the sensing noise.

  15964   Thu Mar 25 14:58:16 2021 YehonathanUpdateBHDSOS SmCo magnets Inspection

Redoing magnet measurement of envelope 1:

Magnet # Max Magnetic Field (kG)
1 2.89
2 2.82
3 2.86
4 2.9
5 2.86
6 2.73
7 2.9
8 2.88
9 2.85
10 2.93

Moving on to inspect and measure envelope 3 (the last one):

Magnet # Max Magnetic Field (kG)
1 2.92
2 2.85
3 2.93
4 2.97
5 2.9
6 3.04
7 2.9
8 2.92
9 3
10 2.92
11 2.94
12 2.92
13 2.92
14 2.95
15 3.02
16 2.91
17 2.89
18 2.9
19 2.86
20 2.9
21 2.92
22 2.9
23 2.87
24 2.93
25 2.85
26 2.88
27 2.92
28 2.9
29 2.9
30 2.89
31 2.83
32 2.83
33 2.8
34 2.94
35 2.88
36 2.91
37 2.9
38 2.91
39 2.94
40 2.88
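A small sketch for summarizing the envelope-3 numbers above and flagging outliers, e.g. as an aid for picking matched sets. The values are copied straight from the table; nothing else is assumed.

    import numpy as np

    B = np.array([2.92, 2.85, 2.93, 2.97, 2.90, 3.04, 2.90, 2.92, 3.00, 2.92,
                  2.94, 2.92, 2.92, 2.95, 3.02, 2.91, 2.89, 2.90, 2.86, 2.90,
                  2.92, 2.90, 2.87, 2.93, 2.85, 2.88, 2.92, 2.90, 2.90, 2.89,
                  2.83, 2.83, 2.80, 2.94, 2.88, 2.91, 2.90, 2.91, 2.94, 2.88])  # kG

    print(f"mean = {B.mean():.3f} kG, std = {B.std(ddof=1):.3f} kG, "
          f"spread = {100*np.ptp(B)/B.mean():.1f}%")
    outliers = np.where(np.abs(B - B.mean()) > 2 * B.std(ddof=1))[0] + 1
    print("magnets >2 sigma from the mean:", outliers)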

 

  15963   Thu Mar 25 14:16:33 2021 gautamUpdateCDSc1auxey assembly

It might be a good idea to configure this box for the new suspension config - modern Satellite Amp, HV coil driver etc. It's a good opportunity to test the wiring scheme, "cross-connect" type adapters etc.

Quote:

Next, the feedthroughs need to be wired and the channels need to be bench tested.

  15962   Thu Mar 25 12:11:53 2021 YehonathanUpdateCDSc1auxey assembly

I finished prewiring the new c1auxey Acromag chassis (see attached pictures). I connected all grounds to the DIN rail to save some wiring. The power switches and LEDs work as expected.

I configured the DAQ modules using the old Windows machine and set the gateway to 192.168.114.1. The host machine still needs to be set up.

Next, the feedthroughs need to be wired and the channels need to be bench tested.

  15961   Thu Mar 25 11:46:31 2021 Paco, AnchalUpdateSUSMC2 Coil Balancing updates

Proof-of-principle

  • We excited PIT and YAW dofs using LOCKIN1 in MC2 on Monday.
  • We analyzed this data with a simple procedure explained in the Attachment 1 python notebook (also present at /users/anchal/20210323_AnalyszingCoilActuationBalance/)
  • Basically, we estimated the 2x2 cross-coupling matrix from actuated DOF to sensed DOF, inverted it, and applied the inverse to the output matrix to undo the cross coupling (see the sketch after this list).
  • Attachments 2 and 3 show how well we did in undoing the cross coupling.
  • The ratio of 13.5 Hz peaks shows how much coupling is still present.
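A minimal sketch of that 2x2 correction step, with made-up coupling numbers and the ideal +/-1 output matrix as the starting point (not the values actually used on MC2).

    # S maps drive DOF -> sensed DOF (normalized so the diagonal is 1).
    # Post-multiplying the output matrix by inv(S) makes the effective
    # drive -> sensor map the identity, i.e. the DOFs are decoupled.
    import numpy as np

    S = np.array([[1.00, 0.15],    # row: sensed PIT; columns: PIT drive, YAW drive
                  [0.08, 1.00]])   # row: sensed YAW   (made-up couplings)

    O_old = np.array([[ 1.0,  1.0],   # rows: UL, UR, LL, LR; columns: PIT, YAW
                      [ 1.0, -1.0],
                      [-1.0,  1.0],
                      [-1.0, -1.0]])

    O_new = O_old @ np.linalg.inv(S)
    print(np.round(O_new, 3))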

Going towards 3x3 Coil balancing:

  • In a conversation with Rana yesterday, we understood that we can use MC_F data as POS sensing data out of the loop.
  • So today, we repeated the excitation measurements while exciting POS, PIT and YAW dofs from LOCKIN1 on MC2 and measuring C1:IOO-MC_F, C1:SUS-MC2_ASCPIT_IN1 and C1:SUS-MC2_ASCPIT_IN2.
  • Data from MC_F is converted into units of um using factor 9.57e-8 um/Hz.
  • We changed the excitation amplitude in order to see cross coupling peaks when they were not visible with low excitation.
  • The data was measured with the newly calculated input matrix loaded, which, per our calculations, diagonalizes the OSEM sensing matrix.

Some major changes:

  • We found that C1:SUS-MC2_ASCPIT_IN1 showed a broadband increase in noise today (relative to Monday) by a factor of about 100 in the 0-20 Hz band.
  • We were not sure why this changed from our 22nd March measurement.
  • We checked whether the gain values in the loops had changed in the last 3 days, but they hadn't.
  • Then we realized that the WFS1_PIT and WFS2_PIT switches that we turned ON on Tuesday were the only changes made to the loop.
  • We turned C1:IOO-WFS1_PIT_SW1 and C1:IOO-WFS2_PIT_SW1 back OFF. This brought the noise level of C1:SUS-MC2_ASCPIT_IN1 back down to what it was on Monday.

 

  15960   Wed Mar 24 22:54:49 2021 gautamUpdateLSCNew day, new problems

I thought I'd get started on some of the tests tonight. But I found that this problem had resurfaced. I don't know what's so special about the REFL55 photodiode - as far as I can tell, other photodiodes at the REFL port are running with comparable light incident on them, similar flat whitening gain, etc. The whitening electronics are known to be horrible because they use the quad LT1125 - but why is only this channel problematic? To describe the problem in detail:

  • I had checked the entire chain by putting an AM field on the REFL 55 photodiode, and corroborating the pk-to-pk (counts) value measured in CDS with the "nominal" setting of +18dB flat whitening gain against the voltage measured by a "reference" PD, in this case a fiber coupled NF1611.
  • In the above test, I confirmed that the measured signal was consistent with the value reported by the NF1611.
  • So, at least on Friday, the entire chain worked just fine. The PRMI PDH fringes were ~6000cts-pp in this condition.
  • Today, I found that while trying to acquire PRMI lock, the PDH fringes witnessed in REFL55 were saturating the ADC. I lowered the whitening gain to 0 dB (a reduction by a factor of ~8). Then the PDH fringes were ~20,000cts-pp. So, overall, the gain of the chain seems to have gone up by a factor of ~25. 
  • Given my NF1611 based test, the part of the chain I am most suspicious of is the whitening filter. But I don't have a good mechanism that explains this observation. It can't be as simple as the input impedance of the LT1125 being lowered due to internal saturations, because that would have the opposite effect - we would measure a tiny signal instead of a huge one.

I request Koji to look into this, time permitting, tomorrow. In the slightly longer term, we cannot run the IFO like this - the frequency of occurrence is much too high, and the "fix" seems random to me: why should sweeping the whitening gain fix the problem? There was some suggestion of cutting the PCB trace and putting in a resistor to limit the current draw on the preceding stage, but this PCB is ancient and I believe some traces are buried in internal layers. At the same time, I am guessing it's too much work to completely replace the whitening electronics with the aLIGO style units. Anyone have any bright ideas?
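For the record, the factor of ~25 quoted above follows directly from the numbers in the text (~6000 cts-pp at +18 dB whitening gain on Friday vs. ~20,000 cts-pp at 0 dB today):

    friday_cts, friday_gain_db = 6000, 18
    today_cts, today_gain_db = 20000, 0

    gain_ratio = (today_cts / friday_cts) * 10 ** ((friday_gain_db - today_gain_db) / 20)
    print(f"implied gain increase: x{gain_ratio:.1f}")   # ~ x26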


Anyway, I managed to lock the PRMI (ETMs misaligned) using REFL165I/Q. Then, instead of using the BS as the MICH actuator, I used the two ITMs (equal magnitude, opposite sign in the LSC output matrix).

  • The digital demod phase in this config is different from what is used when the arm cavities are in play (under ALS control). Probably the difference is telling us something about the reflectivity of the arm cavity for various sideband fields, from which we can extract some useful info about the arm cavity (length, losses etc). But that's not the focus here - the correct digital demod phase was 11 degrees. See Attachment #1 for the sensing matrix. I've annotated it with some remarks.
  • The signals appear much more orthogonal when actuating on the ITMs. However, I was still only able to null the MICH line sensed in the PRCL sensor to a ratio of 1/5 (while looking at peaks on DTT). I was unable to do better by fine tuning either the digital demod phase or the relative actuation strength on each ITM (see the sketch after this list).
  • The PRCL loop had a UGF of ~120 Hz, MICH loop ~60 Hz.
  • With the PRMI locked in this config, I tried to measure the appropriate loop gain and sign if I were to use the REFL55 photodiode instead of REFL165 - but I didn't have any luck. Unsurprising given the known electronics issues I guess...
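For reference, the digital demod phase just rotates the I/Q pair. Below is a toy sketch of sweeping the phase to confine a line to one quadrature - the line amplitudes are made up, and this is not the actual RTCDS phase-rotation code.

    import numpy as np

    def rotate_iq(I, Q, phase_deg):
        """Return (I', Q') after rotating the demod phase by phase_deg."""
        phi = np.deg2rad(phase_deg)
        return I * np.cos(phi) + Q * np.sin(phi), -I * np.sin(phi) + Q * np.cos(phi)

    # sweep the phase and find where the (hypothetical) MICH line is nulled in one quadrature
    I, Q = 0.9, 0.2
    phases = np.linspace(-90, 90, 3601)
    _, Qp = rotate_iq(I, Q, phases)
    print(f"line best confined to one quadrature at {phases[np.argmin(np.abs(Qp))]:.1f} deg")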

I didn't get around to running any of the other tests tonight, will continue tomorrow.


Update Mar 26: Attachments #2 and #3 show that there is clearly something wrong with the whitening electronics associated with the REFL55 channels - with the PSL shutter closed (so the only "signal" being digitized should be the electronics noise at the input of the whitening stage), the I and Q channels don't show similar profiles, and moreover, are not consistent between sweeps (the two screenshots are from two separate sweeps). I don't know what to make of the parts of the sweep that don't show the expected "steps". Until ndscope gets a log-scaled y-axis option, we have to live with the poor visualization of the gain steps, which are dB (rather than linearly) spaced. For this particular case, StripTool isn't an option either because the Q channel has a negative offset, and I opted against futzing with the cabling at 1Y2 to give a small fixed positive voltage instead. I will emphasize that on Friday, this problem was not present, because the gain balance of the I and Q channels was good to within 1 dB.

  15959   Wed Mar 24 19:02:21 2021 JonUpdateCDSFront-end testing

This evening I prepared a new 2 TB 3.5" disk to hold a copy of /opt/rtcds and /opt/rtapps from chiara. This is the final piece of setup before model compilation can be tested on the new front-ends. However chiara does not appear to support hot-swapping of disks, as the disk is not recognized when connected to the live machine. I will await confirmation before rebooting it. The new disk is not currently connected.

  15958   Wed Mar 24 15:24:13 2021 gautamUpdateLSCNotes on tests

For my note-taking:

  1. Lock PRMI with ITMs as the MICH actuator. Confirm that the MICH-->PRCL contribution cannot be nulled. ✅  [15960]
  2. Lock PRMI on REFL165 I/Q. Check if transition can be made smoothly to (and from?) REFL55 I/Q.
  3. Lock PRMI. Turn sensing lines on. Change alignment of PRM / BS and see if we can change the orthogonality of the sensing.
  4. Lock PRMI. Put a razor blade in front of an out-of-loop photodiode, e.g. REFL11 or REFL33. Try a few different masks (vertical half / horizontal half and L/R permutations) and see if the orthogonality (or lack thereof) is mask-dependent.
  5. Double check the resistance/inductance of the PRM OSEMs by measuring at 1X4 instead of flange. ✅  [15966]
  6. Check MC spot centering.

If I missed any of the tests we discussed, please add them here.

  15957   Wed Mar 24 09:23:49 2021 PacoUpdateSUSMC3 new Input Matrix

[Paco]

  • Found IMC locked upon arrival
  • Loaded newest MC3 Input Matrix coefficients using /scripts/SUS/InMatCalc/writeMatrix.py after unlocking the MC, and disabling the watchdog. 
  • Again, the sensor signals started increasing after the WD was re-enabled with the new input matrix, so I manually tripped it, restored the old matrix, and recovered the MC lock.
  • Something is still off with this input matrix that makes the MC3 loop unstable.
  15956   Wed Mar 24 00:51:19 2021 gautamUpdateLSCSchnupp asymmetry

I used the Valera technique to measure the Schnupp asymmetry to be \approx 3.5 \, \mathrm{cm}, see Attachment #1. The measured data are shown as points, and the zero crossing is estimated using a linear fit. I repeated the measurement 3 times for each arm to see if I get consistent results - seems like I do. Subtle effects like possible differential detuning of each arm cavity (since the measurement is done one arm at a time) are not included in the error analysis, but I think it's not controversial to say that our Schnupp asymmetry has not changed by a huge amount from past measurements. Jamie set a pretty high bar with his plot, which I've tried to live up to. 
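A minimal sketch of the zero-crossing-plus-error arithmetic: fit a line near the crossing and propagate the fit covariance to the crossing point. The data below are made up; the real measurement and fit are in Attachment #1.

    import numpy as np

    x = np.array([-4.0, -2.0, 0.0, 2.0, 4.0])      # independent variable (e.g. demod phase) [arb]
    y = np.array([ 2.1,  1.0, 0.1, -0.9, -2.0])    # measured signal [arb]

    (m, b), cov = np.polyfit(x, y, 1, cov=True)
    x0 = -b / m
    # first-order error propagation for x0 = -b/m
    var_x0 = cov[1, 1] / m**2 + (b / m**2) ** 2 * cov[0, 0] - 2 * (b / m**3) * cov[0, 1]
    print(f"zero crossing at {x0:.2f} +/- {np.sqrt(var_x0):.2f}")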

  15955   Tue Mar 23 09:16:42 2021 Paco, AnchalUpdateComputersPower cycled C1PSL; restored C1PSL

So actually, it was the C1PSL channels that had died. We did the following to get them back:

  • We went to this page and tried the telnet procedure. But it was unable to find the host.
  • So we followed the next advice. We went to the 1X1 rack and manually hard shut off C1PSL computer by holding down the power button until the LEDs went off.
  • We waited 5-7 seconds and switched it back on.
  • By the time we were back in control room, the C1PSL channels were back online.
  • The mode cleaner, however, was struggling to keep the lock - it kept going in and out of lock.
  • So we followed the next advice and did a burt restore, which ran the following command:
    burtwb -f /opt/rtcds/caltech/c1/burt/autoburt/snapshots/2021/Mar/22/17:19/c1psl.snap -l /tmp/controls_1210323_085130_0.write.log -o /tmp/controls_1210323_085130_0.nowrite.snap -v 
  • Now the mode cleaner was locked, but we found that the input switches of the C1IOO-WFS1_PIT and C1IOO-WFS2_PIT filter banks were off, which meant that only the YAW sensors were in the loop (these switch channels can be checked quickly as sketched at the end of this list).
  • We went back in dataviewer and checked when these channels were shut down. See attachments for time series.
  • It seems this happened yesterday, March 22nd near 1:00 pm (20:00:00 UTC). We can't find any mention of anyone else doing it on elog and we left by 12:15pm.
  • So we shut down the PSL shutter (C1:PSL-PSL_ShutterRqst) and switched off MC autolocker (C1:IOO-MC_LOCK_ENABLE).
  • Switched on C1:IOO-WFS1_PIT_SW1 and C1:IOO-WFS2_PIT_SW1.
  • Turned back on PSL shutter (C1:PSL-PSL_ShutterRqst) and MC autolocker (C1:IOO-MC_LOCK_ENABLE).
  • Mode cleaner locked back easily and now is keeping lock consistently. Everything looks normal.
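For future reference, a quick way to read back the switch channels named above from a workstation, assuming pyepics is available there:

    # Read the WFS PIT input-switch channels (channel names taken from the text above).
    from epics import caget

    for ch in ["C1:IOO-WFS1_PIT_SW1", "C1:IOO-WFS2_PIT_SW1"]:
        print(ch, caget(ch))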