ID | Date | Author | Type | Category | Subject
15949 | Fri Mar 19 22:24:54 2021 | gautam | Update | LSC | PRMI investigations: what IS the matrix??
I did all these checks today.
Quote:
    I will check (i) REFL55 transimpedance, (ii) cable loss between AP table and 1Y2 and (iii) is the beam well centered on the REFL55 photodiode.
- The transimpedance was measured to be ~420 ohms at 55 MHz (-4.3 dB relative to the assumed 700 V/A of the NF1611), close to what I measured in June (the data download apparently didn't work, so I don't have a plot, but the measurement can readily be repeated). The DC levels also checked out - with 20 mA drive current for the Jenne laser, I measured ~2.3 V on the NF1611 (10 kohm DC transimpedance) vs ~13 mV on the DC output of the REFL55 PD (50 ohm DC transimpedance).
- Time domain confirmation of the above statement is seen in Attachment #1. The Agilent was used to drive the Jenne laser with 0dBm RF signal @ 55 MHz. Ch1 (yellow) is the REFL55 PD output, Ch2 (blue) is the NF1611 RFPD, measured at the AP table (sorry for the confusing V/div setting).
- Re-connected the cabling at the AP table, and measured the signal at 1Y2 using the scope Rana conveniently left there, see Attachment #2. Though the two scopes are different, the cable+connector loss estimated from the Vpp of the signal at the AP table vs that at 1Y2 is 1.5 dB, which isn't outrageous I think.
- For the integrated test, I left the AM laser incident on the REFL55 photodiode, reconnected all the cabling to the CDS system, and viewed the traces on ndscope, see Attachment #3. Again, I think all the numbers are consistent.
- REFL55 demod board has an overall conversion gain (including the x10 gain of the daughter board preamp) of ~5V I/F per 1V RF.
- There is a flat 18 dB whitening gain.
- The digitized signal was ~13000 ctspp - assuming 3276.8 cts/V, that's ~4Vpp. Undoing the flat whitening gain and the conversion efficiency, I get 13000 / 3276.8 / (10^(18/20)) / 5 ~ 100 mVpp, which is in good agreement with Attachment #3 (pardon the thin traces, I didn't realize it looked so bad until I closed everything).
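As a sanity check of this arithmetic, a minimal sketch (plain Python; the 3276.8 cts/V ADC calibration, 18 dB whitening gain, and 5 V/V conversion gain are the numbers quoted above):
# Undo the digitization chain to recover the RF signal at the demod board input.
adc_counts_pp = 13000      # measured digitized signal, counts peak-to-peak
cts_per_volt = 3276.8      # assumed ADC calibration, cts/V
whitening_dB = 18          # flat whitening gain
conv_gain = 5              # demod conversion gain, V (I/F) per V (RF)
v_adc_pp = adc_counts_pp / cts_per_volt                  # ~4 Vpp at the ADC
v_rf_pp = v_adc_pp / 10**(whitening_dB / 20) / conv_gain
print(f"Inferred RF input: {v_rf_pp * 1e3:.0f} mVpp")    # ~100 mVpp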
So it would seem that there is nothing wrong with the sensing electronics. I also think we can rule out any funkiness with the modulation depths since they have been confirmed with multiple different measurements.
One thing I checked was the splitting ratios on the AP table. Jenne's diagram is still accurate (assuming the components are labelled correctly). Let's assume 0.8 W makes it through the IMC to the PRM - then, according to the linked diagram, I would expect 0.8 W * 0.8 * (1-5.637e-2) * 0.8 * 0.1 * 0.5 * 0.9 ~ 22 mW to make it onto the REFL55 PD. With the PRM aligned and the beam centered on the PD (using the DC monitor, but I also looked through an IR viewer - it looked pretty well centered), I measured a 500 mV DC level. Assuming 50 ohm DC transimpedance, that's 500 / 50 / 0.8 ~ 12.5 mW of power on this photodiode, which, while consistent with what's annotated on Jenne's diagram, is ~50% off from expectation. Is the uncertainty in the Faraday transmission and IMC transmission enough to account for this large deviation?
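For reference, the power-budget arithmetic as a minimal sketch (splitting fractions read off Jenne's diagram as quoted above; the 0.8 A/W responsivity is the assumed value used in the measurement):
# Expected power on REFL55 from the quoted splitting fractions.
P_in = 0.8                                    # W, assumed through the IMC to the PRM
fractions = [0.8, 1 - 5.637e-2, 0.8, 0.1, 0.5, 0.9]
P_expected = P_in
for f in fractions:
    P_expected *= f
print(f"Expected: {P_expected * 1e3:.1f} mW")    # ~22 mW
# Measured power inferred from the DC level.
V_dc, R_dc, resp = 0.5, 50, 0.8               # V, ohm, A/W
P_measured = V_dc / R_dc / resp
print(f"Measured: {P_measured * 1e3:.1f} mW")    # ~12.5 mW, ~50% below expectation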
If we want more optical gain, we'd have to put more light on this PD. I suppose we could have ~10x the power, since that's what is on IMC REFL when the MC is unlocked? If we want a x100 increase in optical gain, we'd also have to increase the transimpedance by 10. I'll double check the simulation, but I'm inclined to believe that the sensing electronics are not to blame.
Unconnected to this work, but I feel like I'm flying blind without the wall StripTool traces, so I restored them on zita (ran /opt/rtcds/caltech/c1/scripts/general/startStrip.sh).
Attachment 1: IMG_9140.jpg
Attachment 2: IMG_9141.jpg
Attachment 3: REFL55.png
15948 | Fri Mar 19 19:15:13 2021 | Jon | Update | CDS | c1auxey assembly
Today I helped Yehonathan get started with assembly of the c1auxey (slow controls) Acromag chassis. This will replace the final remaining VME crate. We cleared the far left end of the electronics bench in the office area, as discussed on Wed. The high-voltage supplies and test equipment were moved together to the desk across the aisle.
Yehonathan has begun assembling the chassis frame (it required some light machining to mount the DIN rails that hold the Acromag units). Next, he will wire up the switches, LED indicator lights, and Acromag power connectors following the documented procedure.
15947 | Fri Mar 19 18:14:56 2021 | Jon | Update | CDS | Front-end testing
Summary
Today I finished setting up the subnet for new FE testing. There are clones of both fb1 and chiara running on this subnet (pictured in Attachment 2), which are able to boot FEs completely independently of the Martian network. I then assembled a second FE system (Supermicro host and IO chassis) to serve as c1sus2, using a new OSS host adapter card received yesterday from LLO. I ran the same set of PCIe hardware/driver tests as was done on the c1bhd system in 15890. All the PCIe tests pass.
Subnet setup
For future reference, below is the procedure used to configure the bootserver subnet.
- Select "Network" as highest boot priority in FE BIOS settings
- Connect all machines to subnet switch. Verify fb1 and chiara eth0 interfaces are enabled and assigned correct IP address.
- Add c1bhd and c1sus2 entries to chiara:/etc/dhcp/dhcpd.conf :
host c1bhd {
hardware ethernet 00:25:90:05:AB:46;
fixed-address 192.168.113.91;
}
host c1sus2 {
hardware ethernet 00:25:90:06:69:C2;
fixed-address 192.168.113.92;
}
- Restart DHCP server to pick up changes:
$ sudo service isc-dhcp-server restart
- Add c1bhd and c1sus2 entries to fb1:/etc/hosts :
192.168.113.91 c1bhd
192.168.113.92 c1sus2
- Power on the FEs. If all was configured correctly, the machines will boot.
C1SUS2 I/O chassis assembly
- Installed in host:
- DolphinDX host adapter
- One Stop Systems PCIe x4 host adapter (new card sent from LLO)
- Installed in chassis:
- Channel Well 250 W power supply (replaces aLIGO-style 24 V feedthrough)
- Timing slave
- Contec DIO-1616L-PE module for timing control
Next time, on to RTCDS model compilation and testing. This will require first obtaining a clone of the /opt/rtcds disk hosted on chiara.
Attachment 1: image_72192707_(1).JPG
Attachment 2: image_50412545.JPG
15946 | Fri Mar 19 15:31:56 2021 | Aidan | Update | Computers | Activated MATLAB license on donatella
Activated MATLAB license on donatella
15945 | Fri Mar 19 15:26:19 2021 | Aidan | Update | Computers | Activated MATLAB license on megatron
Activated MATLAB license on megatron
15944 | Fri Mar 19 11:18:25 2021 | gautam | Update | LSC | PRMI investigations: what IS the matrix??
From Finesse simulation (and also analytic calcs), the expected PRCL optical gain is ~1 MW/m (there is a large uncertainty, let's say a factor of 5, because of unknown losses e.g. PRC, Faraday, steering mirrors, splitting fractions on the AP table between the REFL photodiodes). From the same simulation, the MICH optical gain in the Q-phase signal is expected to be a factor of ~10 smaller. I measured the REFL55 RF transimpedance to be ~400 ohms in June last year, which was already a little lower than the previous number I found on the wiki (Koji's?) of 615 ohms. So we expect, across the ~3nm PRCL linewidth, a PDH horn-to-horn voltage of ~1 V (equivalently, the optical gain in units of V/m for PRCL is ~0.3 GV/m).
In the measurement, the MICH gain is indeed ~x10 smaller than the PRCL gain. However, the measured optical gain (~0.1 GV/m, but this is after the x10 gain of the daughter board) is ~10 times smaller than what is expected (after accounting for the various splitting fractions on the AP table between the REFL photodiodes). I think we've established that the modulation depth isn't to blame. I will check (i) REFL55 transimpedance, (ii) cable loss between AP table and 1Y2, and (iii) whether the beam is well centered on the REFL55 photodiode.
Basically, with the 400ohm transimpedance gain, we should be running with a whitening gain of 0dB before digitization as we expect a signal of O(1V). We are currently running at +18dB.
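To make the chain of numbers explicit, a minimal sketch of this estimate (the 0.8 A/W responsivity is an assumption consistent with the measurement in 15949; the 1 MW/m and ~3 nm numbers are the quoted simulation values):
# From simulated optical gain (W/m) to a PDH horn-to-horn voltage at the demod board.
G_opt = 1e6          # W/m, Finesse PRCL optical gain (uncertain by ~5x)
resp = 0.8           # A/W, assumed photodiode responsivity
Z_rf = 400           # ohm, measured REFL55 RF transimpedance
linewidth = 3e-9     # m, approximate PRCL linewidth
G_volts = G_opt * resp * Z_rf          # ~0.3 GV/m
V_hh = G_volts * linewidth             # ~1 V horn-to-horn
print(f"Optical gain {G_volts / 1e9:.2f} GV/m, horn-to-horn {V_hh:.1f} V")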
Quote:
    Then I put the RF signal directly into the scope and saw that the 55 MHz signal is ~30 mVpp into 50 Ohms. I waited a few minutes with triggering to make sure I was getting the largest flashes. Why is the optical/RF signal so puny? This is ~100x smaller than I think we want... its OK to saturate the RF stuff a little during lock acquisition as long as the loop can suppress it so that the RMS is < 3 dBm in the steady state.
15943 | Fri Mar 19 10:49:44 2021 | Paco, Anchal | Update | SUS | Trying coil actuation balance
[Paco, Anchal]
- We decided to try out the coil actuation balancing after seeing some posts from Gautam about the same on PRM and ETMY.
- We used diaggui to send a swept sine excitation signal to C1:SUS-MC3_ULCOIL_EXC and read it back at C1:SUS-MC3_ASCPIT_IN1. The idea was to create transfer function measurements similar to 15880.
- We first tried taking the transfer function with excitation amplitudes of 1, 10, 50, 200 with damping loops on (swept from 10 to 100 Hz logarithmically in 20 points).
- We found no meaningful measurement; it looked like we were just measuring noise.
- We concluded that this is probably because our damping loops damp all the excitation down.
- So we decided to switch off damping and retry.
- We switched off: C1:SUS-MC3_SUSPOS_SW2 , C1:SUS-MC3_SUSPIT_SW2, C1:SUS-MC3_ASCPIT_SW2, C1:SUS-MC3_ASCYAW_SW2, C1:SUS-MC3_SUSYAW_SW2, and C1:SUS-MC3_SUSSIDE_SW2.
- We repeated the above measurements going up in excitation amplitudes of 1, 10, 20. We saw the oscillation going out of UL_COIL, but the swept sine couldn't measure any meaningful transfer function to C1:SUS-MC3_ASCPIT_IN1. So we decided to just stop. We are probably doing something wrong.
Trying to go back to the same state:
- We switched on: C1:SUS-MC3_SUSPOS_SW2, C1:SUS-MC3_SUSPIT_SW2, C1:SUS-MC3_ASCPIT_SW2, C1:SUS-MC3_ASCYAW_SW2, C1:SUS-MC3_SUSYAW_SW2, and C1:SUS-MC3_SUSSIDE_SW2.
- But C1:SUS-MC3_ASCYAW_INMON had accumulated about 600 counts of offset and was disrupting the alignment. We switched off C1:SUS-MC3_ASCYAW_SW2, hoping the offset would go away once the optic was just damped with the OSEM sensors, but it didn't.
- Even after minutes, the offset in C1:SUS-MC3_ASCYAW_INMON kept on increasing and crossed beyond 2000 counts limit set in C1:IOO-MC3_YAW filter bank.
- We tried to unlock the IMC and lock it back again but the offset still persisted.
- We tried to add a bias in the YAW DOF by increasing C1:SUS-MC3_YAW_OFFSET; while it was able to somewhat reduce the WFS C1:SUS-MC3_ASCYAW_INMON offset, it was misaligning the optic and the lock was lost. So we retracted the bias back to 0.
- We tried to track back where the offset is coming from. In C1IOO_WFS_MASTER.adl, we opened the WFS2_YAW filter bank to see if the sensor is indeed reading the increasing offset.
- It is quite weird that C1:IOO-WFS2_YAW_INMON is just oscillating, but the output of this WFS2_YAW filter bank has a slowly increasing offset.
- We tried zeroing the gain and setting it back to 0.1 to see if some holding function was causing it, but that was not the case. The output went back to a high negative offset and kept increasing.
- We don't know what else to do. Only this one WFS YAW output is increasing; everything else is at a normal level with no increasing offset or peculiar behavior.
- We are leaving C1:SUS-MC3_ASCYAW_SW2 off as it is disrupting the IMC lock.
[Jon walked in, asked him for help]
- Jon suggested doing a burt restore on the IOO channels.
- We used (selected through burtgooey):
burtwb -f /opt/rtcds/caltech/c1/burt/autoburt/snapshots/2021/Mar/19/08:19/c1iooepics.snap -l /tmp/controls_1210319_113410_0.write.log -o /tmp/controls_1210319_113410_0.nowrite.snap -v
- No luck, the problem persists.
15942 | Thu Mar 18 21:37:59 2021 | rana | Update | LSC | PRMI investigations: what IS the matrix??
- Locked PRMI several times after Gautam's setup. Easy w/ IFO CONFIG screen.
- tuned up alignment
- Still, POP22_I doesn't go above ~111, so it's not triggering the loops. Lowered triggers to 100 (POP22 units) and it locks fine now.
- Ran update on zita, and now it lost its mounts (and maybe its mind). Zita needs some love to recover the StripTool plots.
- Put the $600 ebay TDS3052 near the LSC rack and tried to look at the RF power, but found lots of confusing information. Is there really an RF monitor in this demod board, or was it disconnected by a crazy Koji? I couldn't see any signal above a few mV.
- Put a 20 dB coupler in line with the RF input and saw zip. Then I put the RF signal directly into the scope and saw that the 55 MHz signal is ~30 mVpp into 50 Ohms. I waited a few minutes with triggering to make sure I was getting the largest flashes. Why is the optical/RF signal so puny? This is ~100x smaller than I think we want... it's OK to saturate the RF stuff a little during lock acquisition as long as the loop can suppress it so that the RMS is < 3 dBm in the steady state.
Attachment 1: PXL_20210319_045925024.jpg
15941 | Thu Mar 18 18:06:36 2021 | gautam | Update | Electronics | Modified Sat Amp and Coil Driver
I uploaded the annotated schematics (to be more convenient than the noise analysis notes linked from the DCC page) for the HAM-A coil driver and Satellite Amplifier.
15940 | Thu Mar 18 13:12:39 2021 | gautam | Update | Computer Scripts / Programs | Omnigraffle vs draw.io
What is the advantage of Omnigraffle compared to draw.io? The latter also has a desktop app and, for creating drawings, seems to have all the functionality that Omnigraffle has - see for example here. draw.io doesn't require a license, and I feel it is a much better tool for collaborative artwork. I really hate that I can't even open my old Omnigraffle diagrams now that I no longer have a license.
Just curious if there's some major drawback(s), not like I'm making any money off draw.io.
Quote:
    After Anchal left for his test, I took the time to set up the iMAC station so that Stephen (and others) can remote desktop into it to use Omnigraffle.
15939 | Thu Mar 18 12:46:53 2021 | rana | Update | SUS | Testing of new input matrices with new data
Good Enough! Let's move on with output matrix tuning. I will talk to you guys about it privately so that the whole world doesn't learn our secret, and highly sought after, actuation balancing.
I suspect that changing the DC alignment of the SUS changes the required input/output matrix (since changes in the magnet position w.r.t. the OSEM head change the sensing cross-coupling and the actuation gain), so we want to make sure we do all this with the mirror at the correct alignment.
15938 | Thu Mar 18 12:35:29 2021 | rana | Update | safety | Door to outside from control room was unlocked
I think this is probably due to the safety tour yesterday. I believe Jordan showed them around the office area and C&B. Not sure why they left through the control room.
Quote:
    I came into the lab a few mins ago and found the back door open. I closed it. Nothing obvious seems amiss.
    Caltech security periodically checks if this door is locked but it's better if we do it too if we use this door for entry/exit.
15937 | Thu Mar 18 09:18:49 2021 | Paco, Anchal | Update | SUS | Testing of new input matrices with new data
[Paco, Anchal]
Since the newly generated matrices were created from the measurement made last time, they are of course going to work well for it. We need to test with new, independent data to see if they work in general.
- We ran scripts/SUS/InMatCal/freeSwingMC.py for one repetition with a free-swinging duration of 1050 s in tmux session FreeSwingMC on Rossa. Started at GPS: 1300118787.
- Thu Mar 18 09:24:57 2021 : The script ended successfully. IMC is locked back again. Killing the tmux session.
- Attached are the results of the 1-kick test: time series data and the ASDs of the DOFs calculated using both the existing input matrix and our newly calculated input matrix.
- The existing one was already pretty good, except for maybe the side DOF, which our diagonalization improved.
[Paco]
After Anchal left for his test, I took the time to set up the iMAC station so that Stephen (and others) can remote desktop into it to use Omnigraffle. For this, I enabled the remote login and remote management settings under "Sharing" in "System Settings". These two should allow authenticated ssh-ing and remote-desktopping, respectively. The password is the same one that's currently stored in the secrets.
Quickly tested using my laptop (OS: linux, RDP client = remmina + VNC protocol) and it worked. Hopefully Stephen can get it to work too.
Attachment 1: MC_Optics_Kicked_Time_Series_1.pdf
Attachment 2: TEST_Input_Matrix_Diagonalization.pdf
15936 | Thu Mar 18 07:02:27 2021 | Koji | Update | LSC | REFL11 demod retrofitting
Attachment 1: Transfer Functions
The original circuit had a gain of ~20 and the phase delay of ~1deg at 10kHz, while the new CH-I and CH-Q have a phase delay of 3 deg and 2 deg, respectively.
Attachment 2: Output Noise Levels
The AD797 circuit had higher noise at low frequency and better noise levels at high frequency. Each TLE2027 circuit was tuned to eliminate the instability and shows a better noise level compared to the low-frequency spectrum of the AD797 version.
RXA: AD797, all hail the op-amps ending with 27!
Attachment 1: TFs.pdf
Attachment 2: PSD.pdf
15935 | Thu Mar 18 01:12:31 2021 | gautam | Update | LSC | PRFPMi
- Integrated >1 hour at RF only control, high circulating powers tonight.
- All of the locklosses were due to me typing a wrong number / turning on the wrong filter.
- So the lock seems pretty stable, at least on the 20 minute timescale.
- No idea why given the various known broken parts.
- Did a bunch of characterization.
- DARM OLTF - Attachment #1. The reference is when DARM is under ALS control.
- CARM OLTF - Attachment #2. Seems okay.
- Sensing matrix - Attachment #3. The CARM and DARM phases seem okay. Maybe the CARM phase can be tuned a bit with the delay line, but I think we are within 10 degrees.
- TRX/TRY between 300-400, with large fluctuations mostly angular. So PRG ~17-22, to answer Koji's question in the meeting today.
- This is similar to what I had before the vent of Sep 2020.
- Not surprising to me, since I claim that we are in the regime where the recycling gain is limited by the flipped folding mirrors.
- Tried to tweak the ASC (QPD only) by looking at the step responses, but I could never get the loop gains such that I could close an integrator on all the loops.
I need to think a little bit about the ASC commissioning strategy. On the positive side:
- REFL11 board seems to perform at least as well as before.
- ALS performance made me (as Pep would say) so, so happy.
- The whole lock acquisition sequence takes ~5 mins if the PRMI catches lock quickly (5/7 times tonight).
- Process seems repeatable.
Things to think about:
- How to get the AS WFS in the picture?
- What does the (still) crazy sensing matrix mean? I think it's not possible to transfer vertex control to 1f signals with this kind of sensing.
- What does it mean that the PRM actuation seems to work, even though the coils are imbalanced by a factor of 3-5, and the coil resistances read out <2 ohms???
- What's going on at the ALS-->CARM transition? The ALS noise is clearly low enough that I can sit inside the CARM linewidth. Yet, there seems to be some offset between what ALS thinks is the resonant point, and what the REFL11 signal thinks is the resonant point. I am kind of able to "power through" this conflict, but the IMC error point (=AO path) is not very happy during the transition. It worked 8/8 times tonight, but would be good to figure out how to make this even more robust.
Attachment 1: DARM_OLTF_20210317.pdf
Attachment 2: CARMTF_20210317.pdf
Attachment 3: PRFPMI_Mar_17sensMat.pdf
15934 | Wed Mar 17 16:30:46 2021 | Anchal | Update | SUS | Normalized Input Matrices plotted better than SURF students
Here I present the same input matrices, now normalized row by row to have the same norm as the current matrices' rows. These are now plotted better than last time. Other comments are the same as in 15902. Please let us know what you think.
Thu Mar 18 09:11:10 2021 :
Note: The comparison of the butterfly DOF in the two cases is a bit bogus. The reason is that we know what the butterfly vector is in the sensing matrix (N_osems x (N_dof +1)), the last column being (1, -1, 1, -1, 0) corresponding to (UL, UR, LR, LL, Side). However, the matrix we multiply with the OSEM data is the inverse of this matrix (which becomes the input matrix), which has dimensions ((N_dof + 1) x N_osem) and has the last row corresponding to the butterfly DOF. This row was not stored for the old calculation of the input matrix (which is currently in use) and cannot be recovered (mathematically not possible) from the existing 5x4 part of that input matrix. We just added the (1, -1, 1, -1, 0) row at the bottom of this matrix (as was done in the matlab codes), but that is wrong, and hence the butterfly vector looks so bad for the existing input matrix.
Proposal: We should store the last row of the generated input matrix somewhere for such calculations. Ideally, another row in the EPICS channels for the input matrix would be the best place to store them, but I guess that would be too destructive to implement. Other options are to store these 5 numbers in the wiki or just in elogs. For this post, the butterfly row for the generated input matrix is present in the attached files (for future reference).
Attachment 1: IMC_InputMatrixDiagonalization.pdf
Attachment 2: NewAndOldMatrices.zip
15933 | Wed Mar 17 15:04:20 2021 | gautam | Update | Electronics | Ribbon cable for chassis
I had asked Chub to order 100 ft each of 9-, 15- and 25-conductor ribbon cable. These arrived today and are stored in the VEA alongside the rest of the electronics/chassis awaiting assembly.
Attachment 1: IMG_9139.jpg
15932 | Wed Mar 17 15:02:06 2021 | gautam | Update | safety | Door to outside from control room was unlocked
I came into the lab a few mins ago and found the back door open. I closed it. Nothing obvious seems amiss.
Caltech security periodically checks if this door is locked but it's better if we do it too if we use this door for entry/exit.
15931 | Wed Mar 17 14:40:39 2021 | Yehonathan | Update | BHD | SOS SmCo magnets Inspection
Continuing with envelope number 2
Magnet number | Magnetic field (kG)
1 | 2.89
2 | 2.85
3 | 2.92
4 | 2.75
5 | 2.95
6 | 2.91
7 | 2.93
8 | 2.9
9 | 2.93
10 | 2.9
11 | 2.85
12 | 2.89
13 | 2.85
14 | 2.88
15 | 2.92
16 | 2.75
17 | 2.97
18 | 2.88
19 | 2.85
20 | 2.87
21 | 2.93
22 | 2.9
23 | 2.9
24 | 2.89
25 | 2.88
26 | 2.88
27 | 2.95
28 | 2.88
29 | 2.88
30 | 2.9
31 | 2.96
32 | 2.91
33 | 2.93
34 | 2.9
35 | 2.9
36 | 3.03
37 | 2.84
38 | 2.95
39 | 2.89
40 | 2.88
41 | 2.88
42 | 2.93
43 | 2.97
44 | 2.74
45 | 2.84
46 | 2.85
47 | 2.85
48 | 2.87
49 | 2.88
50 | 2.8
I think I have to redo envelope 1 tomorrow.
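A quick way to compare the two envelopes is the mean and spread of the readings; a minimal sketch (values transcribed from the table above and from 15922):
import numpy as np
env2 = np.array([2.89, 2.85, 2.92, 2.75, 2.95, 2.91, 2.93, 2.90, 2.93, 2.90,
                 2.85, 2.89, 2.85, 2.88, 2.92, 2.75, 2.97, 2.88, 2.85, 2.87,
                 2.93, 2.90, 2.90, 2.89, 2.88, 2.88, 2.95, 2.88, 2.88, 2.90,
                 2.96, 2.91, 2.93, 2.90, 2.90, 3.03, 2.84, 2.95, 2.89, 2.88,
                 2.88, 2.93, 2.97, 2.74, 2.84, 2.85, 2.85, 2.87, 2.88, 2.80])
env1 = np.array([2.57, 2.54, 2.57, 2.57, 2.55, 2.61, 2.55, 2.52, 2.64, 2.58])  # from 15922
for name, env in [("envelope 1", env1), ("envelope 2", env2)]:
    print(f"{name}: mean {env.mean():.2f} kG, std {env.std():.3f} kG")
# Envelope 1 reads systematically ~0.3 kG lower (~2.57 vs ~2.89 kG),
# which may be why it needs to be re-measured.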
15930 | Wed Mar 17 11:57:54 2021 | Paco, Anchal | Update | SUS | Tested New Input Matrix for MC1
[Paco, Anchal]
Paco accidentally clicked on C1:SUS-MC1_UL_TO_COIL_SW_1_1 (MC1 POS to UL coil switch) and clicked it back on. We didn't see any loss of lock or anything significant on the large monitor on the left.
Testing the new calculated input matrix
- Switched off the PSL shutter (C1:PSL-PSL_ShutterRqst)
- Switched off IMC autolocker (C1:IOO-MC_LOCK_ENABLE)
- Uploaded the same input matrix as the current one to check the writing function in scripts/SUS/InMatCalc/testingInMat.py . We created a backup text file of the current settings in backupMC1InMat.txt .
- Uploaded the new input matrix in normalized form. To normalize, we first made each row vector a unit vector and then multiplied by the norm of the current input matrix's corresponding row vector (see scripts/SUS/InMatCalc/normalizeNewInputMat.py).
- Switched ON the PSL shutter (C1:PSL-PSL_ShutterRqst)
- Switched ON IMC autolocker (C1:IOO-MC_LOCK_ENABLE)
- Lock was caught immediately. The wavefront sensor of MC1 shows the usual movement, nothing crazy.
- So the new input matrix is digestible by the system, but what's the efficacy of it?
< Two inspection people taking pictures of ceiling and portable AC unit passed. They rang the doorbell but someone else let them in. They walked out the back door.>
Testing how good the input matrix for MC1 is:
- We loaded the input matrix butterfly row into C1:SUS-MC1_LOCKIN_INMATRX_1_4 through _1_8. This matrix is multiplied with C1:SUS-MC1_UL_SEN_IN and similar channels, before the calibration to um and the application of the other filters.
- We tried to figure out how to load the same filter banks on the signal chain of LOCKIN1 of MC1 but couldn't, so we just manually added a gain of 0.09 in this chain to simulate at least the calibration factor.
- We started the oscillator on LOCKIN1 on MC1 with amplitude 1 and frequency 6 Hz.
- We added the butterfly-mode actuation output column (UL: 1, UR: -1, LL: -1, LR: 1); nothing happened to the lock, probably because of the low amplitude we put in.
- Now, we plot the ASD of channels like C1:SUS-MC1_SUSPOS_IN1 (for POS, PIT, YAW, SIDE) to see if we see a corresponding peak there. No we don't. See attachment 1.
Restoring the system:
- Added 0 to the LOCKIN1 column in MC1 output matrix.
- Set the LOCKIN1 oscillator to 0 amplitude, 0 Hz.
- Changed back the gain on the signal chain of LOCKIN1 on MC1.
- Added 0 to C1:SUS-MC1_LOCKIN_INMATRX_1_4 through _1_8.
- Switched off the PSL shutter (C1:PSL-PSL_ShutterRqst)
- Switched off IMC autolocker (C1:IOO-MC_LOCK_ENABLE)
- Wrote back the old matrix by scripts/SUS/InMatCalc/testingInMat3.py which used the backup we created.
- Switched ON the PSL shutter (C1:PSL-PSL_ShutterRqst)
- Switched ON IMC autolocker (C1:IOO-MC_LOCK_ENABLE)
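The backup/restore pattern used here can be sketched with pyepics (a hypothetical helper, not the actual scripts/SUS/InMatCalc code; the INMATRIX channel-name pattern is a guess and should be checked against the real EPICS database):
import numpy as np
from epics import caget, caput

def read_inmatrix(optic="MC1", ndof=4, nsens=5):
    # Read the current (DOF x sensor) input matrix from EPICS, element by element.
    mat = np.zeros((ndof, nsens))
    for i in range(ndof):
        for j in range(nsens):
            mat[i, j] = caget(f"C1:SUS-{optic}_INMATRIX_{i + 1}_{j + 1}")  # assumed naming
    return mat

def write_inmatrix(mat, optic="MC1"):
    # Write a (DOF x sensor) matrix back to EPICS.
    for i in range(mat.shape[0]):
        for j in range(mat.shape[1]):
            caput(f"C1:SUS-{optic}_INMATRIX_{i + 1}_{j + 1}", mat[i, j])

backup = read_inmatrix("MC1")
np.savetxt("backupMC1InMat.txt", backup)                 # keep a restorable copy
# ... test the new matrix, then restore:
write_inmatrix(np.loadtxt("backupMC1InMat.txt"), "MC1")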
Attachment 1: 20210317_MC1_InMATtest.pdf
Attachment 2: MC1_Input_Matrix_Test.tar.gz
15929 | Wed Mar 17 10:52:48 2021 | Jordan | Update | SUS | 3" Ring Adapter for SOS
I have added a 0.1", 45 deg chamfer to the bottom of the adapter ring. This was added for a new placement of the EQ stops, since the barrel screws are hard to access/adjust.
This also required a modification to the eq stop bracket, D960008-v2, with 1/4-20 screws angled at 45 deg to line up with the chamfer.
The issue I am running into is that there needs to be a screw on the backside of the ring as well; otherwise, the ring would fall backwards into the OSEMs in the event of an earthquake. The only two points of contact are these front two angled screws; a third is needed on the opposite side of the CoM for stability. This would require another bracket mounted at the back of the SOS tower, but there is very little open real estate because of the OSEMs.
Instead of this whole chamfer route, is it possible/easier to just swap the screws for the barrel EQ stops? Instead of a socket head cap screw, an SS thumb screw such as this would provide more torque when turning and remove the need to use a hex wrench.
Attachment 1: Side_View.png
Attachment 2: Front_View.png
Attachment 3: Ring_with_Modifed_Bracket.png
Attachment 4: Back_of_ring.png
Attachment 5: Front_of_Ring.png
15928 | Wed Mar 17 09:05:01 2021 | Paco, Anchal | Configuration | Computers | 40m Control Room Changes
- Switched positions of allegra and donatella.
- While doing so, the HDMI cable previously used by donatella snapped. We replaced this cable with another unused cable we found connected at only one end to rossa. We should get more HDMI cables in case that cable was in use for some other purpose.
- Paco bought a bluetooth speaker/mic that is placed in front of allegra, and its USB adapter is connected to the iMac's keyboard at the bottom. With the new camera installed, the 40m video call environment is now complete.
- Again, we have placed allegra's monitor as a placeholder, but it is not working, and we will need a new monitor for it whenever it is going to be used.
15927 | Wed Mar 17 00:05:26 2021 | gautam | Update | LSC | Delay line BIO remote control
While Koji is working on the REFL 11 demod board, I took the opportunity to investigate the non-remote-controllability of the delay line in 1Y2, since the TTs have already been disturbed. Here is what I found today.
- First, I brought over the spare delay line from the rack Chiara sits in over to 1Y2.
- Connected a Marconi to the input, monitored a -3dB pickoff and the delay line output simultaneously on a 300MHz scope.
- With the front panel selector set to "Internal", verified that local (i.e. toggling front panel switches) switchability seems to work 👍
- Set the front panel switch to "External", and connected the D25 cable from the BIO card in 1Y3 to the back panel of the delay line unit - found that I could not remotely change the delay 😒
- I thought it'd be too much of a coincidence if both delay lines have the same failure mode for the remote switching part only, so I decided to investigate further up the signal chain.
- BIO switching - the CDS BIO bit status MEDM screen seems to respond, indicating that the bits are getting set correctly in software at least. I don't know of any other software indicator for this functionality further down the signal processing chain. So it would seem the BIO card is not actually switching.
- The Contec DO cards don't actually source the voltage - they just provide a path for current to flow (or isolate said path). I checked that pin 12 of the rear panel D25 connector is at +5 V DC relative to ground as indicated in the schematic (see P1 connector - this connector isn't a Dsub, it is IDE24, so the mapping to the Dsub pins isn't one-to-one, but pin 23 on the former corresponds to pin 12 on the latter), suggesting that the pull up resistors have the necessary voltage applied to them.
- Made a little LED tester breakout board, and saw no switching when I toggled the status of some random bits.
- Noted that the bench power supply powering this setup (hacky arrangement from 2015 that never got unhacked) shows a current draw of 0A.
- I am not sure what the quiescent draw of these boards is - the datasheet says "Power consumption: 3.3VDC, 450mA", but the recommended supply voltage is "12-24V DC (+/-10%)" not 3.3VDC, so not sure what to make of that.
- To try and get some insight, I took one of the new Contec-32L-PE cards we got from near Jon's CDS test stand (I've labelled the one I took lest there be some fault with it in the future), and connected it to a bench supply (pin 18 = +15V DC, pin1 = GND). But in this condition, the bench supply reports 0A current draw.
- Ruled out the wrong cable being plugged in - I traced the cable over the cable tray, and seems like it is in fact connecting the BIO card in the c1lsc expansion chassis to the delay line.
So it would seem something is not quite right with this BIO card. The c1lsc expansion chassis, in which this card sits, is notoriously finicky, and this delay line isn't very high priority, so I am deferring more invasive investigation to the next time the system crashes.
* I forgot we have the nice PCB Contec tester board with LEDs - the only downside is that this board has D37 connectors on both ends whereas the delay line wants a D25, necessitating some custom ribbon cable action. But maybe there is a way to use this.
As part of this work, I was in various sensitive areas (1Y3, chiara rack, FE test stand etc) but as far as I can tell, all systems are nominal.
15926 | Tue Mar 16 19:13:09 2021 | Paco, Anchal | Update | SUS | First success in Input Matrix Diagonalization
After jumping through a few hoops, we have one successful result in diagonalizing the input matrices for MC1, MC2 and MC3.
Code:
- Attachment 2 contains the code files. For now, we can only guarantee it to work on Donatella in the conda base environment. Our code is present in scripts/SUS/InMatCalc
- We mostly follow the steps mentioned in 4886 and the matlab codes in scripts/SUS/peakFit.
- Data is first multiplied with the currently used input matrix to get time series data in the DOF (POS, PIT, YAW, SIDE) basis.
- Then, the peak frequencies of each resonance are identified.
- For these results, we did not attempt to fit the peaks with Lorentzians and instead took the maximum point of the PSD to get the peak positions. This is only possible if the current input matrix is good enough. We have to adjust some parameters so that our fitting code always works.
- The TF estimate of each sensor's data w.r.t. the UL sensor is taken, and the values around the peak frequencies of the oscillations are averaged to get the sensing matrix (see the sketch after this list).
- This matrix is normalized along the DOF axis (columns in our case) and then inverted.
- After inversion, another normalization is done along the DOF axis (now rows).
- You can notice in Attachment 1 that after the diagonalization, each DOF shows a resonance only at its own resonance frequency, while earlier there was some mixing.
- The absolute values of the calculated DOFs might have changed, and we need to calibrate them or apply appropriate gain factors in the DOF-basis filter chains.
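A minimal sketch of the sensing-matrix estimation step described above (scipy-based; the sampling rate, averaging bandwidth, and array layout are placeholder assumptions, not the actual scripts/SUS/InMatCalc code):
import numpy as np
from scipy import signal

def sensing_matrix(sensors, f_peaks, fs, ref=0, nperseg=4096):
    # sensors: (N_sensor, N_samples) OSEM time series; f_peaks: DOF resonances in Hz;
    # ref: index of the reference sensor (UL here).
    nsens = sensors.shape[0]
    f, Pref = signal.welch(sensors[ref], fs=fs, nperseg=nperseg)
    A = np.zeros((nsens, len(f_peaks)))
    for i in range(nsens):
        _, Pxy = signal.csd(sensors[ref], sensors[i], fs=fs, nperseg=nperseg)
        H = Pxy / Pref                                # TF estimate: sensor_i / sensor_ref
        for k, f0 in enumerate(f_peaks):
            sel = (f > f0 - 0.02) & (f < f0 + 0.02)   # average around each peak
            A[i, k] = H[sel].mean().real
    return A / np.abs(A).max(axis=0)                  # normalize along the DOF axis

# input matrix = normalized (pseudo)inverse of the sensing matrix:
#   in_mat = np.linalg.pinv(A); in_mat /= np.abs(in_mat).max(axis=1, keepdims=True)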
Next steps:
- We'll complete our scripts and make them more general to be used for any optic.
- We'll combine all of them into one single script which can be called by medm.
- In parallel, we'll start onwards from step 2 in 15881.
- Anything else that folks can suggest on our first result. Did we actually do it or are we fooling ourselves?
Attachment 1: IMC_InputMatrixDiagonalization.pdf
Attachment 2: InMatCalcScripts.zip
15925 | Tue Mar 16 19:04:20 2021 | gautam | Update | CDS | Front-end testing
Now that I think about it, I may only have backed up the root file system of chiara, and not /home/cds/ (symlinked to /opt/ over NFS). I think we never revived the rsync backup to LDAS after the FB fiasco of 2017, else that'd have been the most convenient way to get files. So you may have to resort to some other technique (e.g. configure the second network interface of the chiara clone to be on the martian network and copy over files to the local disk, and then disconnect the chiara clone from the martian network, if we really want to keep this test stand completely isolated from the existing CDS network). The /home/cds/ directory is rather large IIRC, but with 2 TB on the FB clone, you may be able to get everything needed to get the rtcds system working. It may then be necessary to hook up a separate disk to write frames to if you want to test that part of the system out.
Good to hear the backup disk was able to boot though!
Quote:
    And to be able to compile and run models later in the testing, we will need the contents of the /opt/rtcds directory also hosted on chiara.
    For these reasons, I think it will be easiest to create another clone of chiara to run on the subnet. There is a backup disk of chiara and I attempted to boot it on one of the LLO front-ends, but without success.
15924 | Tue Mar 16 16:27:22 2021 | Jon | Update | CDS | Front-end testing
Some progress today towards setting up an isolated subnet for testing the new front-ends. I was able to recover the fb1 backup disk using the Rescatux disk-rescue utility and successfully booted an fb1 clone on the subnet. This machine will function as the boot server and DAQ server for the front-ends under test. (None of these machines are connected to the Martian network or, currently, even the outside Internet.)
Despite the success with the framebuilder, front-ends cannot yet be booted locally because we are still missing the DHCP and FTP servers required for network booting. On the Martian net, these processes are running not on fb1 but on chiara. And to be able to compile and run models later in the testing, we will need the contents of the /opt/rtcds directory also hosted on chiara.
For these reasons, I think it will be easiest to create another clone of chiara to run on the subnet. There is a backup disk of chiara and I attempted to boot it on one of the LLO front-ends, but without success. The repair tool I used to recover the fb1 disk does not find a problem with the chiara disk. However, the chiara disk is an external USB drive, so I suspect there could be a compatibility problem with these old (~2010) machines. Some of them don't even recognize USB keyboards pre-boot-up. I may try booting the USB drive from a newer computer.
Edit: I removed one of the new, unused Supermicros from the 1Y2 rack and set it up in the test stand. This newer machine is able to boot the chiara USB disk without issue. Next time I'll continue with the networking setup.
15923 | Tue Mar 16 16:02:33 2021 | Koji | Update | LSC | REFL11 demod retrofitting
I'm going to remove REFL11 demod for the noise check/circuit improvement.
----
- The module was removed (~4pm). Upon removal, I had to loosen AS110 LO/I out/Q out. Check the connection and tighten their SMAs upon restoration of REFL11.
- REFL11 configuration / LO: see below, PD: a short thick SMA cable, I OUT: Whitening CH3, Q OUT: Whitening CH4, I MON daughterboard: CM board IN1 (BNC cable)
- The LO cable for REFL11 was made of soft coax cable (Belden 9239 Low Noise Coax). The vendor specifies that this cable is for audio signals and NOT recommended for RF purposes [Link to Technical Datasheet (PDF)].
I'm going to measure the delay of the cable and make a replacement.
- There are a bunch of PD RF mon cables connected to many of the demod modules. I suppose that they are connected to the PD calibration system, which hasn't been used for 8 years. And the controllers are going to be removed from the rack soon.
I'm going to remove these cables.
----
First, the noise levels and the transfer functions of the daughterboard preamp were checked. The CH-1 of the SR785 seemed funky (I can't comprehensively tell yet how), so the measurement may be unreliable.
For the replacement of AD797, I tested OP27 and TLE2027. TLE2027 is similar to OP27, but slightly faster, less noisy, and better in various aspects.
The replacement of the AD797 and the whatever-film resistors with TLE2027 and thin-film Rs was straightforward for the I-phase channel, while the stabilization of the Q-phase channel was a struggle (no matter whether I used OP27 or TLE2027). It seems that the 1st stage has some kind of instability, and I suffered from a 3 Hz comb up to ~kHz. But the scope didn't show obvious 3 Hz noise.
After quite a bit of struggle, I could tame this strange noise by adjusting the feedback capacitor of the 1st stage. The final transfer functions and noise levels were measured. (To be analyzed later)
----
Now the REFL11 LO cable was replaced from the soft low noise audio coax (Belden 9239) to jacketed solder-soaked coax cable (Belden 1671J - RG405 compatible). The original cable indicated the delay of -34.3deg (@11MHz, 8.64ns) and the loss of 0.189dB.
I took 80-inch 1671J cable and measured the delay to be ~40deg. The length was adjusted using this number and the resulting cable indicated the delay of -34.0deg (@11MHz, 8.57ns) and the loss of 0.117dB.
The REFL11 demod module was restored and the cabling around REFL11 and AS110 were restored, tightened, and checked.
----
I've removed the PD mon cables from the NI RF switch. The open ports were plugged with 50 Ohm terminators.
----
I ask commissioners to make the final check of the REFL11 performance using CDS.
Attachment 1: IMG_0545.jpeg
Attachment 2: IMG_0547.jpeg
Attachment 3: D040179-A.pdf
Attachment 4: IMG_0548.jpeg
Attachment 5: IMG_0550.jpeg
15922 | Tue Mar 16 14:37:36 2021 | Yehonathan | Update | BHD | SOS SmCo magnets Inspection
In the cleanroom, I opened the nickel-plated SmCo magnet box to take a closer look. I handled the magnets with tweezers. I wrapped the tips of the tweezers with some Kapton tape to prevent scratching and magnetization.
I put some magnets on a razor blade and took some close-up pictures of the face of the magnets on both sides. Most of them look like attachment 1.
Some have worn off plating on the edges. The most serious case is shown in attachment 2. Maybe it doesn't matter if we are going to sand them?
I measured the magnetic flux of the magnets by just attaching the gaussmeter's flat head to the face of the magnet and moving it around until the maximum value is reached.
For envelope #1 out of 3, the values are (the magnet ordering is in attachment 3):
Magnet # | Max Magnetic Field (kG)
1 | 2.57
2 | 2.54
3 | 2.57
4 | 2.57
5 | 2.55
6 | 2.61
7 | 2.55
8 | 2.52
9 | 2.64
10 | 2.58
Going to continue tomorrow with the rest of the magnets. I left the magnet box and the gaussmeter under the flow bench in the cleanroom.
Attachment 1: 20210316_142906.jpg
Attachment 2: 20210316_165626.jpg
Attachment 3: 20210316_165838.jpg
15921 | Mon Mar 15 20:40:01 2021 | rana | Configuration | Computers | installed QTgrace on donatella for dataviewer
I installed QTgrace using yum on donatella. Both Grace and XMgrace are broken due to some boring fight between the Fedora package maintainers and the (non-existent) Grace support team. So I have symlinked it:
controls@donatella|bin> sudo mv xmgrace xmgrace_bak
controls@donatella|bin> sudo ln -s qtgrace xmgrace
controls@donatella|bin> pwd
/usr/bin
I checked that dataviewer works now for realtime and playback. Although the middle-click paste on the mouse doesn't work yet.
Attachment 1: cutiegrace.png
15920 | Mon Mar 15 20:22:01 2021 | gautam | Update | ASC | c1rfm model restarted
On Friday, I felt that the ASC performance when the PRFPMI was locked was not as good as it used to be, so I looked into the situation a bit more. As part of my ASC model revamp in December, I made a bunch of changes to the signal routing, and my suspicion was that the control signals weren't even reaching the ETMs. My log says that I recompiled and reinstalled the c1rfm model (used to pipe the ASC control signals to the ETMs), and indeed, the file was modified on Dec 21. But for whatever reason, the C1RFM.ini file never picked up the new channels (c1rfm is the Dolphin receiver, since the ASC control signals are sent to it over the Dolphin network from the c1ioo machine which hosts the C1:ASC- namespace, and the RFM sender to the ETMs, though this latter path already existed). Today, I recompiled, re-installed, and restarted the models, and confirmed that the control signals actually make it to the ETMs. So now we can have the QPD-based ASC loops engaged once again for the PRFPMI lock. The CDS system did not crash 🎉 . See Attachments #1-3.
I checked the loop performance in the POX/POY locked config by first deliberately misaligning the ETMs, and then engaging the loops - this seems to work (Attachment #4). The loop shapes have to be tweaked a bit, and I didn't engage the integrators, hence the DC pointing wasn't recovered. Also, I added a line to the script that turns the ASC loops on to set limits for all the loops - in the testing process, one of the loops ran away and I tripped the ETMY watchdog. It has since been recovered. I SDFed a limit of 100 cts just to be on the conservative side for model reboot situations - the value in the script can be raised/lowered as deemed necessary (sorry, I don't know the cts-->urad number off the top of my head).
But the hope is this improves the power buildup, and provides stability so that I can begin to commission the AS WFS system a bit.
Attachment 1: RFM.png
Attachment 2: CDSoverview.png
Attachment 3: RFMchans.png
Attachment 4: ASCloops.png
15919 | Mon Mar 15 08:55:45 2021 | Paco, Anchal | Summary | training
[Paco, Anchal]
- Found IMC locked upon arrival.
- Since "allegra" was set up as an additional workstation, we tried using it but discovered the monitor ist kaput. For the sake of debugging, we tested VGA and DVI inputs and even the monitor lying around (also labeled "allegra") with no luck. So <ssh> it is for now.
IMC Input sensing matrix
- Rana joined us and asked us to use Rossa for now so that we can sit socially distantly.
- Attaching some intermediate results of our analysis as a pdf, and an archive containing all the code we used.
- We used channels like C1:SUS-MC1_USSEN_OUTPUT (16 Hz channels), which might not be the correct way to do it; as Rana pointed out today, we should have used channels like C1:SUS-MC1_SENSOR_UL etc.
- During the input matrix calculation, we used the method of TF estimate (as mentioned in 4886) to calculate the sensing matrix, inverted it, and normalized all rows with the maximum-absolute-value element (we tried a few other ways of normalization, with no better results).
- We found the peak frequencies by fitting Lorentzians to the sensor data rotated by the current input matrix in the system. We also tried doing this directly on the sensor data (UL for POS, UR for PIT, LR for YAW and SD for SIDE, as this seemed to be the case in the old matlab codes), but with no different results.
- The fitted peak frequencies, Q and amplitude values are in fittedPeakFreqs.yml in the attached zip.
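A minimal sketch of the Lorentzian peak fitting (hypothetical parameter names; the actual fitted values live in fittedPeakFreqs.yml in the attached archive):
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(f, f0, gamma, amp, floor):
    # Simple Lorentzian peak on a flat noise floor.
    return amp * gamma**2 / ((f - f0)**2 + gamma**2) + floor

def fit_peak(f, asd, f_guess, half_width=0.05):
    # Fit one resonance in a narrow band around the initial guess.
    sel = (f > f_guess - half_width) & (f < f_guess + half_width)
    p0 = [f_guess, 0.01, asd[sel].max(), asd[sel].min()]
    popt, _ = curve_fit(lorentzian, f[sel], asd[sel], p0=p0)
    return popt  # fitted f0, gamma (-> Q = f0 / (2 * gamma)), amplitude, floor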
Attachment 1: IMC_InputMatrixDiagonalization.pdf
Attachment 2: inputMatrixCalculationMC.tar
Attachment 3: freeSwingMC.py.tar
Attachment 4: SUSfreeswing_1299514263.txt.tar
15918 | Fri Mar 12 21:15:19 2021 | gautam | Update | LSC | coronaversary PRFPMi
Attachment #1 - proof that the lock is RF only (A paths are ALS, B paths are RF).
Attachment #2 - CARM OLTF.
Some tuning can be done - the circulating power can be made ~twice as high with some ASC. The vertex is still on 3f control. I didn't get any major characterization done tonight, but it's nice to be back here, a year on, I guess.
Attachment 1: PRFPMI.png
Attachment 2: CARM_OLTF.pdf
15917 | Fri Mar 12 19:44:31 2021 | gautam | Update | LSC | Delay line
I may want to use the delay line phase shifter in 1Y2 to allow remote actuation of the REFL11 demod phase (for the AO path, not the low bandwidth one). I had this working last Feb, but today, I am unable to remotely change the delay. @Koji, it would be great if you could fix this the next time you are in the lab - I bet it's a busted latch IC or some such thing. I did the non-invasive tests - cable is connected, control bits are changing (at least according to the CDS BIO indicators) and the switch controlling remote/local switching is set correctly. The local switching works just fine.
In the meantime, I will keep trying - I am unconvinced we really need the delay line.
15916 | Fri Mar 12 18:10:01 2021 | Anchal | Summary | Computer Scripts / Programs | Installed cds-workstation on allegra
allegra had fresh Debian 10 installed on it already. I installed the cds-workstation packages (with the help of Erik von Reis). I checked that command line caget, caput etc. were working. I'll see if medm and other things are working next time we visit the lab.
15915 | Fri Mar 12 13:48:53 2021 | gautam | Summary | SUS | Coil Rs & Ls for PRM/BS/SRM
I didn't repeat Koji's measurement, but he reports the expected ~3.2mH per coil on all the BS and PRM coils.
Quote:
    ugh. sounds bad - maybe a short. I suggest measuring the inductance; that's usually a clearer measurement of coil health
15914 | Fri Mar 12 13:01:43 2021 | rana | Summary | SUS | Coil Rs & Ls for PRM/BS/SRM
ugh. sounds bad - maybe a short. I suggest measuring the inductance; that's usually a clearer measurement of coil health.
15913 | Fri Mar 12 12:32:54 2021 | gautam | Summary | SUS | Coil Rs & Ls for PRM/BS/SRM
For consistency, today, I measured both the BS and PRM actuator balancing using the same technique and don't find as serious an imbalance for the BS as in the PRM case. The Oplev laser source is common for both BS and PRM, but the QPDs are of course distinct.
BTW, I thought the expected resistance of the coil windings of the OSEM is ~13 ohms, while the BS/PRM OSEMs report ~1-2 ohms. Is this okay?
Quote:
    - All the PRM coils look well-matched in terms of the inductance. Also, I didn't find a significant difference from BS coils.
Attachment 1: BS_actuator.pdf
Attachment 2: PRMact.pdf
15912 | Fri Mar 12 11:44:53 2021 | Paco, Anchal | Update | training | IMC SUS diagonalization in progress
[Paco, Anchal]
- Today we spent the morning shift debugging SUS input matrix diagonalization. The MC stayed locked for most of the 4 hours we were here, and we didn't really touch any controls.
15911 | Fri Mar 12 11:02:38 2021 | gautam | Update | CDS | cds reboot
I looked into this a bit this morning. I forgot exactly what time we restarted the machines, but looking at the timesyncd logs, it appears that the NTP synchronization is in fact working (the log below is for c1sus; it is similar on the other FEs):
-- Logs begin at Fri 2021-03-12 02:01:34 UTC, end at Fri 2021-03-12 19:01:55 UTC. --
Mar 12 02:01:36 c1sus systemd[1]: Starting Network Time Synchronization...
Mar 12 02:01:37 c1sus systemd-timesyncd[243]: Using NTP server 192.168.113.201:123 (ntpserver).
Mar 12 02:01:37 c1sus systemd[1]: Started Network Time Synchronization.
Mar 12 02:02:09 c1sus systemd-timesyncd[243]: Using NTP server 192.168.113.201:123 (ntpserver).
So, the service is doing what it is supposed to (using the FB, 192.168.113.201, as the ntpserver). You can see that the timesync was done a couple of seconds after the machine booted up (validated against "uptime"). Then, the service periodically corrects drifts. idk what it means that the time wasn't in sync when we checked it using timedatectl or similar. Anyway, like I said, I have successfully rebooted all the FEs without having to do this weird time adjustment >10 times.
I guess what I am saying is, I don't know what action is necessary for "implementing NTP synchronization properly", since the diagnostic logfile seems to indicate that it is doing what it is supposed to.
More worryingly, the time has already drifted in < 24 hours.
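One quick way to watch this drift from a workstation (a sketch; the FE host list here is illustrative and assumes passwordless ssh to the FEs):
import subprocess, time

fes = ["c1lsc", "c1sus", "c1ioo"]      # hypothetical host list
for fe in fes:
    t0 = time.time()
    out = subprocess.check_output(["ssh", fe, "date", "+%s.%N"])
    t1 = time.time()
    local = (t0 + t1) / 2              # midpoint roughly corrects for ssh latency
    print(f"{fe}: offset {float(out.decode()) - local:+.2f} s")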
Quote:
    I want to emphasize the following:
    - FB's RTC (real-time clock/motherboard clock) is synchronized to NTP time.
    - The other RT machines are not synchronized to NTP time.
    - My speculation is that the RT machine timestamp is produced from the local RT machine time because there is no IRIG-B signal provided. -> If the RTC is more than 1 sec off from the FB time, the timestamp inconsistency occurs. Then it induces "DC" error.
    - For now, we have to use the "date" command to match the local RTC time to the FB's time.
    - So: If NTP synchronization is properly implemented for the RT machines, we will be free from this silly manual time adjustment.
Attachment 1: timeDrift.png
15910 | Fri Mar 12 03:28:51 2021 | Koji | Update | CDS | cds reboot
I want to emphasize the following:
- FB's RTC (real-time clock/motherboard clock) is synchronized to NTP time.
- The other RT machines are not synchronized to NTP time.
- My speculation is that the RT machine timestamp is produced from the local RT machine time because there is no IRIG-B signal provided. -> If the RTC is more than 1 sec off from the FB time, the timestamp inconsistency occurs. Then it induces "DC" error.
- For now, we have to use the "date" command to match the local RTC time to the FB's time.
- So: If NTP synchronization is properly implemented for the RT machines, we will be free from this silly manual time adjustment.
15909 | Fri Mar 12 03:23:37 2021 | Koji | Update | BHD | DO card (DO-32L-PE) brought from WB
I've brought 4 DO-32L-PE cards from WB for Jon, for the BHD upgrade.
Attachment 1: P_20210311_232742.jpg
Attachment 2: P_20210311_232752.jpg
15908 | Fri Mar 12 03:22:45 2021 | Koji | Update | General | Gaussmeter in the electronics drawer
For magnet strength measurement: There is a gaussmeter in the Flukes drawer (2nd from the top). It turns on and reacts to a whiteboard magnet.
Attachment 1: P_20210311_231104.jpg
15907 | Fri Mar 12 03:08:23 2021 | Koji | Summary | SUS | Coil Rs & Ls for PRM/BS/SRM
Summary
Per Gautam's request, I've checked the coil resistances and inductances.
- PRM/BS/SRM coils were tested.
- All the PRM coils look well-matched in terms of the inductance. Also, I didn't find a significant difference from BS coils.
- Pin 1 of the feedthru connectors is shorted to the vacuum chamber.
- A discovery: the SRM DSUB pinouts are mirrored compared to the other suspensions.
Method
A DSUB25 breakout was directly connected to the flange (Attachment 1).
The impedance meter was nulled every time the measurement range and type (R or L) were changed.
Result
Feedthru connector: PRM1
Pin 1 - flange: R = 0.8Ω
Pin 11-23 / R = 1.79Ω / L = 3.21mH
Pin 7-19 / R = 1.82Ω / L = 3.22mH
Pin 3-15 / R = 1.71Ω / L = 3.20mH

Feedthru connector: BS1
Pin 1 - flange: R = 0.5Ω
Pin 11-23 / R = 1.78Ω / L = 3.26mH
Pin 7-19 / R = 1.63Ω / L = 3.30mH
Pin 3-15 / R = 1.61Ω / L = 3.29mH

Feedthru connector: SRM1
Pin 1 - flange: R = 0.5Ω
Pin 11-24 / R = 18.1Ω / L = 3.22mH
Pin 7-20 / R = 18.8Ω / L = 3.25mH
Pin 3-16 / R = 20.3Ω / L = 3.25mH

Feedthru connector: PRM2
Pin 1 - flange: R = 0.6Ω
Pin 11-23 / R = 1.82Ω / L = 3.20mH
Pin 7-19 / R = 1.53Ω / L = 3.20mH
Pin 3-15 / R = N/A

Feedthru connector: BS2
Pin 1 - flange: R = 0.6Ω
Pin 11-23 / R = 1.46Ω / L = 3.27mH
Pin 7-19 / R = 1.54Ω / L = 3.24mH
Pin 3-15 / R = N/A

Feedthru connector: SRM2
Pin 1 - flange: R = 0.7Ω
Pin 11-24 / R = N/A
Pin 7-20 / R = 18.5Ω / L = 3.21mH
Pin 3-16 / R = 19.1Ω / L = 3.25mH
Observation
The SRM pinouts seem mirrored compared to the others. In fact, these two connectors are equipped with mirror cables (although they are unshielded ribbons) (Attachment 2).
The SRM sus is located on the ITMY table. There is a long in vacuum DSUB25 cable between the ITMY and BS tables. I suspect that the cable mirrors the pinout and this needs to be corrected by the in-air mirror cables.
I went around the lab and did not find any other suspensions which have the mirror cable.
With the BHD configuration, we will move the feedthru for the SRM to the one on the ITMY chamber, so I believe the situation is going to be improved.
Attachment 1: P_20210311_224651.jpg
Attachment 2: P_20210311_225359.jpg
15906 | Thu Mar 11 20:18:00 2021 | gautam | Update | LSC | High bandwidth POY
I repeated the high bandwidth POY locking experiment.
- The "Q" demod output (SMA) was routed to the common mode board (it appears in the past I used the LEMO "MON" output instead but that shouldn't be a meaningful change).
- As usual, slow actuation --> ETMY, fast actuation --> IMC error point.
- Loop UGF measurement suggests a bandwidth of ~25 kHz, with ~25 degrees phase margin. Anyway, the lock was pretty stable.
One thing I am not sure about: when looking at the in-loop error point spectra, the Y-arm error point did not get suppressed to the CM board's sensing noise floor. I would have thought that with the huge amount of gain at ~16 Hz, the usual structure we see in the spectra between 10-30 Hz would be completely squished. Need to think about whether this is signalling something wrong, because the loop TF measurements seemed as expected to me.
1020pm: plots uploaded. As I made the plot of the spectrum, I realized that I don't have the calibration of the Y-arm error point into displacement noise units, so it's in unphysical units for now. But I think the comment about the hump around 16 Hz not being crushed to some sort of flat electronics noise floor still stands. For the TF plots, when the loop gain is high, this IN1/IN2 technique isn't the best (due to saturation issues), but I don't think there's anything controversial about getting the UGF this way, and the fact that the phase evolves as expected when the various gains are cranked up / boosts enabled makes me think that the CM board is itself just fine.
10am 12 March: I realized that the "Y-arm error point" plotted below is not the true error point - that would be the input to the CM board (before boosts etc.), which we don't monitor digitally. The spectra are plotted for the CM_SLOW input, which already has some transfer function applied to it. In the past, I routed the LEMO "MON" connector on the demod board to the CM board input, and hence had the usual SMA outputs from the demod board going to the digital system. I hypothesize that plotting the spectra for that signal would have shown the expected suppression to the electronics noise floor.
In summary, on the basis of this test, I don't see any red flags with the CM board.
Attachment 1: OLGevolution.pdf
Attachment 2: inLoopSpec.pdf
15905 | Thu Mar 11 18:46:06 2021 | gautam | Update | CDS | cds reboot
Since Koji was in the lab, I decided to bite the bullet and do the reboot. I've modified the reboot script - now, it prompts the user to confirm that the time recognized by the FEs is the same (use the IOP model's status screen; the GPSTIME is updated live in the upper right hand corner). So you would do sudo date --set="Thu 11 Mar 2021 06:48:30 PM UTC" for example, and then restart the IOP model. Why is this necessary? Who knows. It seems to be a deterministic way of getting things back up and running for now, so we have to live with it. I will note that this was not a problem between 2017 and 2020 Oct, in which time I've run the reboot script >10 times without needing to take this step. But things change (for an as of yet unknown reason) and we must adapt. Once the IOPs all report a green "DC" status light on the CDS overview screen, you can let the script take you the rest of the way again.
The main point of this work was to relax the data rate on the c1lsc model, and this worked. It now registers ~3.2 MB/s, down from the ~3.8 MB/s earlier today. I can now measure 2 loop TFs simultaneously. This means that we should avoid adding any more DQ channels to the c1lsc model (without some adjustment/downsampling of others).
Quote: |
Holding off on a restart until I decide I have the energy to recover the CDS system from the inevitable crash.
|
|
Attachment 1: CDSoverview.png
|
|
15904
|
Thu Mar 11 14:27:56 2021 |
gautam | Update | CDS | timesync issue? |
I have recently been running into the 4 MB/s data rate limit on testpoints - basically, I can't run the DTT TF and spectrum measurements that I was able to run while locking the interferometer this time last year. AFAIK, the major modification since then was the addition of 4 DQ channels for the in-air BHD experiment - assuming the data is transmitted as double precision numbers, I estimate the additional load due to this change at ~500 KB/s. Probably there is some compression, so it is a bit more efficient (this naive calc would suggest we can only record 32 channels, and I counted 41 full rate channels in the model), but still, I can't think of anything else that has changed. Anyway, I removed the unused parts and recompiled/re-installed the models (c1lsc and c1omc). Holding off on a restart until I decide I have the energy to recover the CDS system from the inevitable crash. For documentation, I'm also attaching a screenshot of the schematic of the changes made.
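The ~500 KB/s figure and the 32-channel ceiling are just arithmetic; a quick check (assuming full-rate 16384 Hz channels stored as 8-byte doubles, no compression):

fs = 16384                 # full model rate, samples/s
nbytes = 8                 # double precision
n_new = 4                  # DQ channels added for in-air BHD

print(n_new * fs * nbytes / 1024)        # 512.0 KB/s, i.e. ~500 KB/s
print(4 * 1024**2 // (fs * nbytes))      # 32 full-rate channels fit in 4 MB/s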
Anyway, the main point of this elog is that at the compilation stage, I got a warning I've never seen before:
Building front-end Linux kernel module c1lsc...
make[1]: Warning: File 'GNUmakefile' has modification time 13 s in the future
make[1]: warning: Clock skew detected. Your build may be incomplete.
This prompted me to check the system time on c1lsc and FB - you can see there is a 1 minute offset (it is not a delay in me issuing the command to the two machines)! I suspect this NTP action is the reason. So maybe a model reboot is in order. Sigh |
Attachment 1: timesync.png
|
|
Attachment 2: c1lscMods.png
|
|
15903
|
Thu Mar 11 14:03:02 2021 |
gautam | Update | LSC | AO path |
There is some evidence of weird saturation, but the gain balancing (0.8 dB) and orthogonality (~89 deg) for the daughter board on the REFL11 demod board that generates the AO path error signal seem reasonable. This board would probably benefit from the AD797 --> Op27 and thick-film --> thin-film swap, but I don't think this is to blame for being unable to execute the RF transition. |
Attachment 1: IMG_9127.HEIC
|
15902
|
Thu Mar 11 08:13:24 2021 |
Paco, Anchal | Update | SUS | IMC First Free Swing Test failed due to typo, restarting now |
[Paco, Anchal]
The triggered code went on at 5:00 am today, but a last-minute change I made yesterday to increase the number of repetitions had an error and caused the script to exit, putting everything back to normal. So when we came in this morning, we found the mode cleaner locked continuously after one free-swing attempt at 5:00 am. I've fixed the script and ran it for 2 hours starting at 8:10 am. Our plan is to get at least some data to play with while we are here. If the duration is not long enough, we'll try to run this again tomorrow morning. The new script is running in the same tmux session 'MCFreeSwingTest' on Rossa.
10:13 am: the script finished and the IMC recovered lock.
Thu Mar 11 10:58:27 2021
The test ran successfully, with the mode cleaner optics coming back to normal at the end. We wrote some scripts to read the data and analyze it; more will come in future posts. No other changes were made to the systems today.
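As a placeholder for the analysis posts to come, a minimal sketch (Python) of the kind of thing meant by reading and analyzing the data: welch the free-swing shadow sensor time series and look for the suspension eigenmode peaks. The sample rate and the data itself are placeholders (random data stands in so the script runs standalone):

import numpy as np
from scipy import signal

fs = 256.0                                   # assumed sensor channel rate, Hz
data = np.random.randn(int(2 * 3600 * fs))   # placeholder for the 2 h stretch

# Long segments to resolve the ~1 Hz pendulum modes
f, psd = signal.welch(data, fs=fs, nperseg=int(512 * fs))

band = (f > 0.3) & (f < 5.0)                 # eigenmodes live down here
peaks, _ = signal.find_peaks(psd[band])
top = peaks[np.argsort(psd[band][peaks])[-4:]]
print("candidate eigenmode frequencies [Hz]:", np.sort(f[band][top]))
|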
15901
|
Thu Mar 11 02:10:06 2021 |
Koji | Summary | BHD | BHD Platform vertical dimensions |
Stephen and I discussed the nominal heights of the BHD platform components.
- The beam height from the stack is 5.5"
- The platform height is 1.5" and its thickness is 0.4", following the VOPO suspension, with which we want to be compatible.
- Thus the beam height on the BHD platform is 4".
- The VOPO platform has a minimum 0.1" gap from the installation surface when it is suspended.
- When the BHD platform is fixed on the table, we'll use positioners that are fixed on the stack table. The BHD platform is then fixed to the positioners rather than the entire platform being fixed to the stack. This leaves us the option of suspending the platform in the future. The number of positioners is TBD.
- Looking at the head size for 1/4-20 socket head screws, it'd be nice to have a thickness of 0.5" for the positioners. This makes the thin part of the stiffener 0.6" thick.
- These numbers are nominal for the initial design and are subject to change pending FEA simulations to determine the resonant frequencies of the body modes. (A quick stack-up check is sketched below.)
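A trivial stack-up check of these nominal numbers (all in inches; the assumption that the 0.6" stiffener section is positioner thickness plus the 0.1" suspension gap is mine, not from the design):

beam_height_from_stack = 5.5
platform_height = 1.5
assert beam_height_from_stack - platform_height == 4.0   # beam height on platform

suspension_gap = 0.1          # VOPO-style gap when suspended
positioner_thickness = 0.5    # clears a 1/4-20 socket head
print(positioner_thickness + suspension_gap)             # 0.6, as quoted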
|
Attachment 1: BHD_Platform_Vertical_Dimentions.pdf
|
|
15900
|
Thu Mar 11 01:45:42 2021 |
gautam | Update | LSC | PRFPMi |
- PRM satellite box indeed seems to have been the culprit - shortly after I swapped it to the SRM, its shadow sensors went dark. I leave the watchdog tripped.
- I was still unable to realize the RF-only IFO.
- Clearly my old settings don't work, so I tried to go about it systematically. First, try to transition CARM to RF and leave DARM on ALS.
- As usual, I can realize the state where the arm powers are ~100, and the two paths are blended.
- But I'm not able to completely turn off the CARM_A path without blowing the lock.
Pity really; I was hoping to make it much further tonight. I think I'll have to go back to the high BW POX/POY lock, and also check out the conversion efficiency / noise of the daughter board on the REFL11 demod board. Compared to before my work on the RF source, the PRMI lock using REFL11 as an error signal has basically necessitated a change of the digital demod phase by 180 degrees - so I made the appropriate polarity changes in the CM_SLOW and AO paths. The assumption is that CARM in REFL11 would require the same change in digital demod phase, and I think this is reasonable - indeed, with the arm powers somewhat stable around 100, the PDH signal in REFL11 does seem to show up largely in the I quadrature (pre digital phase rotation). Anyway, with so many weird effects (wonky PRM suspension, strange PRMI sensing, etc.), who knows what's going on. This will take a systematic effort.
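The digital demod phase is just a rotation of the (I, Q) pair, so a 180 degree change is the same as flipping the sign of both quadratures - which is why the corresponding polarity flips in the CM_SLOW and AO paths are the natural fix. A minimal sketch (the toy signal values are illustrative only):

import numpy as np

def rotate_iq(I, Q, phi_deg):
    # Apply the digital demod phase rotation to the I/Q pair
    phi = np.radians(phi_deg)
    return (I * np.cos(phi) + Q * np.sin(phi),
            -I * np.sin(phi) + Q * np.cos(phi))

print(rotate_iq(1.0, 0.2, 180))   # -> (-1.0, -0.2): a pure sign flip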
I defer the electronics characterization to the daytime (if I feel like I need it tomorrow, I'll do it; otherwise, Koji has said he can do it on Friday).
Quote: |
I was unable to fully hand off control from ALS-->RF, I suspect I may be using the wrong sign on the AO path (or some such other sub-optimal CM board settings). I'll hook up the SR785 and take some TFs tomorrow, that should give more insight into what's what.
|
|