40m Log, Page 13 of 329
15981   Wed Mar 31 03:56:37 2021   Koji   Summary   Electronics   A bunch of electronics received

Attachment 1: P_20210331_013257.jpg
Attachment 2: P_20210331_014020.jpg
15980   Wed Mar 31 00:40:32 2021   Koji   Update   Electronics   Electronics Packaging for assembly work

I've worked on packing the components for the following chassis:
- 5 16bit AI chassis
- 4 18bit AI chassis
- 7 16bit AA chassis
- 8 HAM-A coil driver chassis
They are "almost" ready for shipment. Almost means some small parts are missing. We can ship the boxes to the company while we wait for these small parts.

• DB9 Female Ribbon Receptacle AFL09B-ND Qty100 (We have 10) -> Received 90 on Apr 1st
• DB9 Male Ribbon Receptacle CMM09-G Qty100 (We have 10) -> Received 88 on Apr 1st
• 4-40 Pan Flat Head Screw (round head, Phillips) 1/2" long Qty 50 -> Found 4-40 3/8" Qty50 @WB EE on Apr 1st (Digikey H782-ND)
• Keystone Chassis Handle 9106 36-9106-ND Qty 50 -> Received 110 on Apr 1st
• Keystone Chassis Ferrule 9121 NKL PL 36-9121-ND Qty 100 -> Received 55 on Apr 1st
• Chassis Screws 4-40 3/16" Qty 1100 -> Received 1100 on Apr 1st
• Chassis Ear Screws 6-32 1/2" 91099A220 Qty 150 -> Received 400 of 3/8" on Apr 1st
• Chassis Handle Screws 6-32 1/4" 91099A205 Qty 100 -> included in the above
• Powerboard mounting screw 4-40 Pan Flat Head Screw (round head, Phillips) 1/4" long Qty 125 -> Received 100 on Apr 1st

Also, some additional items to replenish the dwindling stock:

• 18AWG wires (we have orange/blue/black 1000ft, I'm sending ~1000ft black/green/white)
• Already consumed 80% of 100ft 9pin ribbon cable (=only 20ft left in the stock)
Attachment 1: P_20210330_233508.jpg
Attachment 2: P_20210330_233618.jpg
15979   Tue Mar 30 18:21:34 2021   Jon   Update   CDS   Front-end testing

Progress today:

### Outside Internet access for FE test stand

This morning Jordan and I ran an 85-foot Cat 6 Ethernet cable from the campus network switch in the office area (on the ligo.caltech.edu domain) to the FE test stand near 1X6. This is to allow the test-stand subnet to be accessed for remote testing, while keeping it invisible to the parallel Martian subnet.

### Successful RTCDS model compilation on new FEs

The clone of the chiara:/home/cds disk completed overnight. Today I installed the disk in the chiara clone. The NFS mounts (/opt/rtcds, /opt/rtapps) shared with the other test-stand machines mounted without issue.

Next, I attempted to open the shared Matlab executable (/cvs/cds/caltech/apps/linux64/matlab/bin/matlab) and launch Simulink. The existing Matlab license (/cvs/cds/caltech/apps/linux64/matlab/licenses/license_chiara_865865_R2015b.lic) did not work on this new machine, as licenses are machine-specific, so I updated the license file. I linked this license to my personal license, so that the machine license for the real chiara would not get replaced. The original license file is saved in the same directory with a *.bak postfix. If this disk is ever used in the real chiara machine, this file should be restored. After the machine license was updated, Matlab and Simulink loaded and allowed model editing.

Finally, I tested RTCDS model compilation on the new FEs using the c1lsc model as a trial case. It encountered one path issue due to the model being located at /opt/rtcds/userapps/release/isc/c1/models/isc/ instead of /opt/rtcds/userapps/release/isc/c1/models/. This seems to be a relic of the migration of the 40m models from the SVN to a standalone git repo. This was resolved by simply symlinking to the expected location:

$ sudo ln -s /opt/rtcds/userapps/release/isc/c1/models/isc/c1lsc.mdl /opt/rtcds/userapps/release/isc/c1/models/c1lsc.mdl

The model compilation then succeeded:

controls@c1bhd$ cd /opt/rtcds/caltech/c1/rtbuild/release
controls@c1bhd$ make clean-c1lsc
Cleaning c1lsc... Done
controls@c1bhd$ make c1lsc
Cleaning c1lsc... Done
Parsing the model c1lsc... Done
Building EPICS sequencers... Done
Building front-end Linux kernel module c1lsc...
make[1]: Warning: File 'GNUmakefile' has modification time 28830 s in the future
make[1]: warning:  Clock skew detected.  Your build may be incomplete.
Done
RCG source code directory: /opt/rtcds/rtscore/branches/branch-3.4
The following files were used for this build:
/opt/rtcds/caltech/c1/userapps/release/cds/common/src/cdsToggle.c
/opt/rtcds/userapps/release/cds/c1/src/inmtrxparse.c
/opt/rtcds/userapps/release/cds/common/models/FILTBANK_MASK.mdl
/opt/rtcds/userapps/release/cds/common/models/rtbitget.mdl
/opt/rtcds/userapps/release/cds/common/models/SCHMITTTRIGGER.mdl
/opt/rtcds/userapps/release/cds/common/models/SQRT_SWITCH.mdl
/opt/rtcds/userapps/release/cds/common/src/DB2MAG.c
/opt/rtcds/userapps/release/cds/common/src/OSC_WITH_CONTROL.c
/opt/rtcds/userapps/release/cds/common/src/wait.c
/opt/rtcds/userapps/release/isc/c1/models/c1lsc.mdl
/opt/rtcds/userapps/release/isc/c1/models/IQLOCK_WHITENING_TRIGGERING.mdl
/opt/rtcds/userapps/release/isc/c1/models/PHASEROT.mdl
/opt/rtcds/userapps/release/isc/c1/models/RF_PD_WITH_WHITENING_TRIGGERING.mdl
/opt/rtcds/userapps/release/isc/c1/models/UGF_SERVO_40m.mdl
/opt/rtcds/userapps/release/isc/common/models/FILTBANK_TRIGGER.mdl
/opt/rtcds/userapps/release/isc/common/models/LSC_TRIGGER.mdl
Successfully compiled c1lsc
***********************************************
Compile Warnings, found in c1lsc_warnings.log:
***********************************************
[warnings suppressed]

As did the installation:

controls@c1bhd$ make install-c1lsc
Installing system=c1lsc site=caltech ifo=C1,c1
Installing /opt/rtcds/caltech/c1/chans/C1LSC.txt
Installing /opt/rtcds/caltech/c1/target/c1lsc/c1lscepics
Installing /opt/rtcds/caltech/c1/target/c1lsc
Installing start and stop scripts
/opt/rtcds/caltech/c1/scripts/killc1lsc
/opt/rtcds/caltech/c1/scripts/startc1lsc
Performing install-daq
Updating testpoint.par config file
/opt/rtcds/caltech/c1/target/gds/param/testpoint.par
/opt/rtcds/rtscore/branches/branch-3.4/src/epics/util/updateTestpointPar.pl -par_file=/opt/rtcds/caltech/c1/target/gds/param/archive/testpoint_210330_170634.par -gds_node=42 -site_letter=C -system=c1lsc -host=c1lsc
Installing GDS node 42 configuration file
/opt/rtcds/caltech/c1/target/gds/param/tpchn_c1lsc.par
Installing auto-generated DAQ configuration file
/opt/rtcds/caltech/c1/chans/daq/C1LSC.ini
Installing Epics MEDM screens
Running post-build script
safe.snap exists

We are ready to start building and testing models.

15978   Tue Mar 30 17:27:04 2021   Yehonathan   Update   CDS   c1auxey assembly

[Yehonathan, Jon]

We poked (looked in situ with a flashlight, not disturbing any connections) around the c1auxex chassis to better understand the wiring scheme. To our surprise, we found that nothing was connected to the RTNs of the analog input Acromag modules. From previous experience and the Acromag manual, there can't be any meaningful voltage measurement without it.

I also did some rewiring in the Acromag chassis to improve its reliability. In particular, I removed the ground wires from the DIN rail and connected them using crimp-on butt splices.
15977   Mon Mar 29 19:32:46 2021   gautam   Update   LSC   REFL55 whitening checkout

I repeated the usual whitening board characterization test of:
• driving a signal (using awggui) into the two inputs of the whitening board using a spare DAC channel available in 1Y2
• demodulating the response using the LSC sensing matrix infrastructure
• stepping the whitening gain in 3 dB increments and checking that the demodulated lock-in outputs increase in the expected 3 dB steps.

Attachment #1 suggests that the steps are equal (3 dB) in size, but note that the "Q" channel shows only ~half the response of the I channel. The drive is derived from a channel of an unused AI+dewhite board in 1Y2, split with a BNC tee, and fed to the two inputs on the whitening filter. The impedance is expected to be the same on each channel, so each channel should see the same signal, but I see a large asymmetry. All of this checked out a couple of weeks ago (since we saw ellipses and not circles), so I am not sure what changed in the meantime, or if this is symptomatic of some deeper problem.

Usually, doing this and then restoring the cabling returns the signal levels of REFL55 to nominal. Today it did not - at the nominal whitening gain setting of +18 dB flat gain, when the PRMI is fringing, the REFL55 inputs are frequently reporting ADC overflows. Needless to say, all my attempts this evening to transition the length control of the vertex from REFL165 to REFL55 failed.

I suppose we could try shifting the channels to (physical) Ch5 and Ch6 on this whitening filter (from Ch3, Ch4), which were formerly used to digitize the ALS DFD outputs and are currently unused, and see if that improves the situation, but this will require a recompile of the RTCDS model and a consequent CDS bootfest, which I'm not willing to undertake today. If anyone decides to do this test, let's also take the opportunity to debug the BIO switching for the delay line.
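The 3 dB step check above can be sketched as follows. This is a minimal sketch with made-up lock-in amplitudes, not data from the actual test; the function name and tolerance are my own.

```python
import numpy as np

def check_db_steps(amplitudes, step_db=3.0, tol_db=0.5):
    """Check that successive demodulated amplitudes increase by ~step_db.

    amplitudes: lock-in output magnitudes, one per whitening-gain setting.
    Returns the measured step sizes in dB and a pass/fail flag per step.
    """
    amps = np.asarray(amplitudes, dtype=float)
    steps = 20.0 * np.log10(amps[1:] / amps[:-1])  # amplitude ratio -> dB
    ok = np.abs(steps - step_db) < tol_db
    return steps, ok

# Hypothetical sweep: each setting should sit ~3 dB (x ~1.413) above the last
amps = [1.0, 1.41, 2.00, 2.82, 3.99]
steps, ok = check_db_steps(amps)
print(steps.round(2), ok.all())
```

With real data, a channel whose steps pass but whose absolute level is half the other channel's would reproduce the I/Q asymmetry described above.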
Attachment 1: REFL55wht.png

15976   Mon Mar 29 17:55:50 2021   Jon   Update   CDS   Front-end testing

### Cloning of chiara:/home/cvs underway

I returned today with a beefier USB-SATA adapter, which has an integrated 12 V supply for powering 3.5" disks. I used this to interface a new 6 TB 3.5" disk found in the FE supplies cabinet. I decided to go with a larger disk and copy the full contents of chiara:/home/cds. Strictly, the FEs only need the RTS executables in /home/cvs/rtcds and /home/cvs/rtapps. However, to independently develop models, the shared Matlab binaries in /home/cvs/caltech/... also need to be exposed. And there may be others I've missed. I began the clone around 12:30 pm today. To preserve bandwidth to the main disk, I am copying not the /home/cds disk directly, but rather its backup image at /media/40mBackup.

### Set up of dedicated SimPlant host

Although not directly related to the FE testing, today I added a new machine to the test stand which will be dedicated to running sim models. Chris has developed a virtual cymac which we plan to run on this machine. It will provide a dedicated testbed for SimPlant and other development, and can host up to 10 user models.

I used one of the spare 12-core Supermicro servers from LLO, which I have named c1sim. I assigned it the IP address 192.168.113.93 on the Martian network. This machine will run in a self-contained way that will not depend on any 40m CDS services and also should not interfere with them. However, if there are concerns about having it present on the network, it can be moved to the outside-facing switch in the office area. It is not currently running any RTCDS processes. Set-up was carried out via the following procedure:

• Installed Debian 10.9 on an internal 480 GB SSD.
• Installed cdssoft repos following Jamie's instructions.
• Installed RTS and Docker dependencies:
$ sudo apt install cpuset advligorts-mbuf-dkms advligorts-gpstime-dkms docker.io docker-compose
• Configured scheduler for real-time operation:
$ sudo /sbin/sysctl kernel.sched_rt_runtime_us=-1
• Reserved 10 cores for RTS user models (plus one for the IOP model) by adding the following line to /etc/default/grub:
GRUB_CMDLINE_LINUX_DEFAULT="isolcpus=nohz,domain,1-11 nohz_full=1-11 tsc=reliable mce=off"
followed by the commands:
$ sudo update-grub
$ sudo reboot now
• Downloaded the virtual cymac repo to /home/controls/docker-cymac.

I need to talk to Chris before I can take the setup further.

15975   Mon Mar 29 17:34:52 2021   rana   Update   SUS   MC2 Coil Balancing updates

I think there's been some mis-communication. There's no updated Hang procedure, but there is the one that Anchal, Paco, and I discussed, which is different from what is in the elog. We'll discuss again and try to get it right, but no need to make multiple forks yet.

15974   Mon Mar 29 17:11:54 2021   gautam   Update   SUS   MC2 Coil Balancing updates

For this technique to work, (i) the WFS loops must be well tuned and (ii) the beam must be well centered on MC2. I am reasonably certain neither is true. For MC2 coil balancing, you can use a HeNe; there is already one on the table (not powered), and I guess you can use the MC2 trans QPD as a sensor. The MC won't need to be locked, so you can temporarily hijack that QPD (please don't move anything on the table unless you're confident of recovering everything; it should be possible to do all of this with an additional steering mirror you can install and then remove once your test is done). Then you can do any variant of the techniques available once you have an optical lever, e.g. single coil drive, pringle mode drive, etc., to do the balancing. I think Hang had some technique he tried recently as well; maybe that is an improvement.

15973   Mon Mar 29 17:07:17 2021   gautam   Summary   SUS   MC3 new Input Matrix not providing stable loop

I suppose you've tried doing the submatrix approach, where SIDE is excluded for the face DoFs? Does that give a better matrix?
To me, it's unreasonable that the side OSEM senses POS motion more than any single face OSEM, as your matrix suggests (indeed the old one does too). If/when we vent, we can try positioning the OSEMs better.

15972   Mon Mar 29 10:44:51 2021   Paco, Anchal   Update   SUS   MC2 Coil Balancing updates

We ran the coil balancing procedure 4 times while iterating through the output matrix optimization. Attachment 1, pages 1 to 4, shows the progression of cross coupling from the current output matrix (which is the theoretical ideal) to the latest iteration. We plot the sensed-DOF ASD, which we used to determine the cross coupling when different excitations are fed using LOCKIN1, which feeds a 13 Hz oscillation of 200 counts amplitude along the vector defined in the output matrix. That means that when we change the output matrix, in subsequent tests we also change the excitation direction along with it. Unfortunately, we don't see very good optimization over the iterations. While we see some peaks going down in sensed PIT and sensed POS (through MC_F), we instead see an increase in cross coupling in the sensed YAW.

### Scripts:
• For running the tests, we used the script scripts/SUS/OutMatCalc/crossCoupleTest.py and wrote commanding scripts in /users/anchal/20210329_MC2_TestingNewOutMat.
• The optimization code is at scripts/SUS/OutMatCalc/outMatOptimize.py.
• The code reads sensed-DOF data using nds2 and calculates the cross spectral density among the sensed DOFs at the excitation frequencies.
• This is normalized by the power spectral density of reference data (no excitation) and the power spectral density of the position data to create a TF estimate.
• The real values of the sensing matrix thus created are used to get the inverse matrix.
• The inverse matrix is first normalized along each row by its diagonal element, to get 1 there, and then multiplied by the previous output matrix to create the new output matrix.
• Reading the code is probably a better way of understanding this algorithm.
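The matrix-update step described in the bullets above can be sketched in a few lines of numpy. This is a hedged sketch, not the actual outMatOptimize.py: it assumes the sensing matrix has already been estimated from the CSDs, and the multiplication order is my reading of the description.

```python
import numpy as np

def update_output_matrix(S, out_prev):
    """One iteration of the output-matrix update described above.

    S        : estimated sensing matrix (real part of the TF estimate),
               S[i, j] = response of sensed DOF i to excitation along DOF j.
    out_prev : previous output matrix.
    """
    S_inv = np.linalg.inv(S)
    # Normalize each row by its diagonal element so the diagonal becomes 1
    S_inv = S_inv / np.diag(S_inv)[:, None]
    # Fold the correction into the previous output matrix
    return out_prev @ S_inv

# Toy example: small PIT/YAW cross coupling on top of an identity output matrix
S = np.array([[1.0, 0.1],
              [0.05, 1.0]])
out_new = update_output_matrix(S, np.eye(2))
```

For this toy S, the effective response S @ out_new comes out diagonal, i.e. the off-diagonal coupling is nulled in one step when the estimate is exact.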
Attachment 1: MC2OutMatCrossCouple_Old-to-It3.pdf
Attachment 2: 20210329_MC2_CrossCoupleTest.tar.gz

15971   Sun Mar 28 14:16:25 2021   Anchal   Summary   SUS   MC3 new Input Matrix not providing stable loop

Rana asked us to write out here the new MC3 input matrix we have calculated, along with the old one. The new matrix is not working out for us, as it can't keep the suspension loops stable.

Matrices:

Old (Current) MC3 Input Matrix (C1:SUS-MC3_INMATRIX_ii_jj)
         UL       UR       LR       LL       SD
POS    0.288    0.284    0.212    0.216   -0.406
PIT    2.658    0.041   -3.291   -0.674   -0.721
YAW    0.605   -2.714    0.014    3.332    0.666
SIDE   0.166    0.197    0.105    0.074    1

New MC3 Input Matrix (C1:SUS-MC3_INMATRIX_ii_jj)
         UL       UR       LR       LL      SIDE
POS    0.144    0.182    0.124    0.086    0.586
PIT    2.328    0.059   -3.399   -1.13    -0.786
YAW    0.552   -2.591    0.263    3.406    0.768
SIDE  -0.287   -0.304   -0.282   -0.265    0.871

Note that the new matrix has been made so that the norm of each row is the same as the norm of the corresponding row in the old (current) input matrix.

Peak finding results:

                             Guess Values   Fitted Values
PIT Resonant Freq. (Hz)      0.771          0.771
YAW Resonant Freq. (Hz)      0.841          0.846
POS Resonant Freq. (Hz)      0.969          0.969
SIDE Resonant Freq. (Hz)     0.978          0.978
PIT Resonance Q              600            345
YAW Resonance Q              230            120
POS Resonance Q              200            436
SIDE Resonance Q             460            282
PIT Resonance Amplitude      500            750
YAW Resonance Amplitude      1500           3872
POS Resonance Amplitude      3800           363
SIDE Resonance Amplitude     170            282

Note: The highest peak in the SIDE OSEM sensor free-swinging data, as well as in the DOF-basis data created using the existing input matrix, comes at 0.978 Hz. Ideally, this should be 1 Hz, and in MC1 and MC2 we see the SIDE DOF resonance near 0.99 Hz. If you look closely, there is actually a small peak present near 1 Hz, but it is too small to be the SIDE DOF eigenfrequency. And if it is indeed that, then which of the other 4 peaks is not the DOF we are interested in?
One possibility is that the POS eigenfrequency, which is supposed to be around 0.97 Hz, is split into two peaks due to some sideways vibration, and hence these peaks get strongly coupled to the SIDE OSEM as well.

P.S. I think something is wrong, and our limited experience is not enough to pinpoint it. I can show more data or plots if required to understand this issue. Let us know what you all think.

Attachment 1: MC3_Input_Matrix_Diagonalization.pdf

15970   Fri Mar 26 11:54:37 2021   Paco, Anchal   Update   SUS   MC2 Coil Balancing updates

[Paco, Anchal]
• Today we spent the morning testing the scripts under ~/c1/scripts/SUS/OutMatCalc/ that automate the procedure (which we have been doing by hand) and catch any "bad" behavior instances that we have identified. In such instances, the script sets up to restore the IMC state smoothly.
• After some testing and debugging, we managed to get some data for MC2 using ~/c1/scripts/SUS/OutMatCalc/getCrossCouplingData.py

15969   Fri Mar 26 10:25:37 2021   Yehonathan   Update   BHD   SOS assembly

I measured some of the dowel pins we got from McMaster with a caliper. One small pin is 0.093" in diameter and 0.376" in length. The other sampled small pin has the same dimensions. One big pin is 0.187" in diameter and 0.505" in length. The other is 0.187" in diameter and 0.506" in length. The dowels meet our requirements.

15968   Thu Mar 25 18:05:04 2021   gautam   Update   Electronics   Stuffed HV coil drivers received from Screaming Circuits

I think the only parts missing for assembly now are 4 2U chassis. The PA95s need to be soldered on as well (they didn't arrive in time to send to SC). The stuffed boards are stored under my desk. I inspected one board; it looks fine, but of course we will need to run some actual bench tests to be sure.

15967   Thu Mar 25 17:39:28 2021   gautam   Update   Computer Scripts / Programs   Spot position measurement scripts "modernized"

I want to measure the spot positions on the IMC mirrors.
We know that they can't be too far off center. Basically, I did the bare minimum to get these scripts in /opt/rtcds/caltech/c1/scripts/ASS/MC/ running on rossa (python3 mainly). I confirmed that I get some kind of spot measurement from this, but I am not sure of the data quality / calibration to convert the demodulated response into mm of decentering on the MC mirrors. Perhaps it's something the MC suspension team can look into - it seems implausible to me that we are off by 5 mm in PIT and YAW on MC2?

The spot positions I get are (in mm from the center):

   MC1 P       MC2 P       MC3 P       MC1 Y       MC2 Y       MC3 Y
0.640515   -5.149050    0.476649   -0.279035    5.715120   -2.901459

A future iteration of the script should also truncate the number of significant figures per a reasonable statistical error estimation.

Attachment 1: MCdecenter202103251735_mcmirror0.pdf
Attachment 2: MCdecenter202103251735_mcdecenter0.pdf

15966   Thu Mar 25 16:02:15 2021   gautam   Summary   SUS   Repeated measurement of coil Rs & Ls for PRM/BS

Method

Since I am mainly concerned with the actuator part of the OSEM, I chose to do this measurement at the output cables for the coil drivers in 1X4. See the schematic for pin-mapping. There are several parts in between my measurement point and the actual coils, but I figured it's a good check to see whether measurements made from this point yield sensible results. The slow bias voltages were ramped off under damping (to avoid unnecessarily kicking the optics when disconnecting cables), and then the suspension watchdogs were shut down for the duration of the measurement. I used an LCR meter to measure R and L - as prescribed by Koji, the probe leads were shorted and the readback nulled to return 0. Then for R, I corroborated the values measured with the LCR meter against a Fluke DMM (they turned out to be within +/- 0.5 ohms of the value reported by the BK Precision LCR meter, which I think is reasonable).
Result

PRM
Pin 1-9  (UL) / R = 30.6 Ω / L = 3.23 mH
Pin 2-10 (LL) / R = 30.3 Ω / L = 3.24 mH
Pin 3-11 (UR) / R = 30.6 Ω / L = 3.25 mH
Pin 4-12 (LR) / R = 31.8 Ω / L = 3.22 mH
Pin 5-13 (SD) / R = 30.0 Ω / L = 3.25 mH

BS
Pin 1-9  (UL) / R = 31.7 Ω / L = 3.29 mH
Pin 2-10 (LL) / R = 29.7 Ω / L = 3.26 mH
Pin 3-11 (UR) / R = 29.8 Ω / L = 3.30 mH
Pin 4-12 (LR) / R = 29.7 Ω / L = 3.27 mH
Pin 5-13 (SD) / R = 29.0 Ω / L = 3.24 mH

Conclusions

On the basis of this measurement, I see no problems with the OSEM actuators - the wire resistances to the flange seem comparable to the nominal OSEM resistance of ~13 ohms, but this isn't outrageous I guess. But I don't know how to reconcile this with Koji's measurement at the flange - I guess I can't definitively rule out the wire resistance being 30 ohms and the OSEMs being ~1 ohm, as Koji measured. How to reconcile this with the funky PRM actuator measurement? Possibilities, the way I see it, are:
1. The magnets on PRM are weird in some way. Note that the free-swinging measurement for the PRM showed some unexpected features.
2. The imbalance is coming from one part of the drive chain - it could be a busted current buffer, for example.
3. The measurement technique was wrong.

15965   Thu Mar 25 15:31:24 2021   gautam   Update   IOO   WFS servos

The servos are almost certainly not optimal - but we have the IFO sort of working, so before we make any changes, let's make a strong case for them. Once the loop TFs and noises (e.g. the sensing noise reinjection you maybe saw) are fully characterized and a new loop is shown to perform better, then we can make the changes; until then, let's continue using the "nominal" configuration and keep all the WFS loops on. I turned everything back on. BTW, MC2_ASCPIT_IN1 isn't the correct channel to measure the sensing noise re-injection; you need some other sensor, e.g. whether the MC transmission is (de)stabilized. 0-20 Hz is where I expect the WFS is actually measuring above the sensing noise.
15964   Thu Mar 25 14:58:16 2021   Yehonathan   Update   BHD   SOS SmCo magnets Inspection

Redoing the magnet measurement of envelope 1:

Magnet #:  1     2     3     4     5     6     7     8     9     10
B (kG):    2.89  2.82  2.86  2.9   2.86  2.73  2.9   2.88  2.85  2.93

Moving on to inspect and measure envelope 3 (the last one):

Magnet #:  1     2     3     4     5     6     7     8     9     10
B (kG):    2.92  2.85  2.93  2.97  2.9   3.04  2.9   2.92  3     2.92

Magnet #:  11    12    13    14    15    16    17    18    19    20
B (kG):    2.94  2.92  2.92  2.95  3.02  2.91  2.89  2.9   2.86  2.9

Magnet #:  21    22    23    24    25    26    27    28    29    30
B (kG):    2.92  2.9   2.87  2.93  2.85  2.88  2.92  2.9   2.9   2.89

Magnet #:  31    32    33    34    35    36    37    38    39    40
B (kG):    2.83  2.83  2.8   2.94  2.88  2.91  2.9   2.91  2.94  2.88

15963   Thu Mar 25 14:16:33 2021   gautam   Update   CDS   c1auxey assembly

It might be a good idea to configure this box for the new suspension config - modern Satellite Amp, HV coil driver, etc. It's a good opportunity to test the wiring scheme, "cross-connect" type adapters, etc.

Quote: Next, the feedthroughs need to be wired and the channels need to be bench tested.

15962   Thu Mar 25 12:11:53 2021   Yehonathan   Update   CDS   c1auxey assembly

I finished prewiring the new c1auxey Acromag chassis (see attached pictures). I connected all grounds to the DIN rail to save some wiring. The power switches and LEDs work as expected. I configured the DAQ modules using the old Windows machine. I configured the gateway to be 192.168.114.1. The host machine still needs to be set up.

Next, the feedthroughs need to be wired and the channels need to be bench tested.

Attachment 1: 20210325_115500_HDR.jpg
Attachment 2: 20210325_123033.jpg

15961   Thu Mar 25 11:46:31 2021   Paco, Anchal   Update   SUS   MC2 Coil Balancing updates

### Proof-of-principle
• We excited the PIT and YAW DOFs using LOCKIN1 on MC2 on Monday.
• We analyzed this data with a simple analysis explained in the Attachment 1 python notebook (also present at /users/anchal/20210323_AnalyszingCoilActuationBalance/).
• Basically, we tried to estimate the cross coupling in a 2x2 matrix from actuated DOF to sensed DOF, inverted it, and applied it to the output matrix to undo the cross coupling.
• Attachments 2 and 3 show how well we did in undoing the cross coupling.
• The ratio of the 13.5 Hz peaks shows how much coupling is still present.

### Going towards 3x3 coil balancing:
• In a conversation with Rana yesterday, we understood that we can use MC_F data as out-of-loop POS sensing data.
• So today, we repeated the excitation measurements while exciting the POS, PIT and YAW DOFs from LOCKIN1 on MC2 and measuring C1:IOO-MC_F, C1:SUS-MC2_ASCPIT_IN1 and C1:SUS-MC2_ASCPIT_IN2.
• Data from MC_F is converted into units of um using the factor 9.57e-8 um/Hz.
• We changed the excitation amplitude in order to see cross coupling peaks when they were not visible with low excitation.
• The data was measured with the newly calculated input matrix loaded, which from our calculations diagonalizes the sensing matrix of the OSEMs.

### Some major changes:
• We actually found that C1:SUS-MC2_ASCPIT_IN1 showed a broadband increase in noise today (from Monday) by a factor of about 100 in the range 0-20 Hz.
• We were not sure why this changed from our 22nd March measurement.
• We checked whether the gain values in the loops had changed in the last 3 days, but they hadn't.
• Then we realized that the WFS1_PIT and WFS2_PIT switches that we turned ON on Tuesday were the only changes made to the loop.
• We turned C1:IOO-WFS1_PIT_SW1 and C1:IOO-WFS2_PIT_SW1 back OFF. This brought the noise level of C1:SUS-MC2_ASCPIT_IN1 back down to what it was on Monday.
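The peak-ratio figure of merit used above (the ratio of the 13.5 Hz excitation line in the cross-coupled DOF to that in the driven DOF) could be computed along these lines. This is a hedged sketch with toy ASDs, not the actual analysis notebook; the function name and bandwidth are mine.

```python
import numpy as np

def coupling_ratio(freqs, asd_driven, asd_cross, f_exc, df=0.1):
    """Ratio of the excitation-line peak in a cross-coupled DOF to the
    peak in the intentionally driven DOF - a proxy for residual coupling.

    freqs      : frequency vector of the ASDs (Hz)
    asd_driven : ASD of the DOF being excited
    asd_cross  : ASD of the DOF that should ideally not respond
    f_exc      : excitation frequency (13.5 Hz here)
    df         : half-width of the band searched for the peak (Hz)
    """
    band = (freqs > f_exc - df) & (freqs < f_exc + df)
    return asd_cross[band].max() / asd_driven[band].max()

# Toy ASDs: flat noise floor with a line at 13.5 Hz
f = np.linspace(0, 32, 4097)
drv = np.full_like(f, 1e-3)
crs = np.full_like(f, 1e-3)
i = np.argmin(np.abs(f - 13.5))
drv[i], crs[i] = 1.0, 0.05   # 5% residual coupling
print(coupling_ratio(f, drv, crs, 13.5))
```

A ratio near 1 means the "off" DOF responds as strongly as the driven one; a well-balanced output matrix should drive it toward the noise floor.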
Attachment 1: CoilActuationBalancing.ipynb.tar.gz
Attachment 2: MC2_CoilBalancePITnorm_excSamePIT.pdf
Attachment 3: MC2_CoilBalanceYAWnorm_excSameYAW.pdf
Attachment 4: 20210325_IMC_CoilBalance.tar.gz

15960   Wed Mar 24 22:54:49 2021   gautam   Update   LSC   New day, new problems

I thought I'd get started on some of the tests tonight. But I found that this problem had resurfaced. I don't know what's so special about the REFL55 photodiode - as far as I can tell, the other photodiodes at the REFL port are running with comparable light incident on them, similar flat whitening gain, etc. The whitening electronics are known to be horrible because they use the quad LT1125 - but why is only this channel problematic? To describe the problem in detail:
• I had checked the entire chain by putting an AM field on the REFL55 photodiode, and corroborating the pk-to-pk (counts) value measured in CDS with the "nominal" setting of +18 dB flat whitening gain against the voltage measured by a "reference" PD, in this case a fiber-coupled NF1611.
• In the above test, I confirmed that the measured signal was consistent with the value reported by the NF1611.
• So, at least on Friday, the entire chain worked just fine. The PRMI PDH fringes were ~6000 cts-pp in this condition.
• Today, I found that while trying to acquire PRMI lock, the PDH fringes witnessed in REFL55 were saturating the ADC. I lowered the whitening gain to 0 dB (so a factor of 8). Then the PDH fringes were ~20,000 cts-pp. So, overall, the gain of the chain seems to have gone up by a factor of ~25.
• Given my NF1611-based test, the part of the chain I am most suspicious of is the whitening filter. But I don't have a good mechanism that explains this observation. It can't be as simple as the input impedance of the LT1125 being lowered due to internal saturations, because that would have the opposite effect - we would measure a tiny signal instead of a huge one.

I request Koji to look into this, time permitting, tomorrow.
In the slightly longer term, we cannot run the IFO like this - the frequency of occurrence is much too high, and the "fix" seems random to me; why should sweeping the whitening gain fix the problem? There was some suggestion of cutting the PCB trace and putting in a resistor to limit the current draw on the preceding stage, but this PCB is ancient and I believe some traces are buried in internal layers. At the same time, I am guessing it's too much work to completely replace the whitening electronics with the aLIGO-style units. Anyone have any bright ideas?

Anyway, I managed to lock the PRMI (ETMs misaligned) using REFL165 I/Q. Then, instead of using the BS as the MICH actuator, I used the two ITMs (equal magnitude, opposite sign in the LSC output matrix).
• The digital demod phase in this config is different from what is used when the arm cavities are in play (under ALS control). Probably the difference is telling us something about the reflectivity of the arm cavity for the various sideband fields, from which we can extract some useful info about the arm cavity (length, losses, etc.). But that's not the focus here - the correct digital demod phase was 11 degrees. See Attachment #1 for the sensing matrix; I've annotated it with some remarks.
• The signals appear much more orthogonal when actuating on the ITMs. However, I was still only able to null the MICH line sensed in the PRCL sensor to a ratio of 1/5 (while looking at peaks on DTT). I was unable to do better by fine-tuning either the digital demod phase or the relative actuation strength on each ITM.
• The PRCL loop had a UGF of ~120 Hz, the MICH loop ~60 Hz.
• With the PRMI locked in this config, I tried to measure the appropriate loop gain and sign if I were to use the REFL55 photodiode instead of REFL165 - but I didn't have any luck. Unsurprising given the known electronics issues, I guess...

I didn't get around to running any of the other tests tonight; I will continue tomorrow.
Update Mar 26: Attachments #2 and #3 show that there is clearly something wrong with the whitening electronics associated with the REFL55 channels - with the PSL shutter closed (so the only "signal" being digitized should be the electronics noise at the input of the whitening stage), the I and Q channels don't show similar profiles, and moreover are not consistent (the two screenshots are from two separate sweeps). I don't know what to make of the parts of the sweep that don't show the expected "steps". Until ndscope gets a log-scaled y-axis option, we have to live with the poor visualization of the gain steps, which are dB (rather than linearly) spaced. For this particular case, StripTool isn't an option either, because the Q channel has a negative offset, and I opted against futzing with the cabling at 1Y2 to give a small fixed positive voltage instead. I will emphasize that on Friday this problem was not present, because the gain balance of the I and Q channels was good to within 1 dB.

Attachment 1: PRMI3f_noArmssensMat.pdf
Attachment 2: REFL55_whtGainStepping.png
Attachment 3: REFL55_whtGainStepping2.png

15959   Wed Mar 24 19:02:21 2021   Jon   Update   CDS   Front-end testing

This evening I prepared a new 2 TB 3.5" disk to hold a copy of /opt/rtcds and /opt/rtapps from chiara. This is the final piece of setup before model compilation can be tested on the new front-ends. However, chiara does not appear to support hot-swapping of disks, as the disk is not recognized when connected to the live machine. I will await confirmation before rebooting it. The new disk is not currently connected.

15958   Wed Mar 24 15:24:13 2021   gautam   Update   LSC   Notes on tests

For my note-taking:
1. Lock PRMI with ITMs as the MICH actuator. Confirm that the MICH-->PRCL contribution cannot be nulled. ✅ [15960]
2. Lock PRMI on REFL165 I/Q. Check if the transition can be made smoothly to (and from?) REFL55 I/Q.
3. Lock PRMI. Turn sensing lines on.
Change the alignment of the PRM / BS and see if we can change the orthogonality of the sensing.
4. Lock PRMI. Put a razor blade in front of an out-of-loop photodiode, e.g. REFL11 or REFL33. Try a few different masks (vertical half / horizontal half and L/R permutations) and see if the orthogonality (or lack thereof) is mask-dependent.
5. Double-check the resistance/inductance of the PRM OSEMs by measuring at 1X4 instead of the flange. ✅ [15966]
6. Check MC spot centering.

If I missed any of the tests we discussed, please add them here.

15957   Wed Mar 24 09:23:49 2021   Paco   Update   SUS   MC3 new Input Matrix

[Paco]
• Found IMC locked upon arrival.
• Loaded the newest MC3 input matrix coefficients using /scripts/SUS/InMatCalc/writeMatrix.py after unlocking the MC and disabling the watchdog.
• Again, the sens signals started increasing after the WD was re-enabled with the new input matrix, so I manually tripped it and restored the old matrix; recovered MC lock.
• Something is still off with this input matrix that makes the MC3 loop unstable.

15956   Wed Mar 24 00:51:19 2021   gautam   Update   LSC   Schnupp asymmetry

I used the Valera technique to measure the Schnupp asymmetry to be ≈ 3.5 cm; see Attachment #1. The data points are points, and the zero crossing is estimated using a linear fit. I repeated the measurement 3 times for each arm to see if I get consistent results - seems like I do. Subtle effects like possible differential detuning of each arm cavity (since the measurement is done one arm at a time) are not included in the error analysis, but I think it's not controversial to say that our Schnupp asymmetry has not changed by a huge amount from past measurements. Jamie set a pretty high bar with his plot, which I've tried to live up to.

Attachment 1: Lsch.pdf

15955   Tue Mar 23 09:16:42 2021   Paco, Anchal   Update   Computers   Power cycled C1PSL; restored C1PSL

So actually, it was the C1PSL channels that had died.
We did the following to get them back:

• We went to this page and tried the telnet procedure. But it was unable to find the host.
• So we followed the next advice. We went to the 1X1 rack and manually hard shut off the C1PSL computer by holding down the power button until the LEDs went off.
• We waited for 5-7 seconds and switched it back on.
• By the time we were back in the control room, the C1PSL channels were back online.
• The mode cleaner however was struggling to keep the lock. It was going in and out of lock.
• So we followed the next advice and did a burt restore, which ran the following command:
burtwb -f /opt/rtcds/caltech/c1/burt/autoburt/snapshots/2021/Mar/22/17:19/c1psl.snap -l /tmp/controls_1210323_085130_0.write.log -o /tmp/controls_1210323_085130_0.nowrite.snap -v
• Now the mode cleaner was locked, but we found that the input switches of the C1IOO-WFS1_PIT and C1IOO-WFS2_PIT filter banks were off. This meant that only the YAW sensors were in loop in the lock.
• We went back in dataviewer and checked when these channels were shut down. See attachments for time series.
• It seems this happened yesterday, March 22nd, near 1:00 pm (20:00:00 UTC). We can't find any mention of anyone else doing it on the elog, and we left by 12:15 pm.
• So we shut the PSL shutter (C1:PSL-PSL_ShutterRqst) and switched off the MC autolocker (C1:IOO-MC_LOCK_ENABLE).
• Switched on C1:IOO-WFS1_PIT_SW1 and C1:IOO-WFS2_PIT_SW1.
• Turned back on the PSL shutter (C1:PSL-PSL_ShutterRqst) and the MC autolocker (C1:IOO-MC_LOCK_ENABLE).
• The mode cleaner locked back easily and is now keeping lock consistently. Everything looks normal.

Attachment 1: MCWFS1and2PITYAW.pdf
Attachment 2: MCWFS1and2PITYAW_Zoomed.pdf
15954   Mon Mar 22 19:07:50 2021 Paco, AnchalUpdateSUSTrying coil actuation balance

We found that the following protocol works for changing the input matrices to new matrices:

• Shut the PSL shutter C1:PSL-PSL_ShutterRqst. Switch off the IMC autolocker C1:IOO-MC_LOCK_ENABLE.
• Switch off the watchdog, C1:SUS-MC1_LATCH_OFF.
• Update the new matrix. (In the case of MC1, we need to change the sign of C1:SUS-MC1_SUSSIDE_GAIN for the new matrix.)
• Switch the watchdog back on, which enables all the coil outputs. Confirm that the optic is damped with just the OSEM sensors.
• Switch on the IMC autolocker C1:IOO-MC_LOCK_ENABLE and open the PSL shutter C1:PSL-PSL_ShutterRqst.

We repeated this for MC2 as well and were able to lock. However, we could not do the same for MC3. It was getting unstable as soon as the cavity was locked, i.e. the WFS were making the lock unstable. The instability was different in different attempts, but we didn't try more times as we had to go.

### Coil actuation balancing:

• We set the LOCKIN1 and LOCKIN2 oscillators at 10.5 Hz and 13.5 Hz with an amplitude of 10 counts.
• We wrote PIT, YAW and butterfly actuation vectors (see attached text files used for this) on LOCKIN1 and LOCKIN2 for MC1.
• We measured C1:SUS-MC1_ASCYAW_IN1 and C1:SUS-MC1_ASCPIT_IN1 and compared them against the case when no excitation was fed.
• We repeated the above steps for MC2, except that we did not use LOCKIN2. LOCKIN2 was found to already be on at an oscillator frequency of 0.03 Hz with an amplitude of 500 counts, and was fed to all coils with a gain of 1 (so it was effectively moving the position DOF at 0.03 Hz). When we changed it, it came back on after we turned on the autolocker, so we guess this must be due to some background script and must be important, so we did not make any changes here. But what is it for?
• We have gotten some good data for MC1 and MC2 to ponder upon next.
• MC1 showed no cross coupling at all, while MC2 showed significant cross coupling between PIT and YAW.
• Both MC1 and MC2 did not show any cross coupling between butterfly actuation and the PIT/YAW DOFs.

### On another note, the IOO channels died!

• In front of us, the MEDM channels starting with C1:IOO just died. See Attachment 8.
• We are not sure why that happened, but we have reported everything we did up here.
• This happened around the time we were ready to switch the IMC autolocker back on and open the shutter. But now these channels are dead.
• All optics were restored with the old matrices and settings and are damped in good condition as of now.
• The IMC should lock back as soon as someone can restart the EPICS channels and switch on C1:IOO-MC_LOCK_ENABLE and C1:PSL-PSL_ShutterRqst.

Attachment 1: 20210322_MC1_CoilBalancePIT.pdf
Attachment 2: 20210322_MC1_CoilBalanceYAW.pdf
Attachment 3: 20210322_MC1_CoilBalanceBUTT.pdf
Attachment 4: 20210322_MC2_CoilBalancePIT.pdf
Attachment 5: 20210322_MC2_CoilBalanceYAW.pdf
Attachment 6: 20210322_MC2_CoilBalanceBUTT.pdf
Attachment 7: 20210322_IMC_CoilBalance.tar.gz
Attachment 8: image-e6019a14-9cf3-45f7-8f2c-cc3d7ad1c452.jpg
15953   Mon Mar 22 16:29:17 2021 gautamUpdateASCSome prep for AS WFS commissioning

1. Added rough cts2mW calibration filters to the quadrants, see Attachment #1. The number I used is: 0.85 A/W (InGaAs responsivity) * 500 V/A (RF transimpedance) * 10 V/V (IQ demod conversion gain) * 1638.4 cts/V (ADC calibration).
2. Recovered FPMI locking. Once the arms are locked on POX / POY, I lock MICH using AS55_Q as a sensor and BS as an actuator with ~80 Hz UGF.
3. Phased the digital demod phases such that while driving a sine wave in ETMX PIT, I saw it show up only in the "I" quadrant signals, see Attachment #2.

The idea is to use the FPMI config, which is more easily accessed than the PRFPMI, to set up some tests, measure some TFs etc., before trying to commission the more complicated optomechanical system.

Attachment 1: AS_WFS_head.png
Attachment 2: WFSquads.pdf
15952   Mon Mar 22 15:10:00 2021 ranaUpdateSUSTrying coil actuation balance

There's an integrator in the MC WFS servos, so you should never be disabling the ASC inputs in the suspensions. Disabling 1 leg in a 6 DOF MIMO system is like a kitchen table with 1 leg removed. Just diagnose your suspension stuff with the cavity unlocked.
You should be able to see the effect by characterizing the damping loops / cross-coupling.
15951   Mon Mar 22 11:57:21 2021 Paco, AnchalUpdateSUSTrying coil actuation balance

[Paco, Anchal]

• For MC coil balancing we will use the ASC (i.e. WFS) error signals, since there are no OPLEV inputs (are there OPLEVs at all?).

### Test MC1

• Using the SUS screen LockIns, the plan is to feed excitation(s) through the coil outputs and look at the ASC(Y/P) error signals.
• A diaggui xml template was saved in /users/Templates/SUS/MC1-actDiag.xml, which was based on /users/Templates/SUS/ETMY-actDiag.xml.
• Before running the measurement, we of course wanted to plug in our input matrix, so we ran /scripts/SUS/InMatCalc/writeMatrix.py, only to find that it tripped the MC1 watchdog.
• The SIDE input seems to have the largest rail, but we just followed the procedure of temporarily increasing the WD max! threshold to allow the damping action and then restoring it.
• This happened because in the latest iteration of our code, we followed advice from the matlab code to ensure the SIDE OSEM -> SIDE DOF matrix element remains positive, but we found out that the MC1 SIDE gain (C1:SUS-MC1_SUSSIDE_GAIN) was set to -8000 (instead of a positive value like all other suspensions).
• So we decided to try our new input matrix with a positive gain value of 8000 at C1:SUS-MC1_SUSSIDE_GAIN, and we were able to stabilize the optic and acquire lock, but...
• We saw that the WFS YAW DOF started accumulating an offset and started disturbing the lock (much like last Friday). We disabled the ASC input button (C1:SUS-MC1_ASCYAW_SW2).
• This made the lock stable and the IMC autolocker was able to lock. But the offset kept on increasing (see Attachment 1).
• After some time, the offset began to settle exponentially toward a steady-state value, which was around -3000.
• We wrote back the old matrix values and changed C1:SUS-MC1_SUSSIDE_GAIN back to -8000. But the ASCYAW offset remained at the same position.
We're leaving it disabled again as we don't know how to fix this. Hopefully, it will organically come back to a small value later in the day like last time (Gautam just reenabled the ASCYAW input and it worked).

### Test MC3

• Defeated by MC1, we moved to MC3.
• Here, the gain value for C1:SUS-MC3_SUSSIDE_GAIN was already positive (+500), so it could directly take our new matrix.
• We switched off the watchdog, loaded the new matrix, and switched the watchdog back on.
• The IMC lock was slightly disrupted but remained locked. There was no unusual activity in the WFS sensor values. However, we saw that the SIDE coil output was slowly accumulating an offset.
• So we switched off the watchdog before it could trip itself, wrote back the old matrix, and reinstated the status quo.
• This suggests we need to carefully look back at our latest changes to the normalization, and produce new input matrices which keep the system stable rather than just working on paper with offline data.

Attachment 1: 210322_MC1_ASCY.pdf
Attachment 2: NewandOldMatrices.tar.gz
15950   Sun Mar 21 19:31:29 2021 ranaSummaryElectronicsRTL-SDR for monitoring RF noise / interference

When we're debugging our RF system, either due to weird demod phases, or low SNR, or non-stationary noise in the PDH signals, it's good to have some baseline measurements of the RF levels in the lab. I got this cheap USB dongle (RTL-SDR.COM) that seems to be capable of this and also has a bunch of open source code on GitHub to support it. It also comes with an SMA coax and a rabbit-ear antenna with a flexi-tripod. I used CubicSDR, which has free .dmg downloads for macOS. It would be cool to have a student write some python code (perhaps starting with RTL_Power) for this to let us hop between the different RF frequencies we care about and monitor the power in a small band around them.
15949   Fri Mar 19 22:24:54 2021 gautamUpdateLSCPRMI investigations: what IS the matrix??

I did all these checks today.
 Quote: I will check (i) REFL55 transimpedance, (ii) cable loss between AP table and 1Y2 and (iii) is the beam well centered on the REFL55 photodiode.

1. The transimpedance was measured to be ~420 ohms at 55 MHz (-4.3 dB relative to the assumed 700 V/A of the NF1611), so close to what I measured in June (the data download didn't work apparently, so I don't have a plot, but it can readily be repeated). The DC levels also checked out - with 20 mA drive current for the Jenne laser, I measured ~2.3 V on the NF1611 (10 kohm DC transimpedance) vs ~13 mV on the DC output of the REFL55 PD (50 ohm DC transimpedance).
2. Time domain confirmation of the above statement is seen in Attachment #1. The Agilent was used to drive the Jenne laser with a 0 dBm RF signal @ 55 MHz. Ch1 (yellow) is the REFL55 PD output, Ch2 (blue) is the NF1611 RFPD, measured at the AP table (sorry for the confusing V/div setting).
3. Re-connected the cabling at the AP table, and measured the signal at 1Y2 using the scope Rana conveniently left there, see Attachment #2. Though the two scopes are different, the cable+connector loss estimated from the Vpp of the signal at the AP table vs that at 1Y2 is 1.5 dB, which isn't outrageous I think.
4. For the integrated test, I left the AM laser incident on the REFL55 photodiode, reconnected all the cabling to the CDS system, and viewed the traces on ndscope, see Attachment #3. Again, I think all the numbers are consistent.
• The REFL55 demod board has an overall conversion gain (including the x10 gain of the daughter board preamp) of ~5 V I/F per 1 V RF.
• There is a flat 18 dB whitening gain.
• The digitized signal was ~13000 ctspp - assuming 3276.8 cts/V, that's ~4 Vpp. Undoing the flat whitening gain and the conversion efficiency, I get 13000 / 3276.8 / (10^(18/20)) / 5 ~ 100 mVpp, which is in good agreement with Attachment #3 (pardon the thin traces, I didn't realize it looked so bad until I closed everything).
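The back-of-the-envelope unwinding of the digitization chain can be checked numerically; a quick sketch using only the gain values quoted in this entry (the variable names are mine):

```python
# Undo the digitization chain to recover the RF-equivalent signal at the
# demod board input, using the gains quoted above.
adc_cts_per_volt = 3276.8   # ADC calibration [cts/V]
whitening_gain_db = 18      # flat whitening gain [dB]
demod_conversion = 5        # demod board conversion gain [V I/F per V RF]
digitized_ctspp = 13000     # observed peak-to-peak counts on ndscope

v_pp = (digitized_ctspp / adc_cts_per_volt
        / 10 ** (whitening_gain_db / 20)
        / demod_conversion)
print(f"{v_pp * 1e3:.0f} mVpp")  # ~100 mVpp, consistent with Attachment #3
```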
So it would seem that there is nothing wrong with the sensing electronics. I also think we can rule out any funkiness with the modulation depths, since they have been confirmed with multiple different measurements. One thing I checked was the splitting ratios on the AP table. Jenne's diagram is still accurate (assuming the components are labelled correctly). Let's assume 0.8 W makes it through the IMC to the PRM - then I would expect, according to the linked diagram, 0.8 W * 0.8 * (1-5.637e-2) * 0.8 * 0.1 * 0.5 * 0.9 ~ 22 mW to make it onto the REFL55 PD. With the PRM aligned and the beam centered on the PD (using the DC monitor, but I also looked through an IR viewer, it looked pretty well centered), I measured a 500 mV DC level. Assuming 50 ohm DC transimpedance, that's 500 / 50 / 0.8 ~ 12.5 mW of power on this photodiode, which, while consistent with what's annotated on Jenne's diagram, is ~50% off from expectation. Is the uncertainty in the Faraday transmission and IMC transmission enough to account for this large deviation?

If we want more optical gain, we'd have to put more light on this PD. I suppose we could have ~10x the power, since that's what is on IMC REFL when the MC is unlocked? If we want a x100 increase in optical gain, we'd also have to increase the transimpedance by 10. I'll double check the simulation, but I'm inclined to believe that the sensing electronics are not to blame.

Unconnected to this work, but I feel like I'm flying blind without the wall StripTool traces, so I restored them on zita (ran /opt/rtcds/caltech/c1/scripts/general/startStrip.sh).

Attachment 1: IMG_9140.jpg
Attachment 2: IMG_9141.jpg
Attachment 3: REFL55.png
15948   Fri Mar 19 19:15:13 2021 JonUpdateCDSc1auxey assembly

Today I helped Yehonathan get started with assembly of the c1auxey (slow controls) Acromag chassis. This will replace the final remaining VME crate. We cleared the far left end of the electronics bench in the office area, as discussed on Wed.
The high-voltage supplies and test equipment were moved together to the desk across the aisle. Yehonathan has begun assembling the chassis frame (it required some light machining to mount the DIN rails that hold the Acromag units). Next, he will wire up the switches, LED indicator lights, and Acromag power connectors following the documented procedure.
15947   Fri Mar 19 18:14:56 2021 JonUpdateCDSFront-end testing

### Summary

Today I finished setting up the subnet for new FE testing. There are clones of both fb1 and chiara running on this subnet (pictured in Attachment 2), which are able to boot FEs completely independently of the Martian network. I then assembled a second FE system (Supermicro host and IO chassis) to serve as c1sus2, using a new OSS host adapter card received yesterday from LLO. I ran the same set of PCIe hardware/driver tests as was done on the c1bhd system in 15890. All the PCIe tests pass.

### Subnet setup

For future reference, below is the procedure used to configure the bootserver subnet.

• Select "Network" as highest boot priority in FE BIOS settings.
• Connect all machines to the subnet switch. Verify the fb1 and chiara eth0 interfaces are enabled and assigned the correct IP address.
• Add c1bhd and c1sus2 entries to chiara:/etc/dhcp/dhcpd.conf:

host c1bhd {
  hardware ethernet 00:25:90:05:AB:46;
  fixed-address 192.168.113.91;
}
host c1sus2 {
  hardware ethernet 00:25:90:06:69:C2;
  fixed-address 192.168.113.92;
}

• Restart the DHCP server to pick up the changes: sudo service isc-dhcp-server restart
• Add c1bhd and c1sus2 entries to fb1:/etc/hosts:
192.168.113.91    c1bhd
192.168.113.92    c1sus2
• Power on the FEs. If all was configured correctly, the machines will boot.
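As a sanity check on the stanzas above, the MAC-to-IP assignments can be parsed back out of the dhcpd.conf text. This is a small illustrative script, not part of the actual setup; note I've written the second stanza as c1sus2, matching the .92 address in fb1:/etc/hosts:

```python
import re

# dhcpd.conf host stanzas from above (second host assumed to be c1sus2,
# since 192.168.113.92 maps to c1sus2 in fb1:/etc/hosts)
conf = """
host c1bhd  { hardware ethernet 00:25:90:05:AB:46; fixed-address 192.168.113.91; }
host c1sus2 { hardware ethernet 00:25:90:06:69:C2; fixed-address 192.168.113.92; }
"""

pattern = re.compile(
    r"host\s+(\S+)\s*\{\s*hardware ethernet\s+([0-9A-Fa-f:]+);\s*"
    r"fixed-address\s+([\d.]+);")
leases = {name: (mac, ip) for name, mac, ip in pattern.findall(conf)}
print(leases["c1bhd"][1])   # 192.168.113.91
print(leases["c1sus2"][1])  # 192.168.113.92
```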

### C1SUS2 I/O chassis assembly

• Installed in host:
• One Stop Systems PCIe x4 host adapter (new card sent from LLO)
• Installed in chassis:
• Channel Well 250 W power supply (replaces aLIGO-style 24 V feedthrough)
• Timing slave
• Contec DIO-1616L-PE module for timing control

Next time, on to RTCDS model compilation and testing. This will require first obtaining a clone of the /opt/rtcds disk hosted on chiara.

Attachment 1: image_72192707_(1).JPG
Attachment 2: image_50412545.JPG
15946   Fri Mar 19 15:31:56 2021 AidanUpdateComputersActivated MATLAB license on donatella

15945   Fri Mar 19 15:26:19 2021 AidanUpdateComputersActivated MATLAB license on megatron

15944   Fri Mar 19 11:18:25 2021 gautamUpdateLSCPRMI investigations: what IS the matrix??

From Finesse simulation (and also analytic calcs), the expected PRCL optical gain is ~1 MW/m (there is a large uncertainty, let's say a factor of 5, because of unknown losses e.g. PRC, Faraday, steering mirrors, splitting fractions on the AP table between the REFL photodiodes). From the same simulation, the MICH optical gain in the Q-phase signal is expected to be a factor of ~10 smaller. I measured the REFL55 RF transimpedance to be ~400 ohms in June last year, which was already a little lower than the previous number I found on the wiki (Koji's?) of 615 ohms. So we expect, across the ~3nm PRCL linewidth, a PDH horn-to-horn voltage of ~1 V (equivalently, the optical gain in units of V/m for PRCL is ~0.3 GV/m).

In the measurement, the MICH gain is indeed ~x10 smaller than the PRCL gain. However, the measured optical gain (~0.1GV/m, but this is after the x10 gain of the daughter board) is ~10 times smaller than what is expected (after accounting for the various splitting fractions on the AS table between REFL photodiodes). We've established that the modulation depth isn't to blame I think. I will check (i) REFL55 transimpedance, (ii) cable loss between AP table and 1Y2 and (iii) is the beam well centered on the REFL55 photodiode.

Basically, with the 400ohm transimpedance gain, we should be running with a whitening gain of 0dB before digitization as we expect a signal of O(1V). We are currently running at +18dB.
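The chain of numbers above can be tied together in a few lines; a sketch where the 0.8 A/W responsivity is my assumption (it is consistent with the DC power estimates elsewhere in this thread):

```python
# Expected REFL55 PDH signal from the Finesse-derived optical gain
optical_gain_W_per_m = 1e6    # ~1 MW/m PRCL optical gain (factor ~5 uncertain)
responsivity_A_per_W = 0.8    # assumed InGaAs responsivity at 1064 nm
transimpedance_ohm = 400      # measured REFL55 RF transimpedance

gain_V_per_m = optical_gain_W_per_m * responsivity_A_per_W * transimpedance_ohm
print(f"{gain_V_per_m / 1e9:.2f} GV/m")        # ~0.3 GV/m, as quoted above

linewidth_m = 3e-9                             # ~3 nm PRCL linewidth
print(f"{gain_V_per_m * linewidth_m:.2f} V")   # ~1 V horn-to-horn
```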

 Quote: Then I put the RF signal directly into the scope and saw that the 55 MHz signal is ~30 mVpp into 50 Ohms. I waited a few minutes with triggering to make sure I was getting the largest flashes. Why is the optical/RF signal so puny? This is ~100x smaller than I think we want... it's OK to saturate the RF stuff a little during lock acquisition as long as the loop can suppress it so that the RMS is < 3 dBm in the steady state.
15943   Fri Mar 19 10:49:44 2021 Paco, AnchalUpdateSUSTrying coil actuation balance

[Paco, Anchal]

• We decided to try out the coil actuation balancing after seeing some posts from Gautam about the same on PRM and ETMY.
• We used diaggui to send a swept sine excitation signal to C1:SUS-MC3_ULCOIL_EXC and read it back at C1:SUS-MC3_ASCPIT_IN1. The idea was to create transfer function measurements similar to 15880.
• We first tried taking the transfer function with excitation amplitudes of 1, 10, 50, 200 with damping loops on (swept from 10 to 100 Hz logarithmically in 20 points).
• We found no meaningful measurement and looked like we were just measuring noise.
• We concluded that it is probably because our damping loops are damping all the excitation down.
• So we decided to switch off damping and retry.
• We switched off: C1:SUS-MC3_SUSPOS_SW2 , C1:SUS-MC3_SUSPIT_SW2, C1:SUS-MC3_ASCPIT_SW2, C1:SUS-MC3_ASCYAW_SW2, C1:SUS-MC3_SUSYAW_SW2, and C1:SUS-MC3_SUSSIDE_SW2.
• We repeated the above measurements, going up in excitation amplitude as 1, 10, 20. We saw the oscillation going out of UL_COIL, but the swept sine couldn't measure any meaningful transfer function to C1:SUS-MC3_ASCPIT_IN1. So we decided to just stop. We are probably doing something wrong.
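If diaggui keeps returning noise, the same kind of estimate can be cross-checked offline from downloaded excitation/response time series. A minimal numpy sketch (my own illustration, not one of our scripts) of a transfer function estimate from averaged cross-spectra:

```python
import numpy as np

def tf_estimate(x, y, nseg=8):
    """Estimate H(f) = Pyx / Pxx by averaging over segments (Welch-style)."""
    n = len(x) // nseg
    w = np.hanning(n)
    X = np.fft.rfft(x[:n * nseg].reshape(nseg, n) * w)
    Y = np.fft.rfft(y[:n * nseg].reshape(nseg, n) * w)
    pxx = np.mean(np.abs(X) ** 2, axis=0)   # averaged excitation power spectrum
    pyx = np.mean(Y * np.conj(X), axis=0)   # averaged cross-spectrum
    return pyx / pxx

# quick self-check with a known plant: a flat gain of 0.25
rng = np.random.default_rng(1)
x = rng.standard_normal(2 ** 14)   # broadband "excitation"
y = 0.25 * x                       # "response"
H = tf_estimate(x, y)
print(np.abs(H[10]))               # ~0.25 at any bin
```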

### Trying to go back to same state:

• We switch on: C1:SUS-MC3_SUSPOS_SW2 , C1:SUS-MC3_SUSPIT_SW2, C1:SUS-MC3_ASCPIT_SW2, C1:SUS-MC3_ASCYAW_SW2, C1:SUS-MC3_SUSYAW_SW2, and C1:SUS-MC3_SUSSIDE_SW2.
• But C1:SUS-MC3_ASCYAW_INMON had accumulated an offset of about 600 and was disrupting the alignment. We switched off C1:SUS-MC3_ASCYAW_SW2, hoping the offset would go away once the optic was just damped with the OSEM sensors, but it didn't.
• Even after minutes, the offset in C1:SUS-MC3_ASCYAW_INMON kept increasing and crossed the 2000-count limit set in the C1:IOO-MC3_YAW filter bank.
• We tried to unlock the IMC and lock it back again but the offset still persisted.
• We tried to add a bias in the YAW DOF by increasing C1:SUS-MC3_YAW_OFFSET; while this was able to somewhat reduce the WFS C1:SUS-MC3_ASCYAW_INMON offset, it was misaligning the optic and the lock was lost. So we retracted the bias back to 0.
• We tried to track back where the offset is coming from. In C1IOO_WFS_MASTER.adl, we opened the WFS2_YAW filter bank to see if the sensor is indeed reading the increasing offset.
• It is quite weird that C1:IOO-WFS2_YAW_INMON is just oscillating, but the output of this WFS2_YAW filter bank is slowly accumulating an offset.
• We tried setting the gain to zero and back to 0.1 to see if some holding function was causing it, but that was not the case. The output went back to a high negative offset and kept increasing.
• We don't know what else to do. Only this one WFS YAW output is increasing, everything else is at normal level with no increasing offset or peculiar behavior.
• We are leaving C1:SUS-MC3_ASCYAW_SW2 off as it is disrupting the IMC lock.

[Jon walked in, asked him for help]

• Jon suggested to do burt restore on IOO channels.
• We used (selected through burtgooey): burtwb -f /opt/rtcds/caltech/c1/burt/autoburt/snapshots/2021/Mar/19/08:19/c1iooepics.snap -l /tmp/controls_1210319_113410_0.write.log -o /tmp/controls_1210319_113410_0.nowrite.snap -v
• No luck, the problem persists.
15942   Thu Mar 18 21:37:59 2021 ranaUpdateLSCPRMI investigations: what IS the matrix??
• Locked PRMI several times after Gautam's setup. Easy w IFO CONFIG screen
• tuned up alignment
• Still POP22_I doesn't go above ~111, so not triggering the loops. Lowered triggers to 100 (POP22 units) and it locks fine now.
• Ran update on zita, and now it lost its mounts (and maybe its mind). Zita needs some love to recover the StripTool plots
• Put the $600 eBay TDS3052 near the LSC rack and tried to look at the RF power, but found lots of confusing information. Is there really an RF monitor in this demod board or was it disconnected by a crazy Koji? I couldn't see any signal above a few mV.
• Put a 20 dB coupler in line with the RF input and saw zip. Then I put the RF signal directly into the scope and saw that the 55 MHz signal is ~30 mVpp into 50 Ohms. I waited a few minutes with triggering to make sure I was getting the largest flashes. Why is the optical/RF signal so puny? This is ~100x smaller than I think we want... it's OK to saturate the RF stuff a little during lock acquisition as long as the loop can suppress it so that the RMS is < 3 dBm in the steady state.
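For reference, the observed Vpp converts to dBm as follows (a quick sketch, assuming a sinusoidal signal into the 50 ohm load stated above):

```python
import math

def vpp_to_dbm(vpp, r_load=50.0):
    """Peak-to-peak voltage of a sine into r_load -> power in dBm."""
    v_rms = vpp / (2 * math.sqrt(2))
    p_watts = v_rms ** 2 / r_load
    return 10 * math.log10(p_watts / 1e-3)

print(f"{vpp_to_dbm(30e-3):.1f} dBm")  # ~ -26.5 dBm for the 30 mVpp seen here
```

So the observed signal is roughly 30 dB below the ~3 dBm steady-state target mentioned above.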
Attachment 1: PXL_20210319_045925024.jpg
15941   Thu Mar 18 18:06:36 2021 gautamUpdateElectronicsModified Sat Amp and Coil Driver

I uploaded the annotated schematics (to be more convenient than the noise analysis notes linked from the DCC page) for the HAM-A coil driver and Satellite Amplifier.

15940   Thu Mar 18 13:12:39 2021 gautamUpdateComputer Scripts / ProgramsOmnigraffle vs draw.io

What is the advantage of Omnigraffle over draw.io? The latter also has a desktop app, and for creating drawings seems to have all the functionality that Omnigraffle has, see for example here. draw.io doesn't require a license, and I feel this is a much better tool for collaborative artwork. I really hate that I can't even open my old Omnigraffle diagrams now that I no longer have a license.

Just curious if there's some major drawback(s), not like I'm making any money off draw.io.

 Quote: After Anchal left for his test, I took the time to set up the iMAC station so that Stephen (and others) can remote desktop into it to use Omnigraffle.
15939   Thu Mar 18 12:46:53 2021 ranaUpdateSUSTesting of new input matrices with new data

Good Enough! Let's move on with output matrix tuning. I will talk to you guys about it privately so that the whole world doesn't learn our secret, and highly sought after, actuation balancing.

I suspect that changing the DC alignment of the SUS changes the required input/output matrix (since changes in the magnet position w.r.t. the OSEM head change the sensing cross-coupling and the actuation gain), so we want to make sure we do all this with the mirror at the correct alignment.

15938   Thu Mar 18 12:35:29 2021 ranaUpdatesafetyDoor to outside from control room was unlocked

I think this is probably due to the safety tour yesterday. I believe Jordan showed them around the office area and C&B. Not sure why they left through the control room.

 Quote: I came into the lab a few mins ago and found the back door open. I closed it. Nothing obvious seems amiss. Caltech security periodically checks if this door is locked but it's better if we do it too if we use this door for entry/exit.

15937   Thu Mar 18 09:18:49 2021 Paco, AnchalUpdateSUSTesting of new input matrices with new data

[Paco, Anchal]

Since the new generated matrices were created for the measurement made last time, they are of course going to work well for it. We need to test with new independent data to see if it works in general.

• We have run scripts/SUS/InMatCal/freeSwingMC.py for 1 repetition and a free-swinging duration of 1050 s in tmux session FreeSwingMC on Rossa. Started at GPS: 1300118787.
• Thu Mar 18 09:24:57 2021 : The script ended successfully. IMC is locked back again. Killing the tmux session.
• Attached are the results of the 1-kick test: the time series data and the ASDs of the DOFs, calculated using both the existing input matrix and our calculated input matrix.
• The existing one was already pretty good, except for maybe the side DOF, which was improved by our diagonalization.
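For context, the offline processing amounts to applying a candidate input matrix to the OSEM time series and comparing ASDs of the resulting DOFs. A toy numpy sketch (the matrix values, sample rate, and mode frequency here are invented for illustration):

```python
import numpy as np

fs = 256                                 # sample rate [Hz], illustrative
t = np.arange(0, 64, 1 / fs)
rng = np.random.default_rng(2)

# fake free-swing data: two OSEMs see the same 0.95 Hz mode, opposite sign
osem = 0.1 * rng.standard_normal((5, t.size))
mode = np.sin(2 * np.pi * 0.95 * t)
osem[0] += mode
osem[1] -= mode

# a made-up input matrix whose first row takes the differential combination
in_mat = np.eye(5)
in_mat[0, :2] = [0.5, -0.5]
dofs = in_mat @ osem                     # (N_dof x N_samples)

# ASD of the first DOF: the 0.95 Hz resonance should dominate
asd = np.abs(np.fft.rfft(dofs[0])) / np.sqrt(t.size * fs)
freqs = np.fft.rfftfreq(t.size, 1 / fs)
print(f"peak at {freqs[np.argmax(asd)]:.2f} Hz")
```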

[Paco]

After Anchal left for his test, I took the time to set up the iMAC station so that Stephen (and others) can remote desktop into it to use Omnigraffle. For this, I enabled the remote login and remote management settings under "Sharing" in "System Settings". These two should allow authenticated ssh-ing and remote-desktopping respectively. The password is the same that's currently stored in the secrets.

Quickly tested using my laptop (OS:linux, RDP client = remmina + VNC protocol) and it worked. Hopefully Stephen can get it to work too.

Attachment 1: MC_Optics_Kicked_Time_Series_1.pdf
Attachment 2: TEST_Input_Matrix_Diagonalization.pdf
15936   Thu Mar 18 07:02:27 2021 KojiUpdateLSCREFL11 demod retrofitting

Attachment 1: Transfer Functions

The original circuit had a gain of ~20 and a phase delay of ~1 deg at 10 kHz, while the new CH-I and CH-Q have phase delays of 3 deg and 2 deg, respectively.

Attachment 2: Output Noise Levels

The AD797 circuit had higher noise at low frequency and better noise levels at high frequency. Each TLE2027 circuit was tuned to eliminate the instability and shows a better noise level compared to the low-frequency spectrum of the AD797 version.

RXA: AD797, all hail the op-amps ending with 27!

Attachment 1: TFs.pdf
Attachment 2: PSD.pdf
15935   Thu Mar 18 01:12:31 2021 gautamUpdateLSCPRFPMi
1. Integrated >1 hour of RF-only control at high circulating powers tonight.
• All of the locklosses were due to me typing a wrong number / turning on the wrong filter.
• So the lock seems pretty stable, at least on the 20 minute timescale.
• No idea why given the various known broken parts.
2. Did a bunch of characterization.
• DARM OLTF - Attachment #1. The reference is when DARM is under ALS control.
• CARM OLTF - Attachment #2. Seems okay.
• Sensing matrix - Attachment #3. The CARM and DARM phases seem okay. Maybe the CARM phase can be tuned a bit with the delay line, but I think we are within 10 degrees.
3. TRX/TRY between 300-400, with large fluctuations mostly angular. So PRG ~17-22, to answer Koji's question in the meeting today.
• This is similar to what I had before the vent of Sep 2020.
• Not surprising to me, since I claim that we are in the regime where the recycling gain is limited by the flipped folding mirrors.
4. Tried to tweak the ASC (QPD only) by looking at the step responses, but I could never get the loop gains such that I could close an integrator on all the loops.

I need to think a little bit about the ASC commissioning strategy. On the positive side

1. REFL11 board seems to perform at least as well as before.
2. ALS performance made me (as Pep would say), so so happy.
3. Whole lock acquisition sequence takes ~5 mins if the PRMI catches lock quickly (5/7 times tonight).
4. Process seems repeatable.

1. How to get the AS WFS in the picture?
2. What does the (still) crazy sensing matrix mean? I think it's not possible to transfer vertex control to 1f signals with this kind of sensing.
3. What does it mean that the PRM actuation seems to work, even though the coils are imbalanced by a factor of 3-5, and the coil resistances read out <2 ohms???
4. What's going on at the ALS-->CARM transition? The ALS noise is clearly low enough that I can sit inside the CARM linewidth. Yet, there seems to be some offset between what ALS thinks is the resonant point, and what the REFL11 signal thinks is the resonant point. I am kind of able to "power through" this conflict, but the IMC error point (=AO path) is not very happy during the transition. It worked 8/8 times tonight, but would be good to figure out how to make this even more robust.
Attachment 1: DARM_OLTF_20210317.pdf
Attachment 2: CARMTF_20210317.pdf
Attachment 3: PRFPMI_Mar_17sensMat.pdf
15934   Wed Mar 17 16:30:46 2021 AnchalUpdateSUSNormalized Input Matrices plotted better than SURF students

Here, I present the same input matrices, now normalized row by row to have the same norm as the current matrices' rows. This time I have plotted them better than last time. Other comments are the same as in 15902. Please let us know what you think.

Thu Mar 18 09:11:10 2021 :

Note: The comparison of the butterfly DOF in the two cases is a bit bogus. The reason is that we know what the butterfly vector is in the sensing matrix (N_osem x (N_dof + 1)): it is the last column, (1, -1, 1, -1, 0), corresponding to (UL, UR, LR, LL, Side). However, the matrix we multiply with the OSEM data is the inverse of this matrix (which becomes the input matrix); it has dimensions ((N_dof + 1) x N_osem), and its last row corresponds to the butterfly DOF. This row was not stored for the old calculation of the input matrix (which is currently in use) and cannot be recovered (it is mathematically not possible) from the existing 5x4 part of that input matrix. We just added the (1, -1, 1, -1, 0) row at the bottom of this matrix (as was done in the matlab codes), but that is wrong, and hence the butterfly vector looks so bad for the existing input matrix.

Proposal: We should store the last row of the generated input matrix somewhere for such calculations. Ideally, another row in the EPICS channels for the input matrix would be the best place to store it, but I guess that would be too destructive to implement. Other options are to store this 5-number information in the wiki or just in elogs. For this post, the butterfly row for the generated input matrix is present in the attached files (for future reference).
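The non-recoverability point can be demonstrated numerically. A toy numpy sketch where the sensing-matrix values and the cross-coupling perturbation are invented; only the butterfly column (1, -1, 1, -1, 0) comes from this entry:

```python
import numpy as np

# columns: POS, PIT, YAW, SIDE, BUTTERFLY; rows: UL, UR, LR, LL, SD
S_ideal = np.array([[1,  1,  1, 0,  1],
                    [1,  1, -1, 0, -1],
                    [1, -1, -1, 0,  1],
                    [1, -1,  1, 0, -1],
                    [0,  0,  0, 1,  0]], dtype=float)

# hand-written small sensing cross-couplings (illustrative, not measured)
P = 0.1 * np.array([[ 0.3, -0.1,  0.2, 0.0,  0.1],
                    [-0.2,  0.1,  0.0, 0.1, -0.1],
                    [ 0.1,  0.2, -0.1, 0.0,  0.2],
                    [ 0.0, -0.1,  0.1, 0.2,  0.0],
                    [ 0.1,  0.0, -0.2, 0.1,  0.1]])
S = S_ideal + P

M = np.linalg.inv(S)                 # full input matrix, (N_dof+1) x N_osem
naive_row = np.array([1., -1., 1., -1., 0.])   # row appended by the matlab code

print(np.round(M[4] @ S, 3))         # true butterfly row: picks out only butterfly
print(np.round(naive_row @ S, 3))    # appended row: leaks into the other DOFs
```

The true last row of the inverse depends on the whole sensing matrix, so it cannot be reconstructed from the stored 5x4 block alone; appending the ideal actuation vector only works if the sensing is perfectly ideal.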

Attachment 1: IMC_InputMatrixDiagonalization.pdf
Attachment 2: NewAndOldMatrices.zip
15933   Wed Mar 17 15:04:20 2021 gautamUpdateElectronicsRibbon cable for chassis

I had asked Chub to order 100ft ea of 9, 15 and 25 conductor ribbon cable. These arrived today and are stored in the VEA alongside the rest of the electronics/chassis awaiting assembly.

Attachment 1: IMG_9139.jpg
15932   Wed Mar 17 15:02:06 2021 gautamUpdatesafetyDoor to outside from control room was unlocked

I came into the lab a few mins ago and found the back door open. I closed it. Nothing obvious seems amiss.

Caltech security periodically checks if this door is locked but it's better if we do it too if we use this door for entry/exit.
