After fixing a few things we felt were wrong in our implementation of the algorithm, we ran the coil balancing for 12 iterations with just 11 s per excitation, while still taking the CSD with 0.1 Hz bandwidth. This time we saw the distance of the sensing matrix from identity going down.
Came in a little bit after 8 and found the MC unlocked and struggling to lock for the past 3 hours. Looking at the SUS overview, both MC1 and ITMX Watchdogs had tripped so we damped the suspensions and brought them back to a good state. The autolocker was still not able to catch lock, so we cleared the WFS filter history to remove large angular offsets in MC1 and after this the MC caught its lock again.
Looks like two EQs came in at around 4:45 AM (Pacific), as suggested by a couple of spikes in the seismic rainbow, and this.
As mentioned in the last post, we had earlier made an error in ensuring that all time-series arrays enter the CSD calculation with the same sampling rate. After we fixed that, our recursive method has blown up in every attempt since.
We suspect a major issue is that our measured sensing matrix (the cross-coupling matrix between the different degrees of freedom on excitation) has significant imaginary parts. We discard the imaginary values and use only the real parts for the iterative method, but we do not think this is the right solution.
Here we present the cross-spectral densities of the channels representing the three sensed DOFs (normalized by the ASD of no-excitation data for each involved component) and the sensing matrix (TF estimate) calculated by normalizing the first set of cross-spectral density plots column-wise by the diagonal values. These are measured with the existing ideal output matrix but with the new input matrix, to get an idea of how these elements look when we use them.
Note that we used only 10 seconds of data in this run with a bin width of 0.25 Hz. When we used a bin width of 0.1 Hz, we found that the peaks were broad and highest at 13.1 Hz instead of 13 Hz, which is the excitation frequency used in these measurements.
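For reference, a minimal sketch of how such a sensing-matrix element can be estimated from the time series (the sampling rate and channel arrays here are illustrative assumptions, not the exact code we ran):

import numpy as np
from scipy.signal import csd

fs = 2048            # assumed sampling rate of the DOF channels
binwidth = 0.25      # Hz; sets the CSD bin width via nperseg
nperseg = int(fs / binwidth)
f_exc = 13.0         # excitation frequency in Hz

def sensing_element(sensed_i, excited_j):
    """S[i][j]: response of sensed DOF i while exciting DOF j, normalized by the diagonal."""
    f, P_ij = csd(sensed_i, excited_j, fs=fs, nperseg=nperseg)
    f, P_jj = csd(excited_j, excited_j, fs=fs, nperseg=nperseg)
    k = np.argmin(np.abs(f - f_exc))     # pick the bin at the excitation line
    return P_ij[k] / P_jj[k]             # column-wise normalization by the diagonal value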
Today, we finally crossed the last hurdle and got a successful converging coil balancing run.
We ran this method again, but with the 'b' parameter as a matrix instead, which puts more gain on some off-diagonal terms than on others. This gave us better convergence, with the code reaching the tolerance provided (0.01 distance of the S matrix from identity) within 16 iterations (~17 mins).
Attachment 1 again shows how the off-diagonal terms go down and how the overall distance of the sensing matrix from identity decreases as the iterations proceed. This is the 'cross-coupling budget' of the coils.
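For illustration, here is a minimal sketch of the kind of recursive update step described above; the update rule and the element-wise gain matrix b are written schematically and are not necessarily identical to our script.

import numpy as np

def balance_step(out_mat, S, b, tol=0.01):
    """One iteration of the recursive coil-balancing update (schematic).

    out_mat : current coil output matrix
    S       : measured sensing matrix (identity when perfectly balanced)
    b       : element-wise gain matrix, larger on the off-diagonal terms we trust more
    """
    I = np.eye(S.shape[0])
    err = I - S.real                               # discard imaginary parts, as in the text
    out_mat = out_mat @ (I + b * err)              # push off-diagonal couplings toward zero
    converged = np.linalg.norm(S.real - I) < tol   # 'distance from identity' metric
    return out_mat, converged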
PMC lost lock between 21:00 and 22:00 UTC on April 11th as seen in the summary pages:
That's between 2pm and 3pm on Sunday for us. We're not sure what caused it. We will attempt to lock it back.
Mon Apr 12 08:45:53 2021: we used Milind's python script in scripts/PSL/PMC/pmc_autolocker.py. It locked the PMC in about a minute and then the IMC caught lock successfully.
However, the PMC transmission PD shows a voltage level of about 0.7 V. On MEDM, it is set to turn red below 0.7 and yellow above. In the summary pages, this value has typically been around 0.74 V in the past. Similarly, the reflection RFPD DC voltage is around 0.063 V right now, while it is supposed to be around 0.04 V nominally. So the lock is not so healthy.
We tried running this script and the bash script version (scripts/PSL/PMC/PMCAutolocker) a couple of times, but neither was able to acquire lock.
Then we manually tried to acquire lock by varying C1:PSL-PMC_RAMP (with gain set to -10 dB) and resetting the PZT position by toggling Blank. After a few attempts, we found the lock with a transmission PD value around 0.73 V and a reflection RFPD value around 0.043 V. The PZT control voltage was 30 V and shown in red in MEDM to begin with, so we adjusted the output ramp again to bring it above 50 V (or maybe it just drifted to that value by itself, as we could see some slow drift too). At Mon Apr 12 09:50:12 2021, the PZT voltage was around 58 V and shown in green.
We assume this is a good enough point for PMC lock and move on.
We tried two sets of filters on the output matrix POS column in MC2. Both versions failed. Following are some details.
The filters were somewhat successful, as can be seen in Attachment 1. The tip about the difference between the eigenmode basis and the cartesian basis was the main thing that helped us take data properly. We still used OSEM data, but rotated the output from POS, PIT, YAW to x, theta, phi (the cartesian basis, where x is also expressed as the angle it projects over the suspension length).
** The notation here is [UL, UR, LR, LL]
We did a step response test with the MC2 suspension damping gains and optimized them to get fewer than 5 oscillations in the ringdown.
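As an aside, one quick way to quantify such a ringdown is to fit a damped sinusoid to the step response and count the cycles within the decay time; this is only a sketch under those assumptions, not the procedure we actually scripted.

import numpy as np
from scipy.optimize import curve_fit

def damped_sine(t, A, tau, f, phi, c):
    """Model for the damped ringdown after a step."""
    return A * np.exp(-t / tau) * np.cos(2 * np.pi * f * t + phi) + c

def count_oscillations(t, x):
    p0 = [np.ptp(x) / 2, 1.0, 1.0, 0.0, x.mean()]   # rough initial guess
    (A, tau, f, phi, c), _ = curve_fit(damped_sine, t, x, p0=p0)
    return f * tau    # number of cycles within the 1/e decay time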
In the afternoon, we'll complete the above steps for MC1 and MC3. Their coil balancing has not been done at DC, so it is a bit non-ideal right now. We'll look into scripting this process as well.
Note: the AC gains were measured with the output matrix kept at the ideal values of 1s. When optimizing the DC gains, the AC gains were uploaded in the coil output gains.
Today we had some trouble launching an excitation on C1:IOO-MC_LSC_EXC from awggui. The error read:
awgSetChannel: failed getIndexAWG C1:SUS-MC2_LSC_EXC ret=-3
What solved this was the following:
awg free 37008
Today we tested the F2A filters created from the DC gain values listed in 16066.
We extended the F2A filter implementation and diagnostics summarized in 16086 to MC1 and MC3.
Attachment 1 shows the filters with Q=3, 7, 10. We diagnosed using Q=3.
Attachment 2 shows the test summary, exciting with broadband noise on the LSC_EXC and measuring the CSD to estimate the transfer functions.
Attachment 3 shows the filters with Q=3, 7, 10. We diagnosed using Q=3.
Attachment 4 shows the test summary, exciting with broadband noise on the LSC_EXC and measuring the CSD to estimate the transfer functions.
Our main observation (and difference) with respect to MC2 is that the filters have relative success for the PIT cross-coupling but not so much for YAW. We had already observed this when we tuned the DC output gains used to compute the filters.
We ran the f2a filter test for MC1, MC2, and MC3.
The new filters differ from the previous versions by adding a non-unity Q factor for the pole pairs as well.
This in terms of zpk is: [ [zr + i zi, zr - i zi], [pr + i pi, pr - i pi], 1] where
We uploaded all these filters using foton into the last three FM slots of the POS coil output.
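As an illustration of what complex zero/pole pairs with a chosen Q look like, here is a minimal sketch; the frequencies and Q values below are placeholders, not our actual filter parameters.

import numpy as np

def complex_pair(f0, Q):
    """Complex-conjugate root pair of s^2 + (w0/Q)s + w0^2, resonance f0 (Hz), quality factor Q."""
    w0 = 2 * np.pi * f0
    re = -w0 / (2 * Q)
    im = w0 * np.sqrt(1 - 1 / (4 * Q**2))
    return [re + 1j * im, re - 1j * im]

# F2A-style zpk with non-unity Q on both the zero pair and the pole pair (placeholder numbers)
zeros = complex_pair(f0=0.8, Q=3)
poles = complex_pair(f0=1.0, Q=3)
gain  = 1.0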
We ran tests on all suspended optics using the following (nominal) procedure:
C1:IOO-WFS_GAIN to 0.05.
** Excitation = 0.05 - 3.5 Hz uniform noise, 100 amplitude, 100 gain
WFS1 noise injection
C1:LSC-XARM_IN1_DQ / C1:LSC-YARM_IN1_DQ
C1:SUS-ETMX_LSC_OUT_DQ / C1:SUS-ETMY_LSC_OUT_DQ
C1:SUS-MC1_**COIL_OUT / C1:SUS-MC2_**COIL_OUT / C1:SUS-MC3_**COIL_OUT
C1:IOO-WFS1_PIT_ERR / C1:IOO-WFS1_YAW_ERR
** denotes [UL, UR, LL, LR]; the output coils.
We redid the WFS noise injection test and have compiled some results on noise contribution in arm cavity noise and IMC frequency noise due to angular noise of IMC.
Attachment 1: Shows the calibrated noise contribution from MC1 ASCPIT OUT to ARM cavity length noise and IMC frequency noise.
Today we measured the calibration factors for XARM_OUT and YARM_OUT in nm/cts and replotted our results from 16117 with the correct frequency dependence.
Calibration of XARM_OUT and YARM_OUT
Inferring noise contributions to arm cavities:
Edit Mon May 10 18:31:52 2021
See corrections in 16129.
A few corrections to last analysis:
Attached is the control loop diagram when main laser is locked to IMC and a single arm (XARM) is locked to the transmitted light from IMC.
We picked a few parameters from the 40m summary pages and plotted them to see the effect of the new settings. On April 4th, the old settings were present. On April 28th (16091), new input matrices and F2A filters were uploaded, but the suspension gains remained the same. On May 5th (16120), we uploaded new (higher) suspension gains. We chose Sundays in UTC so that they fall on the weekend for us; most probably nobody entered the 40m, and it was calmer in the institute as well.
We can download the data and plot comparisons ourselves, and maybe calculate the spectra of MC_TRANS_PIT/YAW and MC_REFL_DC when the IMC was locked. But we want to know if anyone has better ways of characterizing the settings that we should know of before we get into this large, possibly time-consuming data handling. From these preliminary 40m summary page plots, maybe it is already clear that we should go back to the old settings. Awaiting orders.
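If we do go down the offline route, the comparison itself is straightforward; a minimal sketch, assuming the data for the locked stretches are already downloaded (channel name and sampling rate are illustrative):

import numpy as np
from scipy.signal import welch

def asd(x, fs, binwidth=0.01):
    """Amplitude spectral density of a locked-IMC data stretch."""
    f, pxx = welch(x, fs=fs, nperseg=int(fs / binwidth))
    return f, np.sqrt(pxx)

# e.g. compare C1:IOO-MC_TRANS_PIT between the old-settings and new-settings Sundays:
# f_old, a_old = asd(data_old, fs=16)
# f_new, a_new = asd(data_new, fs=16)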
We came in the morning with the following scene on the zita monitor:
The MC1 watchdog was tripped and seemed like IMC struggled all night with misconfigured WFS offsets. After restoring the MC1 WD, clearing the WFS offsets, and seeing the suspension damp, the MC caught lock. It wasn't long before the MC unlocked, and the MC1 WD tripped again.
We tried a few things, though we are not sure what order we tried them in:
Nothing worked. We kept seeing that the UL PD variance on MC1 shows kicks every few minutes, which jolt the suspension loops. So we decided to record some data with the PSL shutter closed and just the suspension loops on. Then we switched off the loops and recorded some data with the optic freely swinging. Even when the optic was freely swinging, we could see impulses in the MC1 OSEM UL PD variance that were completely uncorrelated with any seismic activity. In fact, last night was one of the calmer nights, seismically speaking. See Attachment 2 for the time series of the OSEM PD variance. The red region is when the coil outputs were disabled.
Edit Thu May 13 14:47:25 2021 :
Added the OSEM sensor timeseries data to the plots as well. The UL OSEM sensor data is the only channel that is jumping haphazardly (even during the free-swinging time), varying by +/- 30. The other sensors only show some noise around a stable position, as should be the case for a freely suspended optic.
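For reference, the kind of check we did can be reproduced offline with a simple rolling variance of the OSEM sensor time series; this is a sketch, with the window length and sampling rate as assumptions.

import numpy as np

def rolling_var(x, fs, window=1.0):
    """Variance of an OSEM sensor time series in consecutive windows of 'window' seconds."""
    n = int(window * fs)
    x = x[: len(x) // n * n].reshape(-1, n)
    return x.var(axis=1)   # glitchy channels (like MC1 UL here) show repeated spikes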
We've set a free swing test to trigger at 3:30 am tomorrow for MC1. The script for the test is running in a tmux session named 'freeSwingMC1' on rossa. The script will run for about 4.5 hrs, and we'll correct the input matrix tomorrow from the results. If anyone wants to work during this time (3:30 am to 8:00 am), you can just kill the script by killing the tmux session on rossa: ssh into rossa and type tmux kill-session -t freeSwingMC1.
We should redo the MC1 input matrix optimization and the coil balancing afterward as we did everything based on the noisy UL OSEM values.
The test was successful and brought the IMC back to its lock point at the end.
We calculated the new input matrix using the same code in scripts/SUS/InMatCalc/sus_diagonalization.py. Attachment 1 shows the results.
The calculations are present in scripts/SUS/InMatCalc/MC1.
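Schematically, the diagonalization amounts to building a sensor-to-eigenmode response matrix from the free-swing peaks and inverting it; the sketch below is our understanding of the idea, not a copy of sus_diagonalization.py.

import numpy as np

def input_matrix(A):
    """A[i, j]: response of OSEM sensor i at the frequency of eigenmode j,
    taken from the ASD peaks of the free-swing data (illustrative)."""
    A = A / np.abs(A).max(axis=0)     # normalize each eigenmode column
    inmat = np.linalg.pinv(A)         # maps sensor signals to DOF signals
    scale = np.diag(inmat @ A)        # enforce unity response on the diagonal
    return inmat / scale[:, None]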
We uploaded the new MC1 input matrix at:
Unix Time = 1621963200
GPS Time = 1305998418
This was done by running python scripts/SUS/general/20210525_NewMC1Settings/uploadNewConfigIMC.py on allegra. The old IMC settings (before Paco and I started working on the 40m) can be restored by running python scripts/SUS/general/20210525_NewMC1Settings/restoreOldConfigIMC.py on allegra.
Everything looks as stable as before. We'll look into long term trends in a week to see if this helped at all.
Here is our first attempt at a single-arm noise budget for ALS.
Attachment 1 shows the loop diagram we used to calculate the contribution of different noises.
Attachment 2 shows the measured noise at C1:ALS-BEATX_PHASE_FINE_OUT_HZ when XARM was locked to the main laser and Xend Green laser was locked to XARM.
We went near the MC2 area and opened the lid to inspect the GigE and analog video monitors for MC2. It looks like whatever image comes through the viewport is split between the GigE (for beam tracking) and the analog monitor. We hooked up the monitor found on the floor nearby and tweaked the analog video camera around to get a feel for how the "ghost" image of the transmission moves around. It looks like in order to remove these extra spots we would need to tweak the beam-tracking BS. We will consult the beam tracking authorities and return to this.
Here's an updated X ARM ALS noise budget.
Rana suggested in today's meeting putting a notch filter in the XARM IR PDH loop to avoid suppressing the excitation line. We tried this today, first with just one notch at 1069 Hz and then with an additional notch at 619 Hz, and sent two simultaneous excitations.
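For reference, a minimal sketch of designing such notches digitally (the Q and sampling rate are placeholder assumptions; in practice the notches live in the loop filter banks):

from scipy.signal import iirnotch

fs = 16384                            # assumed loop sampling rate
for f0 in (619.0, 1069.0):            # excitation line frequencies
    b, a = iirnotch(f0, Q=30, fs=fs)  # narrow notch so the loop does not suppress the line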
Following the conclusion, we are reverting the suspension gains to old values, i.e.
The F2A filters, AC coil gains, and input matrices remain changed to the values mentioned in 16066 and 16072.
The changes can be reverted all the way back to old settings (before Paco and I changed anything in the IMC suspensions) by running python scripts/SUS/general/20210602_NewIMCOldGains/restoreOldConfigIMC.py on allegra. The new settings can be uploaded back by running python scripts/SUS/general/20210602_NewIMCOldGains/uploadNewConfigIMC.py on allegra.
Unix Time = 1622676038
GPS Time = 1306711256
We attempted to simulate "oscillator based realtime calibration noise monitoring" in offline analysis with python. This helped us find a factor of sqrt(2) that we were missing earlier in 16171. We measured C1:ALS-BEATX_FINE_PHASE_OUT_HZ_DQ when the X arm was locked to the main laser and the Xend green laser was locked to the X arm. An excitation signal of amplitude 600 was sent at 619 Hz to C1:ITMX_LSC_EXC.
We got a value of . The calibration factor in use, in nm/cts, is from 13984.
Next steps could be to budget this noise while we set up a way to generate this calibration factor in real time using oscillators in an FE model. Calibrating the actuation of a single optic in a single arm is easy, so this is a good test setup for getting a noise budget of this calibration method.
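For reference, the offline simulation amounts to demodulating the beatnote channel at the excitation frequency; a minimal sketch (the sampling rate, low-pass corner, and peak-vs-RMS convention are assumptions, and the latter is exactly where a factor of sqrt(2) can hide):

import numpy as np
from scipy.signal import butter, sosfiltfilt

def line_amplitude(x, fs, f_exc, f_lp=1.0):
    """Demodulate channel x at f_exc, like an oscillator-based lock-in."""
    t = np.arange(len(x)) / fs
    i = x * np.cos(2 * np.pi * f_exc * t)
    q = x * np.sin(2 * np.pi * f_exc * t)
    sos = butter(4, f_lp, btype='low', fs=fs, output='sos')
    I, Q = sosfiltfilt(sos, i), sosfiltfilt(sos, q)
    return 2 * np.sqrt(I**2 + Q**2)   # peak amplitude; divide by sqrt(2) for RMS

# e.g. line_amplitude(beat_hz, fs=16384, f_exc=619.0) on C1:ALS-BEATX_FINE_PHASE_OUT_HZ_DQ data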
We measured the Xend green laser PDH open loop transfer function by the following method:
We did a quick check to make sure there is no saturation in the C1:SUS-ITMX_LSC_EXC analog path. For this, we looked at the SOS driver output monitors from the 1X4 chassis near MC2 on a scope. We found that even with 600 x 10 = 6000 counts of our 619 Hz excitation, these outputs are not saturating (the highest monitor signal was the UL coil at 5.2 Vpp). In comparison, the calibration trials we have done before used 600 counts of amplitude, so we can safely increase our oscillator strength by that much.
Things that remain to be investigated -->
Attachment 1 shows the case when excitation is sent at control point i.e. the PZT output. As before, free running laser noise in units of Hz/rtHz is added after the actuator and I've also shown shot noise being added just before the detector.
Again, we have access to three output points for measurement: right at the output of the mixer (the PDH error signal), the feedback signal applied by the uPDH box (PZT Mon), and the output of the summing SR560.
Doing loop algebra as before, we get:
So measurement of can be done by
This means that at 100 Hz, with 10 integration cycles, we should have needed only 3 mV of excitation to get an SNR of 100. However, we have been unable to get good measurements with even 25 mV of excitation. We tried increasing the number of cycles, but that did not work either.
This post is to summarize this analysis. We need more tests to get any conclusions.
We replaced Mon 7 with an LCD monitor from the back bench. It is fed the analog signal from the BNC, converted to VGA with a converter box that Paco bought. We can replace this monitor with another one if it is needed back on the bench. For now, we definitely need a monitor up there to show the IMC camera.
MC1 LL sensor showed signs of fluctuating large offsets. We tried to find the issue in the box but couldn't find any. On power cycling, the sensor got back to normal. But while putting the box back, we bumped something and the c1susaux slow channels froze. We tried to reboot it, but that didn't work and the channels no longer exist.
This morning we came in to find that the IMC had struggled to lock all night (see Attachment 1). We had an indication yesterday evening that the MC1 LL sensor PD had a higher variance than usual, and Paco had to reset the WFS offsets because they had integrated the noise from this sensor. Something similar happened last night: a false offset and its fluctuation overwhelmed the WFS, MC1 got misaligned, and the IMC could not lock.
In the morning, Paco again reset the WFS offsets, but now we were sure that the PD variance from the MC1 LL OSEM was very high. See Attachment 2 for how only 1 OSEM shows higher noise compared to the other 4 OSEMs. This behavior is similar to what we saw earlier in 16138, but for the UL sensor. Koji and I fixed that in 16139 and we tested all other channels too.
So Paco and I went ahead and took out the MC1 satellite amplifier box S2100029 D1002812, opened the top, and checked all the PD channel testpoints with no input current. We didn't find anything odd. Next we checked the LED driver circuit testpoints with LED OUT and GND shorted. We got 4.997 V on all LED MON testpoints, which indicates normal functioning.
We hooked everything back onto the MC1 satellite box and checked the sensor channels again on the MEDM screens. To our surprise, it started functioning normally. So maybe just a power cycle was required, but we still don't know what caused the issue.
BUT when I (Anchal) was plugging the power cables and DB25 connectors back into the rear side of 1X4 after moving the box back into the rack, we found that the slow channels stopped updating. They just froze!
We got worried for some time because the negative power supply indicator LEDs on the acromag chassis (which is just below the MC1 satellite box) were not ON. We checked the power cables and had to open the side panel of the 1X4 rack to check how the power cables are connected. We found that there is no third wire in the power cables and the acromag chassis only takes a single-rail supply. We confirmed this by looking at another acromag chassis at the Xend. We pasted a note on the acromag chassis for future reference that it uses only the positive rail and that the negative LED monitors are normally not ON.
Back to the frozen acromag issue, we conjectured that maybe the ethernet connection was broken. The DB25 cables for the satellite box are a bit short and pull other cables around with them when connected. We checked all the ethernet cabling and it looked fine. On the c1susaux computer, we saw that the monitor LED for ethernet port 2, which is connected to the acromag chassis, was solid ON while the other one (which is probably the connection to the switch) was blinking.
We tried telnetting to the computer; it didn't work. The host refused the connection from the pianosa workstation. We tried pinging the c1susaux computer, and that worked. So we concluded that most probably the EPICS modbus server hosting the slow channels on c1susaux is unable to communicate with the acromag chassis, hence the solid LED on that ethernet port instead of a blinking one. We checked the computer restart procedure page for SLOW computers on the wiki and found that it says we can hard reboot the computer if telnet is not working.
We hard rebooted the computer by long-pressing the power button and then pressing it back on. We did this three times with the same result: the ethernet port 2 LED (acromag chassis) would blink, but the ethernet port 1 LED (connected to the switch) would not turn ON. We cannot even ping the machine now, let alone telnet into it. All SUS slow monitor channels are of course absent now. We also tried pressing the reset button once (which the manual said would reboot the machine), but got the same outcome.
Now, we decided to stop poking around until someone with more experience can help us on this.
Bottom line: we don't know what caused the LL sensor issue, so it has not been fixed and can happen again. We lost all C1SUSAUX slow channels, which are the OSEM and coil slow monitor channels for PRM, BS, ITMX, ITMY, MC1, MC2, and MC3.
Jon suggested rebooting the acromag chassis and then the computer; we did this without success. Then Koji suggested we try running ifup eth0, so we ran `sudo /sbin/ifup eth0` and it worked to put c1susaux back on the martian network, but the modbus service was still down. We switched off the chassis and rebooted the computer, and we had to run `sudo /sbin/ifup eth0` again (why do we need to do this manually every time?). We switched the chassis on but still had no channels. `sudo systemctl status modbusioc.service` gave us an inactive (dead) status, so we ran `sudo systemctl restart modbusioc.service`.
The status became:
● modbusIOC.service - ModbusIOC Service via procServ
Loaded: loaded (/etc/systemd/system/modbusIOC.service; enabled)
Active: inactive (dead)
start condition failed at Thu 2021-06-17 16:10:42 PDT; 12min ago
ConditionPathExists=/opt/rtcds/caltech/c1/burt/autoburt/latest/c1susaux.snap was not met
After another iteration we finally got an OK status for modbusIOC.service, and we then repeated Jon's reboot procedure. This time the acromags were on but reading 0.0, so we just needed to run `sudo /sbin/ifup eth1`, and finally some sweet slow channels were read. As a final step we burt-restored the c1susaux.snap file from 05:19 AM today and managed to relock the IMC >> will keep an eye on it. Finally, in the process of damping all the suspended optics, we noticed some OSEM channels on BS and PRM are reading 0.0 (they show up red as we browse them). We succeeded in locking both arms, but this remains an unknown for us.
We did the measurement of OLTF for Xend green laser PDH loop with excitation added at control point using a SR560 as shown in attachment 1 of 16202. We also measured coherence in our measurement, see attachment 1.
We checked back in time to see when the BS and PRM OSEM slow channels went to zero. It was clear that they became zero when we worked on this issue on June 17th, Thursday. So we simply went back and power cycled the c1susaux acromag chassis. After that, we had to log in to the c1susaux computer and run
sudo /sbin/ifdown eth1
sudo /sbin/ifup eth1
This restarted the ethernet port the acromag chassis is connected to. It solved the issue, and we were able to see all the slow channels for BS and PRM again.
But then we noticed that the ITMX OPLEV is unable to read the position of the beam on the QPD at all; no light was reaching the QPD. We went in, opened the ITMX table cover, and confirmed that the returning OPLEV beam is way off and is not even hitting the steering mirror that brings it to the QPD. We switched off the OPLEV contribution to the damping.
We did a burt restore to the morning of June 16th using
burtwb -f /opt/rtcds/caltech/c1/burt/autoburt/snapshots/2021/Jun/16/06:19/c1susaux.snap -l /tmp/controls_1210622_095432_0.write.log -o /tmp/controls_1210622_095432_0.nowrite.snap -v
This did not solve the issue.
Then we noticed that the OSEM signals from ITMX were saturated in opposite directions for the left and right OSEMs. The left OSEM fast channels are saturated at 1.918 um for UL and 1.399 um for LL, while both right OSEM channels are bottomed out at 0 um. On the other hand, the acromag slow PD monitors show 0 on the right channels but 1097 cts on the UL PDMon and 802 cts on the LL PDMon. We went in and checked the DC voltages from the PD input monitor LEMO ports on the ITMX dewhitening board D000210-A1 and measured non-zero voltages on all the channels. Following is a summary:
We even took out the 4-pin LEMO outputs from the dewhitening boards that go to the anti-aliasing chassis and checked the voltages. They are the same as the input voltages, as expected. So the dewhitening board is doing its job fine and the OSEMs are doing their jobs fine.
It is weird that both the ADC and the acromags are reading these values wrong. We believe this is causing a big yaw offset in the ITMX control signal, turning ITMX enough to make the OPLEV go out of range. We checked the CDS FE status (Attachment 1). Other than c1rfm showing a yellow bar (bit 2 = GE FANUC RFM card 0) in RT Net Status, nothing else seems wrong on the c1sus computer. The c1sus FE model is running fine. c1x02 (the lower level model) does show a red bar in TIM, which suggests some timing issue; this is present in c1x04 too.
Currently, the ITMX coil outputs are disabled as we can't trust the OSEM channels. We're investigating more why any of this is happening. Any input is welcome.
We successfully laid down all the required optical fiber cables from the 1X4-1X7 region to the 1Y1-1Y3 region today. This includes the following cables:
Today we went through LSC locking mechanics with Gautam and, as a "Hello World" example, worked on locking the Michelson.
We characterized the loop OLTF, found the UGF to be 90 Hz and measured the noise at error and control points.
gautam: one aim of this work was to demonstrate that the "Lock Michelson (dark)" script call from the IFOconfigure screen worked - it did, reliably, after the setting changes mentioned above.
We corrected the MICH locking snap file C1configure_MI.req and saved an updated C1configure_MI.snap. Now the 'Restore MICH' script in IFO_CONFIGURE>!MICH>Restore MICH works. The corrections included setting the correct rows of the PD_DOF matrices to the right values (use AS55 as the error signal). The MICH_A_GAIN and MICH_B_GAIN needed to be saved as well.
We were also able to get to PRMI SB resonance. PRM had earlier been misaligned from its optimal position, and after some manual aligning we were able to lock it just by hitting IFO_CONFIGURE>!PRMI>Restore PRMI SB (3f).
We found that megatron is unable to properly run the scripts/MC/WFS/mcwfsoff and scripts/MC/WFS/mcwfson scripts. The cdsutils commands fail due to a library conflict. This meant that the WFS loops were not being turned off when the IMC lost lock, and they would keep integrating noise into the offsets. The mcwfsoff script is also supposed to clear the WFS loop offsets, but that wasn't happening either. The mcwfson script was also not bringing the WFS loops back on.
Gautam fixed these scripts temporarily for running on megatron by using ezcawrite and ezcaswitch commands instead of cdsutils commands. Now these scripts run normally. This could be the reason for the wildly fluctuating WFS offsets that we have seen in the past few months.
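For context, what the scripts need to do is simple; a hedged sketch using pyepics (the gain value is a placeholder and the offset-clearing channels are omitted, so this is not the actual script logic):

from epics import caput

NOMINAL_WFS_GAIN = 0.05   # placeholder value; the real nominal gain lives in the script

def mc_wfs_off():
    caput('C1:IOO-WFS_GAIN', 0)   # stop the loops from integrating noise while unlocked
    # clearing of the loop filter offsets/histories would go here (channels omitted)

def mc_wfs_on():
    caput('C1:IOO-WFS_GAIN', NOMINAL_WFS_GAIN)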
gautam: the problem here is that megatron is running Ubuntu18 - I'm not sure if there is any dedicated CDS group packaging for Ubuntu, and so we're using some shared install of the cdsutils (hosted on the shared chiara NFS drive), which is complaining about missing linked lib files. Depending on people's mood, it may be worth biting the bullet and make Megatron run Debian10, for which the CDS group maintains packages.
MC was unlocked and struggling to recover this morning due to misguided WFS offsets. In order to recover from this kind of issue, we
The MC is now restored and the plan is to let it run for a few hours so the offsets converge; then run the WFS relief script.
Last night Gautam walked us through the algorithm used to lock PRFPMI. We tried it several times with the PSL HEPA filter off, between 10:00 pm July 7th and 1:00 am July 8th. None of our attempts were successful. In between, we also tried the locking with the old IMC settings, but that did not change the result. In most attempts, the arms would start to resonate with the PRMI at about 200 times the power compared to no power recycling, with the arms still controlled by the ALS beatnote. The handover of lock control from "CARM+DARM locked to the ALS beatnote" to "main laser + IMC locked to CARM+DARM" would always fail. More specifically, as soon as we handed over the DC control of CARM from the ALS beatnote to IR by feeding back to MC2, the lock would inevitably fail before the rest of the high-frequency control could be transferred.
Nonetheless, Paco and I got a good demo of how to do PRFPMI locking if the need appears. With more practice and attempts, we should be able to achieve the lock at some point in the future. The issues in handover could be due to any of the following:
More insights or suggestions are welcome.
Note: an earthquake came around lunch time and tripped all the watchdogs. Most suspensions were recovered without issues, but ITMX appeared to be stuck. We tried the shaking procedure, but afterwards we couldn't restore the XARM lock. Starting from alignment, we tried optimizing TRX but only got up to ~0.5, and ASS wouldn't work as usual. In the end the issue was that we had forgotten to re-enable the LL coil output; after we did this, we managed to recover the XARM.
Rana came and helped us figure out where to inject the noise. Following are the characteristics of the test we did:
Attachment 1 shows a screenshot with awggui and diaggui screens displaying the signal in both angular and longitudinal channels.
Attachment 2 shows the analogous screenshot for MC2.
We found Mon7 in the control room dead this afternoon. Its front power-on green light is not lighting up. All other monitors are working as normal.
This monitor was used for looking at IMC camera analog feed. It is one of the most important monitors for us, so we should replace it with a different monitor.
Yehonathan and Paco disconnected the monitor and brought it down. We put it under the back table if anyone wants to fix it. Paco has ordered a BNC to VGA/HDMI converter so that any normal monitor can be put up there; that will happen this Wednesday. Meanwhile, I have changed the MON4 assignment from POP to Quad2 to be used for the IMC.
I have been working on the Xend table optical layout update. Since the two steering mirrors in the Xend green path are too close to each other, there is a very small Gouy phase difference between them. It was suggested to place two lenses so that we can increase the Gouy phase. I have been working with Nick on this problem, and we found a solution using a la mode. We wrote an a la mode code that optimizes the Gouy phase and the mode matching at the same time. After trying different lenses, we found the following results: a mode matching of 0.9939, as shown in the first attachment below, and a Gouy phase difference between the two mirrors of about 60 degrees.

I took photos of the Xend table. The first photo is the Xend table as we have it right now. In the second photo, I moved the 2nd lens and placed the two additional lenses that we need, more or less in the correct positions where they will go. The three old lenses will be replaced by three lenses of different focal lengths, as can be seen in the first attachment below. The first and third lenses will stay in the same positions as the old first and third lenses, and the second lens will be moved by about half an inch. We might have one or two of the lenses that we need, but we will have to order the rest. My plan is to verify the lenses that we already have and then let Nick know which lenses we need to order. Hopefully we will be able to update the table by the end of this week if everything turns out fine.
Current Mode Matching and Gouy Phase Between Steering Mirrors
We found in 40m elog 3330 (http://nodus.ligo.caltech.edu:8080/40m/3330) documentation by Kiwamu, where he measured the waist of the green beam; it is about 35 µm. Using a la mode, I calculated the current mode matching and the Gouy phase between the steering mirrors. In a la mode, I used optical distances, which are just the measured distances times the index of refraction. I contacted someone from Thorlabs (the company that bought Optics For Research), who told me that the Faraday IO-5-532-LP has a Terbium Gallium Garnet crystal 7 mm long with an index of refraction of 1.95. The current mode matching is 0.9343, and the current Gouy phase between the steering mirrors is 0.2023 degrees. On Monday, Nick and I are planning to measure the actual mode matching. Attached below is the current X-arm optical layout.
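As a cross-check of the a la mode numbers, the accumulated Gouy phase can also be computed directly from the complex beam parameter; a minimal sketch using the 35 µm waist from elog 3330 (the propagation distances are placeholders):

import numpy as np

lam = 532e-9                    # green wavelength
w0  = 35e-6                     # waist from elog 3330
zR  = np.pi * w0**2 / lam       # Rayleigh range
q0  = 1j * zR                   # complex beam parameter at the waist

def propagate(q, d):            # free-space propagation by distance d
    return q + d

def lens(q, f):                 # thin lens of focal length f
    return 1.0 / (1.0 / q - 1.0 / f)

def gouy_deg(q):                # Gouy phase of the beam at parameter q
    return np.degrees(np.arctan2(q.real, q.imag))

# Gouy phase difference between two mirror positions (placeholder distances):
# gouy_deg(propagate(q0, 0.28)) - gouy_deg(propagate(q0, 0.155))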
Calculation For the New Optical Layout
Since the current Gouy phase between the steering mirrors is essentially zero, we need to find a way to increase it. We tried adding two more lenses after the second steering mirror, and we found that increasing the Gouy phase results in a dramatic decrease in mode matching. For instance, a Gouy phase of about 50 degrees results in a mode matching of about 0.2, which is awful. We removed the first lens after the Faraday and added two more mirrors and two more lenses after the second steering mirror. I modified the photo that I took and marked where the new lenses and new mirrors should go, as shown in the second picture attached below. Using a la mode, we found the following solution:
label z (m) type parameters
----- ----- ---- ----------
lens 1 0.0800 lens focalLength: 0.1000
First mirror 0.1550 flat mirror none:
Second mirror 0.2800 flat mirror none:
lens 2 0.4275 lens focalLength: Inf
lens 3 0.6549 lens focalLength: 0.3000
lens 4 0.8968 lens focalLength: -0.250
Third mirror 1.0675 flat mirror none:
Fourth mirror 1.4183 flat mirror none:
lens 5 1.6384 lens focalLength: -0.100
Fifth mirror 1.7351 flat mirror none:
Sixth mirror 2.0859 flat mirror none:
lens 6 2.1621 lens focalLength: 0.6000
ETM 2.7407 lens focalLength: -129.7
ITM 40.5307 flat mirror none:
The mode matching is 0.9786. The Gouy phase difference between the third and fourth mirrors is 69.59 degrees, between the fourth and fifth 18.80 degrees, between the fifth and sixth mirrors 1.28 degrees, between the third and fifth 88.38 degrees, and between the fourth and sixth 20.08 degrees. Attached below are the a la mode code and the plots.
Plan for this week
I don't think we have the lenses that we need for this new setup, so most likely we will need to order the lenses on Monday. As I mentioned, Nick and I are going to measure the actual mode matching on Monday. If everything looks good, then we will move on and do the upgrade.