A few comments on REFL table alignment and REFL165.
Last time we realigned the table was after the PZT work by Koji/Kiwamu; we made sure that the beam was going through the optics satisfactorily and that we were reading reasonable numbers. I primarily used a viewer to align onto the PD, after which we used the voltage reading to center better around that spot. As desired, I could not see the beam once it was centered on the PD. Unfortunately I never touched the PBS, so I never noticed that it was not fixed. Sad.
I am very surprised to hear the reading from REFL165, since I was reading around 400mV from it a few days before. Something strange happened in the meantime. I hope it was not while I was plugging and unplugging at the power rack for the POY work, but I would not have needed to touch REFL165 for that. Those cables should get some strain relief at the rack, by the way.
I thought about it, and I must admit that after we centered the camera on REFL (paired with an alignment), we did not check the beam path later, even after we saw that the REFL beam had moved. We only did a quick check with the viewer that the beams were not off of the PDs.
- The REFL path has been thoroughly aligned
On many optics the spot was not in the middle of the optic, including the PBS, whose post was not fixed in the post holder.
We aligned the optical paths, the RF PDs, and the CCD. The alignment of the PD required the use of the IR viewer.
One should not trust the DC output as a reference for the PD alignment, as it is not sensitive enough to clipping.
We aligned the optical paths again after a reasonable alignment of the PRM was established with the interferometer.
"Next time you see that the REFL spot is not at the center of the camera, think about what has moved!"
- The REFL165 PD is disconnected from the power supply
I found that the REFL165 PD produces a 7.5V output at the DC monitor regardless of whether the beam is blocked.
As I could not fix this by swapping the power connector at the LSC rack, I disconnected the cable
at the REFL165 PD side. I need to go through the PD power supply circuit next week.
The mode cleaner was misaligned, probably due to the earthquake (see the drop in the MC transmitted value slightly after 7:38:52 UTC in the second plot). The plots show the PMC transmitted and MC sum signals from June 10, 07:10:08 UTC over a duration of 17 hrs. The PMC was realigned at about 4-4:15 pm today by Rana. This can be seen in the first plot.
We came in this morning and the IMC was misaligned. The IMC was realigned and locked. This of course changed the input beam and sent us on a long alignment journey.
We first used the TTs to find the beam on the BHD DCPD/camera, since it is only a single bounce off all the optics.
Then, PR2/3 were used to find the POP beam while keeping the BHD beam.
Unfortunately, that was not enough. The TTs and PRs have some degeneracy, which caused us to lose the REFL beam.
Realizing this, we went to the AS table to find the REFL beam. We found a ghost beam that deceived us for a while. Once we realized it was a ghost beam, we moved TT2 in pitch, while keeping the POP DCPD high with the PRs, until we saw a new beam on the viewing card.
We kept aligning TT1/2 and PR2/3 to maximize the REFL DCPD while keeping the POP DCPD high. We tried to look at the REFL camera but soon realized that the REFL beam was too weak for the camera to see.
At that point we already had some flashing in the arms (we centered the OpLevs in the beginning).
The arms were aligned and locked. We had some issue with the X-ARM not catching lock; we increased the gain and it locked almost immediately. To set the arm gains correctly, we took OLTFs (Attachment) and adjusted the XARM gain to 0.027 to put the UGF at 200 Hz.
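The gain retuning from a measured OLTF is simple enough to sketch (the measured OLTF magnitude and starting gain below are illustrative placeholders, not the actual measured values):

```python
# Sketch of setting a loop UGF from a measured OLTF: if the open-loop
# magnitude at the target frequency is |G|, scaling the digital gain
# by 1/|G| moves the UGF to that frequency (assuming the loop shape
# is otherwise unchanged). Numbers below are illustrative only.
def retune_gain(current_gain, oltf_mag_at_target):
    """Return the gain that puts unity loop gain at the target frequency."""
    return current_gain / oltf_mag_at_target

# e.g. an OLTF magnitude of 1.35 measured at 200 Hz with a starting gain of 0.0365
print(retune_gain(0.0365, 1.35))
```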
Both arms locked with 200 Hz UGF:
From GPS: 1346713049
To GPS: 1346713300
From GPS: 1346713380
To GPS: 1346714166
HEPA turned off:
From GPS: 1346714298
To GPS: 1346714716
I took over the IFO after Jenne's locking efforts, which included manual alignment since the ASS was doing bad things.
For whatever reason, the Yarm ASS TT gains needed to be flipped back to go in the right direction. I've restored the old BURT snap file, and the ASS seems to work for now.
Furthermore, I added some FMs to the Yarm ASS so that gains can be ramped down as new offsets are ramped in, making a smooth offset transition possible. The new version of the script works reasonably, but could be smoother still... Once I iron this out, I'll do the same change to the Xarm, and update the buttons.
In any case, I was able to run ASS on both arms; single arm lock maxed out at around 0.85, maybe because we're only getting 0.78 from the PMC and 16k from the MC? I then aligned and locked the PRM, then recentered the oplevs on all of the PRMI optics. Oddly, the ETMs were at single urads on their oplevs.
With this arm alignment, I was able to get the green TRX to ~0.55, and thus the beatnote to around -25dBm, which is still lower than we'd like. I didn't touch the Y green alignment, though it is pretty bad, at transmission of below 0.2 when "locked" on the 00 mode.
When I try to lock things, the initial ALS CARM and DARM locking seems to go fine, actuating on the ETMs for both DoFs, but ETMX gets kicked during the resonance search every time. Improving the green alignment / increasing the beatnote amplitudes will hopefully help some.
I'm leaving the interferometer with the PRM aligned, so that all optics (except SRM) are near the center of their oplev range. I'm curious as to what their variance will be over the next day; this can inform whether we need to improve the ETMY oplev's angular range or not.
Here's a 12-hour minute-trend of all of the oplevs. The worst offenders are ITMY pitch and yaw, and ITMX pitch.
Additionally, ETMY's yaw range is +/-30 urad, and here we see it wandering by 10 urad in half a day. We probably need more range.
I've uploaded a note at T1400735 about a new implementation of CESAR ESCOBAR ideas I've been working on. Please send me any and all feedback, comments, criticisms!
Using the things I talk about in there, I was able to have a time domain simulation of a 40m arm cavity transition through three error signals, without hardcoding the gains, offsets, or thresholds for using the signals. Some results look like this:
I'm going to be trying this out on the real IFO soon...
Power cycling c1dcuepics seems to have fixed the EPICS channel problems, and c1lsc, c1asc, and c1iovme are talking again.
I burt restored c1iscepics and c1Iosepics from the snapshot at 6 am this morning.
However, c1susvme1 never came back after the last power cycle of the crate it shares with c1susvme2. I connected a monitor and keyboard per the reboot instructions and hit ctrl-x; it proceeded to boot, but it displays a media error, PXE-E61, suggests testing the cable, and only offers an option to reboot. From a cursory inspection of the front, the cables look okay. Also, this machine had eventually come back after the first power cycle, and I'm pretty sure no cables were moved in between.
Unfortunately, this has happened (and seems like it will keep happening) often enough that I set up a script for rebooting the machine in a controlled way; hopefully it will remove the need to repeatedly go into the VEA and hard-reboot the machines. The script lives at /opt/rtcds/caltech/c1/scripts/cds/rebootC1LSC.sh. SVN committed. It worked well for me today. All applicable CDS indicator lights are now green again. Be aware that c1oaf will probably need to be restarted manually in order to make the DC light green. Also, this script won't help you if you try to unload a model on c1lsc and the FE crashes; it relies on c1lsc being ssh-able.
After the CDSs crashed, we ran the rebootC1LSC.sh script.
The script is a bit annoying in that it requires entering the machines' passwords multiple times over its (long) runtime.
The resulting CDS screen is a bit different from what was reported before (attached). Also, not all watchdogs were restored.
We restored the remaining watchdogs and locked the XARM. Everything seems to be fine.
It was way more annoying without a script and took longer than the 4 minutes it does now.
You can remove the need to enter a password by changing the sshd settings on the FEs, as I did for pianosa.
After running the script, you should verify that there are no red flags in the console output. Yesterday, some of the settings the script was supposed to reset weren't correctly reset, possibly due to python/EPICS problems on donatella, and this cost me an hour of searching last night because the locking wasn't working. Anyway, best practice is to not crash the FEs.
Steve noticed the RGA was not working today. It was powered on but no other lights were lit.
It turns out the c0rga machine had not been rebooted when the file system on linux1 was moved to the RAID array, and thus no longer had a valid mount of /cvs/cds/. As a result, the scripts run by cron could not be called.
We rebooted c0rga, and then ran ./RGAset.py to reset all the RGA settings, which had been reset when the RGA had lost power (and thus was the reason for only the power light being lit).
Everything seems to be working now. I'll be adding c0rga to the list of computers to reboot in the wiki.
Steve noticed that the RGA was not logging data and that not all the correct connection lights were on, and he wasn't able to run the "RGAset.py" script (in ...../scripts/RGA/) that sets up the proper parameters.
I looked, and the computer was not mounting the file system. I did a remote shutdown, then Steve went in and pushed the power button to turn the machine back on. After it booted up, it was able to talk to the file system, so I started ..../scripts/RGA/RGAset.py . The first 2 times I ran the script, it reported errors, but the 3rd time, it reported no communication errors. So, now that the computer can again talk to the file system, it should be able to run the cronjob, which is set to take data at 4am every day. Steve will check in the morning to confirm that the data is there. (The last data that's logged is 22Dec2013, 4am, which is right around when Koji reported and then fixed the file system).
When I came in this morning no light was reaching the MC. One fast machine was dead, c1lsc, as well as a number of the slow machines: c1susaux, c1iool0, c1auxex, c1auxey, c1iscaux. Gautam walked me through resetting the slow machines manually and the fast machines via the reboot script. The computers are all back online and the MC is again able to lock.
I restored C1:PSL-126MOPA_126MON to its original settings (EGUF = -410, EGUL = 410) and added a new calc channel called C1:LSC-EX_GRNBEAT_FREQ that is derived from C1:PSL-126MOPA_126MON. The calibration in the new channel converts the input to MHz.
field(DESC,"EX-PSL Green Beat Note Frequency")
field(SCAN, ".1 second")
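For reference, the calc record might look something like this sketch (the INPA link follows the text above, but the CALC slope is a placeholder; the actual volts-to-MHz calibration factor isn't quoted here):

```
record(calc, "C1:LSC-EX_GRNBEAT_FREQ")
{
    field(DESC, "EX-PSL Green Beat Note Frequency")
    field(SCAN, ".1 second")
    field(INPA, "C1:PSL-126MOPA_126MON")
    # placeholder calibration slope; the real MHz-per-volt factor is not given here
    field(CALC, "A*1.0")
    field(EGU,  "MHz")
}
```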
I rebooted c1psl and burtrestored.
I rebooted cymac0 a couple of times. When I first got here it was just frozen. I rebooted it and then ran a model (x1ios). The machine froze the second time I ran ./killx1ios. I've rebooted it again.
For context, there's a stand-alone cymac test system running at the 40m. It's not hooked up to anything, except for just being on the martian network (it's not currently mounting any 40m CDS filesystems, for instance). The machine is temporarily between the 1Y4 and 1Y5 racks.
I had to restart the elog again.
At this point, I'm going to try to get one of the GC guys to install gdb on nodus and run the elog in the debugger; that way, when it crashes the next time, I'll have some error output I can send back to the developer and ask why it's crashing there.
The front ends seem to have different gps timestamps on the data than the frame builder has when receiving them.
One theory is that we have been doing SVN checkouts of the code for the front ends fairly regularly (once a week or every two weeks), but the frame builder has not been rebuilt for about a month.
Alex is currently rebuilding the frame builder with the latest code changes.
It also suggests I should try rebuilding the frame builder on a semi-regular basis as updates come in.
(Report on Aug 12, 2022)
We went around the lab for the final check. Here are the additional notes.
I declare that now we are ready for the power outage.
Recent status of SOSs:
We completed one of the suspensions (ITMY).
ITMX: 6 Magnets, standoffs, and guide rod glued / balance to be confirmed / needs to be baked
ITMY: 6 Magnets, standoffs, and guide rod glued / balance confirmed / needs to be baked
SRM: 6 Magnets, one standoff, and guide rod glued / waiting for release from the gluing fixture.
PRM: one standoff and guide rod glued / waiting for magnet gluing.
We think we solved all the problems for hanging the suspensions.
--- Magnet gluing fixture ---
--- Suspending the mirror ---
[yehonathan, anchal, paco]
Yesterday around 9:30 pm, we centered the BS, ITMY, ETMY, ITMX, and ETMX oplevs (in that order) on their respective QPDs by turning the last mirror before each QPD. We did this after running the ASS dither for the XARM/YARM configurations to use as the alignment reference, and in preparation for PRFPMI lock acquisition, which we had to stop due to an earthquake around midnight.
Late elog. Original time 08/02/2021 21:00.
I locked both arms and ran ASS to reach optimum alignment. ETMY PIT > 10 urad, ITMX P > 10 urad, and ETMX P < -10 urad; everything else was OK (absolute value less than 10 urad). I recentered these three.
Then I locked PRMI, ran ASS on PRCL and MICH, and checked the BS and PRM alignment. They were also less than 10 urad in absolute value.
I recently realized that the PLL is only using about 20% of the available actuation range of the AUX PZT. The +/-10 V control signal from the LB1005 is being directly inputted into the fast AUX PZT channel, which has an input range of +/-50 V.
I recommend installing a PZT driver (amplifier) between the controller and the laser to use the full available actuator range. For cavity scans, this will increase the available sweep range from +/-50 MHz to +/-250 MHz. This has a unique advantage even if slow temperature feedback is also implemented. To sample faster than the timescale of most of the angular noise, scans generally need to be made with a total sweep time <1 sec. This is faster than the PLL offset can be offloaded via the slow temperature control, so the only way to scan more than 100 MHz in one measurement is with a larger dynamic range.
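The arithmetic behind those numbers (the MHz/V slope is just what is implied by the quoted +/-10 V to +/-50 MHz correspondence):

```python
# PZT actuation range bookkeeping for the AUX PLL (numbers from the
# text; the MHz/V slope is inferred from +/-10 V <-> +/-50 MHz).
v_control = 10.0                 # LB1005 output range [+/- V]
v_pzt = 50.0                     # fast PZT input range [+/- V]
mhz_per_v = 50.0 / v_control     # inferred actuation slope [MHz/V]

used_fraction = v_control / v_pzt        # fraction of PZT range actually driven
sweep_now = v_control * mhz_per_v        # current sweep range [+/- MHz]
sweep_full = v_pzt * mhz_per_v           # range with a driver/amplifier [+/- MHz]
print(used_fraction, sweep_now, sweep_full)  # -> 0.2 50.0 250.0
```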
I reconfigured the MC reflection path for low power. This meant the following changes:
Note: I think even the pick-off power for WFS1 and WFS2 is too low. The IOO WFS alignment does not work properly at such low light levels; I tried running the WFS loop for the IMC and it just took the cavity out of lock. So for the low power scenario, we will keep the WFS loops OFF.
" We found that IPANG was not on its photodiode, but determined that it was centered on all of the in-vac mirrors, and that it was just a little bit of steering on the ETMY end out-of-vac table that needed to be done."
Manasa took photos of all test mass chambers and the BS chamber, so we can keep up-to-date CAD drawings.
Oplevs and IPPOS/IPANG are being centered as I type. Manasa and Ayaka are moving the lens in front of IPANG such that we have a slightly larger beam on the QPD.
The lens in front of IPANG on the out-of-vac table was moved to get a larger beam giving reasonable signals at the QPD.
IPPOS did not need much adjustment and was happy at the center of the QPD.
All oplevs but the ETMY were close to the center. I had to move the first steering mirror about half an inch on the out-of-vac table to catch the returning oplev beam from ETMY and direct it to the oplev PD.
* We have taken reasonable amount of in-vac pictures of ETM, ITM and BS chambers to update the CAD drawing.
Rather than make a new elog post every time I move something, I'm going to just keep updating this Google spreadsheet, which ought to republish every time I change it. It's already got everything I've done for the past week-ish. The spreadsheet can be accessed here, as a website, or here, as a pdf. I will still post something nightly so that you don't have to search for this post, but I wanted to be able to provide more-or-less real-time information on where things are without carpet-bombing the elog.
Thus far, the software needed for the Magewell video encoder has been successfully installed on Donatella. OBS studio has also been installed and works correctly. OBS will be the video recording software that can be interfaced via command line once the SDI video encoder starts working. (https://github.com/muesli/obs-cli)
So far, the camera cannot be connected to the Magewell encoder. The encoder continues to show a pulsing error light that indicates "no signal" or "signal not locked". I have begun testing on a secondary camera connected directly to the Magewell encoder, with similar errors. This may be resolved once more information about the camera and its specifications/resolution is uncovered. At this time I have not found any details on the LCL-902K by Watec that was given to me by Koji. I will look next into the model used in the 40m.
Connect any video signal supported by the adapter. Use Windows / Mac or any other OS. If it keeps complaining, contact Magewell support.
We went this evening in search of a beat note signal between the Xarm transmitted green light and the PSL doubled green light.
First, we removed our new ETMX camera from the other day (we left the mount, so it should be easy to put back). We left the test masses exactly where they had been while flashing for IR, so even though we can no longer confirm it, we expect that the IR beam axis hasn't changed. We used the steering mirror on the end table to align the green beam to the cavity. We turned on the loop to lock the end laser to the cavity, and achieved green lock of the arm.
Then we went to the PSL table to overlap the arm transmitted light with the PSL doubled light. We made a few changes to the optics that take the arm transmitted light over to the PD. We found that the arm transmitted light was very high, so we changed from having one steering mirror to having 3 (for table space / geometry reasons we needed 3, not just 2) in order to lower the beam axis. We also found that the spot size of the arm transmitted beam was ~2x too small, so we changed the mode matching telescope from a 4x reducer to a 2x reducer by changing the 2nd lens from f=50mm to f=100mm (the first lens is f=200mm). We made the arm transmitted beam and the PSL green beam overlap, but we saw no peak on the spectrum analyzer.
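The telescope change above is just the focal-length ratio at work:

```python
# Two-lens telescope: the beam size reduction factor is f1/f2.
# Swapping the second lens from f=50 mm to f=100 mm halves it.
f1 = 200.0         # first lens focal length [mm]
print(f1 / 50.0)   # old reducer, f2 = 50 mm  -> 4x
print(f1 / 100.0)  # new reducer, f2 = 100 mm -> 2x
```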
We checked the temperature of the PSL and end lasers, and determined that we needed to adjust the set temp of the end laser. However, we still didn't find any beat note.
We then tweaked the temperature of the doubling oven at the end station, to maximize the power transmitted, since Kiwamu said that that had worked in the past. Alas, no success tonight.
We're stopping for the evening, with the success of reacquiring green lock of the Xarm.
Now that we have all Satellite boxes working again, I've been working on trying to recover the DRMI 1f locking over the last couple of days, in preparation for getting back to DRFPMI locking. Given that the AS light levels have changed, I had to change the whitening gains on the AS55 and AS110 channels to take this into account. I found that I also had to tune a number of demod phases to get the lock going. I had some success with the locks tonight, but noticed that the lock would be lost when the MICH/SRCL boosts were triggered ON - when I turned off the triggering for these, the lock would hold for ~1min, but I couldn't get a loop shape measurement in tonight.
As an aside, over the last couple of months we have noticed glitchy behaviour in the ITMY UL shadow sensor PD output - qualitatively similar to what was seen in the PRM sat. box. Since I was able to get that one working again, I did a similar analysis on the ITMY sat. box today with the help of Ben's tester box; however, unlike for the PRM sat. box, I found nothing obviously wrong. Looking back at the trend, the glitchy behaviour seems to have stopped some days ago - the UL channel has been well behaved over the last week. Not sure what has changed, but we should keep an eye on this...
We recovered the LO beam on the BHD port. To do this, we first tried reverting to a previously "good" alignment but couldn't see the LO beam hit the sensor. Then we checked the ITMY table and couldn't see the LO beam there either, even though the AS beam was coming out fine. The misalignment is likely due to recent changes in the injection alignment on TT1, TT2, PR2, and PR3, as well as ITMX and ITMY. We remembered that the LO path is quite constrained in YAW, so we started a random search by steering LO1 in YAW by ~1000 counts in the negative direction, at which point we saw the beam come out of the ITMY chamber.
We proceeded to walk LO1-LO2, mostly in PIT, to try to offload the huge alignment offset from LO2 to LO1, but this resulted in the LO beam disappearing or becoming dimmer (from some clipping somewhere). This is WIP, and we shall continue this alignment offload task at least through tomorrow; if we can't offload significantly, we will have to move forward with this alignment. Attachment #1 shows the end result of today's alignment.
The following items, which went unnoticed yesterday, were recovered today:
1. The ETMX and ETMY 'misalign' scripts weren't running. Troubleshooting showed that the slow machines c1auxex and c1auxey weren't responding; the machines were reset.
2. PRM oplev gains were zero. Gain values were set by looking back at the burt files.
3. X end PZT power supplies were turned ON and set to 100V.
4. X end doubler temperature was reset to the last optimal value on elog (36.35 deg).
Some hitches that should be looked into:
1. Check: ASS for the X arm seems to be not quite doing its job. ETMX has to be moved using the sliders to obtain maximum TRX, and the arm alignment was seen to be drifting.
2. Check: Status of other slow machines and burt restore whichever needs one.
ETMX ASC output was turned off for whatever reason. Switched it on, ASS is fine.
We both happened to come by today to fix things up.
When I arrived, the PMC was locked to a 01 mode, which I fixed. The PMC transmission is still worryingly low. MC locked happily.
ETMX was getting odd kicks, the kind where a DC shift would occur suddenly and then go away a few moments later. I turned off all dynamic coil outputs, and looked at the MON output of the SOS driver with a scope to try to see if the DAC or dewhitening was glitching, but didn't see anything... Meanwhile, Jenne fiddled with the TTs until we got beams on POP and REFL. (EDIT, JCD: Useful strategies were to put an excitation onto TT2, and move TT1 until the scattered beam in the chamber was moving at the excitation frequency. Find the edges of TT2 by finding where the scattered light stops seeing the excitation, and center the beam on TT2. By then, I think I saw the beam on the PRM face camera. Then, put a temporary camera looking at the face of PR2. Using TT2 to center here got us the beam on the POP camera.)
We then walked the PRM and the TTs around to keep those two camera beams and get the PRM oplev beam back on its QPD. At this point, ITMX was misaligned (by us), and ITMY was aligned to get some recycled flashes into the Y-arm. The Y-arm was locked to green, and we poked the TTs to get better IR flashes. Misaligning the PRM, we had Y-arm flashes of ~0.7. From there, the Michelson and then the X-arm were roughly aligned. Both arms were seeing flashes of about 0.7, and the MICH fringes on the AS port look nice.
Frustratingly, the SUS->LSC communication for TRY and TRX isn't working, and could not be fixed by any combination of model or front-end restarting... Thus we haven't been able to actually lock the arms and run ASS. THIS IS VERY FRUSTRATING.
Additionally, at the point where we were getting light back into the Y-arm, the ITMX kicks that were seen on Friday were happening again, tripping the watchdog. Also, something in the Y-arm cavity is getting intermittently pushed around, as can be seen by the green lock suddenly wandering off. All of these suspension shenanigans seem to be independent of oplev damping.
It troubles me that this whole situation is fairly similar to the last time we lost the input pointing (ELOG 10088)
In any case, we feel that we have gotten the IFO alignment to a lockable state.
It's great that you guys found the beam.
Yes, the ITMX kicks and the lost communication for TRY were the motivation for my CDS rebooting.
[Anchal, JC, Radhika, Paco]
JC reported that the power went out twice at the 40m today, around 4:17 pm.
We mostly followed the instructions from this page to recover today. Here are some highlights:
Paco reported that the ad hoc fan on the back of the main laser controller slid down and broke. There might be contamination on the table from broken fan parts. Paco replaced this fan with another, larger fan. I think it is time to fix this fan onto the controller for good.
The main volume valve shut down because c1vac turned off. We restored the vacuum state by simply opening this valve again. Everything else followed the vacuum resetting steps, up to the final step.
The burt restore for the mode cleaner board settings does not bring back the state of the channels C1:IOO-MC_FASTSW and C1:IOO-MC_POL. This has puzzled us in the past too when trying to get the mode cleaner to lock after a power outage recovery. I have now added these channels and their required states to the autolocker settings, so that the autolocker always starts in the correct state. It seems I committed with Yuta's name as the author.
[Steve Koji Kiwamu]
- The storage on linux1 and then linux1 itself (in this order) were turned on at 10:30am
- Kiwamu restored the vacuum system
=> opened V4, started TP1 (maglev) and opened V1.
The pressure went down from 2.5 mTorr to the normal level in about 20 minutes.
- A regular fsck of linux1 was completed at 5pm
- Nodus was turned on. Mounting /cvs/cds succeeded
- The control room computers were turned on
- The rack power for FB turned on, FB and megatron started.
- HVs on 1X1 were turned on. These are not vacuum HVs, but are used only in air
- Turned on the RF generation box and the RF distribution box
- burtrestore slow machines (c1psl, c1susaux, c1iool0, c1iscaux, c1iscaux2, c1auxex, c1auxey)
Found the Marconi for the 11 MHz source had been reset to its default.
=> reset the parameters. f = 11.065910 MHz (see #5530) amp = 13 dBm.
Interferometer became lockable. I checked the X/Y arm lock, and MICH lock, they all are fine.
I failed to start Rana's favorite ancient desktop computer at the vac rack. He has to baby this old beast just a little bit more.
Vacuum status: Vac Normal was reached through Farfalla. The RGA was switched back to the IFO, and the annuli are being pumped now.
V1 was closed for about a day and the pressure reached ~2.8 mTorr in the IFO. This leak rate plus outgassing is normal.
The ref cavity 5000V was turned on.
The only thing that still has to be done is to restart the RGA. I forgot to turn it off on Friday.
As it turns out, Steve is not crazy in this particular instance: the vacuum computer, linux3, has some issues. I can log in just fine, but trying to open a terminal makes the CPU and RAM usage max out, and eventually the machine freezes.
Under KDE, I can open a terminal OK as root, but if I then try a 'su controls', the same issue happens. Let's wait for Jamie to fix this.
Restarted Apache on nodus using Yoichi's wiki instructions. SVN is back up.
The following is a message from the LIGO 40m Chief Recycling Officer:
Please get up off your (Alignment Stabilization Servo)es and recycle your bottles and cans! There is a recycling bin in the control room. Recent weeks have seen an increase in the number of bottles/cans thrown away in the regular garbage. This is not cool.
Going off some discussion we had at lunch today, here is my current knowledge of the state of cavity lengths.
Acknowledging that Koji changed the sideband modulation frequency recently, the ideal cavity lengths are (to the nearest mm):
When we last hand-measured the distances, after moving PR2, we found:
However, when I looked at the sideband splitting interferometrically, I found:
This is only 5mm from the hand measured value, so we can believe that the SRC length is between 5 and 6 cm too long. I'm building a MIST model to try and see what this may entail.
C'mon. This is just a 60 ppm change of the mod frequency from the nominal. How can it change the recycling cavity length by more than a cm?
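Koji's point is easy to check numerically: for a length set by a resonance condition, the fractional length change equals the fractional frequency change, so 60 ppm on a ~6.75 m cavity is well under a millimeter.

```python
# Fractional length change implied by a fractional mod-frequency
# change (resonance condition: L is proportional to 1/f_mod).
L_PRC = 6.7538        # ideal PRC length from this thread [m]
df_over_f = 60e-6     # 60 ppm modulation frequency change
dL = L_PRC * df_over_f
print(f"dL = {dL * 1e3:.2f} mm")  # -> dL = 0.41 mm
```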
This describes how the desirable recycling cavity lengths are affected by the phase of the sidebands at non-resonant reflection of the arms.
If we believe these numbers, L_PRC = 6.7538 [m] and L_SRC = 5.39915 [m].
Compare them with the measured numbers
You should definitely run MIST to see what is the optimal length of the RCs, and what is the effect of the given length deviations.
Koji correctly points out that I naïvely overlooked various factors. With a similar analysis to the wiki page, I get:
This means that:
Next step is to see how this may affect our ability to sense, and thereby control, the SRC when the arms are going.
MIST simulations and plots are in the attached zip.
I looked at the data from the day before yesterday (Oct 06) to see how large the recycling gain is.
X arm: (TRX_PRecycled) / (TRX_PRMmisaligned) * T_PRM = 83.1/0.943*0.07 = 6.17
Y arm: (TRY_PRecycled) / (TRY_PRMmisaligned) * T_PRM = 99.2/1.017*0.07 = 6.83
==> G_PR = 6.5 +/- 0.5 (oh...this estimation is so bad...)
From the recycling gain and the arm cavity reflectance, one can get the loss in the recycling cavity.
G_PR = T_PRM / (1-Sqrt(R_PRM * (1-L_PRC)*R_cav))^2
==> loss in the recycling cavity L_PRC: 0.009+/-0.009
(About 1% loss is likely in the recycling cavity)
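The inversion of that formula can be sketched as follows (this assumes a lossless PRM, so R_PRM = 1 - T_PRM; which measured R_cav value goes with which arm is my guess, so the per-arm numbers are only indicative of a percent-level loss, consistent with the quoted 0.009 +/- 0.009):

```python
import math

# Invert G_PR = T_PRM / (1 - sqrt(R_PRM * (1 - L_PRC) * R_cav))**2
# for the power recycling cavity loss L_PRC.
def prc_loss(G_PR, T_PRM=0.07, R_cav=0.875):
    R_PRM = 1.0 - T_PRM  # lossless-PRM assumption
    return 1.0 - (1.0 - math.sqrt(T_PRM / G_PR))**2 / (R_PRM * R_cav)

for G in (6.17, 6.5, 6.83):
    print(G, prc_loss(G))  # percent-level losses for the quoted gains
```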
Measured arm reflectivity R_cav: 0.875 +/- 0.005
Estimated round trip loss L_RT: 157ppm +/- 8ppm
Estimated finesse F: 1213+/-2
Measured arm reflectivity R_cav: 0.869 +/- 0.006
Estimated round trip loss L_RT: 166ppm +/- 8ppm
Estimated finesse F: 1211+/-2