  15367   Wed Jun 3 02:08:00 2020 | gautam | Update | LSC | Power buildup diagnostics

Attachments #1 and #2 are in the style of elog 15356, but with data from a more recent lock. It'd be nice to calibrate the ASDC channel (and in general all channels) into power units, so we have an estimate of how much sideband power we expect, and the rest can be attributed to carrier leakage to ASDC.

Based on Attachment #1, the PRG is ~19, and at times the arm transmission goes even higher. I'd say we are now in the regime where the uncertainty in the recycling cavity losses (beamsplitter clipping, maybe?) becomes important when using this information to constrain the arm cavity losses. I'm also not sure what to make of the asymmetry between TRX and TRY. Allegedly, the Y arm is supposed to be lossier.

Quote:

This is very interesting. Do you have the ASDC vs PRG (~ TRX or TRY) plot? That gives you insight into what is causing the low recycling gain.

Attachment 1: PRFPMIcorner_DC_1275190251_1275190551.pdf
Attachment 2: PRFPMIcorner_SB_1275190251_1275190551.pdf
  15019   Wed Nov 6 20:34:28 2019 | Koji | Update | IOO | Power combiner loss (EOM resonant box installed)

Gautam and I were talking about modulation and demodulation and wondered what the power-combining situation is for the triple-resonant EOM installed 8 years ago. We noticed that the current setup has an additional ~5 dB of loss associated with the 3-to-1 power combiner (Figure a).

N-to-1 broadband power combiners have an intrinsic loss of 10 log10(N). You can think about the reciprocal process, power splitting (Figure b): a 2 W input to a 2-port power splitter gives us two 1 W outputs. The opposite process is power combining, as shown in Figure c. In this case, the two identical signals are constructively added in the combiner, but the output is not 20 Vpk but 14 Vpk. Considering the linearity, when one of the ports is terminated, the output power is going to be half, so we expect a 27 dBm output for a 30 dBm input (Figure d). This fact is frequently overlooked, particularly when one combines signals at multiple frequencies (Figure e). We can avoid this kind of loss by using a frequency-dependent power combiner like a diplexer or a triplexer.
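A quick numerical sketch of the points above (not from the original entry; the 10 Vpk and 30 dBm values are the ones described for the figures):

```python
import math

def combiner_intrinsic_loss_db(n_ports):
    """Intrinsic loss of an ideal N-to-1 broadband power combiner: 10*log10(N)."""
    return 10 * math.log10(n_ports)

# 3-to-1 combiner in front of the triple-resonant EOM:
print(combiner_intrinsic_loss_db(3))        # ~4.8 dB, the ~5 dB quoted above

# Figure (c): two coherent 10 Vpk inputs to a 2-to-1 combiner add to
# sqrt(2)*10 ~ 14 Vpk at the output, not 20 Vpk.
print(math.sqrt(2) * 10)                    # ~14.1 Vpk

# Figure (d): with one input terminated, the output drops by the 3 dB
# intrinsic loss, so a 30 dBm input comes out at ~27 dBm.
print(30 - combiner_intrinsic_loss_db(2))   # ~27 dBm
```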

Attachment 1: power_combiner.pdf
  15955   Tue Mar 23 09:16:42 2021 | Paco, Anchal | Update | Computers | Power cycled C1PSL; restored C1PSL

So actually, it was the C1PSL channels that had died. We did the following to get them back:

  • We went to this page and tried the telnet procedure. But it was unable to find the host.
  • So we followed the next advice. We went to the 1X1 rack and manually hard shut off C1PSL computer by holding down the power button until the LEDs went off.
  • We waited for 5-7 seconds and switched it back on.
  • By the time we were back in control room, the C1PSL channels were back online.
  • The mode cleaner however was struggling to keep the lock. It was going in and out of lock.
  • So we followed the next advice and did a burt restore, which ran the following command:
    burtwb -f /opt/rtcds/caltech/c1/burt/autoburt/snapshots/2021/Mar/22/17:19/c1psl.snap -l /tmp/controls_1210323_085130_0.write.log -o /tmp/controls_1210323_085130_0.nowrite.snap -v 
  • Now the mode cleaner was locked, but we found that the input switches of the C1:IOO-WFS1_PIT and C1:IOO-WFS2_PIT filter banks were off, which meant that only the YAW sensors were in the loop.
  • We went back in dataviewer and checked when these channels were shut down. See attachments for time series.
  • It seems this happened yesterday, March 22nd near 1:00 pm (20:00:00 UTC). We can't find any mention of anyone else doing it on elog and we left by 12:15pm.
  • So we shut down the PSL shutter (C1:PSL-PSL_ShutterRqst) and switched off MC autolocker (C1:IOO-MC_LOCK_ENABLE).
  • Switched on C1:IOO-WFS1_PIT_SW1 and C1:IOO-WFS2_PIT_SW1.
  • Turned back on PSL shutter (C1:PSL-PSL_ShutterRqst) and MC autolocker (C1:IOO-MC_LOCK_ENABLE).
  • Mode cleaner locked back easily and now is keeping lock consistently. Everything looks normal.
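For reference, the shutter/autolocker toggles and switch checks described above are ordinary EPICS reads and writes, so they can be scripted. A minimal sketch using pyepics (assuming pyepics is available on the workstation; the channel names are the ones quoted in this entry, and note that the *_SW1 records are RCG filter-module bit-words, so the input-enable bit would need to be set explicitly rather than writing a literal 1):

```python
from epics import caget, caput

# Close the PSL shutter and disable the MC autolocker (both are simple on/off records).
caput('C1:PSL-PSL_ShutterRqst', 0)
caput('C1:IOO-MC_LOCK_ENABLE', 0)

# ... re-enable the WFS PIT input switches here; C1:IOO-WFS1_PIT_SW1 and
# C1:IOO-WFS2_PIT_SW1 are bit-words, so set the appropriate input bit ...

# Read back the switch words to confirm the change stuck.
print(caget('C1:IOO-WFS1_PIT_SW1'), caget('C1:IOO-WFS2_PIT_SW1'))

# Reopen the shutter and re-enable the autolocker.
caput('C1:PSL-PSL_ShutterRqst', 1)
caput('C1:IOO-MC_LOCK_ENABLE', 1)
```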
Attachment 1: MCWFS1and2PITYAW.pdf
Attachment 2: MCWFS1and2PITYAW_Zoomed.pdf
  13034   Fri Jun 2 12:32:16 2017 | gautam | Update | General | Power glitch

Looks like there was a power glitch at around 10am today.

All frontends, FB, Megatron, and Optimus were offline. Chiara reports an uptime of 666 days, so it looks like its UPS works fine. PSL was tripped, probably the end lasers too (yet to check). Slow machines seem alright (they respond to ping, and I can also telnet into them).

Since all the frontends have to be re-started manually, I am taking this opportunity to investigate some CDS issues, like the lack of a dmesg log file on some of the frontends. So the IFO will be offline for some time.

  13035   Fri Jun 2 16:02:34 2017 | gautam | Update | General | Power glitch

Today's recovery seems to be a lot more complicated than usual.

  • The vertex area of the lab is pretty warm - I think the ACs are not running. The wall switch-box (see Attachment #1) shows some red lights which I'm pretty sure are usually green. I pressed the push-buttons above the red lights; hopefully this fixed the AC and the lab will cool down soon.
  • Related to the above - C1IOO has a bunch of warning orange indicator lights ON that suggest it is feeling the heat. Not sure if that is why, but I am unable to bring any of the C1IOO models back online - the rtcds compilation just fails, after which I am unable to ssh back into the machine as well.
  • C1SUS was problematic as well. I found that the expansion chassis was not powered. Fortunately, this was fixed by simply switching to the one free socket on the power strip that powers a bunch of stuff on 1X4 - this brought the expansion chassis back alive, and after a soft reboot of c1sus, I was able to get these models up and running. Fortunately, none of the electronics seem to have been damaged. Perhaps it is time for surge-protecting power strips inside the lab area as well (if they aren't already)? 
  • I was unable to successfully resolve the dmesg problem alluded to earlier. Looking through some forums, I gather that the output of dmesg should be written to a file in /var/log/. But no such file exists on any of our 5 front-ends (but it does on Megatron, for example). So is this way of setting up the front end machines deliberate? Why does this matter? Because it seems that the buffer which we see when we simply run "dmesg" on the console gets periodically cleared. So sometime back, when I was trying to verify that the installed DACs are indeed 16-bit DACs by looking at dmesg, running "dmesg | head" showed a first line that was written well after the last reboot of the machine. Anyway, this probably isn't a big deal, and I also verified during the model recompilation that all our DACs are indeed 16-bit.
  • I was also trying to set up the Upstart processes on megatron such that the MC autolocker and FSS slow control scripts start up automatically when the machine is rebooted. But since C1IOO isn't co-operating, I wasn't able to get very far on this front either...

So current status is that all front-end models except those hosted on C1IOO are back up and running. Further recovery efforts in progress.  

GV Jun 5 6pm: From my discussion with jamie, I gather that the fact that the dmesg output is not written to file is because our front-ends are diskless (this is also why the ring buffer, which is what we are reading from when running "dmesg", gets cleared periodically)

 

Quote:

Looks like there was a power glitch at around 10am today.

All frontends, FB, Megatron, Optimus were offline. Chiara reports an uptime of 666 days so looks like its UPS works fine. PSL was tripped, probably the end lasers too (yet to check). Slow machines seem alright (Responds to ping, and I can also telnet into them).

Since all the frontends have to be re-started manually, I am taking this opportunity to investigate some cds issues like the lack of a dmesg log file on some of the frontends. So the IFO will be offline for sometime.

 

Attachment 1: IMG_7399.JPG
  13036   Fri Jun 2 22:01:52 2017 | gautam | Update | General | Power glitch - recovery

[Koji, Rana, Gautam]

Attachment #1 - CDS status at the end of today's efforts. There is one red indicator light showing an RFM error which couldn't be fixed by running the "global diag reset" or "mxstream restart" scripts, but getting to this point was a journey so we decided to call it for today.


The state this work was started in was as indicated in the previous elog - c1ioo wasn't ssh-able, but was responding to ping. We then did the following:

  1. Killed all models on all four other front ends other than c1ioo. 
  2. Hard reboot for c1ioo - at this point, we could ssh into c1ioo. With all other models killed, we restarted the c1ioo models one by one. They all came online smoothly.
  3. We then set about restarting the models on the other machines.
    • We started with the IOP models, and then restarted the others one by one
    • We then tried running "global diag reset", "mxstream restart" and "telnet fb 8087 -> shutdown" to get rid of all the red indicator fields on the CDS overview screen.
    • All models came back online, but the models on c1sus indicated a DC (data concentrator?) error. 
  4. After a few minutes, I noticed that all the models on c1iscex had stalled
    • dmesg pointed to a synchronization error when trying to initialize the ADC
    • The field that normally pulses at ~1pps on the CDS overview MEDM screen when the models are running normally was stuck
    • Repeated attempts to restart the models kept throwing up the same error in dmesg 
    • We even tried killing all models on all other frontends and restarting just those on c1iscex as detailed earlier in this elog for c1ioo - to no avail.
    • A walk to the end station to do a hard reboot of c1iscex revealed that both green indicator lights on the slave timing card in the expansion chassis were OFF.
    • The corresponding lights on the Master Timing Sequencer (which supplies the synchronization signal to all the front ends via optical fiber) were also off.
    • Some time ago, Eric and I had noticed a similar problem. Back then, we simply switched the connection on the Master Timing Sequencer to the one unused available port, and this fixed the problem. This time, switching the fiber connection on the Master Timing Sequencer had no effect.
    • Power cycling the Master Timing Sequencer had no effect
    • However, switching the optical fiber connections going to the X and Y ends led to the green LED on the suspect port on the Master Timing Sequencer (originally the X end fiber was plugged in here) turning back ON when the Y end fiber was plugged in.
    • This suggested a problem with the slave timing card, and not the master. 
  5. Koji and I then did the following at the X-end electronics rack:
    • Shut down c1iscex and toggled the switches on the front and back of the expansion chassis
    • Disconnected AC power from the rear of c1iscex as well as the expansion chassis. This meant all LEDs in the expansion chassis went off, except a single one labelled "+5AUX" on the PCB - to make this go off, we had to disconnect a jumper on the PCB (see Attachment #2), and then toggle the power switches on the front and back of the expansion chassis (with the AC power still disconnected). Finally all lights were off.
    • Confident we had completely cut all power to the board, we then started re-connecting AC power. First we re-started the expansion chassis, and then re-booted c1iscex.
    • The lights on the slave timing card came on (including the one that pulses at ~1pps, which indicates normal operation)!
  6. Then we went back to the control room, and essentially repeated bullet points 2 and 3, but starting with c1iscex instead of c1ioo.
  7. The last twist in this tale was that though all the models came back online, the DC errors on c1sus models persisted. No amount of "mxstream restart", "global diag reset", or restarting fb would make these go away.
  8. Eventually, Koji noticed that there was a large discrepancy in the gpstimes indicated on c1x02 (the IOP model on c1sus), compared to all the other IOP models (even though the PDT displayed was correct). There were also a large number of IRIG-B errors indicated on the same c1x02 status screen, and the "TIM" indicator in the status word was red.
  9. Turns out, running ntpdate before restarting all the models somehow doesn't sync the gps time - so this was what was causing the DC errors. 
  10. So we did a hard reboot of c1sus (and for good measure, repeated the bullet points of 5 above on c1sus and its expansion chassis). Then, we tried starting the c1x02 model without running ntpdate first (on startup, there is an 8 hour mismatch between the actual time in Pasadena and the system time - but system time is 8 hours behind, so it isn't even somehow syncing to UTC or any other real timezone?)
    • Model started up smoothly
    • But there was still a 1 second discrepancy between the gpstime on c1x02 and all the other IOPs (and the 8 hour discrepancy between displayed PDT and actual time in Pasadena)
    • So we tried running ntpdate after starting c1x02 - this finally fixed the problem, gpstime and PDT on c1x02 agreed with the other frontends and the actual time in Pasadena.
    • However, the models on c1lsc and c1ioo crashed
    • So we restarted the IOPs on both these machines, and then the rest of the models.
  11. Finally, we ran "mxstream restart", "global diag reset", and restarted fb, to make the CDS overview screen look like it does now.

Why does ntpdate behave this way? And only on one of the frontends? And what is the remaining RFM error? 

Koji then restarted the IMC autolocker and FSS slow processes on megatron. The IMC locked almost immediately. The MC2 transmon indicated a large shift in the spot position, and also the PMC transmission is pretty low (while the lab temperature equilibrates after the AC being off during peak daytime heat). So the MC transmission is ~14500 counts, while we are used to more like 16,500 counts nowadays.

Re-alignment of the IFO remains to be done. I also did not restart the end lasers, or set up the Marconi with nominal params. 

Attachment #3 - Status of the Master Timing Sequencer after various reboots and power cycling of front ends and associated electronics.

Attachment #4 - Warning lights on C1IOO

Quote:

Today's recovery seems to be a lot more complicated than usual.

So current status is that all front-end models except those hosted on C1IOO are back up and running. Further recovery efforts in progress.  

 

Attachment 1: power_glitch_recovery.png
Attachment 2: IMG_7406.JPG
Attachment 3: IMG_7407.JPG
Attachment 4: IMG_7400.JPG
  13038   Sun Jun 4 15:59:50 2017 | gautam | Update | General | Power glitch - recovery

I think the CDS status is back to normal.

  • Bit 2 of the C1RFM status word was red, indicating something was wrong with "GE FANUC RFM Card 0".
  • You would think the RFM errors occur in pairs, in C1RFM and in some other model - but in this case, the only red light was on c1rfm.
  • While trying to re-align the IFO, I noticed that the TRY time series flatlined at 0 even though I could see flashes on the TRANSMON camera.
  • Quick trip to the Y-End with an oscilloscope confirmed that there was nothing wrong with the PD.
  • I crawled through some elogs, but didn't really find any instructions on how to fix this problem - the couple of references I did find to similar problems reported red indicator lights occurring in pairs on two or more models, and the problem was then fixed by restarting said models.
  • So on a hunch, I restarted all models on c1iscey (no hard or soft reboot of the FE was required)
  • This fixed the problem
  • I also had to start the monit process manually on some of the FEs like c1sus. 

Now IFO work like fixing ASS can continue...

Attachment 1: powerGlitchRecovery.png
  4894   Tue Jun 28 07:46:54 2011 | Suresh | Update | IOO | Power incident on REFL11 and REFL55

I measured the power incident on REFL11 and REFL55. Steve was concerned that it is too high. If we consider this elog, the incident power levels were REFL11: 30 mW and REFL55: 87 mW (assuming an efficiency of ~0.8 A/W @ 1064 nm for the C30642 PD). However, there is currently a combination of a polarising BS and half-waveplate with which we have attenuated the power incident on the REFL PDs. We now have (with the PRM misaligned):

REFL11:  Power incident = 7.60 mW ;  DC out = 0.330 V  => efficiency = 0.87 A/W

REFL55:  Power incident = 23 mW ;  DC out = 0.850 V  => efficiency = 0.74 A/W

and with the PRM aligned:

REFL11:  DC out = 0.35 V  => 8 mW is incident

REFL55: DC out = 0.975 V  => 26 mW is incident

These power levels may go up further when everything is working well.

The max rated photo-current is 100mA => max power 125mW @0.8 A/W.
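The efficiency and power numbers above are consistent with reading the DC output across an effective 50 Ω transimpedance; that 50 Ω figure is my assumption (it is not stated in this entry), so treat the sketch below as a consistency check only:

```python
R_DC = 50.0  # ohm; assumed effective DC transimpedance of the REFL PD DC output

def responsivity_A_per_W(v_dc, p_inc_mW):
    """Photocurrent divided by incident power."""
    return (v_dc / R_DC) / (p_inc_mW * 1e-3)

print(responsivity_A_per_W(0.330, 7.60))   # REFL11: ~0.87 A/W
print(responsivity_A_per_W(0.850, 23.0))   # REFL55: ~0.74 A/W

def incident_power_mW(v_dc, resp_A_per_W):
    """Invert the same relation: incident power from the DC voltage."""
    return (v_dc / R_DC) / resp_A_per_W * 1e3

print(incident_power_mW(0.350, 0.87))      # ~8 mW with the PRM aligned
print(incident_power_mW(0.975, 0.74))      # ~26 mW with the PRM aligned

# Rated limit: 100 mA of photocurrent at 0.8 A/W corresponds to ~125 mW incident.
print(100e-3 / 0.8 * 1e3)                  # mW
```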

 

  4896   Tue Jun 28 10:11:13 2011 | steve | Update | IOO | Power incident on REFL11 and REFL55

Quote:

I measured the power incident on REFL11 and REFL55.  Steve was concerned that it is too high.  If we consider this elog the incident power levels were REFL11: 30 mW and REFL55: 87 mW. (assuming efficiency of ~ 0.8 A/W @1064nm for the C30642 PD).  However, currently there is a combination of Polarising BS and Half-waveplate with which we have attenuated the power incident on the REFL PDs.  We now have (with the PRM misaligned):

REFL11:  Power incident = 7.60 mW ;  DC out = 0.330 V  => efficiency = 0.87 A/W

REFL55:  Power incident = 23 mW ;  DC out = 0.850 V  => efficiency = 0.74 A/W

and with the PRM aligned::

REFL11:  DC out = 0.35 V  => 8 mW is incident

REFL55: DC out = 0.975 V  => 26 mW is incident

These power levels may go up further when everything is working well.

The max rated photo-current is 100mA => max power 125mW @0.8 A/W.

 

What is the power level on the MC_REFL PDs and WFS when the MC is not locked?

  3151   Wed Jun 30 23:03:46 2010 | rana | Configuration | IOO | Power into MC restored to max

Kiwamu, Nancy, and I restored the power into the MC today:

  1. Changed the 2" dia. mirror ahead of the MC REFL RFPD back to the old R=10% mirror.
  2. Since the MC axis has changed, we had to redo the alignment of the optics in that area. Nearly all optics had to move by 1-2 cm.
  3. 2 of the main mounts there had the wrong handedness (e.g. the U100-A-LH instead of RH). We rotated them to some level of reasonableness.
  4. Tuned the penultimate waveplate on the PSL (ahead of the PBS) to maximize the transmission to the MC and to minimize the power in the PBS reject beam.
  5. MC_REFL DC  =1.8 V.
  6. Beams aligned on WFS.
  7. MC mirrors alignment tweaked to maximize transmission. In the morning we will check the whole A2L centering again. If it's OK, fine. Otherwise, we'll restore the bias values and align the PSL beam to the MC via the periscope.
  8. waveplates and PBS in the PSL were NOT removed.
  9. MC TRANS camera and QPD have to be recentered after we are happy with the MC axis.
  10. MC REFL camera has to be restored.
  11. WFS measurements will commence after the SURF reports are submitted.

We found many dis-assembled Allen Key sets. Do not do this! Return tools to their proper places or else you are just wasting everyone's time!

 

  4103   Tue Jan 4 02:58:53 2011 | Jenne | Update | IOO | Power into Mode Cleaner increased

What was the point:

I twiddled with several different things this evening to increase the power into the Mode Cleaner.  The goal was to have enough power to be able to see the arm cavity flashes on the CCD cameras, since it's going to be a total pain to lock the IFO if we can't see what the mode structure looks like.

Summed-up list of what I did:

* Found the MC nicely aligned.  Did not ever adjust the MC suspensions.

* Optimized MC Refl DC, using the old "DMM hooked up to DC out" method.

* Removed the temporary BS1-1064-33-1025-45S that was in the MC refl path, and replaced it with the old BS1-1064-IF-2037-C-45S that used to be there.  This undoes the temporary change from elog 3878.  Note however, that Yuta's elog 3892 says that the original mirror was a 1%, not 10% as the sticker indicates. The temporary mirror was in place to get enough light to MC Refl while the laser power was low, but now we don't want to fry the PD.

* Noticed that the MCWFS path is totally wrong.  Someone (Yuta?) wanted to use the MCWFS as a reference, but the steering mirror in front of WFS1 was switched out, and now no beam goes to WFS2 (it's blocked by part of the mount of the new mirror). I have not yet fixed this, since I wasn't using the WFS tonight, and had other things to get done.  We will need to fix this.

* Realigned the MC Refl path to optimize MC Refl again, with the new mirror.

* Replaced the last steering mirror on the PSL table before the beam goes into the chamber from a BS1-1064-33-1025-45S to a Y1-45S.  I would have liked a Y1-0deg mirror, since the angle is closer to 0 than 45, but I couldn't find one.  According to Mott's elog 2392 the CVI Y1-45S is pretty much equally good all the way down to 0deg, so I went with it.  This undoes the change of keeping the laser power in the chambers to a nice safe ~50mW max while we were at atmosphere.

* Put the HWP in front of the laser back to 267deg, from its temporary place of 240deg.  The rotation was to keep the laser power down while we were at atmosphere.  I put the HWP back to the place that Kevin had determined was best in his elog 3818.

* Tried to quickly align the Xarm by touching the BS, ITMX and ETMX.  I might be seeing IR flashes (I blocked the green beam on the ETMX table so I wouldn't be confused.  I unblocked it before finishing for the night) on the CCD for the Xarm, but that might also be wishful thinking.  There's definitely something lighting up / flashing in the ~center of ETMX on the camera, but I can't decide if it's scatter off of a part of the suspension tower, or if it's really the resonance.  Note to self:  Rana reminds me that the ITM should be misaligned while using BS to get beam on ETM, and then using ETM to get beam on ITM.  Only then should I have realigned the ITM.  I had the ITM aligned (just left where it had been) the whole time, so I was making my life way harder than it should have been.  I'll work on it again more today (Tuesday). 

What happened in the end:

The MC Trans signal on the MC Lock screen went up by almost an order of magnitude (from ~3500 to ~32,000).  When the count was near ~20,000 I could barely see the spot on a card, so I'm not worried about the QPD.  I do wonder, however, if we are saturating the ADC. Suresh changed the transimpedance of the MC Trans QPD a while ago (Suresh's elog 3882), and maybe that was a bad idea? 

Xarm not yet locked. 

Can't really see flashes on the Test Mass cameras. 
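On the ADC-saturation question: a back-of-the-envelope check, assuming the MC Trans QPD sum is digitized by a 16-bit ADC with ±10 V mapped to ±32768 counts (both of those numbers are my assumptions, not taken from this entry):

```python
import math

adc_full_scale_counts = 2**15    # 32768 counts, assumed 16-bit ADC
adc_full_scale_volts = 10.0      # assumed +/-10 V input range

counts_now = 32000               # MC Trans level quoted above after the power increase
volts_now = counts_now / adc_full_scale_counts * adc_full_scale_volts
headroom_db = 20 * math.log10(adc_full_scale_counts / counts_now)

print(volts_now)     # ~9.8 V equivalent at the ADC input
print(headroom_db)   # ~0.2 dB of headroom, i.e. effectively at the rail
```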

  4104   Tue Jan 4 11:06:32 2011 | Koji | Update | IOO | Power into Mode Cleaner increased

- Previously MC TRANS was 9000~10000 when the alignment was good. This means that the MC TRANS PD is saturated if the full power is given.
==> Transimpedance must be changed again.

- Y1-45S has 4% transmission. We definitely want to use a Y1-0 or something else. The replaced mirror must be around somewhere;
I think Suresh replaced it, so he must remember where it is.

- We must confirm the beam pointing on the MC mirrors with A2L.

- We must check the MCWFS path alignment and configuration.

- We should take the picture of the new PSL setup in order to update the photo on wiki.

Quote:

What was the point:

I twiddled with several different things this evening to increase the power into the Mode Cleaner.  The goal was to have enough power to be able to see the arm cavity flashes on the CCD cameras, since it's going to be a total pain to lock the IFO if we can't see what the mode structure looks like.

Summed-up list of what I did:

* Found the MC nicely aligned.  Did not ever adjust the MC suspensions.

* Optimized MC Refl DC, using the old "DMM hooked up to DC out" method.

* Removed the temporary BS1-1064-33-1025-45S that was in the MC refl path, and replaced it with the old BS1-1064-IF-2037-C-45S that used to be there.  This undoes the temporary change from elog 3878.  Note however, that Yuta's elog 3892 says that the original mirror was a 1%, not 10% as the sticker indicates. The temporary mirror was in place to get enough light to MC Refl while the laser power was low, but now we don't want to fry the PD.

* Noticed that the MCWFS path is totally wrong.  Someone (Yuta?) wanted to use the MCWFS as a reference, but the steering mirror in front of WFS1 was switched out, and now no beam goes to WFS2 (it's blocked by part of the mount of the new mirror). I have not yet fixed this, since I wasn't using the WFS tonight, and had other things to get done.  We will need to fix this.

* Realigned the MC Refl path to optimize MC Refl again, with the new mirror.

* Replaced the last steering mirror on the PSL table before the beam goes into the chamber from a BS1-1064-33-1025-45S to a Y1-45S.  I would have liked a Y1-0deg mirror, since the angle is closer to 0 than 45, but I couldn't find one.  According to Mott's elog 2392 the CVI Y1-45S is pretty much equally good all the way down to 0deg, so I went with it.  This undoes the change of keeping the laser power in the chambers to a nice safe ~50mW max while we were at atmosphere.

* Put the HWP in front of the laser back to 267deg, from its temporary place of 240deg.  The rotation was to keep the laser power down while we were at atmosphere.  I put the HWP back to the place that Kevin had determined was best in his elog 3818.

* Tried to quickly align the Xarm by touching the BS, ITMX and ETMX.  I might be seeing IR flashes (I blocked the green beam on the ETMX table so I wouldn't be confused.  I unblocked it before finishing for the night) on the CCD for the Xarm, but that might also be wishful thinking.  There's definitely something lighting up / flashing in the ~center of ETMX on the camera, but I can't decide if it's scatter off of a part of the suspension tower, or if it's really the resonance. 

What happened in the end:

The MC Trans signal on the MC Lock screen went up by almost an order of magnitude (from ~3500 to ~32,000).  When the count was near ~20,000 I could barely see the spot on a card, so I'm not worried about the QPD.  I do wonder, however, if we are saturating the ADC. Suresh changed the transimpedance of the MC Trans QPD a while ago (Suresh's elog 3882), and maybe that was a bad idea? 

Xarm not yet locked. 

Can't really see flashes on the Test Mass cameras. 

 

  7410   Wed Sep 19 13:12:48 2012 | Jenne | Update | IOO | Power into vacuum increased to 75mW

The power buildup in the MC is ~400, so 100mW of incident power would give about 40W circulating in the mode cleaner.

Rana points out that the ATF had a 35W beam running around the table in air, with a much smaller spot size than our MC has, so 40W should be totally fine in terms of coating damage.

I have therefore increased the power into the vacuum envelope to ~75mW.  The MC REFL PD should be totally fine up to ~100mW, so 75mW is plenty low.  The MC transmission is now a little over 1000 counts.  I have changed the low power mcup script to not bring the VCO gain all the way up to 31dB anymore.  Now it seems happy with a VCO gain of 15dB (which is the same as normal power).
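The circulating-power estimate above is just the input power times the buildup; for the record, a one-liner sketch using the numbers in this entry:

```python
buildup = 400                    # MC power buildup quoted above

for p_in_mW in (75, 100):
    p_circ_W = p_in_mW * 1e-3 * buildup
    print(f"{p_in_mW} mW in -> {p_circ_W:.0f} W circulating in the MC")
# 100 mW -> 40 W (the number quoted above); the current 75 mW -> 30 W.
```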

  4958   Fri Jul 8 20:50:49 2011 | sonali | Update | Green Locking | Power of the AUX laser increased.

The ETMY laser was operating at 1.5 A current and 197 mW power.

For the efficient frequency doubling of the AUX  laser beam at the ETMY table, a higher power is required.

Steve and I changed the current level of the laser from 1.5 A to 2.1 A in steps of 0.1 A and noted the corresponding power output. The graph is attached here.

The laser has been set to current 1.8 Amperes. At this current, the power of the output beam just near the laser output is measured to be 390 mW.

The power of the beam which is being coupled into the optical fibre is measured to be between 159 mW to 164 mW (The power meter was showing fluctuating readings).

The power of the beam coming out of the fibre far end at the PSL table is measured to be 72 mW. Here, I have attached a picture of the beam paths of the ETMY table with the beams labelled with their respective powers.

Next we are going to adjust the green alignment on the ETMY and then measure the power of the beam.

At the output end of the fibre on the PSL, a power meter has been put to dump the beam for now as well as to help with the alignment at the ETMY table.
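From the powers quoted above, the coupling/throughput budget works out as follows (a sketch; the launched power is taken as the midpoint of the fluctuating 159-164 mW reading):

```python
p_laser_mW = 390.0                     # just after the laser at 1.8 A
p_launched_mW = 0.5 * (159 + 164)      # going into the fibre (midpoint of the reading)
p_fiber_out_mW = 72.0                  # at the fibre far end on the PSL table

print(p_launched_mW / p_laser_mW)      # ~0.41 of the laser power reaches the fibre input
print(p_fiber_out_mW / p_launched_mW)  # ~0.45 fibre coupling + transmission efficiency
print(p_fiber_out_mW / p_laser_mW)     # ~0.18 overall laser-to-PSL-table throughput
```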

Attachment 1: Graph3.png
Attachment 2: ETMY_beam_powers.png
  4965   Thu Jul 14 02:32:11 2011 | sonali | Update | Green Locking | Power of the AUX laser increased.

Quote:

The power of the beam which is being coupled into the optical fibre is measured to be between 159 mW to 164 mW (The power meter was showing fluctuating readings).

The power out of the beam coming out of the fibre far-end at the PSL table is measured to be 72 mW. Here, I have attached a picture of the beam paths of the ETMY table with the beams labelled with their respective powers.

 For the phase locking or beat note measurement we only need ~1 mW. It's a bad idea to send so much power into the fiber because of SBS and safety. The power should be lowered until the output at the PSL is < 2 mW. In terms of SNR, there's no advantage to using such high powers.

  4973   Fri Jul 15 13:48:56 2011 | sonali | Update | Green Locking | Power of the AUX laser increased.

Quote:

Quote:

The power of the beam which is being coupled into the optical fibre is measured to be between 159 mW to 164 mW (The power meter was showing fluctuating readings).

The power out of the beam coming out of the fibre far-end at the PSL table is measured to be 72 mW. Here, I have attached a picture of the beam paths of the ETMY table with the beams labelled with their respective powers.

 For the phase locking or beat note measuring we only need ~1 mW. Its a bad idea to send so much power into the fiber because of SBS and safety. The power should be lowered until the output at the PSL is < 2 mW. In terms of SNR, there's no advantage to use such high powers.

 

Well, the plan is to put a neutral density filter in the beam path before it enters the fibre. But before I could do that, I set up the camera on the PSL table to look at the fibre output; I will need it while I realign the beam after putting in the neutral density filter. The ETMY layout with the neutral density filter in place is attached.

Attachment 1: ETMY_after_fibre_coupling_labelled.pdf
  5631   Fri Oct 7 17:35:26 2011 | Katrin | Update | Green Locking | Power on green YARM table

After all realignment is finished, here are the powers at several positions:

 

DSC_3496_power.JPG

  15523   Thu Aug 13 18:10:22 2020 | gautam | Update | General | Power outage

There was a power outage ~30 mins ago that knocked out CDS, PSL, etc. The lights in the office area also flickered briefly. Working on recovery now. The elog was also down (since nodus presumably rebooted); I restarted the service just now. Vacuum status seems okay, even though the status string reads "Unrecognized".

The recovery was complete at 1830 local time. Curiously, the EX NPRO and the doubling oven temp controllers stayed on; usually they are taken out as well. Also, all the slow machines and associated Acromag crates survived. I guess the interruption was so fleeting that some devices survived.

The control room workstation, zita, which is responsible for the IFO status StripTool display on the large TV screen, has some display driver issues I think - it crashed twice when I tried to change the default display arrangement (large TV + small monitor). It also wants to update to Ubuntu 18.04 LTS, but I decided not to for the time being (it is running Ubuntu 16.04 LTS). Anyways, after a couple of power cycles, the wall StripTools are up once again.

  10586   Thu Oct 9 10:52:37 2014 | manasa | Update | General | Power outage II & recovery

Following the unexpected 30-40 min power outage this morning, Steve checked the status of the vacuum and I powered up Chiara.

I brought back the FE machines and keyed all the crates to bring back the slow machines, except for the vac computers.

c1vac1 is not responding as of now. All other computers have come back and are alive.

  10587   Thu Oct 9 11:56:35 2014 | Steve | Update | VAC | Power outage II & recovery

Quote:

Post 30-40min unexpected power outage this morning, Steve checked the status of the vacuum and I powered up Chiara.

I brought back the FE machines and keyed all the crates to bring back the slow machines but for the vac computers.

c1vac1 is not responding as of now. All other computers have come back and are alive.

 

 IFO vacuum, air conditioning and PMC HV are still down. The PSL output beam is blocked on the table.

  10588   Thu Oct 9 13:29:14 2014 | Jenne | Update | PSL | Power outage II & recovery

Quote:

 

 IFO vacuum, air condition and PMC HV are still down. PSL out put beam is blocked on the table.

 PMC is fine.  There are sliders in the Phase Shifter screen (accessible from the PMC screen) that also needed touching. 

PSL shutter is still closed until Steve is happy with the vacuum system - I guess we don't want to let high power in, in case we come all the way up to atmosphere and particulates somehow get in and get fried on the mirrors. 

  10590   Thu Oct 9 17:33:28 2014 | Steve | Update | VAC | Power outage II & recovery

Quote:

Quote:

Post 30-40min unexpected power outage this morning, Steve checked the status of the vacuum and I powered up Chiara.

I brought back the FE machines and keyed all the crates to bring back the slow machines but for the vac computers.

c1vac1 is not responding as of now. All other computers have come back and are alive.

 

 IFO vacuum, air condition and PMC HV are still down. PSL out put beam is blocked on the table.

 We are pumping again. This is a temporary configuration. The annuli are at atmosphere. The reset/reboot of c1vac1 and c1vac2 opened everything except the valves that were disconnected.

TP2 lost its vent solenoid power supply and dry pump during the power outage.

They were replaced, but the new small turbo controller is not set up the way the old TP2 controller was, so it does not allow V4 to open.

Tomorrow I will swap back the old controller, pump down the annuli and close off the ion pumps.

I removed the beam block from the PSL table and opened the shutter. CC4 shows the real pressure, 2e-5 Torr.

The CC1 reading is not real.

Attachment 1: pumpingAgain.png
  10592   Thu Oct 9 19:14:04 2014 | ericq | Update | General | Power outage II & recovery

I touched up the PMC alignment. 

While bringing back the MC, I realized IOO got a really old BURT restore again... Restored from midnight last night. WFS still working.

Now aligning IFO for tonight's work

  10597   Fri Oct 10 14:41:04 2014 | Steve | Update | VAC | Power outage II & recovery

Quote:

Quote:

Quote:

Post 30-40min unexpected power outage this morning, Steve checked the status of the vacuum and I powered up Chiara.

I brought back the FE machines and keyed all the crates to bring back the slow machines but for the vac computers.

c1vac1 is not responding as of now. All other computers have come back and are alive.

 

 IFO vacuum, air condition and PMC HV are still down. PSL out put beam is blocked on the table.

 We are pumping again. This is a temporary configuration. The annuloses are at atmosphere. The reset reboot of c1Vac1 and 2 opened everything except the valves that were disconnected.

TP2 lost it's vent solenoid power supply and dry pump during the power outage.

They were replaced but the new small turbo controller is not set up as the old TP2 was so it does not allow V4 to open. 

Tomorrow I will swap back the old controller,  pump down the annuloses and close off the ion pumps.

I removed the beam block from the PSL table and opened the shutter. CC4 has the real pressure 2e-5 Torr  

CC1 is not real.

 TP2 is controlled by the old controller. Annuli pumped down. Valve configuration: "vacuum normal".

  Ion pumps closed at  <1e-4 mT

Attachment 1: recovery_poweroutage.png
  6542   Wed Apr 18 08:53:50 2012 | Jamie | Update | General | Power outage last night

Apparently there was a catastrophic power failure last night.  Bob says it took out power in most of Pasadena.

Bob checked the vacuum system when he got in first thing this morning and everything's back up and checks out. The laser is still off and most of the front-end computers did not recover.

I'm going to start a boot fest now.  I'll be able to report more once everything is back on.

  6543   Wed Apr 18 10:05:40 2012 | Jamie | Update | General | Power outage last night

All of the front-ends are back up and I've been able to recover local control of all of the optics (snapshots from Saturday). Issues:

  • I can't lock the PMC.  Still unclear why.
  • there are no oplev signals in MC1, MC2, and MC3
  • Something is wrong with PRM.  He is very noisy.  Turning on his oplev servo makes him go crazy.
  • There are all sorts of problems with the oplevs in general.  Many of the optics have no oplev settings.  This is probably not related to the power outage.

On a brighter note, ETMX is damped with its new RCG 2.5 controller! Yay!

  6544   Wed Apr 18 11:46:14 2012 | Jenne | Update | General | Power outage last night

Quote:

  • there are no oplev signals in MC1, MC2, and MC3
  • Something is wrong with PRM.  He is very noisy.  Turning on his oplev servo makes him go crazy.

 None of the 3 MC optics have oplevs, so there shouldn't be any oplev signals. Although MC2 has the trans QPD, which was once (still is??) going through the MC2 oplev signal path.

PRM was noisy last week too.  But turning on his oplev shouldn't make him crazy. That's not so good. Try restoring PRM (the ! near the PRM label on the IFO align screen), then checking if his oplev is ~centered.  Maybe PRM wasn't in the nominal position at the time you restored from.

  6545   Wed Apr 18 11:54:31 2012 | Jenne | Update | General | Power outage last night

Quote:

None of the 3 MC optics have oplevs, so there shouldn't be any oplev signals. Although MC2 has the trans QPD, which was once (still is??) going through the MC2 oplev signal path.

 Duh.  Unawake brain.  The MCs look fine.

  11864   Tue Dec 8 15:57:16 2015 | yutaro | Summary | LSC | Power recycling gain estimation from arm loss measurement

I estimated power recycling gain with the results of arm loss measurement.

From elog 11818 and 11857, round trip losses including transmittivity of ETM of Y arm and X arm (let us call them T_\mathrm{loss,Y} and T_\mathrm{loss,X}) are 229+13.7=243 ppm and 483+13.7=495 ppm, respectively.

 

How I calculated:

I used the following formula.

Amplitude reflectivity of an arm cavity r_\mathrm{FP}

r_\mathrm{FP}=\sqrt{1-\frac{4T_\mathrm{ITM}T_\mathrm{loss}}{T^2_\mathrm{tot}}}   (see elog 11816)

Amplitude reflectivity of FPMI r_\mathrm{FPMI}

r_\mathrm{FPMI}=\frac{1}{2}(r_\mathrm{FP,X}+r_\mathrm{FP,Y})

With power transmittivity of PRM T_\mathrm{PRM} and amplitude reflectivity of PRM r_\mathrm{PRM}, power recycling gain is

\mathrm{PRG}=\frac{T_\mathrm{PRM}}{(1-r_\mathrm{PRM}r_\mathrm{FPMI})^2}.

 I assumed T_\mathrm{ITM}\simeq T_\mathrm{tot}=\frac{2\pi}{401}=0.01566, T_\mathrm{PRM}=0.05637, and r_\mathrm{PRM}=\sqrt{1-T_\mathrm{PRM}}, and then I got

PRG = 9.8.

Since both round-trip losses have a relative error of ~4 %, and PRG is proportional to the inverse square of T_\mathrm{loss} to leading order, the relative error of PRG can be estimated as ~8 %, so PRG = 9.8 +/- 0.8.
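The numbers above can be reproduced directly from these formulas; a short sketch (using the stated values, it returns PRG ≈ 9.8):

```python
import numpy as np

T_tot = 2 * np.pi / 401      # ~0.01566; T_ITM is assumed equal to T_tot above
T_ITM = T_tot
T_PRM = 0.05637
r_PRM = np.sqrt(1 - T_PRM)

def r_FP(T_loss):
    """Arm-cavity amplitude reflectivity for a given round-trip loss."""
    return np.sqrt(1 - 4 * T_ITM * T_loss / T_tot**2)

T_loss_Y = 243e-6
T_loss_X = 495e-6
r_FPMI = 0.5 * (r_FP(T_loss_X) + r_FP(T_loss_Y))

PRG = T_PRM / (1 - r_PRM * r_FPMI)**2
print(PRG)                   # ~9.8, as quoted above
```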

 

Discussion

According to elog 11691, which says TRX and TRY level was ~125 when DRFPMI was locked, power recycling gain was \mathrm{PRG}=125\times T_\mathrm{PRM}=7.0 at the last DRFPMI lock.

The measured PRG is lower than the PRG estimated here, but that is natural because various effects, such as mode mismatch between the PRC mode and the arm cavity mode, imperfect contrast of the FPMI, and so on, could decrease the PRG, as Eric suggested to me.

 

Added on Dec 9

If T_\mathrm{loss,X} were as small as T_\mathrm{loss,Y}, the PRG would be 16.0. The PRC would still be undercoupled.

  11872   Fri Dec 11 09:35:44 2015 | yutaro | Update | LSC | Power recycling gain estimation from arm loss measurement

I took the PR3 AR reflectivity into account and recalculated the PRG (PR3 is flipped, so its AR surface is inside the PRC).

As shown in the attached figure, which gives the AR specification of the LaserOptik mirror (PR3 is this mirror), the AR reflectivity of PR3 is ~0.5 %. Since the resonant light in the PRC passes through the AR surface of PR3 four times per round trip, the round-trip loss due to this is ~2 %. Then I got

PRG = 7.8.    

 

Attachment 1: LaserOptikAR.png
  11873   Fri Dec 11 13:28:36 2015 | Koji | Update | LSC | Power recycling gain estimation from arm loss measurement

Can I ask you to make a plot of the power recycling gain as a function of the average arm loss, indicating the current loss value?

  11874   Fri Dec 11 15:37:50 2015 | yutaro | Update | LSC | Power recycling gain estimation from arm loss measurement

Attached is a plot of the relation between the average arm round-trip loss and the power recycling gain. The 2 % loss due to the PR3 AR reflection is taken into account.
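For reference, a sketch of how such a curve can be generated (my own reconstruction, not the script behind the attached plot): sweep the average arm loss, fold the ~2 % PR3 AR loss into the recycling-cavity round trip (the way that loss enters here is my assumption), and evaluate the same PRG expression as in elog 11864.

```python
import numpy as np
import matplotlib.pyplot as plt

T_tot = 2 * np.pi / 401
T_ITM = T_tot
T_PRM = 0.05637
r_PRM = np.sqrt(1 - T_PRM)
L_PRC = 0.02                  # ~2% round-trip loss from the flipped PR3 AR surface

def prg(avg_arm_loss):
    """PRG vs average arm round-trip loss, assuming both arms have the same loss."""
    r_arm = np.sqrt(1 - 4 * T_ITM * avg_arm_loss / T_tot**2)
    return T_PRM / (1 - r_PRM * r_arm * np.sqrt(1 - L_PRC))**2

loss = np.linspace(50e-6, 600e-6, 200)
plt.plot(loss * 1e6, prg(loss))
plt.axvline(0.5 * (243 + 495), ls='--', label='current average loss')
plt.xlabel('average arm round-trip loss [ppm]')
plt.ylabel('power recycling gain')
plt.legend()
plt.show()
```

At the current average loss this gives PRG ≈ 7.8, consistent with the number quoted in elog 11872.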

Attachment 1: PRG_plot.png
  7297   Tue Aug 28 17:16:54 2012 | ericq | Update | PSL | Power reduced!

We have now reduced the power being input to the MC from 1.25W to 10mW, and changed out the MC refl BS for a mirror. 

The power was reduced via the PBS we introduced in Entry 7295.

While we were in there, we took a look at the AS beam, which was looking clipped on the monitor. Jenne felt that the clipping seems to be occurring inside the vacuum, possibly on the Faraday. This will be investigated during the vent.

  7298   Tue Aug 28 17:43:04 2012 | Jenne | Update | PSL | Power reduced!

Quote:

We have now reduced the power being input to the MC from 1.25W to 10mW, and changed out the MC refl BS for a mirror. 

The power was reduced via the PBS we introduced in Entry 7295.

While we were in there, we took a look at the AS beam, which was looking clipped on the monitor. Jenne felt that it appears that the clipping seems to be occurring inside the vacuum, possibly on the faraday. This will be investigated during the vent. 

 I stopped the regular MC autolocker and told the crontab to start up the low power MC autolocker on op340m. Also, since we now have the new MC2 transmission setup, the power that gets to the 'regular' MC trans PD is lower, so I've lowered the lock threshold from 100 counts to 50 counts.

  7308   Wed Aug 29 17:02:41 2012 | ericq | Update | PSL | Power reduced!

Quote:

We have now reduced the power being input to the MC from 1.25W to 10mW, and changed out the MC refl BS for a mirror. 

The power was reduced via the PBS we introduced in Entry 7295.

While we were in there, we took a look at the AS beam, which was looking clipped on the monitor. Jenne felt that it appears that the clipping seems to be occurring inside the vacuum, possibly on the faraday. This will be investigated during the vent. 

 The power has been increased to 20mW. We got the 10mW number from the linked elog entry above. However, after venting we were having problems locking the MC. Upon investigating past elog posts, we found that 20mW was actually the power used in the past. The MC will now autolock. 

  7310   Wed Aug 29 17:35:34 2012 | Koji | Update | PSL | Power reduced!

The biggest reason why we could not lock the MC was that the beam was not properly hitting the MC REFL diode.

Now the MC REFL DC is about 0.1 when the MC is locked and 1.2 when it is not.

We increased the power according to the quantitative analysis of the intracavity power in this earlier entry

The autolocker script for the low power MC was modified so that the initial VCO gain is 3 instead of 10.
The 2 steps of super boost were also enabled again.

  4838   Mon Jun 20 10:45:43 2011 | Jamie | Update | CDS | Power restored to 1X1/1X2 racks. IOO binary output module installed.

All power has been restored to the 1X1 and 1X2 racks.  The modecleaner is locked again.

I have also hooked up the binary output module in 1X2, which was never actually powered.  This controls the whitening filters for MC WFS.  Still needs to be tested.

  5148   Tue Aug 9 02:27:54 2011 | Ishwita, Manuel | Update | PEM | Power spectra and Coherence of Guralps and STS2

We did offline Wiener filtering on 3rd August (elog entry) using only the Guralps' X and Y channels.

Here we report the power spectra of the 3 seismometers (Guralp1, Guralp2, STS1) during that time, and also the coherence between the data from different channels of the 3 seismometers.

We see that the STS is less correlated with the two Guralps. We think this is due to misalignment of the STS with respect to the interferometer's axes.

We are going to align the STS and move the seismometers closer to the stacks of the X arm.

Inline figures: Pw_gurs_correct.png, Pw_gur1_sts1_correct.png, Coher_gur1_gur2_BW0.01.png, Coher_gur1_sts1_BW0.01.png, Coher_sts1_gur2_BW0.01.png

  5197   Thu Aug 11 16:21:16 2011 | Ishwita, Manuel | Update | PEM | Power spectra and Coherence of Guralps and STS2

 

 Following is the power spectrum plot (with corrected calibration [see here]) of seismometers Guralp1 and STS2 (Bacardi, serial no. 100151):

Pw_gur1_sts1_correct.png

 

 The seismometers are placed approximately below the center of the mode cleaner vacuum tube.

  5514   Thu Sep 22 10:43:50 2011 | Paul | Update | SUS | Power spectrum with different filter gains

 I thought it might be informative before trying to optimise the filter design to see how the current one performs with different gain settings. I've plotted the power spectra for ITMY yaw with filter gains of 0, 1, 2, 3 and 4.

All of the higher gains seem to perform better than the 0 gain, so can I deduce from this that so far the oplev control loop isn't adding excess noise at these frequencies?

Attachment 1: ITMY_YAW_closed_vs_open_noise.pdf
  14481   Sun Mar 17 13:35:39 2019 | Anjali | Update | ALS | Power splitter characterization

We characterized the power splitter (Mini-Circuits ZAPD-2-252-S+). The network/spectrum/impedance analyzer (Agilent 4395A) was used in network analyzer mode for the characterization; the RF output is enabled in this mode. We used another splitter (power splitter #1) to split the RF power such that one part goes to the network analyzer and the other part goes to the splitter under test (power splitter #2). The characterization results and the comparison with the data sheet values are shown in Attachments #2-4.

Attachment #2 : Comparison of total loss in port 1 and 2

Attachment #3 : Comparison of amplitude unbalance

Attachment #4 : Comparison of phase unbalance

  • From the data sheet: the splitter is wideband, 5 to 2500 MHz, usable from 0.5 to 3000 MHz. We performed the measurement from 1 MHz to 500 MHz (limited by the bandwidth of the network analyzer).
  • It can be seen from attachments #2 and #4 that there is a sudden increase below ~11 MHz. The reason for this is not clear to me.
  • The measured total loss values for port 1 and port 2 are slightly higher than those specified in the data sheet. From the data sheet, the maximum losses in port 1 and port 2 at 450 MHz are 3.51 dB and 3.49 dB respectively; the measured values are 3.61 dB and 3.59 dB. It can also be seen from attachment #1 (b) that the data sheet trend is for the total loss to decrease with increasing frequency, whereas we observe the opposite trend in the 11-500 MHz range.
  • From the data sheet, the maximum amplitude unbalance in the 5 MHz-500 MHz range is 0.02 dB, and the measured maximum value is 0.03 dB.
  • Similarly for the phase unbalance, the maximum value specified by the data sheet in the 5 MHz-500 MHz range is 0.12 degrees, while the measurement shows a phase unbalance of up to 0.7 degrees in this frequency range.
  • So the observations show that the measured values are slightly higher than those specified in the data sheet (the unbalance arithmetic is sketched in code below).
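The loss and unbalance numbers in the list above come straight from the two measured S21 traces; a sketch of the arithmetic (the array names and values here are hypothetical placeholders, not the actual measurement data or script):

```python
import numpy as np

# Hypothetical complex S21 traces for the two output ports of power splitter #2,
# taken on a common frequency axis with the network analyzer.
s21_port1 = np.array([0.66 * np.exp(1j * 0.10)])   # placeholder values
s21_port2 = np.array([0.66 * np.exp(1j * 0.11)])

total_loss_port1_dB = -20 * np.log10(np.abs(s21_port1))             # e.g. ~3.6 dB
amplitude_unbalance_dB = 20 * np.log10(np.abs(s21_port1 / s21_port2))
phase_unbalance_deg = np.degrees(np.angle(s21_port1 / s21_port2))

print(total_loss_port1_dB, amplitude_unbalance_dB, phase_unbalance_deg)
```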
Attachment 1: Measurement_setup.pdf
Attachment 2: Total_loss.pdf
Attachment 3: Amplitude_unbalance.pdf
Attachment 4: Phase_unbalance.pdf
  5658   Wed Oct 12 19:58:32 2011 | Katrin | Update | Green Locking | Power splitter is unbalanced

The Mini-Circuits power splitter ZFRSC-42S+ used at the YARM does not have the balanced outputs it should have according to the data sheet.

@ 0.05MHz  the amplitude unbalance should be 0.03 dB

A quick measurement shows that there is an LO-amplitude-dependent unbalance:

LO amplitude input (Vpp)  unbalanced output (dB)
1.3 3.66
1.4 4.08
1.5 4.28
1.6 4.36

So my question is, shall I replace the power splitter just in case it is further degrading?

  16417   Wed Oct 20 11:48:27 2021 | Anchal | Summary | CDS | Power supply configured correctly.

This was horrible! That's my bad; I should have checked the configuration before assuming that it was right.

I fixed the power supply configuration. Now the strip has two rails of +/- 18V and the GND is referenced to power supply earth GND.

Ian should redo the tests.

  7600   Tue Oct 23 17:41:20 2012 | Manasa | Update | Alignment | Power supply at OMC removed

Quote:

Manasa and Raji hooked up HV power supplies to the PZTs and set them to the middle of their ranges (75 V).

 [Raji, Manasa]

The high-voltage power supply from the OMC was removed to replace one of the PZT power supplies. The power supply terminals were connected to the rear connection ports as per the instructions in the manual (TB1 panel: port 3 - (-)OUT and port 7 - (+)OUT). Both supplies were switched on and set to deliver 75 V to the PZTs.

 

  7603   Tue Oct 23 18:21:21 2012 | Jenne | Update | Alignment | Power supply at OMC removed

Quote:

Quote:

Manasa and Raji hooked up HV power supplies to the PZTs and set them to the middle of their ranges (75 V).

 [Raji, Manasa]

The high-voltage power supply from the OMC was removed to replace one of the PZT power supplies. The power supply terminals were connected to the rear connection ports as per instructions from the manual (TB1 panel: port 3 - (-)OUT and port7 - (+)OUT). They were both switched  on and set to deliver (75V) to the PZTs.

 

 This means that the low voltage dual supply which was wired in series (so could supply a max of 63V = 2*31.5V) has been replaced with the OMC power supply.  This is okay since we haven't turned on the OMC PZTs in a long, long time.  This is *not* the power supply for the output pointing PZTs.  When she says "both", she means the new HV supply, as well as the HV supply that was already there, so both pitch and yaw for PZT2 are being supplied with 75V now.

  1480   Tue Apr 14 02:59:02 2009 | Yoichi | Update | Locking | Power up until 26
Yoichi, Peter,

With careful adjustments of the common mode gains, we were able to go up to arm power = 26, sort of robustly (more than a 50% chance).
At this arm power level, the common mode loop shape still looks good. But the interferometer loses lock easily.
I have to check other DOFs, but the interferometer does not stay locked long enough.
Today, lock losses of the IFO were associated with the lock loss of the PMC whereas the FSS stayed locked.
Probably the AO path got large kicks, which could not be handled by the PMC PZT.

The cause for the IFO lock loss is under investigation.
  12539   Fri Oct 7 20:25:14 2016 | Koji | Update | CDS | Power-cycled c1psl and c1iool0

Found that the MC autolocker kept failing. It turned out that c1iool0 and c1psl went bad and did not accept EPICS commands.

Went to the rack and power cycled them. Burt restored with the snapshot files at 5:07 today.

The PMC lock was restored, IMC was locked, WFS turned on, and WFS output offloaded to the bias sliders.

The PMC seemed highly misaligned, but I didn't bother myself to touch it this time.

  12542   Mon Oct 10 11:48:05 2016 | gautam | Update | CDS | Power-cycled c1susaux, realigned PMC, spots centered on WFS1 and WFS2

[Koji, Gautam]

We did the following today morning:

  1. I re-aligned the PMC - transmission level on the scope on the PSL table is now ~0.72V which is around what I remember it being
  2. The spot had fallen off WFS 2 - so we froze the output of the MC WFS servo, and turned the servo off. Then we went to the table to re-center the spot on the WFS. The alignment had drifted quite a bit on WFS2, and so we had to change the scale on the grid on the MEDM screen to +/-10 (from +/- 1) to find the spot and re-center it using the steering mirror immediately before the WFS. It would appear that the dark offsets are different on WFS1 and WFS2, so the "SUM" reads ~2.5 on WFS1 and ~0.3 on WFS2 when the spots are well centered
  3. Coming back to the control room, we ran the WFSoffsets script and turned on the WFS servo again. Trying to run the relief servo, we were confronted by an error message that c1susaux needed to be power cycled (again). This is of course the slow machine that the ITMX suspension is controlled by, and in the past, power cycling c1susaux has resulted in the optic getting stuck. An approach that seems to work (without getting ITMX stuck)  is to do the following:
    • Save the alignment of the optic, turn off Oplev servo
    • Move the bias sliders on IFO align to (0,0) slowly
    • Turn the watchdog for ITMX off
    • Unplug the cables running from the satellite box to the vacuum feedthrough
    • Power cycle the slow machine. Be aware that when the machine comes back on, the offset sliders are reset to the value in the saved file! So before plugging the cables back in, it would be advisable to set these to (0,0) again, to avoid kicking the optic while plugging the cables back in
    • Plug in the cables, restore alignment and Oplev servos, check that the optic isn't stuck
  4. Y green beat touch up - I tweaked the alignment of the first mirror steering the PSL green (after the beam splitter to divide PSL green for X and Y beats) to maximize the beat amplitude on a fast scope. Doing so increased the beat amplitude on the scope from about 20mVpp to ~35mVpp. A detailed power budget for the green beats is yet to be done

It is unfortunate we have to do this dance each time c1susaux has to be restarted, but I guess it is preferable to repeated unsticking of the optic, which presumably applies considerable shear force on the magnets...


After Wednesday's locking effort, Eric had set the IFO to the PRMI configuration, so that we could collect some training data for the PRC angular feedforward filters and see if the filter has changed since it was last updated. We should have plenty of usable data, so I have restored the arms now.

  7381   Thu Sep 13 23:27:14 2012 | Jenne | Update | General | Pre-close checklist

We need to do the following things: take images of optics in the DRMI chain, place black glass beam dumps, make sure the pickoff beams get out, and align IP POS/ANG.

Black glass: behind MMT1, behind IPPOSSM3, forward-going POP beam.

Images and pickoff stuff should happen at the end of each vent.

Images need to be taken of the following optics (with ruler edge at center of optic):

* PZT1

* MMT1

* MMT2

* PZT2

* PRM

* PR2

* PR3

* BS (front and back?)

* ITMX

* ITMY

* SR3

* SR2

* SRM

* OM1

* OM2

* OM3

* OM4=PZT3

* OM5=PZT4

* OMPO

* OM6

* Viewport as AS beam leaves chamber

* POYM1 (check no clipping on edge of mount)

* POXM1 (check no clipping on edge of mount)

Pickoff / aux beams:

* REFL path

* POX

* POY

* POP

* IPPOS

* IPANG

  12556   Thu Oct 13 00:23:54 2016 | ericq | Update | VAC | Pre-vent prep

I have completed the following non-Steve portions of the pre-vent checklist [wiki-40m.ligo.caltech.edu]

  • Center all oplevs, transom QPDs (and IPPOS + IPANG if they are set up)
  • Align the arm cavities for IR and align the green lasers to the arms.
  • Update the SUS Driftmon values
  • Reconcile all SDF differences
  • Reduce input power to no more than 100mW (measured at the PSL shutter) by adjusting wave plate+PBS setup on the PSL table BEFORE the PMC. (Using the WP + PBS that already exist after the laser.)
  • Replace 10% BS before MC REFL PD with Y1 mirror and lock MC at low power.
  • Close shutter of PSL-IR and green shutters at the ends

All shutters are closed. Ready for Steve to check nuts and begin venting!
