ID | Date | Author | Type | Category | Subject
12285 | Sun Jul 10 17:33:00 2016 | ericq | Update | General | Vent progress

It took a little time, but I relocked the IMC and realigned to the point where the PRC is flashing, visible on REFL and AS, and tiny flashes are visible in TRY.

12247 | Tue Jul 5 23:38:42 2016 | gautam | Update | General | Vent progress - ETMX SUS Coil driver electronics investigation

With Koji's help, I've hacked together an arrangement that will allow us to monitor the output of the coil driver to the UL coil. 

The arrangement consists of a short custom ribbon cable with female DB25 connectors on both ends - the particular wire sending the signal to the UL coil has a 100 ohm resistor wired in series. Because the coil has a resistance of ~20 ohm and the output of the coil driver board has a series 200(?) ohm resistor, a glitch may register too small to see when directly monitoring the voltage at this point. Tangentially related: the schematic of the coil driver board suggests that the buffered output monitor has a gain of 0.5.
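As a rough sanity check of that scale argument, here is a minimal sketch (assuming the monitor taps the voltage across the added 100 ohm resistor, and taking the resistor values above at face value):

# Rough scale of the UL coil monitor signal. The tap point (across the added
# 100 ohm resistor) and the exact output resistor value are assumptions.
R_out = 200.0    # coil driver series output resistor [ohm], "200(?)" above
R_mon = 100.0    # added series monitor resistor [ohm]
R_coil = 20.0    # coil resistance [ohm]

frac = R_mon / (R_out + R_mon + R_coil)   # simple voltage divider fraction
print("monitor sees ~%.0f%% of the drive voltage" % (100 * frac))   # ~31%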

To monitor the voltage, I use the board to which the 4 Oplev signals are currently hooked up. Channel 7 on this particular board (corresponding to ADC channel 30 on c1scx) was conveniently wired up for some prior test, so I used this channel. I then modified the C1SCX model to add a testpoint to monitor the output of this ADC. Next, I turned OFF the input on the coil output filter for the UL coil (i.e. C1:SUS-ETMX_ULCOIL_SW1) so that we can send a known, controlled signal to the UL coil by means of awggui. I then added an excitation at 5 Hz, amplitude 20 counts (as the signal to the coil under normal conditions was approximately of this amplitude) to the excitation channel of the same filter module, which is the state I am leaving the setup in for the night. I have confirmed that I see this 5 Hz oscillation on the monitor channel I set up. Oddly, the 0 crossings of the oscillations happen at approximately -1000 counts and not at 0 counts; I wonder where this offset is coming from. The two points I am monitoring the voltage across are shown in the attached photograph - the black clip is connected to the lead carrying the return signal from the coil.

I also wanted to set up a math block in the model itself that monitors, in addition to the raw ADC channel, a copy from which the known applied signal has been cancelled, as presumably a glitch would be more obvious in such a record. However, I was unable to access the excitation channel to the ULCOIL filter from within the SCX model. So I am just recording the raw output for tonight...

Attachment 1: image.jpeg
12261 | Wed Jul 6 22:58:01 2016 | gautam | Update | General | Vent progress - ETMX SUS Coil driver electronics investigation

I've made a few changes to the monitoring setup in the hope we catch a glitch in the DAC output/ sus coil driver electronics. Summary of important changes:

  1. I'm using a CDS oscillator to send a signal of 20 counts amplitude at 5.0 Hz to the coil, rather than an excitation point. This way, I have access to the known signal we are sending, and can subtract it from the measured signal. 
  2. To account for the phase delay between the oscillator excitation and the measured signal, I am using an all-pass filter to manually delay the oscillator signal (internally in the model) before subtracting it from the measured output.

It remains to be seen if we will actually be able to see a glitch in long stretches of data - it is unclear to me how big a glitch will be in terms of ADC counts.

The relevant channels are: C1:SCX-UL_DIFF_MON and C1:SCX-UL_DIFF_MON_EPICS (pardon the naming conventions; the setup is only temporary after all). Both of these should be hovering around 0 in the absence of any glitching. The noise in the measured signal seems to be around 2 ADC counts. I am leaving this as-is overnight; hopefully the ETMX coil drive signal chain obliges and gives us some conclusive evidence...
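For the full-rate data, an offline check along these lines should work (a sketch only - synthetic data stands in for the real C1:SCX-UL_DIFF_MON record, and the sample rate is an assumption):

import numpy as np

fs = 2048.0                      # assumed sample rate [Hz]
t = np.arange(0, 60, 1 / fs)
# synthetic stand-in: 20-count 5 Hz line, the -1000 count offset, 2-count noise
data = 20 * np.sin(2 * np.pi * 5 * t + 0.3) - 1000 + 2 * np.random.randn(t.size)

# Least-squares fit of sin/cos at 5 Hz plus a DC term absorbs both the
# oscillator phase delay and the mysterious -1000 count offset.
A = np.column_stack([np.sin(2 * np.pi * 5 * t), np.cos(2 * np.pi * 5 * t),
                     np.ones_like(t)])
coeffs, *_ = np.linalg.lstsq(A, data, rcond=None)
residual = data - A @ coeffs

# Flag anything more than 5 sigma from the residual as a glitch candidate
sigma = residual.std()
glitches = np.flatnonzero(np.abs(residual) > 5 * sigma)
print("residual rms = %.2f counts, %d outlier samples" % (sigma, glitches.size))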

I have not committed any of the model changes to the SVN. 

12263 | Thu Jul 7 00:25:07 2016 | ericq | Update | General | Vent progress - ETMX SUS Coil driver electronics investigation

It may be advantageous to look at the coil output data from when the OSEM damping is on, to try and reproduce the real output signal amplitude that gets sent to the coils.

12265 | Thu Jul 7 10:49:03 2016 | gautam | Update | General | Vent progress - ETMX SUS Coil driver electronics investigation
Quote:

It may be advantageous to look at the coil output data from when the OSEM damping is on, to try and reproduce the real output signal amplitude that gets sent to the coils.

The amplitude of the applied signal (20) was indeed chosen to roughly match what goes to the coils normally when the OSEM damping is on.

There appears to be no evidence of a detectable glitch in the last 10 hours or so (see attachment #1 - of course this is a 16Hz channel and the full data is yet to be looked at)... I guess the verdict on this is still inconclusive.

Attachment 1: UL_glitchMon_Striptool.png
12271 | Fri Jul 8 11:35:45 2016 | gautam | Update | General | Vent progress - ETMX SUS Coil driver electronics investigation

Yesterday, I expanded the extent of the ETMX suspension coil driver investigation. I set up identical monitors for two more coils (so now we are monitoring the voltage sent to UL, UR and LL - I didn't set one up for LR because it is on a second DB25 connector). Furthermore, I increased the excitation amplitude from ~20 to ~2000 (each coil had an independent oscillator at slightly different frequency between 5Hz and 8.5 Hz), the logic being that during LSC actuation we send signals of approximately this amplitude to the coils and we wanted to see if a larger amplitude signal somehow makes the system more prone to glitches.

Over ~10 hours of observation, there is no clear evidence of any glitch. About 2 hours ago (~9:30am PDT Fri Jul 8), the watchdog tripped - but this was because, even though I had increased the trip threshold to ~800 for the course of this investigation, megatron runs a script every 20 minutes or so that automatically reduces this threshold by 17 counts - so at some point, the threshold went lower than the coil voltage, causing the watchdog to trip. So this was not a glitch. The other break, around 2am PDT earlier today, was an FB crash.

Do we now go ahead and pull the suspension out, and proceed with the swap?

Attachment 1: coilGlitchMon.png
12272 | Fri Jul 8 11:48:09 2016 | Koji | Update | General | Vent progress - ETMX SUS Coil driver electronics investigation

YES

Move the suspension to the south clean bench and make a closer inspection. We need to remove the OSEMs.

Then unmount the mirror. Bring it to the clean room and work on the bond removal.
Meanwhile, set up all suspension components including the alignment test setup.

12281 | Fri Jul 8 21:22:38 2016 | gautam | Update | General | Vent progress - ETMX SUS Coil driver electronics investigation

While ETMX is out, I'm leaving the larger amplitude excitations to the coils on over the weekend, in case any electronic glitch decides to rear its head. The watchdog should be in no danger of tripping now that we have removed the ETM.

Unrelated to this work: while removing the ETMX suspension from the chamber, I also removed the large mirror that was placed inside to aid photo taking, so that there is no danger of an earthquake knocking it over and flooding the chamber with dust.

12297 | Wed Jul 13 00:38:25 2016 | Johannes | Update | General | Vent progress - ETMY attempted repositioning

[Lydia, Johannes]

We attempted to move the ETMY suspension near the access port in preparation for the cleaning process. The plan was to move in the face restraints first to the point of almost making contact, then the ones underneath so the optic is sitting on them, followed by the top one facing down, and then bringing in the stops on the faces.

While moving in the stoppers I noticed that the far lower stopper on the HR side was barely touching the face of the optic in its resting position and was basically pushing it sideways when moved forward. It was just on the edge, so I tried to compensate minimally by moving the underneath stops a little further on the near side, trying to let it 'slide' over a little so the screw would have better contact. I must have been too generous with the adjustment, because while proceeding I noticed at some point that the stick magnets on one side of the optic were not attached anymore but lying inside the OSEMs. The side magnet was also missing; it is now sitting on the suspension jig base plate. The dumbbells all seem intact, but we'll test them before we reglue the magnets to the optic. This is extremely unfortunate, but hopefully won't take too long to fix. At the very least, as Koji put it, the cleaning will be easier with the optic out of the suspension. Still, what a bummer.

12310 | Tue Jul 19 13:21:42 2016 | Johannes | Update | General | Vent progress - ETMY attempted repositioning

[Lydia, Johannes]

We moved ITMY from its original position to a place near the access point. We took the OSEMs off first, and noticed that the short flat-head screwdriver was still a little too long to properly reach the set screws for the lower OSEMs. We were able to gradually loosen them, though, and thus remove the lower OSEMs as well. We had to move a cable tower out of the way, but used clamps to mark its position. After making sure the optic is held by its earthquake stops, we moved it to its cleaning location. All magnets are still attached.

12295 | Tue Jul 12 23:51:16 2016 | Johannes | Update | General | Vent progress - ETMY inspection

On Monday I inspected ETMY, and found nothing really remarkable. There was only a little dust on the HR side, and nothing visible in the center. The AR side has some visible dust, nothing too crazy, but some of it near the center.

12289 | Mon Jul 11 15:13:22 2016 | gautam | Update | General | Vent progress: in-date First Contact procured

I have obtained 2x100cc bottles of in-date first contact from Garilynn (use before date is 09/14/2016) for cleaning of our test-masses. They are presently wrapped in foil in the plastic box with all the other first contact supplies.

Attachment 1: image.jpeg
12534 | Wed Oct 5 19:43:13 2016 | gautam | Summary | General | Vent review

This elog is meant to review some of the important changes made during the vent this summer - please add to this if I've forgotten something important. I will be adding this to the wiki page for a more permanent record shortly.


Vent objectives:

  1. Clean ITMX, ITMY, ETMX, ETMY
  2. Replace ETMX suspension cage, replace Al wire standoffs with Ruby (sapphire?) standoffs.
  3. Shorten Y arm length by 20mm
  4. Replace 40mm aperture baffles in ETM chambers with 50mm black glass baffles

Optics, OSEM and suspension status:

ITMX & ITMY

  • ITMX and ITMY did not have any magnets broken off during the vent - all five OSEM coils for both were removed and the optic EQ stopped for F.C. cleaning.
  • Both HR and AR faces were F.C.'d; a ~20mm dia area was cleaned.
  • The coils were re-inserted in an orientation as close to the original (as judged from photos), and the shadow sensor outputs were made as close to half their open values as possible, although in the process of aligning the arms, this may have changed
  • OSEM filter existence was checked (to be updated)
  • Shadow sensor open values were recorded (to be updated)
  • Checked that tables were level before closing up
  • The UL OSEM on ITMY was swapped for a short OSEM while investigating glitchy shadow sensor outputs. This made no difference. However, the original OSEM wasn't replaced. Short OSEM was used as we only had spare short OSEMs. Serial number (S/N 228) and open voltage value have been recorded, wiki page will be updated. Does this have something to do with the input matrix diagonalization weirdness we have been seeing recently?
  • ITMX seems to be prone to getting stuck recently, reason unknown, although I did notice the LL OSEM was kind of close to the magnet while inserting it (but that magnet is not the one getting stuck, as we can see clearly on the camera - the prime suspect is UL, I believe)
  • OL beam centering on in vacuum steering optics checked before closing up

ETMY

  • UL, UR and LR magnets broke off at various points, and so have been reglued
  • No standoff replacement was done
  • Re-suspension was done using newly arrived SOS wire
  • Original OSEMs were inserted, orientations have changed somewhat from their previous configuration as we did considerable experimentation with the B-R peak minimization for this optic
  • OSEM filter status, shadow sensor open voltage values to be updated.
  • New wire suspension clamp made at machine shop is used, 5 in lb of torque used to tighten the clamp
  • HR face cleaned with F.C.
  • Optic + suspension towers air baked (separately) at 34C for curing of EP30
  • Checked that tables were level before closing up
  • 40mm O.D. black glass baffle replaced with 50mm O.D. baffle.
  • Suspension cage was moved towards ITMY by 19mm (measured using a metal spacer) by sliding it along a stop marking the position of the tower.

ETMX

  • Al wire standoffs <--> Ruby wire standoffs (this has changed the pitch frequency)
  • All magnets were knocked off at some point, but were successfully reglued
  • New SOS tower, new SOS wire, new wire clamp used
  • OSEM filter status, shadow sensor open voltage values to be updated.
  • OSEM orientation is close to horizontal for all 5 OSEMs
  • Table leveling was checked before closing up.
  • 40mm O.D. black glass baffle replaced with 50mm O.D. baffle.

PRM

  • Some issues with the OSEMs were noticed, and were traced down to the Al foil caps covering the backs of the (short) OSEMs - these are there to minimize scattered 1064nm light interfering with the shadow sensor, and one of them was shorting one of the OSEMs
  • To mitigate this, all Al foil caps now have a thin piece of Kapton between foil and electrical contacts on rear of OSEM
  • No OSEMs were removed from the suspension cage during this process, we tried to be as gentle as possible and don't believe the shadow sensor values changed during this work, suggesting we didn't disturb the coils (PRM wasn't EQ stopped either)

SRM

  • The optic itself wasn't directly touched during the vent - but was EQ stopped as work was being done on ITMY
  • It initially was NOT EQ stopped, and the shift in table level caused by moving the ITMY cage to the edge of the table for F.C. cleaning caused the optic to naturally drift onto the EQ stops, leading to some confusion as to what happened to the shadow sensor outputs
  • The problem was diagnosed and restoring ITMY to its original position made the OSEM signals come back to normal.

SR3

  • Was cleaned by drag wiping both front and back faces

SR2/PR2/PR3/BS/OMs

  • These optics were NOT intentionally touched during this vent
  • The alignment on the OMs was not checked before close-up
 

Other checks/changes

  • OL beams were checked on in-vacuum input and output steering mirrors to make sure none were close to clipping
  • Insides of viewport windows were checked for general cleanliness, given that we have found the outside of some of these to be rather dirty. Insides of viewports checked were deemed clean enough.
  • Steve has installed a new vacuum gauge to provide a more reliable pressure readout. 
  • We forgot to investigate the weird behaviour of the AS beam that Yutaro and Koji identified in November. In any case, looks like the clipping of the AS beam is worse now. We will have to try and fix this using the PZT mounted OMs, and if not, we may have to consider venting again

Summary of characterization tasks to be done:

  1. Mode matching into the Y arm cavity given the arm length change
  2. HOM content in transmitted IR light from Y arm given the arm length change (Finesse models suggest that the 2f second order HOM resonance may have moved closer to the 00 resonance)
  3. Arm loss measurement
  4. Suspension diagonalization
  5. Check the Qs of the optics eigenmodes - should indicate if any of our magnets, reglued or otherwise, are a little loose
9572 | Thu Jan 23 23:10:19 2014 | ericq | Update | General | Vent so far

[ericq, Manasa, Jenne]

Summary: We opened up the BS and both ITM chambers today, and put the light doors on. //Edit : Manasa  Post-vent the MC was very much misaligned in yaw. Both the ITMs moved in pitch as inferred from the oplev; but there is still light on the oplev PDs//. We toiled with the PMC and mode cleaner for a while to get reasonable transmission and stability (at least for a period of time). We then tried to lock IR to the y-arm, to no avail. 

Locking the PMC doesn't seem very robust with the low power level we have; adjusting the gain at all when it's locked throws it right out. The mode cleaner spot was visibly moving around on MC2 as well. We'll continue tomorrow. 

Details about alignment efforts: Manasa and I tried for a while to align the y-arm for IR. Straight out of venting the green TM00 would lock to the y-arm with about .45, as compared to .8 before venting, so it didn't seem to drift too far. The x-arm wouldn't even flash any modes, however. For a while, IR was nowhere to be seen after the mode cleaner. Eventually, we used the tip tilts to bring the AS beam onto the camera, which exhibited fringes, so we knew we were hitting the ITMs somewhere. We wandered around with the ETM to see if any retroreflection was happening, and saw the IR beam scatter off of the earthquake stop. We moved it to the side to see it hitting the OSEM holder, and moved down to the bottom OSEM holder to get an idea of where to put pitch to get roughly the center of the ITM, then undid the yaw motion.

There, we would see very infrequent, weak flashes. We weren't able to distinguish the mode shape; however, the flashes were coincident with where the green would lock to a very yaw-misaligned fishbone mode, to the lower right of the optic's center. We figured that if we gradually fixed the green alignment with the mode shapes we could see and actually lock on, we could use the tip tilts to adjust the IR pointing and keep it coincident and eventually resonate more. However, this didn't really work out. The flashes were very infrequent, and at this point the PMC/MC were getting very touchy, and would cease to stay locked for more than a minute or two. At this point, we stopped for the day. 

 

9579 | Mon Jan 27 21:36:35 2014 | ericq | Update | General | Vent so far

After turning the slow FSS threshold down, the mode cleaner stays locked enough to do other things. We were able to align the tip tilts to the y-arm such that we were able to get some flashes in what looks like a TM00-ish mode. (It was necessary to align the PRM such that there was some extra power circulating in the PRC to be able to see the IR flashes on the ITMY face camera) This is enough to convince us that we are at least near a reasonable alignment, even though we couldn't lock to the mode. 

The x-arm was in a hairier situation; since the green beam wouldn't flash into any modes, we don't even know that a good cavity axis exists. So, I used the green input PZTs to shine the green beam directly on the earthquake stops on the ITMX cage, and then inferred the PZT coordinates that would place the green beam roughly on the center of ITMX. I moved the ETMX face camera such that it points at the ETMX baffle. I tried looking for the retroreflected green spot to no avail. Hopefully tomorrow, we can get ourselves to a reasonably aligned state, so we can begin measuring the macroscopic PRC length. 

2360 | Mon Dec 7 09:38:05 2009 | Koji | Update | VAC | Vent started

Steve, Jenne, Koji

The PSL was blocked by the shutter and the manual block.
We started venting at 9:30.

09:30  25 torr
10:30 180 torr
11:00 230 torr

12:00 380 torr

13:00 520 torr
14:30 680 torr - Finish. It is already over pressured.
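
A quick sanity check of the average vent rate from the logged points above (a sketch, nothing more):

# Average venting rate from the logged (hour, torr) points above
points = [(9.5, 25), (10.5, 180), (11.0, 230), (12.0, 380), (13.0, 520), (14.5, 680)]
minutes = (points[-1][0] - points[0][0]) * 60
rate = (points[-1][1] - points[0][1]) / minutes
print("average rate: %.1f torr/min" % rate)   # ~2.2 torr/min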

14607 | Tue May 14 10:35:58 2019 | gautam | Update | General | Vent underway
  1. PSL had stayed on overnight. There was an EQ (M 4.6 near Costa Rica) which showed up on the Seis BLRMS, and I noticed that several optics were reporting Oplev spots off their QPDs (I had just centered these yesterday). So I did a quick alignment check:
    • IMC was readily locked
    • After moving test mass bias sliders to bring Oplev spots back to the center, the EX and EY green beams were readily locked to a TEM00 mode
    • IR flashes could be seen in TRX and TRY (though their levels are low, since we are operating with 1/10th the nominal power)
    • The IP-POS QPD channels were reporting a "segmentation fault" so I keyed the c1iscaux crate and they came back. Still the QPD was reporting a low SUM value, but this too is because of the lower power. Conveniently, there was an ND2.0 filter in the beam path on a flip mount which I just flipped out of the way for the low-power tracking.
    • Then, PSL and green shutters were closed and Oplev loops were disengaged.
  2. Checked that we have an RGA scan from today
  3. During the walkthrough to check the jam nuts, Chub noticed that the outer nuts on the bellows between the OMC chamber and the IMC chamber were loose to the finger! He is tightening them now and checking the remaining jam nuts. AFAIK, Steve made it sound like this was always a formality. Should we be concerned? The other jam nuts are fine according to Chub.
  4. We valved off the pumpspool from the main volume and annuli, and started letting Nitrogen into the main volume at ~1045am.
  5. Started letting instrument grade air into the main volume at ~1130am. We are aiming for a pressure increase of 3 torr/min
  6. 4 cylinders of dry air were exhausted by ~3:30pm. It actually looks like we over-pressured the main volume by ~20 torr - this is bad, we should've stopped the air inletting at 700 torr and then let it equilibrate to lab air pressure.
  7. At some point during the vent, the main volume pressure exceeded the working range of the cold cathode gauge CC1. It reports "Current Fail" on its LED display, which I'm assuming means it auto-shut off its HV to protect itself; Jon tells me the vacuum code isn't responsible for initiating any manual shutoff.
  8. A new vacuum state was added to reflect these conditions (pumpspool under vacuum, main volume at atmosphere).
  9. The annuli remain under vacuum for now. Tomorrow, when we remove the EY door, we will vent the EY annulus.

IMC was locked, MC2T ~ 1200 cts after some alignment touch-ups. The test mass oplevs indicate some drift, ~100urad. I didn't realign them.

The EY door removal will only be done tomorrow. I will take some free-swinging ETMY data today (suspension was kicked at 1241919438) to see if anything has changed (it shouldn't have). I need to think up a systematic debugging plan in the meantime.

Attachment 1: vent.png
Attachment 2: Screenshot_from_2019-05-14_16-35-16.png
10545 | Fri Sep 26 16:10:14 2014 | ericq | Update | General | Vent update

Today so far:

  • I moved SRM forward by 3mm
  • Then I leveled the ITMY table 
  • At this point, bringing the ITMY oplev beam back onto its QPD got me back to green locking and IR flashes 
  • AS and POY beams are both making it out to their tables, as seen by IR card. (Though not to their in-air optics)

Here's my quick brain dump of things to do before we can pump down (anyone see anything missing?):

  • Check the clearance of the POY beam at the SRM cage
  • Re-do distance reconstruction measurements, confirm desired SRC length
  • Lock the SRM cage down fully (right now, has 2 clamps on, and one laying unused)
  • Align SRM for SRC flashes
  • Adjust SRM OSEM positions as needed
  • Adjust SRM oplev beam path, measure lever arm for calibration
  • Confirm beam spots on output mirrors in ITMY and BS chambers are ok
  • Take pictures of ITMY chamber. 
  • Closeup checklist
10546 | Fri Sep 26 17:13:39 2014 | ericq | Update | General | Vent update

Quote:
  • Check the clearance of the POY beam at the SRM cage
  • Re-do distance reconstruction measurements, confirm desired SRC length

POY has >2 inches of clearance from the SRM cage. 

Distance reconstruction indicates an SRC length of 5399mm, which was exactly our target. 

10549 | Mon Sep 29 12:47:51 2014 | ericq | Update | General | Vent update

Quote:
  •  Lock the SRM cage down fully (right now, has 2 clamps on, and one laying unused)
  • Align SRM for SRC flashes
  • Adjust SRM OSEM positions as needed
  • Adjust SRM oplev beam path, measure lever arm for calibration
  • Confirm beam spots on output mirrors in ITMY and BS chambers are ok

 [Koji, ericq]

We have completed the above points; the ITMY table is still level.

Despite what the wiki says, the SRM LR OSEM open voltage is ~1.97V instead of ~1.64, so we shot for half of that. 

The in-air steering of the SRM oplev return beam needs adjustment. I'll estimate the beam path length when I'm taking pictures and closing up. 

Left to do:

  • Now that AS is back on diode, lock arms and align everything. Confirm everyone's happiness. 
  • Take numerous pictures of ITMY chamber.
  • Center oplevs
  • Put doors on
  • Close shutters
  • Pump down
  • Replace MC refl Y1 with the beamsplitter
  • Turn PSL power back up

Related In-Air work:

  • Fix POY steering
  • Fix SRM oplev return steering
10550 | Mon Sep 29 17:10:51 2014 | ericq | Update | General | Vent update

Everything is aligned, AS and POY make it out of vacuum unclipped, OSEM readings look good.

I set up the SRM oplev, centered all oplevs.

Tomorrow, we just have to take pictures of the ITMY chamber before we put the heavy doors on. 

10551 | Mon Sep 29 18:12:24 2014 | ericq | Update | General | Vent update

I closed the PSL shutter as we didn't want to burn the mirror surface when we are not working.

10552 | Tue Sep 30 11:53:29 2014 | ericq | Update | General | Vent update

 

Photos have been taken of the ITMY chamber, and uploaded to picasa. Here's a slideshow:

7306 | Wed Aug 29 11:47:21 2012 | ericq | Update | VAC | Venting

 [Steve, Eric]

I've been helping Steve vent this morning. The following things were done (from Steve's logbook):

  • Particle counts: 0.5 micron particles, 4200 counts per cubic ft
  • Vertex crane drive checked to be ok
  • Optical Levers set for local damping only
  • Saved some screens
  • PSL shutter and green shutters closed
  • HV Off checked, JAM nuts checked
  • Vac: Close V1, VM1, ans - VA6, open VM3 - RGA, cond: chamber open mode
  • 8AM: VV1 open to N2, regulator set  to 14 psi
  • 8:23AM: 35psi Instrument grade Air

(At this point, I took over the air canisters, while Steve made preparations around the lab.)

  • 9:00AM: 2nd air cylinder, 14 psi 
  • 9:40AM: 3rd air cyl
  • 10:20AM: 4th air cyl
  • 11:00AM: 5th air cyl

With the 5th cylinder, we began approaching 1 atm, so we slowed the regulator down to 5psi. Around 750 torr, Steve opened VV1 to air.

According to Steve, we will be at atmospheric pressure at  ~12:30pm.

38 | Wed Oct 31 10:31:23 2007 | Andrey Rodionov | Routine | VAC | Venting is in progress

We (Steve, David, Andrey) started venting the vacuum system at 9:50 AM Wednesday morning.
13620 | Thu Feb 8 00:01:08 2018 | gautam | Update | CDS | Vertex FEs all crashed

I was poking around at the LSC rack to try and set up a temporary arrangement whereby I take the signals from the DAC differentially and route them to the D990694 differentially. The situation is complicated by the fact that, afaik, we don't have any breakout boards for the DIN96 connectors on the back of all our Eurocrate cards (or indeed for many of the other funky connectors we have, like IDE/IDC 10, 50 etc). I've asked Steve to look into ordering a few of these. So I tried to put together a hacky solution with an expansion card and an IDC64 connector. I must have accidentally shorted a pair of DAC pins or something, because all models on the c1lsc FE crashed. On attempting to restart them (c1lsc was still ssh-able), the usual issue of all vertex FEs crashing happened. It required several iterations of me walking into the lab to hard-reboot FEs, but everything is back green now, and I see the AS beam on the camera so the input pointing of the TTs is roughly back where it was. Y arm TEM00 flashes are also seen. I'm not going to re-align the IFO tonight. Maybe I'll stick to using a function generator for the THD tests; probably routing non-AI-ed signals directly is as bad as any timing asynchronicity between the funcGen and DAQ system...

Attachment 1: CDSrecovery_20180207.png
15383 | Mon Jun 8 18:14:55 2020 | gautam | Update | CDS | Vertex FEs crashed

Summary:

Around 5pm local time, the three vertex FEs crashed. AFAIK, no one was in the lab or working on anything CDS related, so this is worrying.

Details:

  • Reboot script was used to bring all FEs back - only soft reboots were required.
  • The IMC and arms can now be locked.
  • I think the combination of burt + SDF would have reverted all the settings to what they should be, but if something appears off, it could be that some EPICS value didn't get reset correctly.
Attachment 1: FEcrash_CDSoverview.png
7770 | Fri Nov 30 23:10:36 2012 | Charles | Update | Electronics | Vertex Illuminators

3 of the 4 remote-controlled illuminators at the vertex are installed and can now be turned on via sitemap. There are a total of 15 controls for "Illum", but only the 3 labeled with MC, BS-PRM and ITMY-SRM are functional.

4828 | Thu Jun 16 08:45:14 2011 | steve | Update | SUS | Vertex SUS Binary Output Boxes removed

Quote:

- I was investigating the SUS whitening issue.

- I could not find any suspension which can handle the input whitening switch correctly.

- I went to 1X5 rack and found that both of the two binary output boxes were turned off.
As far as I know they are pulling up the lines which are switched by the open collector outputs.

- I tried to turn on the switch. Immediately I noticed the power lamps did not work. So I need an isolated setup to investigate the situation.

- The cables are labelled. I will ask steve to remove the boxes from the rack.

I shut down damping to the Vertex optics and removed Binary IO Adapter chassis BO0 and BO1.

About a week ago I discussed BO0's power indicator lights with Kiwamu. They were not on, or they were blinking on-off.

I put screws into the power supply connectors in the back, but it did not help.

Attachment 1: P1070894.JPG
4829 | Thu Jun 16 23:19:09 2011 | Koji | Update | SUS | Vertex SUS Binary Output Boxes removed

[Jamie, Koji]

- We found the reason why some of the LEDs had no light: the LEDs were blown, as they were directly connected to the power supply.
The LEDs are presumably designed to be connected to a 5V supply (with an internal current-limiting resistor of ~500 Ohm). The excessive
current at 15V (~30mA) blew the LEDs, or shortened their lifetime.

- Jamie removed all of the BO modules and I put in an additional 800 Ohm resistor such that the resulting current is 12mA.
The LEDs were tested and are fine now.
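
The numbers are consistent (a sketch, assuming the ~500 Ohm internal resistor and neglecting the LED forward drop):

R_int = 500.0    # internal current-limiting resistor [ohm], per the guess above
R_add = 800.0    # added series resistor [ohm]
print("15V, stock: %.0f mA" % (15.0 / R_int * 1e3))            # ~30 mA -> blown
print("15V, +800R: %.1f mA" % (15.0 / (R_int + R_add) * 1e3))  # ~11.5 mA -> OK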

- The four BO boxes for C1SUS were restored on the rack. I personally got confused about what should be connected where,
even though I had labeled BO0 and BO1. I have just connected CH1-16 for BO0. The power supplies have been connected only to BO0 and BO1.

- I tested the whitening of the PRM UL sensor by exciting it. The transfer function told us that the pendulum response can be seen
up to 10-15Hz. When the whitening is on, I could see the change of the transfer function in that freq band. This is good.
So the main reason why I could not see this before was that the power supply for the BOs was not turned on.

- I suppose Jamie/Joe will restore all of the BO boxes on the racks tomorrow. I am going to make a test script for checking the PD whitening.

4827 | Thu Jun 16 00:43:36 2011 | Koji | Update | SUS | Vertex SUS Binary Output Boxes were turned off / need investigation

- I was investigating the SUS whitening issue.

- I could not find any suspension which can handle the input whitening switch correctly.

- I went to 1X5 rack and found that both of the two binary output boxes were turned off.
As far as I know they are pulling up the lines which are switched by the open collector outputs.

- I tried to turn on the switch. Immediately I noticed the power lamps did not work. So I need an isolated setup to investigate the situation.

- The cables are labelled. I will ask steve to remove the boxes from the rack.

16502 | Fri Dec 10 21:35:15 2021 | Koji | Summary | SUS | Vertex SUS DAC adapter ready

4 units of Vertex SUS DAC adapter (https://dcc.ligo.org/LIGO-D2100035) ready.

https://dcc.ligo.org/LIGO-S2101689

https://dcc.ligo.org/LIGO-S2101690

https://dcc.ligo.org/LIGO-S2101691

https://dcc.ligo.org/LIGO-S2101692

The units are completely passive right now, with the option to add a dewhitening board inside later.
So the power switch does nothing.

Some of the components for the dewhitening enhancement are attached inside the units.
Attachment 1: PXL_20211211_053155009.jpg
Attachment 2: PXL_20211211_053209216.jpg
Attachment 3: PXL_20211211_050625141-1.jpg
11577 | Fri Sep 4 15:20:31 2015 | ericq | Update | LSC | Vertex Sensing

I've now made a collection of sensing matrix measurements. 

In all of the plots below, the radial scale is logarithmic, each grid line is a factor of 10. The units of the radial direction are calibrated into demod board output Volts per meter. The same radial scale is used on all plots and subplots.

I did two PRMI measurements: with MICH locked and excited with either the ITMs or the BS + PRM compensation. This tells us whether our PRM compensation is working; I think it is indeed ok. I thought I remembered that we came up with a number for the SRM compensation, but I haven't been able to find it yet. 

The CARM sensing in the PRFPMI measurement has the loop gain at the excitation frequency undone. All excitations were simultaneously notched out of all control filters, via the NotchSensMat filters. 

The angular scale is set to the analog I and Q signals; the dotted lines show the digital phase rotation angle used at the time of measurement. 

Attachment 1: PRFMI_ITM.pdf
Attachment 2: PRFMI_BS.pdf
Attachment 3: DRMI.pdf
Attachment 4: PRFPMI.pdf
4225 | Sat Jan 29 00:31:05 2011 | Suresh | Update | General | Vertex crane upgrade completed

The Vertex crane is smarter and safer now.  This upgrade ensures that the two sections of I-beam (8ft, 4ft) remain firmly latched to form a straight member till the latch is released.

Specifically, it ensures that problems such as this one do not occur in the future.

 

The new safety features are:

When the I-beam sections are latched together, a pneumatic piston ensures that the latch is secure. 

If the latch is not engaged the trolley does not move outward beyond the end of the 8-foot section of the I beam.

If the trolley is out on the 4-foot section of the beam then we cannot disengage the latch.

 

How does it work?

 

[Images: Vertex_Crane-2.png, Vertex_Crane-4.png]

 

The state of Limit Switch 1 changes when the trolley goes past it. Limit Switch 2 gets pressed when the two sections are latched together.

The pneumatic piston raises or lowers the latch.  The Pneumatic Latch Switch operates a pneumatic valve controlling the state of the piston.

 

 

[Images: Vertex_Crane-3.png, P1280545.JPG]

The new controller now has a Pneumatic Latch Switch in addition to the usual Start, Stop, Up, Down, In and Out buttons. 

Each of the Up, Down, In and Out buttons has two operational states: half pressed (low speed) and fully pressed (high speed). Their functions remain the same as before.

 

The new Pneumatic Switch:

When this switch is 'Engaged' and the 4 ft section is swung in-line with the 8 ft section, the two sections get latched together.

To unlatch them we have to throw the switch into the 'Disengage' state.  This makes the piston push the latch open and a spring rotates the 4 ft section about its pivot.

Limit Switch 2 is not pressed (I-beams not aligned straight) ==> Limit Switch 1 will prevent the trolley from going out beyond the 8 ft section.

While Limit Switch 2 is pressed we cannot disengage the latch.

 

Note: 

   The pneumatic piston requires 80psi of pressure to operate.  However we have only 40psi in the lab and the piston seems to operate quite well at this pressure as well.  I believe a request has been made to get an 80psi line laid just for this application.

 

Attachment 1: Vertex_Crane-2.png
Attachment 2: Vertex_Crane-4.png
4233 | Mon Jan 31 16:12:11 2011 | steve | Update | VAC | Vertex crane upgrade shortcoming

The upgrade is almost finished. I found that the passive latch lock is not closing down all the way; it has about a 3/8" gap. See Atm. 1 & 2.

The service man was here this morning and agreed to fix it. They will be back next week. The latch needs another spring to push it into full lock. 

We tested all possible sequences of operation of the new upgrade. It performed to specification.

Attachment 1: P1070364.JPG
Attachment 2: P1070358.JPG
15035 | Tue Nov 19 15:08:48 2019 | gautam | Update | CDS | Vertex models rebooted

Jon and I were surveying the CDS situation so that he can prepare a report for discussion with Rolf/Rich about our upcoming BHD upgrade. In our poking around, we must have bumped something somewhere because the c1ioo machine went offline, and consequently, took all the vertex models out. I rebooted everything with the reboot script, everything seems to have come back smoothly. I took this opportunity to install some saturation counters for the arm servos, as we have for the CARM/DARM loops, because I want to use these for a watch script that catches when the ALS loses lock and shuts stuff off before kicking optics around needlessly. See Attachment #1 for my changes.

Attachment 1: armSat.png
93 | Mon Nov 12 10:53:58 2007 | pkp | Update | OMC | Vertical Transfer functions
[Norna Sam Pinkesh]

These plots were created by injecting white noise into the OSEMs and reading out the response of the shadow sensors (taking the power spectrum). We suspect that some of the additional structure is due to the wires.
Attachment 1: VerticalTrans.pdf
105 | Thu Nov 15 17:09:37 2007 | pkp | Update | OMC | Vertical Transfer functions with no cables attached
[Norna Pinkesh]

The cables connecting all the electronics (DCPDs, QPDs, etc.) have been removed to test for the vertical transfer function. Now the cables are sitting on the OMC bench, and it was realigned.
Attachment 1: VerticaltransferfuncnocablesattachedNov152007.pdf
12181 | Wed Jun 15 09:52:02 2016 | jamie | Update | CDS | Very encouraging results from overnight split daqd test

Very encouraging results from the test last night.  The new configuration did not crash once overnight, and seemed to write out full, second trend, and minute trend frames without issue.  However, full validity of all the written-out frames has not been confirmed.

overview

The configuration under test involves two separate daqd binaries instead of one.  We usually run with what is referred to as a "framebuilder" (fb) configuration:

  • fb: a single daqd binary that:
    • collects the data from the front ends
    • collates full data into frame file format
    • calculates trend data
    • writes frame files to disk.

The current configuration separates the tasks into multiple separate binaries: a "data concentrator" (dc) and a "frame writer" (fw):

  • dc:
    • collects data from front ends
    • collates full data into frame file format
    • broadcasts frame files over local network
  • fw:
    • receives frame files from broadcast
    • calculates trend data
    • writes frame files to disk

This configuration is more like what is run at the sites, where all the various components are separate and run on separate hardware.  In our case, I tried just running the two binaries on the same machine, with the broadcast going over the loopback interface.  None of the systems that use separated daqd tasks see the failures that we've been seeing with the all-in-one fb configuration (which other sites like AEI have also seen).

My guess is that there's some busted semaphore somewhere in daqd that's being shared between the concentrator and writer components.  The writer component probably acquires the lock while it's writing out the frame, which prevents the concentrator from doing what it needs to be doing while the frame is being written out.  That causes the concentrator to lock up and die if the frame writing takes too long (which it seems to almost necessarily do, especially when trend frames are also being written out).
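
If that guess is right, the failure mode would look something like this toy model (purely illustrative - this is not daqd code, and the timing numbers are made up):

import threading, time

lock = threading.Lock()   # stand-in for the suspected shared daqd semaphore

def frame_writer():
    with lock:            # writer holds the lock for the whole frame write
        time.sleep(2.0)   # a slow full + trend frame write

def concentrator():
    # The concentrator has to keep up with the front ends; if it can't get
    # the lock in time, it "locks up and dies".
    if not lock.acquire(timeout=0.5):
        print("concentrator starved -> daqd crash")
    else:
        lock.release()

threading.Thread(target=frame_writer).start()
time.sleep(0.1)           # let the writer grab the lock first
concentrator()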

results

The current configuration hasn't been tweaked or optimized at all.  There is of course basically no documentation on the meaning of the various daqdrc directives.  Hopefully I can get Keith Thorne to help me figure out a well optimized configuration.

There is at least one problem whereby the fw component is issuing an excessively large number of re-transmission requests:

2016-06-15_09:46:22 [Wed Jun 15 09:46:22 2016] Ask for retransmission of 6 packets; port 7097
2016-06-15_09:46:22 [Wed Jun 15 09:46:22 2016] Ask for retransmission of 8 packets; port 7097
2016-06-15_09:46:22 [Wed Jun 15 09:46:22 2016] Ask for retransmission of 3 packets; port 7097
2016-06-15_09:46:22 [Wed Jun 15 09:46:22 2016] Ask for retransmission of 5 packets; port 7097
2016-06-15_09:46:22 [Wed Jun 15 09:46:22 2016] Ask for retransmission of 5 packets; port 7097
2016-06-15_09:46:22 [Wed Jun 15 09:46:22 2016] Ask for retransmission of 5 packets; port 7097
2016-06-15_09:46:22 [Wed Jun 15 09:46:22 2016] Ask for retransmission of 5 packets; port 7097
2016-06-15_09:46:22 [Wed Jun 15 09:46:22 2016] Ask for retransmission of 6 packets; port 7097
2016-06-15_09:46:23 [Wed Jun 15 09:46:23 2016] Ask for retransmission of 1 packets; port 7097

It's unclear why.  Presumably the retransmission requests are being honored, and the fw eventually gets the data it needs.  Otherwise I would hope that there would be the appropriate errors.
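
One could tally the requested packets straight from the log to quantify this (a sketch; the log file name is hypothetical):

import re

# Tally retransmission requests from a daqd log (file name is hypothetical)
pat = re.compile(r"Ask for retransmission of (\d+) packets")
total = requests = 0
with open("daqd_fw.log") as f:
    for line in f:
        m = pat.search(line)
        if m:
            requests += 1
            total += int(m.group(1))
print("%d requests, %d packets asked for retransmission" % (requests, total))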

The data is being written out as expected:

 full/11500: total 182G
drwxr-xr-x  2 controls controls 132K Jun 15 09:37 .
-rw-r--r--  1 controls controls  69M Jun 15 09:37 C-R-1150043856-16.gwf
-rw-r--r--  1 controls controls  68M Jun 15 09:37 C-R-1150043840-16.gwf
-rw-r--r--  1 controls controls  68M Jun 15 09:37 C-R-1150043824-16.gwf
-rw-r--r--  1 controls controls  69M Jun 15 09:36 C-R-1150043808-16.gwf
-rw-r--r--  1 controls controls  69M Jun 15 09:36 C-R-1150043792-16.gwf
-rw-r--r--  1 controls controls  68M Jun 15 09:36 C-R-1150043776-16.gwf
-rw-r--r--  1 controls controls  68M Jun 15 09:36 C-R-1150043760-16.gwf
-rw-r--r--  1 controls controls  69M Jun 15 09:35 C-R-1150043744-16.gwf

 trend/second/11500: total 11G
drwxr-xr-x  2 controls controls 4.0K Jun 15 09:29 .
-rw-r--r--  1 controls controls 148M Jun 15 09:29 C-T-1150042800-600.gwf
-rw-r--r--  1 controls controls 148M Jun 15 09:19 C-T-1150042200-600.gwf
-rw-r--r--  1 controls controls 148M Jun 15 09:09 C-T-1150041600-600.gwf
-rw-r--r--  1 controls controls 148M Jun 15 08:59 C-T-1150041000-600.gwf
-rw-r--r--  1 controls controls 148M Jun 15 08:49 C-T-1150040400-600.gwf
-rw-r--r--  1 controls controls 148M Jun 15 08:39 C-T-1150039800-600.gwf
-rw-r--r--  1 controls controls 148M Jun 15 08:29 C-T-1150039200-600.gwf
-rw-r--r--  1 controls controls 148M Jun 15 08:19 C-T-1150038600-600.gwf

 trend/minute/11500: total 152M
drwxr-xr-x 2 controls controls 4.0K Jun 15 07:27 .
-rw-r--r-- 1 controls controls  51M Jun 15 07:27 C-M-1150023600-7200.gwf
-rw-r--r-- 1 controls controls  51M Jun 15 04:31 C-M-1150012800-7200.gwf
-rw-r--r-- 1 controls controls  51M Jun 15 01:27 C-M-1150002000-7200.gwf

The frame sizes look more or less as expected, and they seem to be valid as determined with some quick checks with the framecpp command line utilities.

4266 | Wed Feb 9 23:48:12 2011 | Suresh | Configuration | Cameras | Video Cable work: New Labels

[Larisa, Aidan, Steve, Suresh]

Today was the first session for implementing the new video cabling plan laid out in the document "CCD_Cable_Upgrade_Plan_Jan11_2011.pdf" by Joon Ho, attached to his elog entry 4139. We started to check and label all the existing cables according to the new naming scheme. 

So far we have labeled the following cables. Each has been checked by connecting it to a monitor near the Video Mux and a camera at the other end.

C1:IO-VIDEO 8 ETMYF

C1:IO-VIDEO 6 ITMYF

C1:IO-VIDEO 21 SRMF

C1:IO-VIDEO 25 OMCT

C1:IO-VIDEO 19 REFL

C1:IO-VIDEO 22 AS

C1:IO-VIDEO 18 IMCR

C1:IO-VIDEO 14 PMCT

C1:IO-VIDEO 12 RCT

C1:IO-VIDEO 9 ETMXF

C1:IO-VIDEO 1 MC2T

 

Next we need to continue and finish the labeling of the existing cables. We then choose a specific set of cables which need to be laid together, and proceed to lay them after attaching suitable labels to them.

2304 | Fri Nov 20 00:18:45 2009 | rana | Summary | Cameras | Video MUX Selection Wiki page

Steve is summarizing the Video Matrix choices into this Wiki page:

http://lhocds.ligo-wa.caltech.edu:8000/40m/Electronics/VideoMUX

Requirements:

Price: < 5k$

Control: RS-232 and Ethernet

Interface: BNC (Composite Video)

Please check into the page on Monday for a final list of choices and add comments to the wiki page.

4519 | Wed Apr 13 16:38:17 2011 | Larisa Thorne | Update | Electronics | Video MUX camera/monitor check

 [Kiwamu, Larisa]

 

The following Video MUX inputs(cameras) and outputs(monitors) have been checked:

MC2F, FI, AS Spare, ITMYF, ITMXF, ETMYF, ETMXF, PSL Spare, ETMXT, MC2T, POP, MC1F/MC3F, SRMF, ETMYT, PRM/BS, CRT1(MON1), ETMY Monitor, CRT2(MON2), CRT4(MON4), MC1 Monitor, CRT3(MON3), PSL1 Monitor, PSL2 Monitor, CRT6(MON6), CRT5(MON5), ETMX Monitor, MC2 Monitor, CRT9, CRT7(MON7), CRT10, and Projector.

 

Their respective statuses have been updated on the wiki:   (wiki is down at the moment, I will come back and add the link when it's back up)

16661 | Thu Feb 10 21:10:43 2022 | Koji | Update | General | Video Mux setting reset

Now the video matrix is responding correctly and the web interface shows up. (Attachment 1)

Also the video buttons respond as usual. I pushed the Locking Template button to bring the setting back to nominal. (Attachment 2)

Attachment 1: Screenshot_2022-02-10_21-11-21.png
Attachment 2: Screenshot_2022-02-10_21-11-54.png
12694 | Fri Jan 6 17:00:26 2017 | rana | Frogs | Treasure | Video of Lab Tour

In this video: https://youtu.be/iphcyNWFD10, the comments focus on the orange crocs, my wrinkled shirt, and the first aid kit.

7945 | Mon Jan 28 17:01:19 2013 | Den | Update | Locking | Video of PRM-flat test cavity

What mode will you get if you lock the cavity PRM - ITMY/ITMX/TEST MIRROR without PR2, PR3 and BS?

Is it possible to skip MC1, MC3 and lock the laser to this test cavity to make sure that this is not actuator/electronics noise?

7951 | Tue Jan 29 10:50:02 2013 | Jenne | Update | Locking | Video of PRM-flat test cavity

 

I think Den accidentally edited and overwrote my entry, rather than replying, so I'm going to recreate it from memory:

I aligned the PRM-flat test cavity (although not as well as Jamie and Koji did later in the evening) and took some videos. Note that these may not be as relevant any more, since Jamie and Koji improved things after I left.

 

Also, before doing anything with the cavity, I tuned up the PMC since the pitch input alignment wasn't perfect (we were getting ~0.7 transmission), and also tuned up the MC alignment and remeasured the MC spot positions, to maintain a record.

2314 | Mon Nov 23 16:28:12 2009 | steve | Summary | Cameras | Video switcher options

Quote:

Steve is summarizing the Video Matrix choices into this Wiki page:

http://lhocds.ligo-wa.caltech.edu:8000/40m/Electronics/VideoMUX

Requirements:

Price: < 5k$

Control: RS-232 and Ethernet

Interface: BNC (Composite Video)

Please check into the page on Monday for a final list of choices and add comments to the wiki page.

Composite video matrix switchers with 32 BNC inputs and 32 BNC outputs are listed.

4498 | Thu Apr 7 13:12:23 2011 | Koji | HowTo | VIDEO | Video switching tip

A long time ago, I looked at the manual of the video switcher:
http://media.extron.com/download/files/userman/Plus_Ultra_MAV_C.pdf
Here is the summary. This will be the basis of a more sophisticated switching program, which may have a GUI.

In principle, you can manually control the matrix via telnet. At the console machines, you can connect to the matrix using telnet

telnet 192.168.113.92

This opens TCP/IP port 23 of the specified machine. You will receive some messages.
Then type some command like:
--------------------

  • 1*2!       (connect input#1 to output#2)
  • 1,           (save the current setting into preset1)
  • 1.           (restore the setting from preset1)

--------------------

Basically that's all. There are many other features but I don't think we need them.

We can create a simple program in any language, since every language has TCP/IP capability: e.g. C, Perl, Python, Tcl/Tk. Any of them is fine.

Now what we have to think about is how to implement the interface in the epics screen (or whatever).
It needs some investigation into what people consider the ideal interface.
But, first of all, you should make the above three operations available as a simple UNIX command like:

videoswitch -i 192.168.113.92 1 2
videoswitch -i 192.168.113.92 -store 1
videoswitch -i 192.168.113.92 -recall 1
(There is no such command yet. These are showing what it should be!)

This can be done with a single day of work, and our life will be much better.
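
For example, a minimal sketch of the first command, using only the 1*2! tie syntax documented above (the greeting/response handling is a guess at typical telnet-style behavior):

#!/usr/bin/env python
import socket, sys

# usage: videoswitch.py HOST IN OUT   e.g.  videoswitch.py 192.168.113.92 1 2
host, in_ch, out_ch = sys.argv[1], int(sys.argv[2]), int(sys.argv[3])

s = socket.create_connection((host, 23), timeout=5)   # matrix telnet port
s.recv(1024)                                          # swallow the greeting
s.sendall(("%d*%d!" % (in_ch, out_ch)).encode())      # tie input to output
print(s.recv(1024).decode(errors="replace"))          # echo the matrix reply
s.close()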

4529 | Fri Apr 15 02:30:24 2011 | Koji | HowTo | VIDEO | Video switching tip

I have made a small python script to handle the video matrix.

It is far from perfect, but I am releasing it as it is already useful to some extent.

The script is in the /cvs/cds/rtcds/caltech/c1/scripts/general directory.

usage:

videoswitch.py in_ch_name out_ch_name

in_ch_name is one of the following

MC2F, IFOPO, OMCR, FI, AS_Spare, ITMYF, ITMXF, ETMYF, ETMXF,
PMCR, RCR, RCT, PSL_Spare, PMCT, ETMXT, MC2T, POP, IMCR, REFL,
MC1F, SRMF, AS, ETMYT, PRM, OMCT, Quad1, Quad2, Quad3

out_ch_name is one of the following

Mon1, Mon2, Mon3, Mon4, Mon5, Mon6, Mon7,
ETMY, MC1, PSL1, PSL2, ETMX, MC2, CRT9,CRT10,Projector,
Quad1_1, Quad1_2, Quad1_3, Quad1_4,
Quad2_1, Quad2_2, Quad2_3, Quad2_4,

Quad3_1, Quad3_2, Quad3_3, Quad3_4

7839 | Mon Dec 17 14:45:01 2012 | Jenne | Update | Alignment | Videos with PRMI locked

[Jamie, Jenne]

Koji and Jamie locked the PRMI, and then Jamie and I took some videos. 

Video 1:   https://www.youtube.com/watch?v=jszTeyETyxU shows the face of PR2.

Video 2:   https://www.youtube.com/watch?v=Tfi4I4Q3Mqw shows the back of PR3, the face of PR2, as well as REFL and AS.

Video 3:   https://www.youtube.com/watch?v=bLHNWHAWZBA is the camera looking at the face of PRM and (through a viewing mirror) BS.

 

If you watch video 1, you'll see how large the beam gets on the face of PR2.  The main spot, where the straight-through, no-cavity beam is, is a little high of center.  The rest of the inflated beam swirls around that point.

Video 2 shows the same behavior, but you also see that we're much too high on PR3, and too close to the right (as seen on the video) side.

Video 3 is very disconcerting to me.  The main, stationary beam spot seems nicely centered, but the resonant beam, since it inflates and gets big, is very close to the right side of the PRM (as seen on the video). 

It wouldn't surprise me if, were we able to quantify the beam clipping loss on PR3 and PRM, the clipping were the reason we have a crappy PRC gain.  This doesn't explain why we have such a weird inflated beam though.

ELOG V3.1.3-