  40m Log, Page 316 of 341
ID   Date   Author   Type   Category   Subject
  8451   Sat Apr 13 23:11:04 2013   Den   Update   Locking   prcl angular motion

Quote:

For the PRM, it is also a mostly translation effect as calculated at the PRC waist position (ITM face).

I made another estimation assuming that PRCL RIN is caused by translation of the cavity axis:

  • calibrated RIN to translation, beam waist = 4mm
  • measured PRM yaw motion using oplev
  • estimated PR3 TT yaw motion: measured the BS yaw spectrum with the oplev off, divided it by the pendulum TF with f0 = 0.9 Hz, Q = 100 (BS TF), multiplied it by the pendulum TF with f0 = 1.5 Hz, Q = 2 (TT TF with eddy current damping), and accounted for BS local damping, which reduces the Q down to 10.

I estimated the coupling from PRM and TT angular motion to cavity-axis translation as 0.11 mm/urad and 0.22 mm/urad respectively, assuming the TTs are flat. A more detailed analysis could account for mirror curvature.
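The spectrum re-weighting described in the bullet above can be sketched as follows (a minimal sketch: the transfer-function shape is the standard damped pendulum, the frequency and spectrum values marked as placeholders are mine, not measured numbers):

```python
import math

def pendulum_tf(f, f0, q):
    """Magnitude of a damped pendulum transfer function, |f0^2 / (f0^2 - f^2 + i f0 f / Q)|,
    normalized to 1 at DC."""
    return f0**2 / math.sqrt((f0**2 - f**2)**2 + (f0 * f / q)**2)

f = 3.0          # Hz, arbitrary example frequency
bs_yaw = 1e-8    # rad/rtHz, placeholder for the measured BS yaw spectrum value

# Undo the BS suspension (f0 = 0.9 Hz, Q reduced to ~10 by local damping),
# then apply the TT suspension (f0 = 1.5 Hz, Q = 2, eddy current damping)
ground = bs_yaw / pendulum_tf(f, 0.9, 10)
tt_yaw = ground * pendulum_tf(f, 1.5, 2)

# Convert TT angle to cavity-axis translation with the 0.22 mm/urad coupling
translation_mm = tt_yaw * 1e6 * 0.22   # rad -> urad -> mm
```

Above the resonances the lightly damped TT filters less steeply than the damped BS, so the inferred TT yaw comes out larger than the measured BS yaw at the same frequency.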

I think the beam motion is caused by PR3 and PR2 TT angular motion. I guess the yaw motion is larger because the horizontal g-factor is closer to unity than the vertical one.

  8454   Sun Apr 14 17:56:03 2013   rana   Update   Locking   prcl angular motion

Quote:

Quote:

For the PRM, it is also a mostly translation effect as calculated at the PRC waist position (ITM face).

I made another estimation assuming that PRCL RIN is caused by translation of the cavity axis:

  • calibrated RIN to translation, beam waist = 4mm

 In order to get translation to RIN, we need to know the offset of the input beam from the cavity axis...

This should be possible to calibrate by putting a pitch and yaw excitation lines into the PRM and measuring the RIN.

See secret document from Koji.

  8564   Mon May 13 18:44:04 2013   Jenne   Update   Locking   prcl angular motion

I want to redo this estimate of where RIN comes from, since Den did this measurement before I put the lens in front of the POP PD. 

While thinking about his method of estimating the PR3 effect, I realized that we have measured numbers for the pendulum frequencies of the recycling cavity tip tilt suspensions. 

I have been secreting this data away for years.  My bad.  The relevant numbers for Tip Tilts #2 and #3 were posted in elog 3425, and for #4 in elog 3303.  However, the data for #s 1 and 5 were apparently never posted.  In elog 3447, I didn't put in numbers, but rather said that the data was taken.

Anyhow, attached is the data that was taken back in 2010.  Look to elog 7601 for which TT is installed where. 

 

Conclusion for the estimate of TT motion to RIN - the POS pendulum frequency is ~1.75Hz for the tip tilts, with a Q of ~2.

  14437   Wed Feb 6 10:07:23 2019   Chub   Update   pre-construction inspection

The Central Plant building will be undergoing seismic upgrades in the near future.  The adjoining north wall along the Y arm will be the first to have this work done, from inside the Central Plant.  Project manager Eugene Kim has explained the work to me and also noted our concerns.  He assured me that the seismic noise from the construction will be minimized and we will always be contacted when the heaviest construction is to be done.

Tomorrow at 11am, I will bring Mr. Kim and a few others from the construction team to look at the wall from inside the lab.  If you have any questions or concerns that you want to have addressed, please email them to me or contact Mr. Kim directly at x4860 or through email at eugene.kim@caltech.edu . 

  5591   Fri Sep 30 19:12:56 2011   Koji   Update   General   prep for power outage

 

 [Koji Jenne]

The lasers were shut down

The racks were turned off

We could not figure out how to turn off JETSTOR

The control room machines were turned off

Finally, we will turn off nodus and linux1 (in that order).

Hope everything comes back with no trouble

(Fingers crossed)

  13383   Tue Oct 17 17:53:25 2017   jamie   Summary   LSC   prep for tests of Gabriele's neural network cavity length reconstruction

I've been preparing for testing Gabriele's deep neural network MICH/PRCL reconstruction.  No changes to the front end have been made yet, this is all just prep/testing work.

Background:

We have been unable to get Gabriele's nn.c code running in kernel space for reasons unknown (see tests described in previous post).  However, Rolf recently added functionality to the RCG that allows front end models to be run in user space, without needing to be loaded into the kernel.  Surprisingly, this seems to work very well, and is much more stable for the overall system (starting/stopping the user space models will not ever crash the front end machine).  The nn.c code has been running fine on a test machine in this configuration.  The RCG version that supports user space models is not that much newer than what the 40m is running now, so we should be able to run user space models on the existing system without upgrading anything at the 40m.  Again, I've tested this on a test machine and it seems to work fine.

The new RCG with user space support compiles and installs both kernel and user-space versions of the model.

Work done:

  • Create 'c1dnn' model for the nn.c code.  This will run on the c1lsc front end machine (on core 6 which is currently empty), and will communicate with the c1lsc model via SHMEM IPC.  It lives at:
    • /opt/rtcds/userapps/release/isc/c1/models/c1dnn.mdl
  • Got latest copy of nn.c code from Gabriele's git, and put it at:
    • /opt/rtcds/userapps/release/isc/c1/src/nn/
  • Checked out the latest version of the RCG (currently SVN trunk r4532):
    • /opt/rtcds/rtscore/test/nn-test
  • Set up the appropriate build area:
    • /opt/rtcds/caltech/c1/rtbuild/test/nn-test
  • Built the model in the new nn-test build directory ("make c1dnn")
  • Installed the model from the nn-test build dir ("make install-c1dnn")

Test:

I tried a manual test of the new user space model.  Since this is a user space process, running it should have no effect on the rest of the front end system (which it didn't):

  • Manually started the c1dnn EPICS IOC:
    • $ (cd /opt/rtcds/caltech/c1/target/c1dnn/c1dnnepics && ./startupC1)
  • Tried running the model user-space process directly:
    • $ taskset -c 6 /opt/rtcds/caltech/c1/target/c1dnn/bin/c1dnn -m  c1dnn

Unfortunately, the process died with an "ADC TIMEOUT" error.  I'm investigating why.

Once we confirm the model runs, we'll add the appropriate SHMEM IPC connections to connect it to the c1lsc model.

  13390   Wed Oct 18 12:14:08 2017   jamie   Summary   LSC   prep for tests of Gabriele's neural network cavity length reconstruction
Quote:

I tried a manual test of the new user space model.  Since this is a user space process, running it should have no effect on the rest of the front end system (which it didn't):

  • Manually started the c1dnn EPICS IOC:
    • $ (cd /opt/rtcds/caltech/c1/target/c1dnn/c1dnnepics && ./startupC1)
  • Tried running the model user-space process directly:
    • $ taskset -c 6 /opt/rtcds/caltech/c1/target/c1dnn/bin/c1dnn -m  c1dnn

Unfortunately, the process died with an "ADC TIMEOUT" error.  I'm investigating why.

Once we confirm the model runs, we'll add the appropriate SHMEM IPC connections to connect it to the c1lsc model.

I tried moving the model to c1ioo, where there are plenty of free cores sitting idle, and the model seems to run fine.  I think the problem was just CPU contention on the c1lsc machine, where there were only two free cores and the kernel was using both for all the rest of the normal user space processes.

So there are two options:

  • Use cpuset on c1lsc to tell the kernel to remove all other processes from CPU6 and save it just for the c1dnn model.  This should not have any impact on the running of c1lsc, since that's exactly what would be happening if we were running the model in kernel space (i.e. isolating the core for the front end model).  The auxiliary support user space processes (epics seq/ioc, awgtpman) should all run fine on CPU0, since that's what usually happens.  Linux is only using the additional core since it's there.  We don't have much experience with cpuset yet, though, so more offline testing will be required first.
  • Run the model on c1ioo and ship the needed signals to/from c1lsc via PCIe dolphin.  This is potentially slightly more invasive of a change, and would put more work on the dolphin network, but it should be able to handle it.

I'm going to start testing cpuset offline to figure out exactly what would need to be done.

  6892   Fri Jun 29 02:17:40 2012   yuta   Update   IOO   prep for the vent - beam attenuating

[Koji, Jamie, Yuta]

We attenuated the incident beam (1.2 W -> 11 mW) to the vacuum chamber to be ready for the vent.
The beam spot positions on the MC mirrors didn't change significantly, which means the incident beam was not shifted much.

What we did:
 1. Installed a HWP, PBS(*) and another HWP between the steering mirrors on the PSL table to attenuate the beam. We didn't touch the steering mirrors(**), so the incident beam to the IFO can be recovered easily by just taking the HWPs and PBS away. The power to the MC was reduced from 1.2 W to 11 mW.

(*) We stole PBSO from the AS AUX laser setup.
(**) Actually, we accidentally touched one of the steering mirrors, but we recovered it. We did the recovery by tweaking the touched knob and minimizing the MC reflection. We confirmed the incident beam was recovered by measuring the MC beam spot positions (below).

 2. Aligned the PBS by minimizing the MC reflection, adjusted the first HWP so that the incident beam is ~10 mW, and adjusted the last HWP to minimize the MC reflection (i.e. make the beam incident on the MC p-polarized).

 3. For the alignment and adjustment, we put in a 100% reflective mirror (instead of the 10% BS) for the MC reflection PD to increase the power on the PD. That means we don't have MC WFS right now.

 4. Tweaked the MC servo gains so that we can lock the MC in low power mode. It is quite stable right now. We didn't lose lock during the beam spot measurement.

 5. Measured the beam spot positions on the MC mirrors and confirmed that the incident beam was not shifted much (below). The spots look like they moved ~0.2 mm, but that is within the error of the MC beam spot measurement.

# filename      MC1pit  MC2pit  MC3pit  MC1yaw  MC2yaw  MC3yaw  (spot positions in mm)
./dataMCdecenter/MCdecenter201206281154.dat     3.193965        4.247243        2.386126        -6.639432       -0.574460       4.815078    this noon
./dataMCdecenter/MCdecenter201206282245.dat     3.090762        4.140716        2.459465        -6.792872       -0.651146       4.868740    after recovered steering mirrors
./dataMCdecenter/MCdecenter201206290135.dat     2.914584        4.240889        2.149244        -7.117336       -1.494540       4.955329    after beam attenuation

 6. Rewrote the matlab code sensemcass.m as the python script sensemcass.py. This script calculates the beam spot positions from the measurement data (see elog #6727). I think we should also improve the senseMCdecenter script, since it takes so much time and can't pause and resume the measurement if the MC loses lock.
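The HWP + PBS attenuation in steps 1-2 follows the standard half-wave-plate/polarizer power transfer: rotating the first HWP by θ rotates the polarization by 2θ, and the PBS transmits P0·cos²(2θ). A minimal sketch of the required HWP angle for 1.2 W → 11 mW (the function names are mine; the actual lab setting was not recorded):

```python
import math

def pbs_transmission(theta_deg):
    """Power fraction through the PBS after a half-wave plate rotated by theta (degrees).
    The HWP rotates the polarization by 2*theta; the PBS passes cos^2 of that angle."""
    return math.cos(math.radians(2 * theta_deg)) ** 2

def hwp_angle_for(p_in_w, p_out_w):
    """HWP angle (degrees) that attenuates p_in down to p_out."""
    return math.degrees(0.5 * math.acos(math.sqrt(p_out_w / p_in_w)))

theta = hwp_angle_for(1.2, 11e-3)   # roughly 42 deg of HWP rotation for 1.2 W -> 11 mW
```

The second HWP then restores p-polarization into the MC, as described in step 2.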

  6893   Fri Jun 29 03:21:32 2012   yuta   Update   General   prep for the vent - others

1. Turned off high voltage power supplies for PZT1/2 (input PZTs) and OMC stage 1/2. They live in 1Y3 rack and AUX_OMC_NORTH rack.

2. Restored all IFO optics alignment to the positions I set this afternoon (for the SRM, I didn't align it; it was restored to the value saved on May 26).

3. Centered all the oplevs. They can be used for a reference for alignment change before and after the vent.

I will leave PSL mechanical shutter and green shutters closed just in case.

Some MEDM screenshots below.
MEDMscreenshotswithCOW_20120629.png

  14022   Tue Jun 26 20:59:36 2018   aaron   Update   OMC   prep for vent in a couple weeks

I checked out the elog from the vent in October 2016 when the OMC was removed from the path. In the vent in a couple weeks, we'd like to get the beam going through the OMC again. I wasn't really there for this last vent and don't have a great sense for how things go at the 40m, but this is how I think the procedure for this work should approximately go. The main points are that we'll need to slightly translate and rotate OM5, rotate OM6, replace one mirror that was removed last time, and add some beam dumps. Please let me know what I've got wrong or am missing.

[side note, I want to make some markup on the optics layouts that I see as pdfs elsewhere in the log and wiki, but haven't done it and didn't much want to dig around random drawing software, if there's a canonical way this is done please let me know.]

Steps to return the OMC to the IFO output:

  1. Complete non-Steve portions of the pre-vent checklist (https://wiki-40m.ligo.caltech.edu/vent/checklist)
  2. Steve needs to complete his portions of the checklist (as in https://nodus.ligo.caltech.edu:8081/40m/12557)
  3. Need to lock some things before making changes I think—but I’m not really sure about these, just going from what I can glean from the elogs around the last vent
    1. Lock the IMC at low power
    2. Align the arms to green
    3. Lock the arms
    4. Center op lev spots on QPDs
    5. Is there a separate checklist for these things? Seems this locking process happens every time there is a realignment or we start any work, which makes sense, so I expect it is standardized.
  4. Turn/add optics in the reverse order that Gautam did
    1. Check table leveling first?
    2. Rotate OM5 to send the beam to the partially transmissive mirror that goes to the OMC; currently OM5 is sent directly to OM6. OM5 also likely needs to be translated forward slightly; Gautam tried to maintain 45 deg AOI on OM5/6.
    3. A razor beam dump was also removed, which should be replaced (see attachment 1 on https://nodus.ligo.caltech.edu:8081/40m/12568)
    4. May need to rotate OM6 to extract AS beam again, since it was rotated last time
    5. Replace the mirror just prior to the window on the AP table, mentioned here in attachment 3: https://nodus.ligo.caltech.edu:8081/40m/12566
      1. There is currently a rectangular weight on the table where the mirror was, for leveling
  5. Since Gautam had initially made this change to avoid some backscattered beams and get a little extra power, we may need to add some beam dumps to kill ghosts
    1. This is also mentioned in 12566 linked above, the dumps are for back-reflection off the windows of the OMC
  6. Center beam in new path
  7. Check OMC table leveling
  8. AS beam should be round on the camera, with no evidence of clipping on any optics in the path (especially check downstream of any changes)
  4574   Wed Apr 27 18:14:48 2011   kiwamu   Update   LSC   preparation for DRMI locking : RF status

RF_Work_Status.png

POX11 (see this entry) is now listed as REFL11 (on the very top row).

We will rename POY11 to POP11 for DRMI locking.

The files are on https://nodus.ligo.caltech.edu:30889/svn/trunk/suresh/40m_RF_upgrade/.

  2644   Fri Feb 26 15:32:13 2010   steve   Configuration   VAC   preparation for power outage: vacuum all off

There is a planned power outage tomorrow, Saturday from 7am till midnight.

I vented all annuli and switched to the ALL OFF configuration. The small RGA region is still under vacuum.

The vac rack gauges, c1vac1 and the UPS were turned off.

  13806   Wed May 2 10:03:58 2018   Steve   HowTo   SEI   preparation of load cell measurement at ETMX

Gautam and Steve,

We have calibrated the load cells. The support beam height monitoring is almost ready.

The danger of this measurement is that beam height changes can put shear and torsional forces on the formed (thin-walled) bellows.

They are designed mainly for axial motion.

The plan is to limit the height change to 0.020" max.

0, center the oplev with the X arm locked

1, check that the jack screws are carrying full loads and set the height indicator dials to zero (meaning: Stacis is bypassed)

2, raise the beam height with the aux leveling wedge by 0.010" on all 3 support points, and then raise it another 0.005"

3, replace the leveling wedge with a load cell that is centered and shimmed. Dennis Coyne pointed out that the Stacis foot has to be loaded at the center of the foot and the formed bellows can shear at their limits.

4, lower the support beam by 0.005" ......now the full load is on the cells

Note: jack screw heights will not be adjusted or touched.......so the present condition can be recovered

Quote:

We could use similar load cells   to make the actual weight measurement on the Stacis legs. This seems practical in our case.

I have had bad experience with pneumatic Barry isolators.

Our approximate max compression loads are 1500 lbs on 2 feet and 2500 lbs on the 3rd one.

 

 

  13809   Thu May 3 09:56:42 2018   Steve   HowTo   SEI   preparation of load cell measurement at ETMX

[Dennis Coyne's precise answer]

Differential Height between Isolators

According to a note on the bellows drawing (D990577-x0/A), the design life of the bellows at ± 20 minutes rotational stroke is 10,000 cycles. A 20 minute angular (torsional) rotation of the bellows corresponds to 0.186" differential height change across the 32" span between the chamber support beams (see isolator bracket, D000187-x0/B).

Another consideration regarding the bellows is the lateral shear stress introduced by the vertical translation. The notes on the bellows drawing do not give lateral shear limits. According to MDC's web page for formed bellows in this size range the lateral deflection limit is approximately 10% of the "live length" (aka "active length", or length of the convoluted section). According to the bellows drawing the active length is 3.5", so the maximum allowable lateral deflection should be ~0.35".

Of course when imposing a differential height change both torsional and lateral shear is introduced at the same time. Considering both limits together, the maximum differential height change should be < 0.12".

One final consideration is the initial stress to which the bellows are currently subjected due to a non-centered support beam from tolerances in the assembly and initial installation. Although we do not know this de-centering, we can guess that it may be of the order of ~0.04". So the final allowable differential height adjustment from the perspective of bellows stress is < 0.08".   Steve: the accumulated initial stress is unknown. We used to adjust the original jack screws for IFO alignment in the early days of ~1999. This kind of adjustment was stopped when we realized how dangerous it can be. The fact is that there must be an unknown amount of accumulated initial stress. This is my main worry, but I'm confident that a 0.020" change is safe.

So, with regard to bellows stress alone, your procedure to limit the differential height change to <0.020" is safe and prudent.

However, a more stringent consideration is the coplanarity requirement (TMC Stacis 2000 User's Manual, Doc. No. SERV 04-98-1, May 6, 1991, Rev. 1), section 2, "Installation",which stipulates < 0.010"/ft, or < 0.027" differential height across the 32" span between the chamber support beams. Again, your procedure to limit the differential height change to < 0.02" is safe.
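The numbers in this section can be cross-checked with elementary geometry (my arithmetic, not part of the original email; the limits themselves are Dennis's):

```python
import math

SPAN_IN = 32.0   # span between chamber support beams, inches

# 20 arcmin of torsional rotation -> differential height across the span (~0.186 in)
dh_torsion = SPAN_IN * math.tan(math.radians(20 / 60))

# Lateral deflection limit: ~10% of the 3.5 in "live length" of the convolutions (0.35 in)
lateral_limit = 0.10 * 3.5

# STACIS coplanarity spec of 0.010 in/ft across the 32 in span (~0.027 in)
dh_coplanar = 0.010 * SPAN_IN / 12.0
```

All three reproduce the quoted values, so the 0.020" working limit sits comfortably below every constraint.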

Centered Load on the STACIS Isolators

According to the TMC Stacis 2000 User's Manual (Document No. SERV 04-98-1, May 6, 1991, Rev. 1), section 2, "Installation", typical installations (Figure 2-3) are with one payload interface plate which spans the entire set of 3 or 4 STACIS actuators. Our payload interface is unique.

Section 2.3.1, "Installation Steps": "5. Verify that the top of each isolator is fully under the payload/interface plate; this is essential to ensure proper support and leveling. The payload or interface plate should cover the entire top surface of the Isolator or the entire contact area of the optional jack."

section 2.3.2, "Payload/STACIS Interface": "... or if the supporting points do not completely cover the top surface of each Isolator, an interface plate will be needed."

The sketch in Figure 2-2 indicates an optional leveling jack which appears to have a larger contact surface area than the jacks currently installed in the 40m Lab. Of course this is just a non-dimensioned sketch. Are the jacks used by the 40m Lab provided by TMC, or did we (LIGO) choose them? I believe Larry Jones purchased them.

A load centering requirement is not explicitly stated, but I think the stipulation to cover the entire top surface of each actuator is not so much to reduce the contact stress but to ensure a centered load so that the PZT stack does not have a reaction moment.

From one of the photos in the 40m elog entry (specifically jack_screw.jpg), it appears that at least some isolators have the load off center. You should use this measurement of the load as an opportunity to re-center the loads on the Isolators.

In section 2.3.3, "Earthquake Restraints", restraints are suggested to prevent damage from earth tremors. Does the 40m Lab have EQ restraints? Yes, it does.

Screw Jack Location

I could not tell where all of the screw jacks will be placed from the sketch included in the 40m elog entry which outlines the proposed procedure.

Load Cell Locations

The sketch indicates that the load cells will be placed on the center of the tops of the Isolators. This is good. However, while discussing the procedure with Gautam, he said that he was under the impression that the load cell would be placed next to the leveling jack, off-center. This condition may damage the PZT stack. I suggest that the leveling jack be removed and replaced (temporarily) with the load cell, plus any spacer required to make up the height difference. Yes

If you have any further question, just let me know.

    Dennis

 

 

Dennis Coyne
Chief Engineer, LIGO Laboratory
California Institute of Technology
MC 100-36, 1200 E. California Blvd.

 

 

 

  13840   Mon May 14 08:55:40 2018   Dennis Coyne   HowTo   SEI   preparation of load cell measurement at ETMX

Follow-up email from Dennis, 5-13-2018. The last line agrees with the numbers in elog 13821.

Hi Steve & Gautam,

I've made some measurements of the spare (damaged) 40m bellows. Unfortunately neither of our coordinate measurement arms are currently set up (and I couldn't find an appropriate micrometer or caliper), so I could not (yet) directly measure the thickness. However, from the other dimensional measurements, a measurement of the axial stiffness (100 lb/in), and calculations (from the Standards of the Expansion Joint Manufacturers Association (EJMA), 6th ed., 1993), I infer a thickness of 0.010 in. This is close to the value of 0.012 in used by MDC Vacuum for bellows of about this size.

I calculate that the maximum allowable torsional rotation is 1.3 mrad. This corresponds to a differential height, across the 32 in span between support points, of 0.041 in.

In addition, using the EJMA formulas I find that one can laterally displace the bellows by 0.50 inch (assuming a simultaneous axial displacement of 0.25 inch, but no torsion), but no more than ~200 times. It might be good to stay well below this limit, say no more than ~0.25 inch (6 mm).
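The quoted differential-height figure follows from small-angle geometry (my arithmetic; not in the original email):

```python
import math

SPAN_IN = 32.0            # span between support points, inches
MAX_TORSION_RAD = 1.3e-3  # maximum allowable torsional rotation from the EJMA calculation

# Differential height across the span for the maximum torsion (~0.041 in as quoted)
dh = SPAN_IN * math.tan(MAX_TORSION_RAD)

# Suggested working limit: half of the 0.50 in lateral capability, converted to mm
lateral_working_in = 0.5 * 0.50      # 0.25 in
lateral_working_mm = lateral_working_in * 25.4   # ~6.4 mm, i.e. the "~6 mm" in the text
```

This 0.041" torsion-limited figure is consistent with, and slightly larger than, the 0.020" procedure limit from elog 13806.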

If interested I've uploaded my calculations as a file associated with the bellows drawing at D990577-A/v1.

BTW in some notes that I was given (by either Larry Jones or Alan Weinstein) related to the 40m Stacis units, I see a sketch from Steve dated 3/2000 faxed to TMC which indicates 1200 lbs on each of two Stacis units and 2400 on the third Stacis.

  5089   Tue Aug 2 02:35:23 2011   kiwamu   Update   General   preparation of the vent : status and plan

The vent will take place on Wednesday.

Plan for Tuesday :

  (Morning) Preparation of necessary items for the low power MC (Steve / Jamie)

  (Daytime) Measurement of the MC spot positions (Suresh)

  (Daytime) Arm length measurement (Jenne)

  (Nighttime) Locking of the low power MC (Kiwamu / Volunteers)

 

Plan for Wednesday :

  (Early morning) Final checks on the beam axis, all alignments and green light (Steve / Kiwamu / Volunteers )

  (Morning) Start the vent (Steve)

  (daytime-nighttime) Taking care of the Air/Nitrogen cylinders (Everybody !!)

 

Status of the vent preparation :

 

  (not yet) Low power MC

  (ongoing) Measurement of the arm lengths

  (ongoing) Measurement of the MC spot positions

  (80% done) Estimation of the tolerance of the arm length (#5076)

  (done) Alignment of the Y green beam (#5084)

  (done) Preparation of beam dumps (#5047)

  (done) Health check of shadow sensors and the OSEM damping gain adjustment (#5061)

  (done) Alignment of the incident beam axis (#5073)

  (done) Loss measurement of the arm cavities (#5077)

  5078   Sun Jul 31 22:48:35 2011   kiwamu   Summary   General   preparation of the vent : status update

Status update for the vent preparation:

The punchline is : We can not open the chamber on Monday !

 

##### Task List for the vent preparation #####

  (not yet) Low power MC

  (not yet) Measurement of the arm lengths

  (not yet) Alignment of the Y green beam (#5066)

  (not yet) Measurement of the MC spot positions

  (80% done) Estimation of the tolerance of the arm length (#5076)

  (done) Preparation of beam dumps (#5047)

  (done) Health check of shadow sensors and the OSEM damping gain adjustment (#5061)

  (done) Alignment of the incident beam axis (#5073)

  (done) Loss measurement of the arm cavities (#5077)

Quote from #5048

Quote:

The vent will start from 1 st of August ! 

 

  5080   Mon Aug 1 08:52:37 2011   steve   Update   VAC   preparation to vent

Both arms locked easily at around 1 V transmitted.  We should recenter the oplevs.

  2778   Wed Apr 7 09:00:01 2010   steve   HowTo   PEM   prepare to open chamber

In order to minimize the diffusion of more dust particles into the vented IFO vacuum envelope

BEFORE opening chamber:

-Have a  known plan,

-A heavy 1" thick door requires 3 persons, of whom one is experienced and one is a certified crane operator, and steel toe safety shoes

-Block IFO beams; beware of experimental setups and other hazards: 1064 nm, visible, or new/special installations

-Look at the particle counter; do not open above 6,000 particles of 0.5 micron. Construction activities are winding down. See the plot of the 35 days since we vented.

-Have a clean door stand for the heavy door, covered with merostate, at the right location, and dry-clean screws for the light covers

-Prepare lint-free wipers for o-rings (no solvent on o-rings!), Kimwipes for the outside of the chamber and metal covers, methanol, and powder-free gloves

-Wipe around the door, the chamber of interest and the o-ring cover ring with a methanol-wetted Kimwipe

-Cut the door-covering merostate and tape it into position,..if in place...check the folded-merostate position; if dusty... replace it

-Is your cleanroom garment clean?.......if in doubt ....replace it

-Keep surrounding area free and clean

-Make sure that the HEPAs are running: PSL enclosure, the two mobile units and the south end flow bench

-Check the tools: are they really clean? Wipe them with a wet Kimwipe; do you see anything on the Kimwipe?

 

-You are responsible for closing the chamber ASAP with the light door or doors when you finish for the day.

Keeping the merostate cover down is appropriate during daily breaks.

  5950   Fri Nov 18 16:37:14 2011   steve   Update   VAC   preparing for ac power interruption

The vacuum is ready for no AC power for 1 hr on Sunday morning at 10am

 

I did the following:

 

Closed V1, stopped the rotation of the TP-1 maglev, waited till it reached 0 rpm, and turned its controller off.

Closed V4 and stopped TP-2 rotating.

Closed all annulus valves and VA6.

Closed VM1 and opened VM3. This means the RGA is being pumped by TP3. The RGA is running in background mode. V5 will close instantly when the AC is turned off.

VAC STATUS:  The IFO envelope and annuluses are not pumped.  P1 pressure will reach 5-6 mTorr by Sunday morning.

                                 The PSL output shutter will be closed by the interlock at 3 mTorr

 

Kiwamu will turn off Piezo Jena PZT power supplies and computers Saturday.

I will be here around 1 pm Sunday to start pumping. I will need EPICS MEDM running by then.

  10467   Mon Sep 8 08:24:49 2014   Steve   Update   Computer Scripts / Programs   preparing vac system to reboot

Q and Steve will follow elog 10028 entry to prepare the vacuum system for safe reboot

  10468   Mon Sep 8 11:10:26 2014   ericq   Update   Computer Scripts / Programs   preparing vac system to reboot

Quote:

Q and Steve will follow elog 10028 entry to prepare the vacuum system for safe reboot

Here's the sequence of the morning so far:

  • I aligned the IFO (IR arms with ASS, X green with PZTs, PRM with PRMI locked on REFL33)
  • I closed the PSL shutter, and went inside to align PRM and both ITM oplevs (all others were within 10urad of zero in both directions)
  • While aligning those oplevs, I noticed the smell of burnt electronics. We tracked it down to the +15V sorensen in the rack nearest the PSL table
    • I claim the precipitating event was PSL shutter activity. If I recall correctly, the seismic rainbow traces went bonkers around the same time as the shutter was closed. There is a Guralp interface in the rack powered by the failed sorensen, so this would explain the erratic seismometer signals correlated with the power supply failure. We will look into potential shorts caused by the shutter. (Steve looked up the PMC trans and Guralp DQ channels, and confirmed the temporal coincidence of the events.)
  • We shut off all of the sorensens so that electronics were not being driven asymmetrically. 
  • Steve and I secured the vacuum system for computer reboots, as referred to in Steve's elog. Some combination of Jenne, Rana and Manasa shut down the control room computers, and turned off the watchdogs. 
  • Manasa and I moved Chiara inside, next to Mafalda, along with its backup HDs. It has been labeled. 
  • Booted up control room machines, they came up happy. 
  • FB and front-ends didn't need reboot, for some lucky reason. Watchdogs came back happily, oplev spots didn't move noticeably. 

The IFO is still down, as the PMC won't lock without the rack power, and we haven't pinned down the shorting mechanism. We don't want the replacement sorensen to immediately blow when plugged in. 

  10469   Mon Sep 8 11:34:47 2014   rana   Update   Computer Scripts / Programs   preparing vac system to reboot

FYI: in that rack, the +15V usually pulls ~0.5 A more than the -15V. I think this is due to some RF amplifiers which are powered by it (e.g. the AOM that Manasa set up). The Sorensens can source ~30 A in principle, so we should make sure to set the current limit appropriately so as to not overheat them when there is a short.

Was this power supply not fused for all of its connections? I remember that this was connected to at least one un-fused connection in the past year.

  10470   Mon Sep 8 12:11:36 2014   manasa   Update   Computer Scripts / Programs   preparing vac system to reboot

Quote:

FYI: in that rack, the +15V pulls ~0.5 A more than -15V usually. I think this is due to some RF amplifiers which are powered by this (e.g. the AOM that Manasa set up). The Sorensen's can source ~30A in principle, so we should make sure to set the current limit appropriately so as to not overheat them when there is a short.

Was this power supply not fused for all of its connections? I remember that this was connected to at least one un-fused connection in the past year.

 +15V supply powers the following (from what I see):

1. PMC and MC boards on the rack.

2. RF amplifiers on the rack for the beat signals from the green beat PDs.

3. Beatbox itself.

The beatbox was the one that had an un-fused connection last year. I re-did it properly to go through a fuse quite some time ago.

I don't see any other un-fused connections from the +15V supply right now.

P.S. The AOM driver takes a 0 to +28V power supply and is not connected to the +15V.

  10477   Tue Sep 9 14:18:40 2014   Steve   Update   Computer Scripts / Programs   preparing vac system to reboot

Quote:

Quote:

Q and Steve will follow elog 10028 entry to prepare the vacuum system for safe reboot

Here's the sequence of the morning so far:

  • I aligned the IFO (IR arms with ASS, X green with PZTs, PRM with PRMI locked on REFL33)
  • I closed the PSL shutter, and went inside to align PRM and both ITM oplevs (all others were within 10urad of zero in both directions)
  • While aligning those oplevs, I noticed the smell of burnt electronics. We tracked it down to the +15V sorensen in the rack nearest the PSL table
    • I claim the precipitating event was PSL shutter activity. If I recall correctly, the seismic rainbow traces went bonkers around the same time as the shutter was closed. There is a Guralp interface in the rack powered by the failed sorensen, so this would explain the erratic seismometer signals correlated with the power supply failure. We will look into potential shorts caused by the shutter. (Steve looked up the PMC trans and Guralp DQ channels, and confirmed the temporal coincidence of the events.)
  • We shut off all of the sorensens so that electronics were not being driven asymmetrically. 
  • Steve and I secured the vacuum system for computer reboots, as referred to in Steve's elog. Some combination of Jenne, Rana and Manasa shut down the control room computers, and turned off the watchdogs. 
  • Manasa and I moved Chiara inside, next to Mafalda, along with its backup HDs. It has been labeled. 
  • Booted up control room machines, they came up happy. 
  • FB and front-ends didn't need reboot, for some lucky reason. Watchdogs came back happily, oplev spots didn't move noticeably. 

The IFO is still down, as the PMC won't lock without the rack power, and we haven't pinned down the shorting mechanism. We don't want the replacement sorensen to immediately blow when plugged in. 

The safe vacuum reboot required one hour with no pumping of the vacuum envelope.

  11384   Tue Jun 30 11:33:00 2015 JamieSummaryCDSprepping for CDS upgrade

This is going to be a big one.  We're at version 2.5 and we're going to go to 2.9.3.

RCG components that need to be updated:

  • mbuf kernel module
  • mx_stream driver
  • iniChk.pl script
  • daqd
  • nds

Supporting software:

  • EPICS 3.14.12.2_long
  • ldas-tools (framecpp) 1.19.32-p1
  • libframe 8.17.2
  • gds 2.16.3.2
  • fftw 3.3.2

Things to watch out for:

  • RTS 2.6:
    • raw minute trend frame location has changed (CRC-based subdirectory)
    • new kernel patch
  • RTS 2.7:
    • supports "commissioning frames", which we will probably not utilize.  need to make sure that we're not writing extra frames somewhere
  • RTS 2.8:
    • "slow" (EPICS) data from the front-end processes is acquired via DAQ network, and not through EPICS.  This will increase traffic on the DAQ lan.  Hopefully this will not be an issue, and the existing network infrastructure can handle it, but it should be monitored.
  4567   Mon Apr 25 22:38:49 2011 kiwamuUpdateLSCpreparation for DRMI : Y arm flashing
This week is going to be a recycled Michelson week.
As a preparation I did several things today :
 1. Alignment of the Y arm
 2. Alignment of PRM
 3. Checking of all the pick-off ports

 


 
(Y arm alignment)
 The idea behind having the Y arm aligned is that once we lock the Y arm we will be able to align the input PZTs using it as a reference.
 I tried aligning the Y arm and successfully got it flashing with IR. I can see it flashing on the ITMY camera but not on the ETMY camera.
 
(PRM alignment)
PRM has been intentionally misaligned for the single arm green locking test.
I just confirmed that we can bring PRM back to a good alignment. Now we can see the central part is flashing too.
 
(picked-off beams)
I checked all of the pick-off beams to see whether they are still available:
POX : lost
POY : fine
POP : very clipped
POSRM : fine
  14247   Fri Oct 12 17:37:03 2018 SteveUpdateVACpressure gauge choices

We want to measure the pressure gradient in the 40m IFO

Our old MKS cold cathodes are out of order. The existing working gauge at the pumpspool is InstruTech CCM501

The plan is to purchase 3 new gauges for ETMY, BS and MC2 location.

Basic cold cathode     or    Bayard-Alpert Pirani

    

 

  5017   Fri Jul 22 10:24:34 2011 steveUpdateVACpressure plot at day 213

On Dec 21, 2010 we pumped down the MARK4 rebuilt 40m IFO, and the maglev has been pumping on it since then.

  5767   Mon Oct 31 08:55:19 2011 steveUpdateVACpressure plot at day 53

Quote:

I was lucky to notice that the nitrogen supply line to the vacuum valves was leaking. Closed ALL valves. Opened the supply line to atmosphere. Fixed the leak.

This was done fast so the pumps did not have to be shut down. Pressurized the supply line and opened the valves back to the "Vac Normal" condition in the right sequence.

 

  13184   Thu Aug 10 14:14:17 2017 KiraUpdatePEMpreviously built temp sensor

I decided to see what was inside the sensor that had been previously made. According to elog 1102, the temperature sensor is LM34, the specs of which can be found here:

http://www.ti.com/lit/ds/symlink/lm34.pdf

The wiring of this sensor confused me, as it appears that the +Vs end (white) connects to the input, but both the ground (left) and the Vout (middle) pins are connected to the box itself. I don't see how the signal can be read.

  5889   Mon Nov 14 21:22:48 2011 ranaConfigurationComputersprimetime RSYNC slowing down NODUS

nodus:elog>w; who ; date
  9:20pm  up 44 day(s),  5:14,  5 users,  load average: 0.29, 1.04, 1.35
User     tty           login@  idle   JCPU   PCPU  what
controls pts/1         9:18pm            5         -tcsh
controls pts/2         2:37pm  6:39  25:02  25:02  /opt/rsync/bin/rsync -avW /cvs/c
controls pts/3         9:14pm                      w
controls pts/4         4:20pm  1:56   5:02   5:02  ssh -X rosalba
controls pts/8         8:23pm    47   4:03         -tcsh
controls   pts/1        Nov 14 21:18    (pianosa.martian)
controls   pts/2        Nov 14 14:37    (ldas-cit.ligo.caltech.edu)
controls   pts/3        Nov 14 21:14    (rosalba)
controls   pts/4        Nov 14 16:20    (192.168.113.128)
controls   pts/8        Nov 14 20:23    (gwave-103.ligo.caltech.edu)
Mon Nov 14 21:20:48 PST 2011

we will ask the man to stop running backups at this time of night...

  3863   Thu Nov 4 17:53:29 2010 yutaUpdateCDSprimitive python script for A2L measurement

Summary:
  I wrote a python script for A2L measurement.
 Currently it is really primitive, but I tested the basic functionality of the script.

 We already have an A2L script (at /cvs/cds/rtcds/caltech/c1/scripts/A2L) that uses ezlockin, but Python is more stable and easier to read.

A2L measurement method:
  1. Dither an optic using the software oscillator in LOCKIN and demodulate the length signal at that frequency.
  2. Change the coil output gains to move the pivot of the dithering and repeat step 1.
  3. The coil output gain set that gives the smallest demodulated magnitude tells you where the current beam spot is.

  Say you are dithering the optic in PIT and changing the coil gains keeping UL=UR and LL=LR.
  If the coil gain set UL=UR=1.01, LL=LR=-0.99 gives you demodulated magnitude 0, that means the current beam spot is 1% above the center, compared to 1/2 of the UL-LL length.
  You do the same thing for YAW to find horizontal position of the beam.

Description of the script:
  Currently, the script lives at /cvs/cds/caltech/users/yuta/scripts/A2L.py
  If you run;
     ./A2L.py MC1 PIT
  it gives you vertical position of the beam at MC1.

  It changes the TO_COIL matrix gains by "DELTAGAINS", turns on the oscillator, and gets X_SIN, X_COS from C1IOO_LOCKIN.
  Plots DELTAGAINS vs X_SIN/X_COS and fits them with y=a+bx+cx^2. (Ideally, c=0.)
  Rotates (X_SIN, X_COS) vectors to get I-phase and Q-phase.
    (I,Q)=R*(X_SIN,X_COS)
  Rotation angle is given by;
    rot=arctan(b(X_COS)/b(X_SIN))
  which gives Q a zero slope (ideally, Q=0).
  x-intercept of DELTAGAINS vs I plot gives the beam position.
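The fit-and-rotate procedure above can be sketched as follows. This is a minimal sketch with illustrative function and variable names, not the actual contents of A2L.py:

```python
import numpy as np

def a2l_beam_position(delta_gains, x_sin, x_cos):
    """Estimate the beam-spot offset from the LOCKIN demod outputs.

    delta_gains : coil-gain offsets applied to shift the dither pivot
    x_sin, x_cos: demodulated outputs measured at each gain offset
    Returns (position, rot_deg): the x-intercept of the I-phase fit,
    and the rotation angle that zeroes the Q-phase slope.
    """
    # fit y = a + b*x + c*x^2 to each quadrature (ideally c ~ 0)
    c_s, b_s, a_s = np.polyfit(delta_gains, x_sin, 2)
    c_c, b_c, a_c = np.polyfit(delta_gains, x_cos, 2)
    # rot = arctan(b(X_COS)/b(X_SIN)) puts the full slope into the
    # I phase and leaves Q with zero slope
    rot = np.arctan2(b_c, b_s)
    i_phase = np.cos(rot) * x_sin + np.sin(rot) * x_cos
    # linear fit of I vs gain offset; the x-intercept is the beam position
    b_i, a_i = np.polyfit(delta_gains, i_phase, 1)
    return -a_i / b_i, np.degrees(rot)
```

With synthetic data whose null sits at a +1% gain offset, the function returns ~0.01, i.e. a beam spot 1% above center in the convention described above.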

Checking the script:
  1. I used the same setup when I checked LOCKIN(see elog #3857). C1:SUS-MC2_ULCOIL output goes directly to C1:IOO-LOCKIN_SIG input.

  2. Set oscillator frequency to 18.13Hz, put 18.13Hz band-pass filter to C1:IOO-LOCKIN_SIG filter module, and put 1Hz low-pass filter to C1:IOO-LOCKIN_X_SIN/X_COS filter modules.
        Drive frequency 18.13Hz is same as the previous script(/cvs/cds/rtcds/caltech/c1/scripts/A2L/A2L_MC2).

  3. Ran the script. Checked that Q~0 and rot=-35deg.

  4. Put a phase-shifting filter in the C1:IOO-LOCKIN_SIG filter module and checked Q~0 and the rotation angle.
     filter  rot(deg)
     w/o     -35
     +90deg   45
     -90deg   56
     -45deg  -80

  5. Put some noise into C1:SUS-MC2_ULCOIL by adding the SUSPOS feedback signal and ran the script (Attachment #1).
      During the measurement, the damping servo was off, so the SUSPOS feedback signal can be treated as noise.

Conclusion:
  The result from the test measurement seems reasonable.
  I think I can apply it to the real measurement, if the MCL signal is not too noisy. [status: yellow]

Plan:
  - add calculating coherence procedure, averaging procedure to the script
  - add setting checking procedure to the script
  - apply it to real A2L measurement

By the way:
  The computers in the control room are being very slow (rossa, allegra, op440m, rosalba). I don't know why.

  5296   Wed Aug 24 11:40:21 2011 jamie, jenne, kiwamu, suresh, steveUpdateSUSproblem with ITMX

ITMX was drag wiped, and the suspension was put back into place.  However, after removing all of the earthquake stops we found that the suspension was hanging in a very strange way.

The optic appears to be heavily pitched forward in the suspension.  All of the rear face magnets are high in their OSEMs, while the SIDE OSEM appears fine.  When first inspected, some of the magnets appeared to be stuck to their top OSEM plates, which was definitely causing it to pitch forward severely.  After gently touching the top of the optic I could get the magnets to sit in a more reasonable position in the OSEMs.  However, they still seem to be sitting a little high.  All of the PDMon values are also too low:

     nominal   now
UL   1.045     0.767
UR   0.855     0.718
LR   0.745     0.420
LL   0.780     0.415
SD   0.840     0.752

Taking a free swing measurement now.

  7863   Thu Dec 20 12:14:19 2012 JamieUpdateGeneralproblem with in-vac wiring for TTs

Nic and I discovered a problem with the in-vac wiring from the feed-thru to the top of the table.  Pin 13 at the top of the stack, which is one of the coil pins on the tip-tilt quadrapus cables, is *the* shield braid on the cable that goes to the feed-thru.  This effectively shorts one of our coil signals.

There are three solutions as we see it:

* swap pin 13 for something else at the top of the stack, and then swap it back somewhere else outside of the vacuum.

* swap *all* the pins at the top of the table to be the mirror.  We would then need to mirror our cables on the outside, but that's less of an issue.

* make a mirror adapter that sits at the top.  This would obviously need to be cleaned/baked.

None of these solutions is particularly good or fast.
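For the second option, the pin mapping itself is easy to write down. A sketch, assuming standard 25-pin D-sub connectors (an assumption — the entry doesn't name the connector type):

```python
def mirror_db25(pin):
    """Left-right mirror of a DB25 connector: the top row (pins 1-13)
    reverses to 13-1, and the bottom row (pins 14-25) reverses to 25-14."""
    if 1 <= pin <= 13:
        return 14 - pin
    if 14 <= pin <= 25:
        return 39 - pin
    raise ValueError(f"not a DB25 pin: {pin}")
```

Under this mapping the problematic pin 13 lands on pin 1, and the map is its own inverse, so a single mirrored cable on the outside undoes the in-vac swap.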

  7341   Tue Sep 4 20:20:47 2012 jamieUpdateGeneralproblematic tip-tilts

Quote:

We clearly need a better plan for adjusting the tip tilts in pitch, because utilizing their hysteresis is ridiculous.  Koji and Steve are thinking up a set of options, but so far it seems as though all of those options should wait for our next "big" vent.  So for now, we have just done alignment by poking the tip tilt.

Tomorrow, we want to open up the MC doors, open up ETMY, and look to see where the beam is on the optic.  I am concerned that the hysteresis will relax over a long ( >1 hour ) time scale, and we'll lose our pointing.  After that, we should touch the table enough to trip the BS and PRM optics, since Koji is concerned that perhaps the tip tilt will move in an earthquake.  Jamie mentioned that he had to poke the tip tilt a pretty reasonable amount to get it to change a noticeable amount at ETMY, so we suspect that an earthquake won't be a problem, but we will check anyway.

 I'm very unhappy with the tip-tilts right now.  The amount of hysteresis is ridiculous.  I have no confidence that they will stay pointing wherever we point them.  It's true I poked the top more than it would normally move, but I don't actually believe it wouldn't move in an earthquake.  Given how much hysteresis we're seeing, I expect it will just drift on its own and we'll lose good pointing again.

And as a reminder, IPPOS/ANG don't help us here because the tip-tilts are in the PRC, after the IP pointing sensors.

I think we need to look seriously at possible solutions to eliminate or at least reduce the hysteresis, by adding weight, using thinner wire, or something else.

  2013   Mon Sep 28 17:39:34 2009 robUpdatePSLproblems

The PSL/IOO combo has not been behaving responsibly recently. 

The first attachment is a 15 day trend of the MZ REFL, ISS INMON, and MC REFL power.  These show two separate problems--recurring MZ flakiness, which may actually be a loose cable somewhere which makes the servo disengage.  Such disengagement is not as obvious with the MZ as it is with other systems, because the MZ is relatively stable on its own.  The second problem is more recent, just starting in the last few days.  The MC is drifting off the fringe, either in alignment, length, or both.  This is unacceptable.

The second attachment is a two-day trend of the MC REFL power.  Last night I carefully put the beam on the center of the MC-WFS quads.  This appears to have lessened the problem, but it has not eliminated it. 

It's probably worth trying to re-measure the MCWFS system to make sure the control matrix is not degenerate. 

  6699   Tue May 29 00:53:57 2012 DenUpdateCDSproblems

I've noticed several CDS problems:

  1. The communication indicator on the C1SUS model turns red once in a while. I press DIAG RESET and it is gone, but after some time it comes back.
  2. On the C1LSC machine, the red "U" lamp flashes with a period of ~5 sec.
  3. I was not able to read data from the SR785 using netgpibdata.py. Either the connection is not established at all, or the data starts to download and then stops in the middle. I've checked the cables, power supplies, and everything; still the same thing.
  11252   Sun Apr 26 00:56:21 2015 ranaSummaryComputer Scripts / Programsproblems with new restart procedures for elogd and apache

Since the nodus upgrade, Eric/Diego changed the old csh restart procedures to be more UNIX standard. The instructions are in the wiki.

After doing some software updates on nodus today, apache and elogd didn't come back OK. Maybe because of some race condition, elog tried to start but didn't get apache. Apache couldn't start because it found that someone was already binding the ELOGD port. So I killed ELOGD several times (because it kept trying to respawn). Once it stopped trying to come back I could restart Apache using the Wiki instructions. But the instructions didn't work for ELOGD, so I had to restart that using the usual .csh script way that we used to use.

  11267   Fri May 1 20:33:31 2015 ranaSummaryComputer Scripts / Programsproblems with new restart procedures for elogd and apache

Same thing again today. So I renamed /etc/init/elog.conf so that it doesn't keep respawning bootlessly. Until then, restart the elog using the start script in /cvs/cds/caltech/elog/ as usual.

I'll let EQ debug when he gets back - probably we need to pause the elog respawn so that it waits until nodus is up for a few minutes before starting.

Quote:

Since the nodus upgrade, Eric/Diego changed the old csh restart procedures to be more UNIX standard. The instructions are in the wiki.

After doing some software updates on nodus today, apache and elogd didn't come back OK. Maybe because of some race condition, elog tried to start but didn't get apache. Apache couldn't start because it found that someone was already binding the ELOGD port. So I killed ELOGD several times (because it kept trying to respawn). Once it stopped trying to come back I could restart Apache using the Wiki instructions. But the instructions didn't work for ELOGD, so I had to restart that using the usual .csh script way that we used to use.

 

  16991   Tue Jul 12 13:59:12 2022 ranaSummaryComputersprocess monitoring: Monit

I've installed Monit on megatron and nodus just now, and will set it up to monitor some of our common processes. I'm hoping that it can give us a nice web view of what's running where in the Martian network.

  11093   Tue Mar 3 11:38:11 2015 SteveUpdatesafetyprofessional crane inspection

 

Quote:

Safety glasses were measured and they are all good. I'd like to measure your personal glass if it is not on this picture.

Quote:

Safety audit went smoothly. We thank all participants.

Correction list:

1, Bathroom water heater cable to be stress relieved and the connector replaced by a twist-lock type.

2, Floor cable bridge at the vacuum rack to be replaced. It is cracked.

3, Sprinkler head to be moved eastward 2 ft in room 101

4, Annual crane inspection is scheduled for 8am Marc 3, 2015

5, Annual safety glasses cleaning and transmission measurement will get done tomorrow morning.

 

Konecranes' Fred inspected and load-tested all three cranes with 450 lbs.

  12015   Wed Mar 2 10:09:28 2016 SteveUpdatesafetyprofessional crane inspection

The crane inspection is scheduled for this coming Friday from 8-12

 

  716   Tue Jul 22 16:50:09 2008 steveMetaphysicsEnvironmentprofessorial clean up of work bench
Atm1: shows the spiritual satisfaction after the workbench cleanup by the professor himself.

Atm2: some items are still waiting to be placed back in their locations.
  8048   Fri Feb 8 23:22:48 2013 DenSummaryModern Controlprogress report

 I wrote a small document on the application of LQG method to a Fabry-Perot cavity control.

  7646   Wed Oct 31 17:11:40 2012 jamieUpdateAlignmentprogress, then setback

jamie, nic, jenne, den, raji, manasa

We were doing pretty well with alignment, until I apparently fucked things up.

We were approaching the arm alignment on two fronts, looking for retro-reflection from both the ITMs and the ETMs.

Nic and Raji were looking for the reflected beam off of ETMY, at the ETMY chamber.  We put an AWG sine excitation into ETMY pitch and yaw.  Nic eventually found the reflected beam, and they adjusted ETMY for retro-reflection.

Meanwhile, Jenne and I adjusted ITMY to get the MICH Y arm beam retro-reflecting to BS.

Jenne and I then moved to the X arm.  We adjusted BS to center on ITMX, then we moved to ETMX to center the beam there.  We didn't bother looking for the ETMX reflected beam.  We then went back to BS and adjusted ITMX to get the MICH X arm beam retro-reflected to the BS.

At this point we were fairly confident that we had the PRC, MICH, and X and Y arm alignment ok.

We then moved on to the signal recycling cavity.  Having removed and reinstalled the SRC tip-tilts, and realigned everything else, they were not in the correct spot.  The beam was off-center in yaw on SR3, and the SR3 reflected beam was hitting low and to the right on SR2.  I went to loosen SR3 so that I could adjust its position and yaw, and that's when things went wrong.

Apparently I hit something on the BS table and completely lost the input pointing.  I was completely perplexed until I found that the PZT2 mount looked strange.  The upper adjustment screw appeared to have no range.  Looking closer I realized that we had somehow lost the gimbal ball between the screw and the mount.  Apparently I somehow hit PZT2 hard enough to separate the mirror mount from the frame, which caused the gimbal ball to drop out.  The gimbal ball probably got lost in a table hole, so we found a similar mount from which we stole a replacement ball.

However, after putting PZT2 back together things didn't come back to the right place.  We were somehow high going through PRM, so we couldn't retro-reflect from ITMY without completely clipping on the PRM/BS apertures.  wtf.

Jenne looked at some trends and we saw a big jump in the BS/PRM osems.  Clearly I must have hit the table/PZT2 pretty hard, enough to actually kick the table.  I'm completely perplexed how I could have hit it so hard and not really realized it.

Anyway, we stopped at this point, to keep me from punching a hole in the wall.  We will re-assess the situation in the morning.  Hopefully the BS table will have relaxed back to its original position by then.

  7647   Wed Oct 31 17:18:34 2012 JenneUpdateAlignmentprogress, then setback - trend of BS table shift

Here is a two-hour set of second trends of 2 sensors per mirror, for BS, PRM, ITMY and MC1.  You can see that about an hour ago there was a big change in the BS and PRM suspensions, but not in the ITMY and MC1 suspensions.  This corresponds, as best we can tell, with the time that Jamie was figuring out and then fixing PZT2's mount.  You can see that the table takes some time to relax back to its original position.  Also, interestingly, after we put the doors on ~10 or 20 minutes ago, things changed a little bit on all tables.  This is a little disconcerting, although it's not a huge change.

  7649   Wed Oct 31 17:36:39 2012 jamieUpdateAlignmentprogress, then setback - trend of BS table shift

Quote:

Here is a two-hour set of second trends of 2 sensors per mirror, for BS, PRM, ITMY and MC1.  You can see that about an hour ago there was a big change in the BS and PRM suspensions, but not in the ITMY and MC1 suspensions.  This corresponds, as best we can tell, with the time that Jamie was figuring out and then fixing PZT2's mount.  You can see that the table takes some time to relax back to its original position.  Also, interestingly, after we put the doors on ~10 or 20 minutes ago, things changed a little bit on all tables.  This is a little disconcerting, although it's not a huge change.

 What's going on with those jumps on MC1?  They're smaller, but noticeable, and look like they happened around the same time.  Did the MC table jump as well?

more looking tomorrow.

  7651   Thu Nov 1 01:51:37 2012 ranaUpdateAlignmentprogress, then setback - trend of BS table shift

  But these jumps in the OSEMs are all at the level of 10-20 microns. Seems like that wouldn't be enough to account for anything; 20 microns / (pend length) ~ 50-60 microradians.
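The arithmetic above checks out (a quick sketch; the 0.34 m pendulum length is an assumed value, chosen to be consistent with the quoted 50-60 microradian range):

```python
# angle ~ displacement / pendulum length (small-angle approximation)
jump = 20e-6           # OSEM jump in meters (20 microns)
pend_length = 0.34     # assumed suspension wire length in meters
angle_urad = jump / pend_length * 1e6
print(round(angle_urad, 1))   # -> 58.8, within the quoted 50-60 urad
```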

  7652   Thu Nov 1 08:48:42 2012 steveUpdateAlignmentprogress, then setback - trend of BS table shift

Quote:

  But these jumps in the OSEMs are all at the level of 10-20 microns. Seems like that wouldn't be enough to account for anything; 20 microns / (pend length) ~ 50-60 microradians.

 BS table and suspensions are fine.

ELOG V3.1.3-