ID | Date | Author | Type | Category | Subject
  8746 | Tue Jun 25 19:18:07 2013 | gautam | Configuration | endtable upgrade | plan of action for PZT installation

  This entry is meant to be a sort of inventory check and a tentative plan-of-action for the installation of the PZT mounted mirrors and associated electronics on the Y-endtable. 

Hardware details:

  •  PZT mounts are cleaned and ready to be put on the end-tables.
  • The PZTs being used are PI S-330.20L Piezo Tip/Tilt Platforms. Each endtable requires two of these. The input channels have male single-lemo connectors. There are 3 channels on each tip/tilt platform, for tilt, yaw and a bias voltage.
  • The driver boards being used are D980323 Rev C. Each board is capable of driving 2 piezo tip/tilt platforms. I am not too sure of this but I think that the SMA female connector on these boards is meant to be connected with the bias voltage from our Kepco high-voltage power supplies. The outputs on these boards are fitted with SMB female connectors, while the piezo tip/tilt platforms have male single-lemo connectors. We will have to source cables with the appropriate connectors to run between the end-table and rack 1Y4 (see below). The input to these boards from the DAC will have to be made with a custom ribbon connector as per the pin out configuration given in the circuit drawing.
  • High-voltage power supply: KEPCO BHK 300-130 MG. This will supply the required 100V DC bias voltage to the piezo tip/tilts via the driver board. Since each board is capable of driving two piezos, we will only need one unit per end-table. The question is where to put these (photo attached). It doesn't look like it can be accommodated in 1Y4 (again photo attached) and the power cable the unit came with is only about 8ft long. If we put these under the end-tables, then we will need an additional long (~10m) cable to run from these to the driver boards at 1Y4 carrying 100 V. 
  •  We will need long (~10m by my rough measurement at the X and Y ends) cables to run from rack 1Y4 to the endtable to drive the piezos. These will have to be high-voltage tolerant (at least to 100V DC) and should have SMB male connectors at one end and female single-lemo connectors at the other. I have emailed 3 firms (CD International Technologies Inc., Stonewall Cables, and Fairview Microwave) detailing our requirements and asking for a quote and estimated time for delivery. We will need 6 of these, plus another cable with an SMA connector on one end and the other end open, to connect the 100V DC bias voltage from the high-voltage power supply to the driver boards (the power supply comes with a custom jack to which we can solder open leads). We will also possibly need ~3m-long LEMO-to-? cables (I need to check what the input connector for the data acquisition channels is) for the monitoring channels; I am not sure if these are available, so I will check with Steve tomorrow.

Other details:

  • I have attached a wiring diagram with the interconnects between the various devices at various places, the types of connectors required, etc. The error signal will be the transmitted green light from the cavity, and there is already a DQ channel logging this information, so no additional wiring is required to this end.
  • Jamie had detailed channel availability in elog 8580. I had a look at rack 1Y4, and there were free DAC channels available, but I am not sure which of the ones listed in the elog they correspond to. In any case, Jamie did mention that there are sufficient channels available at the end-stations for this purpose, but all of these are fast channels. What needs to be decided is whether we go ahead and use the fast channels, or whether we need to find slow DAC channels.
  • I spoke to Koji about gluing the mirrors to the PZTs, and he says we can use superglue, and also to be sure to clean both the mirror and the tip/tilt surfaces before gluing. In any case, all the other hardware issues need to be sorted out first before thinking about gluing the mirrors.

High-Voltage Power Supply: photo_3.JPG

Situation at rack 1Y4: photo_4.JPG

Wiring diagram: ASC_schematic.pdf

  8800 | Wed Jul 3 21:19:04 2013 | gautam | Configuration | endtable upgrade | plan of action for PZT installation

 This is an update on the situation as far as PZT installation is concerned. I measured the required cable lengths (PZT driver board to PZT) for the X and Y ends as well as the PSL table once again, with the help of a 3m-long BNC cable, just to make sure we had the lengths right. The quoted cable lengths include a one-meter tolerance. The PZTs themselves have 1.5m-long cables, though I have assumed these will be used up on the tables themselves. The inventory status is as follows.

  1. Stuff ordered:
    • RG316 LEMO 00 (female) to SMB (female) cables, 10 meters - 6pcs (for the Y-end)
    • RG316 LEMO 00 (female) to SMB (female) cables, 11 meters - 6pcs (for the X-end)
    • RG316 LEMO 00 (female) to SMB (female) cables, 15 meters - 8pcs (6 for the PSL, and two spares)
    • RG316 SMA (male) to open cables, 3 meters - 3pcs (1 each for the X end, Y end and PSL table, for connecting the driver boards to the 100V DC power supply)
    • 10 pin IDC connectors for connecting the DAC interface to the PZT driver boards 
  2. Stuff we have:
    • 40 pin IDC connectors which connect to the DAC interface
    • PZT driver boards
    • PZT mounts
    • Twisted ribbon wire, which will be used to make the custom ribbon to connect the 10 pin IDC to the 40 pin IDC connector

I also did a preliminary check on the driver boards, mainly to check for continuity. Some minor modifications have been made to this board from the schematic shown here (using jumper wires soldered on the top-side of the PCB). I will have to do a more comprehensive check to make sure the board as such is functioning as we expect it to. The plan for this is to first check the board without the high-voltage power supply (using an expansion card to hook it up to a eurocrate). Once it has been verified that the board is getting powered, I will connect the high-voltage supply and a test PZT to the board to do both a check of the board as well as a preliminary calibration of the PZTs.

To this end, I need something to track the spot position as I apply a varying voltage to the PZT. QPDs are an option, the alternative being some PSDs I found. The problem with the latter is that the interfaces to the PSDs (there are 3) all seem to be damaged (according to the labels on two of them). I tried connecting a PSD to the third interface (OT301 Precision Position Sensing Amplifier) and hooked it up to an oscilloscope. I then shone a laser pointer on the PSD and moved it around a little to see if the signals on the oscilloscope made sense. They didn't on this first try, though this may be because the sensing amplifier is not calibrated. I will try this again. If I can get one of the PSDs to work, I will mount it on a test optical table and calibrate it. The plan is then to use this PSD to track the position of the beam reflected off a mirror mounted on a PZT (attached temporarily with double-sided tape) that is driven by feeding small-amplitude signals to the driver board via a function generator.
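Once that works, the calibration itself should just be a linear fit of tilt against drive voltage. A minimal sketch of that step (the data arrays and the lever arm below are placeholders, not measurements from this entry):

 import numpy as np

 # Hedged sketch of the PZT calibration described above; all numbers are hypothetical.
 drive_volts = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])       # small-amplitude drive to the driver board [V]
 spot_pos_mm = np.array([-0.42, -0.20, 0.01, 0.22, 0.41])  # spot position read off the PSD/QPD [mm]
 lever_arm_m = 0.5                                          # PZT mirror to sensor distance [m]

 # The reflected beam steers by twice the mirror tilt; beam angle = displacement / lever arm.
 mirror_tilt_urad = (spot_pos_mm * 1e-3 / lever_arm_m) / 2.0 * 1e6

 slope, offset = np.polyfit(drive_volts, mirror_tilt_urad, 1)
 print("PZT calibration ~ %.1f urad of mirror tilt per volt of drive" % slope)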

 

Misc

The LEMO connectors on the PZTs have the part number LEMO.FFS.00, while the male SMB connectors on the board have the part number PE4177 (Pasternack).

Plan of Action:

  • The first task will be to verify that the board is working by the methods outlined above.
  • Once the board has been verified, the next task will be to calibrate a PZT using it. I have to first identify a suitable way of tracking the beam position (QPD or PSD?)
  • I have identified a position in the eurocrate at 1Y4 to install the board, and I have made sure that for this slot, the rear of the eurocrate is not hooked up to the cross-connects. I now need to figure out the exact pin configuration at the DAC interface: the bank is marked 'DAC Channels 9-16' (image attached) but there are 40 pins in the connector, so I need to map these pins to DAC channels, so that when making the custom ribbon, I get the pin-to-pin map right.

DAC_bank.png

 

The wiring scheme has been modified a little; I am uploading an updated one here. In the earlier version, I had mistaken the monitor channels for points from which to log data, while they are really just for debugging. I have also revised the coaxial cable type (RG316 as opposed to RG174) and the SMB connector gender (female rather than male).

ASC_schematic.pdf 

 

 

 

 

  4387 | Tue Mar 8 15:33:09 2011 | kiwamu | Summary | Green Locking | plan on Mar.8th
Today's goal is to measure the contribution from the intensity noise to the beatnote.
 
Plans for today
  - check the ADC for the DCPD that Jenne installed yesterday
  - adjust RF power on the AOM
  - take spectrum of the differential noise and measure the coupling from the intensity noise
  - update the noise budget

Quote: from #4382
This week's goal is to investigate the source of the differential noise and to lower it.

 

  5122 | Fri Aug 5 08:08:42 2011 | kiwamu | Summary | General | plan today

Today's main mission is : adjustment of the arm length

 

   + Open the ETMX(Y) door, starting from 9:00 AM

   + Secure the ETMX(Y) test mass by tightening the earthquake stops.

   + Move the ETMX(Y) suspension closer to the door side

   + Inspect the OSEMs and take pictures before and after touching the OSEMs.

   + Level the table

   + Adjust the OSEM positions

   + Move the ETMX(Y) suspension to have the designed X(Y) arm length

   + Level the table again

   + Align the ETMX(Y) such that the green beam resonates

  3982 | Tue Nov 23 23:13:40 2010 | kiwamu | Summary | CDS | plan: we will install C1LSC

 [Joe, Suresh, Kiwamu]

 We will fully install and run the new C1LSC front end machine tomorrow.

And finally it is going to take care of the IOO PZT mirrors as well as LSC codes. 

 


 (background story)

 During the in-vac work today, we tried to energize and adjust the PZT mirrors to their midpoints.

However, it turned out that C1ASC, which controls the voltage applied to the PZT mirrors, was not running.

We tried rebooting C1ASC by keying the crate but it didn't come back.

 The error message we got in telnet  was :

   memory init failure !!

 

 We discussed how to control the PZT mirrors from the point of view of both short-term and long-term operation.

We decided to quit using C1ASC and use the new C1LSC instead.

A good thing about this action is that it will bring the CDS closer to the final configuration. 

 

(things to do)

 - move C1LSC to the proper rack (1X4).

 - pull out the stuff associated with C1ASC from the 1Y3 rack.

 - install an IO chassis in the 1Y3 rack.

- string a fiber from C1LSC to the IO chassis.

- timing cable (?)

- configure C1LSC for Gentoo

- run a simple model to check the health

- build a model for controlling the PZT mirrors

  2568 | Wed Feb 3 11:13:15 2010 | steve | Configuration | General | planned power outage for Sat. Feb 20

The electrical shop has to connect the new power transformer at CES. This means we will have no AC power for ~8 hrs on Saturday, February 20

Is this date good for us to power down ALL equipment in the lab?

Rana:  Yes

  4382 | Mon Mar 7 18:20:01 2011 | kiwamu | Summary | Green Locking | plans
This week's goal is to investigate the source of the differential noise and to lower it.
 
Plans for tonight
 - realign GREEN_TRANS PD at the PSL table
 - update the noise budget
 - take spectrum of the differential noise
 - investigate a noise coupling to the differential noise especially from the intensity noise
 - update the noise budget again
 
Plans for this week :
 - Auto alignment scripts for green (Kiwamu)
 - connect the end REFL_DC  to an ADC (Kiwamu)
 - make an active phase rotation circuit for the end PDH (undergrads)
 - bounce-roll notches (Suresh)
 - optimization of the suspensions including the input matrices and the Q-values (Jenne)
 - optimization of MFSS (Koji/Rana/Larisa)
 - rewire the mechanical shutter on the 1X9 binary outputs (Steve)

 

  6353 | Mon Mar 5 06:11:08 2012 | kiwamu | Summary | LSC | plans

Plans:

  •  DRMI (PRMI) + one arm test before the LVC meeting
  •  Study of the funny sensing matrix and the RAM offset effects before the LVC meeting
  •  Glitch hunting

Action items:

  • MC beam pointing 
    • to make the PZT1 pitch relax
  • OSA setup
    •    a long BNC cable for monitoring the signal in the control room
  • Power budget on the AP table
    • in order to ensure the laser power on each photo diode
  •  POP22/110 sideband monitor
    • installation of an RF amp
    • building a diplexer
    • connect the signals to the demod boards 
  •  Calibration of the demod boards
    • calibrate the conversion loss of the mixers to calibrate all the LSC signals to watts / meter
  •  (1+G) correction for the glitch time series data
  • Simulation study for the RAM offset
    • How much offset do we get due to the RAM, and how do the offsets screw up the sensing matrix?
  •  A complete set of the MICH characterization
    •   DC power
    •   Sensing matrix
    •   Noise budget
    •   OSA
    •   Estimation of the RAM offset 
    •  Summarize the results in the wiki
  •  A complete set of the PRMI/DRMI characterization
    •  The same stuff as the MICH characterization
  •  DRMI + one arm test
    •   Monitor the evolution of the sensing matrix while the arm is brought onto resonance

   
 

  1624 | Mon May 25 21:31:47 2009 | caryn | Update | PEM | plugged in Guralp channels

Guralp Vert1b and Guralp EW1b are plugged back in to PEM ADCU #10 and #12 respectively. Guralp NS1b remains plugged in. So, PEM-SEIS_MC1_X,Y,Z should now correspond to the seismometer as before.

  1648 | Wed Jun 3 12:31:13 2009 | caryn | Update | PEM | plugged in guralp channels
  5063 | Fri Jul 29 18:43:02 2011 | Manuel, Ishwita | Update | PEM | plugging seismometers to ADC

[Manuel, Ishwita, Jenne, Jamie]

We changed the C1PEM model and the names of the C1:PEM channels.

We reinstalled the blue breakout box, since the purple one still didn't work.

So, now the AA board channels are connected as follows...

C1 = C1:PEM-SEIS_GUR1_X
C2 = C1:PEM-SEIS_GUR1_Y
C3 = C1:PEM-SEIS_GUR1_Z
C4 = C1:PEM-SEIS_GUR2_X
C5 = C1:PEM-SEIS_GUR2_Y
C6 = C1:PEM-SEIS_GUR2_Z
C7 = C1:PEM-SEIS_STS_1_X
C8 = C1:PEM-SEIS_STS_1_Y
C9 = C1:PEM-SEIS_STS_1_Z
C11 = C1:PEM-SEIS_STS_2_X
C12 = C1:PEM-SEIS_STS_2_Y
C13 = C1:PEM-SEIS_STS_2_Z
C14 = C1:PEM-SEIS_STS_3_X
C15 = C1:PEM-SEIS_STS_3_Y
C16 = C1:PEM-SEIS_STS_3_Z
C17 = C1:PEM-ACC_MC1_X
C18 = C1:PEM-ACC_MC1_Y
C19 = C1:PEM-ACC_MC1_Z
C20 = C1:PEM-ACC_MC2_X
C21 = C1:PEM-ACC_MC2_Y
C22 = C1:PEM-ACC_MC2_Z

Although channels for all 3 STS-2 seismometers have been made, only one seismometer is installed. So only channels C1 to C9 are now in operation...

We checked the data from the plugged channels with the Dataviewer. We could see the peak whenever someone jumped in the lab. Even Kiwamu jumped and saw his signal.

  4345 | Wed Feb 23 16:34:42 2011 | valera | Configuration | pmc lens staged

I put the last PMC mode matching lens (the one between the steering mirrors) on a translation stage to facilitate the PMC mode matching.

Currently 4% of incident power is reflected by the PMC. But the reflected beam does not look "very professional" on the camera to Rana - meaning there is too much TEM20 (bulls eye) mode in the reflected beam.

I locked the  PMC  on bulls eye mode and measured  the ratio of the TEM20/TEM00 in transmission to be 1.3%. Thus the PMC mode matching is ~99% and the incident beam HOM content is ~3%.

While working on the PMC I found that the source of the PMC "blinking" is not the frequency control signal from the MC to the laser (the MC servo was turned off) but possibly some oscillation which could be affected even by a small change of the pump current from 2.10 A to 2.08 A. I showed this behaviour to Kiwamu and we decided to leave the current at 2.08 A for now, where things look stable, and investigate later.

Attachment 1: PMCrefl.JPG
Attachment 2: P1070438.JPG
Attachment 3: P1070439.JPG
  9961 | Fri May 16 09:46:05 2014 | Steve | Update | PSL | pointing monitoring

Quote:

 Tonight I noticed that the drop in PMC transmission was ~1V, more than the usual ~0.5V from the daily drift.

While re-aligning on the table, I noticed that the misalignment was not from either of the steering mirrors; i.e. I had to walk them both to get the alignment back. This implies that the misalignment is generated far upstream. Maybe the laser itself is moving. We need some updates from Steve's laser misalignment tracker.

I'd like to replace the paper target with IOO-QPD_POS so we can log it.

  11820 | Sat Nov 28 11:46:40 2015 | yutaro | Update | LSC | possible error source of loss map measurement

I found that the TRY level degraded and the beam shape seen with the CCD camera at the AS port was split when the beam spot on ETMY was not close to the center. This was because the dither alignment stopped working well. I suspect so because, in such cases, the TRY level went up when I iterated with TT1 and TT2 after freezing the dither. The split beam shape indicates that the incident light did not match the cavity mode well.

TRY level for each point was this:

TRYDC
[[ 0.6573      0.8301      0.8983      0.8684      0.6773    ]
 [ 0.7555      0.8904      0.9394      0.8521      0.6779    ]
 [ 0.6844      0.8438      0.9318      0.8834      0.6593    ]
 [ 0.7429      0.8688      0.9254      0.8427      0.6474    ]
 [ 0.7034      0.8447      0.8834      0.8147      0.6966    ]]

 In the worst case, the TRY level was 70% of the maximum level. Assuming that this degradation was entirely due to mode mismatch, this corresponds to a ~50 urad difference between the angle of the incident light and the resonant light in the arm (see elog 11819).
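For reference, a rough back-of-the-envelope version of that conversion (the arm waist size below is my guess, not a number from this entry; elog 11819 has the actual calculation, hence the slightly different answer):

 import numpy as np

 # Hedged sketch: convert the worst-case TRY degradation into an equivalent angular mismatch.
 lam = 1064e-9                    # wavelength [m]
 w0 = 3e-3                        # assumed arm-cavity waist radius [m] (my guess)
 theta_div = lam / (np.pi * w0)   # far-field divergence angle [rad]

 coupling = 0.70                  # worst-case TRY relative to the best point
 # For a pure tilt of the input beam at the waist, TEM00 power coupling ~ exp(-(theta/theta_div)**2)
 theta = theta_div * np.sqrt(-np.log(coupling))
 print("equivalent angular mismatch ~ %.0f urad" % (theta * 1e6))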

  8078 | Wed Feb 13 19:09:32 2013 | yuta | Summary | General | possible explanations to oval REFL beam

[Jenne, Manasa, Jamie, Yuta]

The shape of the REFL beam reflected from PRM is oval after the Faraday.
We tried to fix it by MC spot position centering and by tweaking input TT1/TT2/PRM. But REFL still looks bad (below).

REFL_1044844506.bmp

What has changed since:
  REFL looked OK in mid-Dec 2012. Possibly related things that have changed since then are:

  1. New active input TTs with new mirrors installed
  2. Leveling of IMC stack changed a little (although leveling was done after installing TTs)

Possible explanations to oval REFL:
  A. Angled input beam:
    Input beam is angled compared with the Faraday apertures. So, beam coming back from PRM is angled, and clipped by the Faraday aperture at the rejection port.

  B. Mode mis-match to PRM:
    New input TTs have different curvatures compared with before. Input mode matching to PRM is not good and beam reflected from PRM is expanding. So, there's clipping at the Faraday.

  C. Not clipping, but astigmatism:
    New input TTs are not flat. Incident angle to TT2 is ~ 45 deg. So, it is natural to have different tangential/sagittal waist sizes at REFL.

How to check:
  A. Angled input beam:
    Look at the beam position at the Faraday apertures. If it doesn't look centered, the incident beam may be angled.
   (But MC centering didn't help much......)

  B. Mode mis-match to PRM:
    Calculate how large the beam will be at the Faraday when the beam is reflected back from PRM. Put some real numbers for the curvatures of the input TTs into the calculation.

  C. Not clipping, but astigmatism:
    Same calculation as B. Let's see if REFL is within our expectation or not by calculating the ratio of tangential/sagittal waist sizes at REFL.

  8079 | Wed Feb 13 19:30:45 2013 | Koji | Summary | General | possible explanations to oval REFL beam

>> "What has changed since:"

Recently the REFL path has been rearranged after I touched it just before Thanksgiving.
(This entry)

If the lenses on the optical table are tilted too much, this astigmatism happens.
This is frequently observed; you can find it on the POP path right now.

Also the beam could be off-centered on the lens.

I am not sure the astigmatism is added on the in-air table, but just in case, you should check the table before you put much effort into the in-vacuum work.

  8080 | Wed Feb 13 19:41:07 2013 | yuta | Summary | General | possible explanations to oval REFL beam

We checked that the REFL beam is already oval in vacuum. We also centered the in-air optics, including the lens, in the REFL path, but REFL still looks bad.

Using an IR card in vacuum, the PRM-reflected beam looks OK at the MMTs and at the back face of the Faraday, but the beam looks bad after the output aperture of the Faraday.

  6159 | Tue Jan 3 15:49:27 2012 | Jamie | Update | Computers | possible front-end timing issue

Quote:

Is there a reason the framebuilder status light is red for all the front ends?

Also, I reenabled PRM watchdog.

Apparently there is a bug in the timing cards having to do with the new year roll-over that is causing front-end problems.  From Rolf:

For systems using the Spectracom IRIG-B cards for timing information, the code did not properly roll over the time for
2012 (still thinks it is 2011 and get reports from DAQ of timing errors (0x4000)). I have made a temporary fix for this
in the controller.c code in branch-2.3, branch-2.4 and release 2.3.1. 

I was going to check to see if the 40m is suffering from this. I'll be over to see if that's the problem.

  6168 | Wed Jan 4 09:06:50 2012 | steve | Update | Computers | possible front-end timing issue

Quote:

Quote:

Is there a reason the framebuilder status light is red for all the front ends?

Also, I reenabled PRM watchdog.

Apparently there is a bug in the timing cards having to do with the new year roll-over that is causing front-end problems.  From Rolf:

For systems using the Spectracom IRIG-B cards for timing information, the code did not properly roll over the time for
2012 (still thinks it is 2011 and get reports from DAQ of timing errors (0x4000)). I have made a temporary fix for this
in the controller.c code in branch-2.3, branch-2.4 and release 2.3.1. 

I was going to check to see if the 40m is suffering from this. I'll be over to see if that's the problem.

 The problem is the same as yesterday.

Attachment 1: rtntstat.png
  6574 | Thu Apr 26 18:15:59 2012 | Jamie | Update | CDS | possible issue with mx_stream on front ends

I'm noticing what appears to be occasional failures of mx_stream on the front end machines.  It doesn't happen that frequently, but I've noticed it a couple of times already since the upgrade.

The symptom is that the DC Status goes to "0xbad" (red) and the "FE NET" goes red for all models on a given front end.

The solution seems to be restarting mx_stream on the given front end: sudo /etc/init.d/mx_stream restart

There is nothing in the mx_stream log:

 controls@c1sus ~ 0$ cat /opt/rtcds/caltech/c1/target/fb/mx_stream_logs/c1sus.log 
 c1x02
 c1sus
 c1mcs
 c1rfm
 c1pem
 mmapped address is 0x7f43740ec000
 mapped at 0x7f43740ec000
 mmapped address is 0x7f43700ec000
 mapped at 0x7f43700ec000
 mmapped address is 0x7f436c0ec000
 mapped at 0x7f436c0ec000
 mmapped address is 0x7f43680ec000
 mapped at 0x7f43680ec000
 mmapped address is 0x7f43640ec000
 mapped at 0x7f43640ec000
 send len = 263596
 Connection Made

but I do see some funny messages in the front end dmesg:

 [200341.317912] DXH Adapter 0 : Heartbeat alive-check for node=12 failed (cnt=8387 state=0x1 deb=0 val=0).
 [200341.318670] DXH Adapter 0 : Session for node 12 is disabled - Status = 0x5
 [200341.319062] Session callback reason=1 status=5 target_node=12
 [200341.319069] Session callback reason=3 status=0 target_node=12
 [200341.359534] (map_table_check_access:752):my id 1 ->  remote id 2 : entry was valid - is now tentatively valid
 [200341.859584] DXH Adapter 0 : Probe failure for node=12 - disabling session probeStatus=0x40000f02
 [200341.860335] DXH Adapter 0 : Session for node 12 is disabled - Status = 0x3
 [200341.860728] Session callback reason=1 status=3 target_node=12
 [200374.006111] DXH Adapter 0 : Set reachable remote node list.
 [200409.020670] DXH Adapter 0 : Set reachable remote node list.
 [200409.021076] DXH Adapter 0 : Session for node 12 is deleted - Status = 0x0
 [200409.021468] Session callback reason=5 status=0 target_node=12
 [200412.362824] (map_table_insert:648):** successfully inserted **(valid unicast) inst 0 node 1->0 fwd 0 fwd_tp 4 egress 0
 [200418.025994] (map_table_check_access:752):my id 1 ->  remote id 0 : entry was valid - is now invalid
 [200418.025998] (map_table_insert:648):** successfully inserted **(valid unicast) inst 0 node 1->2 fwd 0 fwd_tp 4 egress 0
 [200421.743916] Session callback reason=0 status=0 target_node=12
 [200422.073776] DXH Adapter 0 : Set reachable remote node list.
 [200422.342446] Session callback reason=7 status=0 target_node=12
 [200422.342454] DXH Adapter 0 : Session for node 12 is ok.

I'm awaiting feedback from experts.

 

  12096 | Thu Apr 28 08:49:47 2016 | Steve | Update | PEM | possible noise sources schedule

Building:       Campus (see attached map)

Date:           Manhole 1 - May 3 through May 5
                Manhole 2 - May 6 through May 10
                Manhole 2 - May 16 through May 19
                Manhole 3 - May 11 through May 19

Time:           Noise: 7:00 a.m. to 5:00 p.m.
                Access: 24 hours a day

Interruption:   Noise / Vehicular & Pedestrian Access
                Storm Drain Manholes

*In order to repair 3 manholes associated with a large storm drain that runs north-south through the campus, work will take place at the 3 manholes shown on the map. This work will interrupt vehicular and pedestrian access on the paths adjacent to the manholes. Though the work at Manholes 1 and 2 will allow vehicular and pedestrian access around the manholes, the work at Manhole 3 will completely block the driveway running south from the southeast corner of Parking Lot 11. Noise will also be created by the repair.

 

 

Attachment 1: Campus_B&W_Map-2.pdf
  109 | Thu Nov 15 18:37:06 2007 | tobin | Update | Computers | possible replacement for linux1's disk
It looks like the existing disk in linux1 is a Seagate ST380013A (this can be found either via the smartctl utility or by looking at the file /proc/ide/hda/model). It appears that you can still buy this disk from amazon, though I think just about any ATA disk would work. I'll ask Steve to buy one for us.
  13149 | Fri Jul 28 20:22:41 2017 | Jamie | Update | CDS | possible stable daqd configuration with separate DC and FW

This week Jonathan Hanks and I have been trying to diagnose why the daqd has been unstable in the configuration used by the 40m, with data concentrator (dc) and frame writer (fw) in the same process (referred to generically as 'fb').  Jonathan has been digging into the core dumps and source to try to figure out what's going on, but he hasn't come up with anything concrete yet.

As an alternative, we've started experimenting with a daqd configuration with the dc and fw components running in separate processes, with communication over the local loopback interface.  The separate dc/fw process model more closely matches the configuration at the sites, although the sites put the dc and fw processes on different physical machines.  Our experimentation thus far seems to indicate that this configuration is stable, although we haven't yet tested it with the full configuration, which is what I'm attempting to do now.

Unfortunately I'm having trouble with the mx_stream communication between the front ends and the dc process.  The dc does not appear to be receiving the streams from the front ends and is producing a '0xbad' status message for each.  I'm investigating.

  11757 | Thu Nov 12 10:22:33 2015 | Steve | Update | PEM | possible vibration for 4 days

Building:       San Pasqual walkway, East to West
                (between Holliston & Wilson)

Date:           Thursday 11-12-15 to Wednesday 11-18-15

Time:           Between 6:00 a.m. and 4:00 p.m. each day

Notification:   Possible noise/vibration

Contact:        Ken Lewis (626) 298-2037

* Plumbing contractor will be inspecting and water-jetting storm drains.

Type of interruption: some vehicle noise and small vibrations, limited to the close surrounding area
Areas affected: San Pasqual walkway from Holliston Street to Wilson
Potential effects: storm drain loss of use
Reason for interruption: storm drain cleaning in preparation for the rainy season

 

  13310 | Mon Sep 11 23:31:50 2017 | johannes | Update | Cameras | post-vent camera capture comparison

The latest captures of the test mass face cameras before the unintended vent were taken on June 2nd, 2017. Only exposures for ITMYF, ETMYF, and ETMXF exist in /users/sensoray/SensorayCaptures/. I took new captures for those three after locking the arms and having the dither alignment on for 5+ minutes (exposures were taken after turning the dithering off). The capture script is choking on ITMXF, saying the channel can't lock on. Maybe that's why there's also no reference image for it. Capturing QUAD3, which shows ITMXF in the lower right corner, works, but we don't have a capture for reference. I also recorded dark fields after closing the PSL shutter. Naturally, these don't subtract out as well for the three-month-old pictures, but it's actually not terrible, and qualitatively one can still compare the subtracted images.

Visually, ITMYF and ETMYF do not show a dramatic difference between then and now. ETMXF however, does. To get a numerical estimate for the difference in counts, I worked with the subtracted images and placed an aperture about 1.5x the size of the visible beam blob. I summed up the pixel values inside and subtracted the sum of the pixel values of an equally sized area from the upper left corner of the respective image, which looks free of subtraction artifacts and looks qualitatively similar to the background in the central region.
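
A minimal numpy sketch of that aperture sum (the file name, aperture center, and radius are placeholders, not the actual values used):

 import numpy as np

 # Hedged sketch of the aperture-sum estimate described above; all specifics are placeholders.
 img = np.load("etmxf_subtracted.npy").astype(float)   # dark/reference-subtracted capture (hypothetical file)
 ny, nx = img.shape
 yy, xx = np.mgrid[0:ny, 0:nx]

 # circular aperture ~1.5x the size of the visible beam blob
 cy, cx, r = 240, 320, 60                              # placeholder center and radius [pixels]
 aperture = (yy - cy)**2 + (xx - cx)**2 < r**2

 # equal-pixel-count background patch from the artifact-free upper-left corner
 side = int(np.sqrt(aperture.sum()))
 background = np.zeros_like(aperture)
 background[:side, :side] = True

 print("aperture sum minus background:", img[aperture].sum() - img[background].sum())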

The pixel sum has gone up by about 50% between the exposures. I still have to do the same for the YARM optics but don't expect such a large discrepancy. Unfortunately we're missing those ITMYF exposures...

All pictures are organized in this format:

Pre-vent exposure    | Post-vent exposure
Pre-vent subtracted  | Post-vent subtracted

ITMYF

ETMYF

ETMXF

Attachment 11: ETMXF_pre_sub.bmp
  13334 | Tue Sep 26 22:11:08 2017 | johannes | Update | Cameras | post-vent camera capture comparison

I configured the remaining GigE-Camera to work on the 40m network. We currently have 3 operational Basler cameras:

The 120gm's have been assigned the IPs 192.168.113.152  (was already configured) and 192.168.113.153 (freshly configured) and have been labeled accordingly. Note that it was not necessary to connect the out-of-the-box camera directly to a dedicated ethernet adapter whose IP was set manually to 169.254.0.XXX as pointed out in earlier posts - a few seconds after connecting the camera to the control room switch (with PoE adapter to power it) the camera showed up in the configuration software tool which is launched via

/opt/rtcds/caltech/c1/scripts/GigE/pylon5/bin/./IpConfigurator

and can be assigned a correct, static IP.

We have a plethora of 2" tubes for the lens assembly, but not a great variety of focal lengths for 2" lenses. Present with the camera gear were two f=250 mm and one f=150 mm 2" lenses with a NIR broadband AR coating

To determine the lens positions relative to the sensor I assumed that the camera we're setting up looks at its test mass from a distance of 1m. Using the two available focal lengths we can look for solutions which have reasonable lens separations <~10cm and suitable magnification. We primarily want to image the central mirror area onto a 1/4" sized sensor, which can be achieved with a magnification of ~1/8.

I chose a lens separation of 6cm, which gives a theoretical magnification of -.12 and a sensor-lens 2 distance of 7.95 cm. I placed the lenses accordingly in the tubes and checked the focusing with Gautam's help:
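
For the record, a thin-lens sketch of that calculation. The lens ordering (150 mm lens facing the test mass, 250 mm lens 6 cm behind it) is my assumption; it reproduces the quoted numbers:

 # Hedged sketch of the two-lens imaging calculation; the lens ordering is assumed, not stated above.
 def thin_lens_image(s_obj, f):
     """Image distance for object distance s_obj and focal length f (thin-lens equation)."""
     return 1.0 / (1.0 / f - 1.0 / s_obj)

 s_obj = 100.0                  # test mass to first lens [cm] (~1 m, as assumed above)
 f1, f2, d = 15.0, 25.0, 6.0    # focal lengths and lens separation [cm]

 s_i1 = thin_lens_image(s_obj, f1)
 m1 = -s_i1 / s_obj
 s_o2 = d - s_i1                # negative: virtual object for the second lens
 s_i2 = thin_lens_image(s_o2, f2)
 m2 = -s_i2 / s_o2

 print("sensor-to-lens-2 distance: %.2f cm, magnification: %.3f" % (s_i2, m1 * m2))
 # -> about 7.95 cm and -0.12, matching the numbers above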

       

It's pretty close to what we would expect. We will do the calibration using the auxiliary laser on the PSL table. For this I temporarily routed a fiber from the PSL enclosure to the SP table. Since the main cable hole is sort of cramped it's going in through a gap near the ceiling instead.  

 

Attachment 1: lens_distance.pdf
  15550 | Sun Aug 30 11:29:33 2020 | rana | Update | General | power blink?

My power at home winked out for a second this morning, but it looks like either nothing happened in the 40m lab or else it rode it out.

MC is locked - lost lock around 11:25 AM and then relocked.

  4448 | Mon Mar 28 16:24:35 2011 | kiwamu | Update | Green Locking | power budget on PSL table

   I measured some laser powers associated with the beat-note detection system on the PSL table.

The diagram below is a summary of the measurement. All the data were taken by the Newport power meter.

 The reflection from the beat-note PD is indeed significant as we have seen.

In addition, the BS has a funny R/T ratio, maybe because we are using an unknown BS from the Drever cabinet. I will replace it with a proper BS.

RFPD.png

(background)

 During my work on making a noise budget I noticed that we haven't carefully characterized the beat-note detection system.

The final goal of this work is to draw noise curves for all the possible noise sources in one plot.

To draw the shot noise as well as the PD dark noise in the plot, I started collecting the data associated with the beat-note detection system.
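
For the shot-noise curve, the starting point will just be the standard photocurrent shot-noise formula; a minimal sketch (the photocurrent value below is a placeholder, not a measurement from this entry):

 import numpy as np

 # Hedged sketch: shot-noise current ASD for a given DC photocurrent on the beat PD.
 e = 1.602e-19        # electron charge [C]
 I_dc = 1e-3          # DC photocurrent [A] (hypothetical; to be replaced by the measured value)
 i_shot = np.sqrt(2 * e * I_dc)
 print("shot-noise current ASD: %.2e A/rtHz" % i_shot)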

 

(Next actions)

 * Estimation and measurement of the shot noise

 * measurement of the PD electrical noise (dark noise)

 * modeling for the PD electrical noise

 * measurement of the doubling efficiency

 * measurement of an amplitude noise coupling in the frequency discriminators

  6355 | Mon Mar 5 14:10:35 2012 | kiwamu | Update | LSC | power budget on the AP table

I checked the laser powers on the AP table and confirmed that their powers are low enough at all the REFL photo diodes.

When the HWP (which is for attenuating the laser power with a PBS) is at 282.9 deg, all of the REFL diodes receive about 5 mW.

This will be the nominal condition. 

If the HWP is rotated to the point where the maximum laser power goes through, the diodes get about 10 mW, which is still below the power rating of 18 mW (#6339).
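
For reference, the attenuation goes as cos^2 of twice the HWP angle away from the maximum-transmission setting; a minimal sketch (the maximum-transmission angle below is hypothetical, chosen only so that the nominal 282.9 deg setting gives ~5 mW):

 import numpy as np

 # Hedged sketch of the HWP + PBS attenuation relation; theta_max is hypothetical.
 P_max = 10.0           # mW, power at the maximum-transmission HWP angle
 theta_max = 260.4      # deg, hypothetical maximum-transmission angle (not measured here)

 def refl_power(theta_hwp_deg):
     """Power reaching the REFL diodes [mW] for a given HWP dial angle [deg]."""
     return P_max * np.cos(np.radians(2.0 * (theta_hwp_deg - theta_max)))**2

 print(refl_power(282.9))   # ~5 mW at the nominal setting, consistent with the table below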

I used the Coherent power meter for all the measurements.

The table below summarizes the laser powers on the REFL diodes and the OSA. Also the same values were noted on the attached picture.

 

             nominal power [mW]        expected max power [mW]
             (HWP at 282.9 deg)        (HWP at the max-transmission angle)
REFL11       5.5                       10
REFL33       4.5                       10
REFL55       5.3                       10
REFL165      4.8                       10
REFL OSA     0.7                       0.7

 

A note:
I found that the OSA for the REFL beam was receiving a unnecessary bright laser.
So I put an ND1 attenuator stacked on the existing ND2 attenuator. The laser power entering in the OSA is currently at 0.7 mW.
Attachment 1: power_budget.png
  12593 | Thu Nov 3 08:07:52 2016 | Steve | Update | General | power glitch

Building:         Campus Wide         

       

Date:             Thursday 11/03/16 at Approx. 6:20 a.m.   

          

Notification:     Unplanned City Wide Power Glitch Affecting Campus   

 

*This is to notify you that the Caltech Campus experienced a campus wide power glitch at approx. 6:20 a.m. this morning.

The city was contacted and they do not expect any further interruptions related to this event.

 

The vacuum was not affected. ITM sus damping restored. IFO room air conditioning on.

PSL Innolight and ETMY Lightwave lasers turned on

 

Attachment 1: powerGlitch.png
  12696 | Mon Jan 9 09:18:47 2017 | Steve | Update | PEM | power glitch

There was a power glitch last night around 1:15am

The vacuum was not affected.

PSL laser turned on, PMC locked, PSL shutter opened and MC locked.

IR lasers at the ends turned on.

East arm air cond turned on.

The computers are all done.

The last power glitch was at Nov 3, 2016

 

 

Attachment 1: MondayMorning.png
  12700 | Tue Jan 10 21:47:00 2017 | rana | Update | CDS | power glitch

Does "done" mean they are OK or they are somehow damaged? Do you mean the workstations or the front end machines?

The computers are all done.

megatron and optimus are not responding to ping commands or ssh -- please power them up if they are off; we need them to get data remotely

  12594 | Thu Nov 3 11:33:24 2016 | gautam | Update | General | power glitch - recovery

I did the following:

  • Hard reboots for fb, megatron, and all the frontends, in that order
  • Checked time on all FEs, ran sudo ntpdate -b -s -u pool.ntp.org where necessary
  • Restarted all realtime models
  • Restarted monit on all FEs
  • Reset Marconi to nominal settings, fCarrier=11.066209MHz, +13dBm amplitude
  • In the control room, restarted the projector and set up the usual StripTool traces
  • Realigned PMC
  • Slow machines did not need any touchups - interestingly, ITMX did not get stuck during this power glitch!

There was a regular beat coming from the speakers. After muting all the channels on the mixer and pulling the 3.5mm cable out, the sound persisted. It now looks like the mixer is broken.

     ProFX8v2

 

  12702 | Wed Jan 11 16:35:03 2017 | gautam | Update | CDS | power glitch - recovery progress

[lydia, ericq, gautam]

We set about following the instructions linked in the previous elog. A few notes/remarks:

  1. It is important to run the ntpdate commands before restarting the models. Sometimes, multiple restarts of the models were required to turn all the indicator blocks on the MEDM screen green.
  2. There was also an issue of multiple ntpd processes running on the same machine, which obviously caused all sorts of timing havoc. EricQ helped us diagnose and fix these. At the moment, all the lights are green on the CDS status MEDM screen
  3. On the hardware side, apart from the usual suspects of frontends/megatron/optimus/fb needing to be rebooted, I noticed that the ETMX OSEM lights were off on the control room monitors. Investigation pointed to the 2 20V sorensens at the X end outputting 0V, 0A after the power glitch. We turned down both dials, and then gradually ramped them up again. Both Sorensens now read +/-20V, 0.3A, which is in agreement with the label stuck onto them.
  4. Restarted MC autolocker and FSS Slow scripts on megatron. I have not yet looked at the status of the nds2 server on megatron.
  5. 11 MHz Marconi has yet to be restarted - but I am unable to get even the IMC locked at the moment. For some reason, the RMS of the MC1 and MC3 coils are way higher than what I am used to seeing (~5mV rms, as compared to the <1mV rms typical of a damped optic). I will investigate further. Leaving MC autolocker disabled for now.
  12701 | Tue Jan 10 22:55:43 2017 | gautam | Update | CDS | power glitch - recovery steps

Here is a link to an elog with the steps I had to follow the last time there was a similar power glitch.

The RAID array restart was also done not too long ago; we should also do a data consistency check as detailed here, if not already done.

If someone hasn't found the time to do this, I can take care of it tomorrow afternoon after I am back.

Quote:

Does "done" mean they are OK or they are somehow damaged? Do you mean the workstations or the front end machines?

The computers are all done.

megatron and optimus are not responding to ping commands or ssh -- please power them up if they are off; we need them to get data remotely

 

  12699 | Tue Jan 10 16:20:11 2017 | Steve | Update | CDS | power glitch......Raid is rebuilding

Jamie started the fm40m Raid rebuilding. It has been beeping since the power outage.

Summary pages have had no data since the power glitch.

 

Attachment 1: rebuilding_in_progress.png
  5270 | Fri Aug 19 15:31:53 2011 | steve | Update | General | power interruption rescheduled to 10-1-2011

                UTILITY & SERVICE INTERRUPTION

**PLEASE POST**

 

Building:               Central Engineering Services (C.E.S.)

          LIGO Gravitational Physics building adjacent to C.E.S. 40M- Lab

          Safety Storage adjacent to CES

          Steele House 

          Keck Lab

 

Date:                   Saturday, October 1, 2011

Time:                   8:00 a.m. To 9:00 a.m.            

Interruption:   Electricity

Contact:                Mike Anchondo ext. 4999  Tom Brennan 4984

*This interruption is required for maintenance of high voltage switchgear in Campus Sub Station.

(If there is a problem with this Interruption, please notify

 the Service Center X-4717 or the above Contact as soon as possible.

 If no response is received we will proceed with the interruption.)

         

                                Reza Ohadi,

                                Director, Campus Operations & Maintenance

  12808 | Tue Feb 7 16:23:49 2017 | Steve | Update | General | power interruption tomorrow

Received this note at 4:11 pm, Tuesday, Feb 7, 2017:

**PLEASE POST**

 

Building:         Campus

    

Date:             Wednesday, February 8, 2017

          

Time:             7:30 AM – 8:30 AM  

 

Contact:          Rick Rodriguez x-2576

           

Pasadena Water and Power (PWP) will be performing a switching operation of the Caltech Electrical Distribution System that is expected to be transparent to Caltech, but could result in a minor power anomaly that might affect very sensitive equipment.

 

IMPACT: Negligible impact......?

There may be a temporary power interruption tomorrow!

PS: we did not see any effect.

  3924 | Mon Nov 15 15:02:00 2010 | Koji | Summary | PSL | power measurements around the PMC

[Valera Yuta Kiwamu Koji]

Kiwamu burtrestored c1psl. We measured the power levels around the PMC.

With 2.1A current at the NPRO:

Pincident = 1.56W
Ptrans_main = 1.27W
Ptrans_green_path = .104W

==> Efficiency =88%
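
(Presumably this counts both PMC outputs: (1.27 W + 0.104 W) / 1.56 W ≈ 0.88, i.e. 88%.)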

----

We limited the MC incident power to ~50mW. This corresponds to a PMC trans of 0.65V.
(The PMC trans is 1.88V at full power, with an actual power of 132mW.)

  6156 | Fri Dec 30 22:05:16 2011 | kiwamu | Update | LSC | power normalization in LSC

Now a power normalization is doable for the LSC error signals.

It is working fine, but at some point we may want to have some kind of a saturation filter or limiter to avoid dividing a signal by a small number.

 

 (How to set the normalization)

  •   Click the small matrix panel on the LSC OVERVIEW window (shown in the attached screenshot below).
    •     This will give you a pop-up window, which shows a matrix to route the normalization signals.
POW_NORM_MTRX.png
  •   Choose a numerator channel (the one you want to divide) and denominator channels (the ones you want to use as a power normalization factor).
  •   Put some number in the corresponding matrix elements.
  •   Once you put a non-zero element in the matrix, the corresponding numerator channel will be divided by the specified denominator channels.
    •     Otherwise the static normalization factors (e.g. C1:LSC-AS55_POW_NORM, etc.) will be used for the denominator.
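
In other words (a hedged pseudo-code sketch of my understanding; the names here are illustrative, not the actual RCG part names):

 # Hedged sketch of what the normalization block does, as I understand it.
 def normalize(err, norm_row, power_signals, static_norm=1.0):
     """Divide an LSC error signal by the matrix-weighted sum of the chosen power signals.

     If every element of the matrix row is zero, fall back to the static
     normalization factor (e.g. C1:LSC-AS55_POW_NORM)."""
     denom = sum(m * p for m, p in zip(norm_row, power_signals))
     return err / (denom if denom != 0.0 else static_norm)

 # e.g. normalize a signal by 0.5*TRX + 0.5*TRY (numbers purely illustrative)
 print(normalize(err=2.0, norm_row=[0.5, 0.5], power_signals=[1.2, 0.8]))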
  6158 | Tue Jan 3 15:48:39 2012 | kiwamu | Update | LSC | power normalization in LSC

It turned out that the power normalization needs a modification.

I will work on it tomorrow and it will take approximately 2 hours to finish the modification.

 

     Concept of Power Normalization         

Koji pointed out that the dynamic power normalization, which I have installed (#6156), should be placed after the LSC input matrix rather than before the matrix.
Now let us review the concept of the power normalization to avoid confusion.
We will need two kinds of power normalizations as follows:
  1.  Static power normalization, which should be placed before the input matrix.
  2.  Dynamic power normalization, which should be placed after the input matrix.
 The static power normalization will be applied to the I and Q signals of all the LSC channels and also to the DCPD signals.
This normalization is supposed to cancel the effects of the incident laser power and the depths of the phase modulations.
Because the variations in the laser power and modulation depth are expected to be relatively slow, we will apply static normalizations.
 
 The dynamic power normalization will be applied to the DOF error signals, for example C1:LSC-DARM_IN and so on.
This normalization is supposed to cancel the effect of the internal states of the interferometer, for example alignments.
In addition to it, this dynamic normalization can expand the linear range of the error signals.
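
Schematically (my notation, not from the original entry), for input signals s_i, input-matrix elements M_ki, normalization-matrix elements D_kj and power signals P_j:

 \tilde{s}_i = s_i / N_i^{\mathrm{static}}                        % static normalization, before the input matrix
 e_k = \frac{\sum_i M_{ki}\,\tilde{s}_i}{\sum_j D_{kj}\,P_j}      % dynamic normalization, after the input matrix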

Quote from #6156

Now a power normalization is doable for the LSC error signals.

 

  6170 | Wed Jan 4 16:22:30 2012 | kiwamu | Update | LSC | power normalization in LSC : modification done

The dynamic power normalization system has been modified such that the normalization happens after the LSC input matrix.

The attached screenshot below tells you how the signals flow.
The red circled region in the picture is the place where the power normalizations are performed.
pow_norm.png
 
The dynamic normalization will be activated once you put some numbers into the elements in the matrix.
Otherwise the error signals are always normalized by 1.

Quote from #6158

It turned out that the power normalization needs a modification.

I will work on it tomorrow and it will take approximately 2 hours to finish the modification.

 

  4011 | Sun Dec 5 22:28:39 2010 | rana | Summary | all down cond. | power outage

Looks like there was a power outage. The control room workstations were all off (except for op440m). Rosalba and the projector's computer came back, but rossa and allegra are not lighting up their monitors.

linux1 and nodus and fb all appear to be on and answering their pings.

I'm going to leave it like this for the morning crew. If it

  4012 | Mon Dec 6 11:53:20 2010 | josephb, kiwamu | Summary | all down cond. | power outage

The monitors for allegra and rossa seemed to be in a weird state after the power outage. I turned allegra and rossa on, but didn't see anything. However, after a while I was able to ssh in. Power cycling the monitors apparently got them talking with the computers again and displaying.

I had to power cycle the c1sus and c1iscex machines (they probably booted faster than linux1 and the fb machines, and thus didn't see their root and /cvs/cds directories).  All the front ends seem to be working normally and we have damped optics.

The slow crates look to be working, such as c1psl, c1iool0, c1auxex and so forth.

Kiwamu turned the main laser back on.

Quote:

Looks like there was a power outage.

 

  4013 | Mon Dec 6 11:57:21 2010 | Koji | Summary | all down cond. | power outage

I checked the vacuum system and judged there is no apparent issue.

The chambers and annulus had been vented before the power failure.
So the only matters of concern are the TMPs.

TP1 showed the "Low Input Voltage" failure. I reset the error and the turbine was lifted up and left not rotating.
TP2 and TP3 seem to be rotating at 50 kRPM and each line shows low pressure (~1e-7),
although I did not find the actual TP2/TP3 themselves.

Quote:

Looks like there was a power outage. The control room workstations were all off (except for op440m). Rosalba and the projector's computer came back, but rossa and allegra are not lighting up their monitors.

linux1 and nodus and fb all appear to be on and answering their pings.

I'm going to leave it like this for the morning crew. If it

 

  7476 | Thu Oct 4 08:39:58 2012 | Steve | Update | General | power outage

There must have been a power outage. The laser and air conditioning were turned back on. The vacuum is OK.

Sorensen DC power supplies were tripped, so they were reset: at AUX OMC South 18V and 28V for RF PS and at 1X1 24V

 

Power Outage confirmed:

** Notification **

 

CALIFORNIA INSTITUTE OF TECHNOLOGY

                 FACILITIES MANAGEMENT

 

**PLEASE POST**

 

 

Building:         Campus

 

Date:             Thursday October 04,2012

 

This morning at 2:17 a.m. much of the City of Pasadena, including our Campus, experienced an electric power sag of short duration, approximately 1/10 of a second. The cause was a fault on one of Pasadena's 17KV circuits. Some sensitive equipment has been impacted.

                 

Contact:          Mike Anchondo x-4999

 

Attachment 1: Oct4R2012.png
  13492 | Tue Dec 26 17:24:24 2017 | Steve | Update | General | power outage

There was a power outage.

The IFO pressure is 12.8 mTorr and it is not being pumped. V1 is still closed. TP1 is not running. The RGA is not powered.

The PSL output shutter is still closed. 2W Innolight turned on and manual beam block placed in its beampath.

3 AC units turned on at room temp 84F

Attachment 1: powerOutage.png
  13755 | Mon Apr 16 22:09:53 2018 | Kevin | Update | General | power outage - BLRM recovery

I've been looking into recovering the seismic BLRMs for the BS Trillium seismometer. It looks like the problem is probably in the anti-aliasing board. There's some heavy stuff sitting on top of it in the rack, so I'll take a look at it later when someone can give me a hand getting it out.

In detail, after verifying that there are signals coming directly out of the seismometer, I tried to inject a signal into the AA board and see it appear in one of the seismometer channels.

  1. I looked specifically at C1:PEM-SEIS_BS_Z_IN1 (Ch9), C1:PEM-SEIS_BS_X_IN1 (Ch7), and C1:PEM-ACC_MC2_Y_IN1 (Ch27). All of these channels have between 2000--3000 cts.
  2. I tried injecting a 200 mVpp signal at 1.7862 Hz into each of these channels, but the output did not change.
  3. All channels have 0 cts when the power to the AA board is off.
  4. I then tried to inject the same signal into the AA board and see it at the output. The setup is shown in the first attachment. The second BNC coming out of the function generator is going to one of the AA board inputs; the 32 pin cable is coming directly from the output. All channels give 4.6 V when the board is powered on, regardless of whether any signal is being injected.
  5. To verify that the AA board is likely the culprit, I also injected the same signals directly into the ADC. The setup is shown in the second attachment. The 32 pin cable is going directly to the ADC. When injecting the same signals into the appropriate channels the above channels show between 200--300 cts, and 0 cts when no signal is injected.
Attachment 1: AA.jpg
Attachment 2: ADC.jpg
  13493 | Thu Dec 28 17:22:02 2017 | gautam | Update | General | power outage - CDS recovery
  1. I had to manually reboot c1lsc, c1sus and c1ioo.
  2. I edited the line in /etc/rt.sh (specifically, on FB /diskless/root.jessie/etc/rt.sh) that lists models running on a given frontend, to exclude c1dnn and c1oaf, as these are the models that have been giving us most trouble on startup. After this, I was able to bring back all models on these three machines using rtcds restart --all. The original line in this file has just been commented out, and can be restored whenever we wish to do so.
  3. mx_stream processes are showing failed status on all the frontends. As a result, the daqd processes are still not working. Usual debugging methods didn't work.
  4. Restored all sus dampings.
  5. Slow computers all seem to be responsive, so no action was required there.
  6. Burtrestored c1psl to solve the "sticky slider" problem, relocked PMC. I didn't do anything further on the PSL table w.r.t. the manual beam block Steve has placed there till the vacuum situation returns to normal.

@Steve: I noticed that we are down to our final bottle of N2, not sure if it will last till 2 Jan which is presumably when the next delivery will come in. Since V1 is closed and the PSL beam is blocked, perhaps this doesn't matter.

from Steve: there are spare full N2 bottles at the south end outside and inside. I replaced the N2 on Sunday night. So the system should be Ok as is.

I also hard-rebooted megatron and optimus as these were unresponsive to ping.

*Seems like the mx_stream errors were due to the mx process not being started on FB. I could fix this by running sudo systemctl start mx on FB, after which I ran sudo systemctl restart daqd_*. But the DC errors persist - not sure how to fix this. Elogging suggests that "0x4000" errors are connected to timing problems on FB, but restarting the ntp service on FB (which is the suggested fix in said elogs) didn't fix it. Also unsure if the mx process is supposed to automatically start on FB at startup.

Attachment 1: 28.png
  13510 | Sat Jan 6 18:27:37 2018 | gautam | Update | General | power outage - IFO recovery

Mostly back to nominal operating conditions now.

  1. EX TransMon QPD is not giving any sensible output. Seems like only one quadrant is problematic, see Attachment #1. I blame team EX_Acromag for bumping some cabling somewhere. In any case, I've disabled output of the QPD, and forced the LSC servo to always use the Thorlabs "High Gain" PD for now. Dither alignment servo for X arm does not work so well with this configuration - to be investigated.
  2. BS Seismometer (Trillium) is still not giving any sensible output.
    • I looked under the can, the little spirit level on the seismometer is well centered.
    • I jiggled all the cabling to rule out any obvious loose connections - found none at the seismometer, or at the interface unit (labelled D1002694 on the front panel) in 1X5/1X6.
    • All 3 axes are giving outputs with DC values of a few hundred - I guess there could've been some big earthquake in early December which screwed the internal alignment of the sensing mass in the seismometer. I don't know how to fix this.
    • Attachment #2 = spectra for the 3 channels. Can't say they look very seismic. I've assumed the units are in um/sec.
    • This is mainly bothering me in the short term because I can't use the angular feedforward on PRC alignment, which is usually quite helpful in DRMI locking.
    • But I think the PRM Oplev loop is actually poorly tuned, in which case perhaps the feedforward won't really be necessary once I touch that up.

What I did today (may have missed some minor stuff but I think this is all of it):

  1. At EX:
    • Toggled power to Thorlabs trans monitoring PD, checked that it was actually powered, squished some cables in the e- rack.
    • Removed PDA55 in the green path (put there for EX laser AM/PM measurement). So green beam can now enter the X arm cavity.
    • Re-connected ALS cabling.
    • Turned on HV supply for EX Green PZT steering mirrors (this has to be done every time there is a power failure).
  2. At ITMY table:
    • Removed temporary HeNe RIN/ Oplev sensing noise measurement setup. HeNe + 1" vis-coated steering mirror moved to SP table.
    • Turned on ITMY/SRM Oplev HeNe.
    • Undid changes on ITMY Oplev QPD and returned it to its original position.
    • Centered ITMY reflected beam on this QPD.
  3. At vertex area
    • Looked under Trillium seismometer can - I've left the clamps undone for now while we debug this problem.
    • Power-cycled Trillium interface box.
    • Touched up PMC alignment.
  4. Control room
    • Recover IFO alignment using combination of IR and Green beams.
    • Single arm locking recovered, dither alignment servos run to maximize arm transmission. Single arm locks holding for hours, that's good.
    • The X arm dither alignment isn't working so well, the transmission never quite hits 1 and it undergoes some low frequency (T~30secs) oscillations once the transmission reaches its peak value.
    • Had to do the usual ipcrm thing to get dataviewer to run on pianosa.

Next order of business:

  1. Recover ALS:
    • aim is to replace the vertex area ALS signals derived from 532nm with their 1064nm counterparts.
    • Need to touch up end PDH servos, alignment/MM into arms, and into Fibers at ends etc.
    • Control the arms (with RMs misaligned) in the CARM/DARM basis using the revised ALS setup.
    • Make a noise budget - specifically, we are interested in how much actuation range is required to maintain DARM control in this config.
  2. Recover DRMI locking
    • Continue NBing.
    • Do a statistical study of actuation range required for acquiring and maintaining DRMI locking.
Attachment 1: EX_QPD_Quad1_Faulty.pdf
Attachment 2: Trillium_faulty.pdf