  40m Log, Page 329 of 355
ID   Date   Author   Type   Category   Subject
  12702   Wed Jan 11 16:35:03 2017  gautam  Update  CDS  power glitch - recovery progress

[lydia, ericq, gautam]

We set about following the instructions linked in the previous elog. A few notes/remarks:

  1. It is important to run the ntpdate commands before restarting the models. Sometimes, multiple restarts of the models were required to turn all the indicator blocks on the MEDM screen green.
  2. There was also an issue of multiple ntpd processes running on the same machine, which obviously caused all sorts of timing havoc. EricQ helped us diagnose and fix these. At the moment, all the lights are green on the CDS status MEDM screen.
  3. On the hardware side, apart from the usual suspects of frontends/megatron/optimus/fb needing to be rebooted, I noticed that the ETMX OSEM lights were off on the control room monitors. Investigation pointed to the two 20V Sorensens at the X end outputting 0V, 0A after the power glitch. We turned down both dials, and then gradually ramped them up again. Both Sorensens now read +/-20V, 0.3A, which is in agreement with the label stuck onto them.
  4. Restarted the MC autolocker and FSS Slow scripts on megatron. I have not yet looked at the status of the nds2 server on megatron.
  5. The 11 MHz Marconi has yet to be restarted - but I am unable to get even the IMC locked at the moment. For some reason, the RMS of the MC1 and MC3 coil outputs is much higher than usual (~5mV rms, compared to the <1mV rms typical for a damped optic). I will investigate further. Leaving the MC autolocker disabled for now.
  12701   Tue Jan 10 22:55:43 2017  gautam  Update  CDS  power glitch - recovery steps

Here is a link to an elog with the steps I had to follow the last time there was a similar power glitch.

The RAID array restart was also done not too long ago; we should also do a data consistency check as detailed here, if not done already.

If someone hasn't found the time to do this, I can take care of it tomorrow afternoon after I am back.

Quote:

Does "done" mean they are OK or they are somehow damaged? Do you mean the workstations or the front end machines?

The computers are all done.

megatron and optimus are not responding to ping commands or ssh -- please power them up if they are off; we need them to get data remotely

 

  12699   Tue Jan 10 16:20:11 2017  Steve  Update  CDS  power glitch......Raid is rebuilding

Jamie started the fm40m Raid rebuilding. It has been beeping since the power outage.

The summary pages have had no data since the power glitch.

 

Attachment 1: rebuilding_in_progress.png
  5270   Fri Aug 19 15:31:53 2011  steve  Update  General  power interruption rescheduled to 10-1-2011

                UTILITY & SERVICE INTERRUPTION

**PLEASE POST**

 

Building:               Central Engineering Services (C.E.S.)

          LIGO Gravitational Physics building adjacent to C.E.S. 40M- Lab

          Safety Storage adjacent to CES

          Steele House 

          Keck Lab

 

Date:                   Saturday, October 1, 2011

Time:                   8:00 a.m. To 9:00 a.m.            

Interruption:   Electricity

Contact:                Mike Anchondo ext. 4999  Tom Brennan 4984

*This interruption is required for maintenance of high voltage switchgear in Campus Sub Station.

(If there is a problem with this Interruption, please notify

 the Service Center X-4717 or the above Contact as soon as possible.

 If no response is received we will proceed with the interruption.)

         

                                Reza Ohadi,

                                Director, Campus Operations & Maintenance

  12808   Tue Feb 7 16:23:49 2017  Steve  Update  General  power interruption tomorrow

Received this note at 4:11 pm on Tuesday, Feb 7, 2017:

**PLEASE POST**

 

Building:         Campus

    

Date:             Wednesday, February 8, 2017

          

Time:             7:30 AM – 8:30 AM  

 

Contact:          Rick Rodriguez x-2576

           

Pasadena Water and Power (PWP) will be performing a switching operation of the

Caltech Electrical Distribution System that is expected to be transparent to Caltech,

but could result in a minor power anomaly that might affect very sensitive equipment.

 

IMPACT: Negligible impact......?

There may be a temporary power interruption tomorrow!

PS: we did not see any effect.

  3924   Mon Nov 15 15:02:00 2010  Koji  Summary  PSL  power measurements around the PMC

[Valera Yuta Kiwamu Koji]

Kiwamu burtrestored c1psl. We measured the power levels around the PMC.

With 2.1A current at the NPRO:

Pincident = 1.56 W
Ptrans_main = 1.27 W
Ptrans_green_path = 0.104 W

==> Efficiency = 88%
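
As a quick sanity check of the quoted efficiency, the arithmetic can be reproduced in a few lines of Python (the numbers are just the ones listed above):

# Throughput check using the power levels quoted above
P_in = 1.56            # W, incident on the PMC
P_trans_main = 1.27    # W, main transmitted beam
P_trans_green = 0.104  # W, transmitted toward the green path
print((P_trans_main + P_trans_green) / P_in)   # ~0.88, i.e. 88%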

----

We limited the MC incident power to ~50 mW. This corresponds to a PMC trans of 0.65 V.
(The PMC trans is 1.88 V at full power, with the actual power of 132 mW.)

  6156   Fri Dec 30 22:05:16 2011  kiwamu  Update  LSC  power normalization in LSC

Now a power normalization is doable for the LSC error signals.

It is working fine, but at some point we may want to have some kind of a saturation filter or limiter to avoid dividing a signal by a small number.

 

 (How to set the normalization)

  •   Click a small matrix panel on the LSC OVERVIEW window (shown in the attached screen shot below).
    •     This will give you a pop-up-window, which shows a matrix to route the normalization signals
POW_NORM_MTRX.png
  •   Choose a numerator channel, which you want to divide, and choose denominator channels, which you want to use as a power normalization factor.
  •   Put some number in the corresponding matrix elements.
  •   Once you put a non-zero element in the matrix, the corresponding numerator channel will be divided by the specified denominator channels.
    •     Otherwise the static normalization factors (e.g. C1:LSC-AS55_POW_NORM, etc.,) will be used for the denominator.
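
In other words, the matrix just builds a denominator out of the selected power signals and divides the chosen error signal by it. A minimal sketch of that arithmetic (illustrative Python, not the actual RCG code; the floor value is an assumption addressing the divide-by-small-number concern mentioned above):

import numpy as np

def normalize_errors(err, powers, norm_matrix, static_norm, floor=1e-3):
    # err[i]: raw error signals; powers[j]: normalization signals (e.g. arm/AS/REFL powers)
    # norm_matrix[i, j]: matrix element routing power j into the normalization of error i
    # static_norm[i]: static factor (e.g. C1:LSC-AS55_POW_NORM) used when the row is all zero
    dynamic = norm_matrix @ powers
    out = np.empty_like(err, dtype=float)
    for i, d in enumerate(dynamic):
        if d != 0:
            out[i] = err[i] / max(d, floor)   # floor avoids dividing by a tiny number
        else:
            out[i] = err[i] / static_norm[i]
    return out
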
  6158   Tue Jan 3 15:48:39 2012  kiwamu  Update  LSC  power normalization in LSC

It turned out that the power normalization needs a modification.

I will work on it tomorrow and it will take approximately 2 hours to finish the modification.

 

     Concept of Power Normalization         

Koji pointed out that the dynamic power normalization, which I have installed (#6156), should be placed after the LSC input matrix rather than before the matrix.
Now let us review the concept of the power normalization to avoid some confusions.
We will need two kinds of power normalizations as follows:
  1.  Static power normalization, which should be placed before the input matrix.
  2.  Dynamic power normalization, which should be placed after the input matrix.
 The static power normalization will be applied to each of the I and Q signals of all the LSC channels and also the DCPD signals.
This normalization is supposed to cancel the effects of the incident laser power and the depths of the phase modulations.
Because the variations in the laser power and modulation depths are expected to be relatively slow, we will apply static normalizations.
 
 The dynamic power normalization will be applied to the DOF error signals, for example C1:LSC-DARM_IN and so on.
This normalization is supposed to cancel the effects of the internal state of the interferometer, for example alignment.
In addition, this dynamic normalization can expand the linear range of the error signals.

Quote from #6156

Now a power normalization is doable for the LSC error signals.

 

  6170   Wed Jan 4 16:22:30 2012  kiwamu  Update  LSC  power normalization in LSC : modification done

The dynamic power normalization system has been modified such that the normalization happens after the LSC input matrix.

The attached screen shot below tells you how the signals flow.
The red circled region in the picture is the place where the power normalization is performed.
pow_norm.png
 
The dynamic normalization will be activated once you put some numbers into the elements in the matrix.
Otherwise the error signals are always normalized by 1.

Quote from #6158

It turned out that the power normalization needs a modification.

I will work on it tomorrow and it will take approximately 2 hours to finish the modification.

 

  4011   Sun Dec 5 22:28:39 2010  rana  Summary  all down cond.  power outage

Looks like there was a power outage. The control room workstations were all off (except for op440m). Rosalba and the projector's computer came back, but rossa and allegra are not lighting up their monitors.

linux1 and nodus and fb all appear to be on and answering their pings.

I'm going to leave it like this for the morning crew. If it

  4012   Mon Dec 6 11:53:20 2010  josephb, kiwamu  Summary  all down cond.  power outage

The monitors for allegra and rossa seemed to be in a weird state after the power outage.  I turned allegra and rossa on, but didn't see anything.  However, after a while I was able to ssh in.  Power cycling the monitors apparently got them talking with the computers again and displaying.

I had to power cycle the c1sus and c1iscex machines (they probably booted faster than linux1 and the fb machines, and thus didn't see their root and /cvs/cds directories).  All the front ends seem to be working normally and we have damped optics.

The slow crates look to be working, such as c1psl, c1iool0, c1auxex and so forth.

Kiwamu turned the main laser back on.

Quote:

Looks like there was a power outage.

 

  4013   Mon Dec 6 11:57:21 2010  Koji  Summary  all down cond.  power outage

I checked the vacuum system and judged there is no apparent issue.

The chambers and annulus had been vented before the power failure.
So the only concerns are with the TMPs.

TP1 showed the "Low Input Voltage" failure. I reset the error and the turbine was lifted up and left not rotating.
TP2 and TP3 seem to be rotating at 50 kRPM and each line shows low pressure (~1e-7),
although I did not find the actual TP2/TP3 themselves.

Quote:

Looks like there was a power outage. The control room workstations were all off (except for op440m). Rosalba and the projector's computer came back, but rossa and allegra are not lighting up their monitors.

linux1 and nodus and fb all appear to be on and answering their pings.

I'm going to leave it like this for the morning crew. If it

 

  7476   Thu Oct 4 08:39:58 2012  Steve  Update  General  power outage

There must have been a power outage. The laser and air conditioning were turned back on. The vacuum is OK.

Sorensen DC power supplies were tripped, so they were reset: at AUX OMC South, the 18V and 28V for the RF PS, and at 1X1, the 24V.

 

Power Outage confirmed:

** Notification **

 

CALIFORNIA INSTITUTE OF TECHNOLOGY

                 FACILITIES MANAGEMENT

 

**PLEASE POST**

 

 

Building:         Campus

 

Date:             Thursday October 04,2012

 

This morning at 2:17 a.m. much of the City of Pasadena, including our Campus, experienced an electric power sag of short duration, approximately 1/10 of a second. The cause was a fault on one of Pasadena's 17kV circuits. Some sensitive equipment has been impacted.

                 

Contact:          Mike Anchondo x-4999

 

Attachment 1: Oct4R2012.png
  13492   Tue Dec 26 17:24:24 2017  Steve  Update  General  power outage

There was a power outage.

The IFO pressure is 12.8 mTorr and it is not pumped. V1 is still closed. TP1 is not running. The RGA is not powered.

The PSL output shutter is still closed. 2W Innolight turned on and manual beam block placed in its beampath.

3 AC units were turned on; the room temp was 84 F.

Attachment 1: powerOutage.png
  13755   Mon Apr 16 22:09:53 2018  Kevin  Update  General  power outage - BLRM recovery

I've been looking into recovering the seismic BLRMs for the BS Trillium seismometer. It looks like the problem is probably in the anti-aliasing board. There's some heavy stuff sitting on top of it in the rack, so I'll take a look at it later when someone can give me a hand getting it out.

In detail, after verifying that there are signals coming directly out of the seismometer, I tried to inject a signal into the AA board and see it appear in one of the seismometer channels.

  1. I looked specifically at C1:PEM-SEIS_BS_Z_IN1 (Ch9), C1:PEM-SEIS_BS_X_IN1 (Ch7), and C1:PEM-ACC_MC2_Y_IN1 (Ch27). All of these channels have between 2000--3000 cts.
  2. I tried injecting a 200 mVpp signal at 1.7862 Hz into each of these channels, but the output did not change.
  3. All channels have 0 cts when the power to the AA board is off.
  4. I then tried to inject the same signal into the AA board and see it at the output. The setup is shown in the first attachment. The second BNC coming out of the function generator is going to one of the AA board inputs; the 32 pin cable is coming directly from the output. All channels give 4.6 V when the board is powered on, regardless of whether any signal is being injected.
  5. To verify that the AA board is likely the culprit, I also injected the same signals directly into the ADC. The setup is shown in the second attachment. The 32 pin cable is going directly to the ADC. When injecting the same signals into the appropriate channels the above channels show between 200--300 cts, and 0 cts when no signal is injected.
Attachment 1: AA.jpg
Attachment 2: ADC.jpg
  13493   Thu Dec 28 17:22:02 2017  gautam  Update  General  power outage - CDS recovery
  1. I had to manually reboot c1lsc, c1sus and c1ioo.
  2. I edited the line in /etc/rt.sh (specifically, on FB /diskless/root.jessie/etc/rt.sh) that lists models running on a given frontend, to exclude c1dnn and c1oaf, as these are the models that have been giving us most trouble on startup. After this, I was able to bring back all models on these three machines using rtcds restart --all. The original line in this file has just been commented out, and can be restored whenever we wish to do so.
  3. mx_stream processes are showing failed status on all the frontends. As a result, the daqd processes are still not working. Usual debugging methods didn't work.
  4. Restored all sus dampings.
  5. Slow computers all seem to be responsive, so no action was required there.
  6. Burtrestored c1psl to solve the "sticky slider" problem, relocked PMC. I didn't do anything further on the PSL table w.r.t. the manual beam block Steve has placed there till the vacuum situation returns to normal.

@Steve: I noticed that we are down to our final bottle of N2, not sure if it will last till 2 Jan which is presumably when the next delivery will come in. Since V1 is closed and the PSL beam is blocked, perhaps this doesn't matter.

from Steve: there are spare full N2 bottles at the south end outside and inside. I replaced the N2 on Sunday night. So the system should be Ok as is.

I also hard-rebooted megatron and optimus as these were unresponsive to ping.

*Seems like the mx_stream errors were due to the mx process not being started on FB. I could fix this by running sudo systemctl start mx on FB, after which I ran sudo systemctl restart daqd_*. But the DC errors persist - not sure how to fix this. Elogging suggests that "0x4000" errors are connected to timing problems on FB, but restarting the ntp service on FB (which is the suggested fix in said elogs) didn't fix it. Also unsure if the mx process is supposed to automatically start on FB at startup.

Attachment 1: 28.png
  13510   Sat Jan 6 18:27:37 2018  gautam  Update  General  power outage - IFO recovery

Mostly back to nominal operating conditions now.

  1. EX TransMon QPD is not giving any sensible output. Seems like only one quadrant is problematic, see Attachment #1. I blame team EX_Acromag for bumping some cabling somewhere. In any case, I've disabled output of the QPD, and forced the LSC servo to always use the Thorlabs "High Gain" PD for now. Dither alignment servo for X arm does not work so well with this configuration - to be investigated.
  2. BS Seismometer (Trillium) is still not giving any sensible output.
    • I looked under the can, the little spirit level on the seismometer is well centered.
    • I jiggled all the cabling to rule out any obvious loose connections - found none at the seismometer, or at the interface unit (labelled D1002694 on the front panel) in 1X5/1X6.
    • All 3 axes are giving outputs with DC values of a few hundred - I guess there could've been some big earthquake in early December which screwed the internal alignment of the sensing mass in the seismometer. I don't know how to fix this.
    • Attachment #2 = spectra for the 3 channels. Can't say they look very seismic-y. I've assumed the units are in um/sec.
    • This is mainly bothering me in the short term because I can't use the angular feedforward on PRC alignment, which is usually quite helpful in DRMI locking.
    • But I think the PRM Oplev loop is actually poorly tuned, in which case perhaps the feedforward won't really be necessary once I touch that up.

What I did today (may have missed some minor stuff but I think this is all of it):

  1. At EX:
    • Toggled power to Thorlabs trans monitoring PD, checked that it was actually powered, squished some cables in the e- rack.
    • Removed PDA55 in the green path (put there for EX laser AM/PM measurement). So green beam can now enter the X arm cavity.
    • Re-connected ALS cabling.
    • Turned on HV supply for EX Green PZT steering mirrors (this has to be done every time there is a power failure).
  2. At ITMY table:
    • Removed temporary HeNe RIN/ Oplev sensing noise measurement setup. HeNe + 1" vis-coated steering mirror moved to SP table.
    • Turned on ITMY/SRM Oplev HeNe.
    • Undid changes on ITMY Oplev QPD and returned it to its original position.
    • Centered ITMY reflected beam on this QPD.
  3. At vertex area
    • Looked under Trillium seismometer can - I've left the clamps undone for now while we debug this problem.
    • Power-cycled Trillium interface box.
    • Touched up PMC alignment.
  4. Control room
    • Recover IFO alignment using combination of IR and Green beams.
    • Single arm locking recovered, dither alignment servos run to maximize arm transmission. Single arm locks holding for hours, that's good.
    • The X arm dither alignment isn't working so well, the transmission never quite hits 1 and it undergoes some low frequency (T~30secs) oscillations once the transmission reaches its peak value.
    • Had to do the usual ipcrm thing to get dataviewer to run on pianosa.

Next order of business:

  1. Recover ALS:
    • aim is to replace the vertex area ALS signals derived from 532nm with their 1064nm counterparts.
    • Need to touch up end PDH servos, alignment/MM into arms, and into Fibers at ends etc.
    • Control the arms (with RMs misaligned) in the CARM/DARM basis using the revised ALS setup.
    • Make a noise budget - specifically, we are interested in how much actuation range is required to maintain DARM control in this config.
  2. Recover DRMI locking
    • Continue NBing.
    • Do a statistical study of actuation range required for acquiring and maintaining DRMI locking.
Attachment 1: EX_QPD_Quad1_Faulty.pdf
Attachment 2: Trillium_faulty.pdf
  13503   Thu Jan 4 14:39:50 2018  gautam  Update  General  power outage - timing error

As mentioned in my previous elog, the CDS overview screen "DC" indicators are all RED (everything else is green). Opening up the displays for individual CPUs, the error message shown is "0x4000", which is indicative of some sort of timing error. Indeed, it seems to me that on the FB machine, the gpstime command shows a gps time that is ~1 second ahead of the times on other FE machines.

Running gpstime on other FE machines throws up an error, saying that it cannot connect to the network to update leap second data. Not sure what this is about...

I double checked the GPS timing module, we had some issues with this in the recent past. But judging by its front panel display, everything seems to be in order...

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/bin/gpstime", line 9, in <module>
    load_entry_point('gpstime==0.2', 'console_scripts', 'gpstime')()
  File "/usr/lib/python3/dist-packages/pkg_resources.py", line 356, in load_entry_point
    return get_distribution(dist).load_entry_point(group, name)
  File "/usr/lib/python3/dist-packages/pkg_resources.py", line 2476, in load_entry_point
    return ep.load()
  File "/usr/lib/python3/dist-packages/pkg_resources.py", line 2190, in load
    ['__name__'])
  File "/usr/lib/python3/dist-packages/gpstime/__init__.py", line 41, in <module>
    LEAPDATA = ietf_leap_seconds.load_leapdata(notify=True)
  File "/usr/lib/python3/dist-packages/ietf_leap_seconds.py", line 158, in load_leapdata
    fetch_leapfile(leapfile)
  File "/usr/lib/python3/dist-packages/ietf_leap_seconds.py", line 115, in fetch_leapfile
    r = requests.get(LEAPFILE_IETF)
  File "/usr/lib/python3/dist-packages/requests/api.py", line 60, in get
    return request('get', url, **kwargs)
  File "/usr/lib/python3/dist-packages/requests/api.py", line 49, in request
    return session.request(method=method, url=url, **kwargs)
  File "/usr/lib/python3/dist-packages/requests/sessions.py", line 457, in request
    resp = self.send(prep, **send_kwargs)
  File "/usr/lib/python3/dist-packages/requests/sessions.py", line 569, in send
    r = adapter.send(request, **kwargs)
  File "/usr/lib/python3/dist-packages/requests/adapters.py", line 407, in send
    raise ConnectionError(err, request=request)
requests.exceptions.ConnectionError: ('Connection aborted.', OSError(101, 'Network is unreachable'))
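
For reference, a crude way to check the relative clock offset between fb and a frontend, independent of the gpstime package (a rough sketch; it assumes passwordless ssh to fb and only resolves offsets at the fraction-of-a-second level, including ssh latency):

import subprocess, time

# Compare the local system clock against fb's clock over ssh.
t_local = time.time()
t_fb = float(subprocess.check_output(["ssh", "fb", "date", "+%s.%N"]).strip())
print("fb - local = %.2f s (includes ssh latency)" % (t_fb - t_local))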

 

 

  13506   Fri Jan 5 21:54:28 2018  rana  Update  General  power outage - timing error

Rolf came here in the morning, but not sure what he did or if Jamie remotely did something. But the screen is green.

Attachment 1: huh.png
  13507   Fri Jan 5 22:19:53 2018  gautam  Update  General  power outage - timing error

Just putting the relevant line from email from Rolf which at least identifies the problem here:

Looks like FB time is actually off by 1 year, as your timing system does not get year info.

There still seems to be something funky with the X arm transmission PDs - I can't seem to get the triggering to switch between the QPD and the Thorlabs PD, and the QPD signal seems to be wildly fluctuating by several orders of magnitude from 0.01-100. The c1iscex FE was pulled out, and it seemed to me like someone was doing some cable re-arrangement at the X end.

I will look into this tomorrow. 

Quote:

Rolf came here in the morning, but not sure what he did or if Jamie remotely did something. But the screen is green.

 

  3163   Wed Jul 7 00:15:29 2010  tara, Rana  Summary  PSL  power spectral density from RefCav transmitted beam

I measured the RC transmitted light signals here at the 40m. I made all connections through the PSL patch panel.

Other than the two steering mirrors in front of the periscope and the steering mirror for the RFPD, which were used to steer the beam into the cavity and onto the RFPD respectively, no optics were adjusted.

We re-aligned the beam into the cavity (the DC level increased from 2 V to 3.83 V; see Fig 2; we could not recover the power back to what it was 90 days ago) and the reflected beam to the center of the RFPD.

 

I measured the spectral density of the signal of the transmitted beam behind RefCav in both time and frequency domain.

This will be compared with the result from PSL lab later, so I can see how stable the signal should be.

I did not convert Vrms/rtHz to Hz/rtHz because I only look at the relative intensity of the transmitted beam which will be compared to the setup at PSL lab. 

 

 

 We care about this power fluctuation because we plan to measure photorefractive noise on the cavity's mirrors (this is the noise caused by dn/dT in the coatings and the substrate; the absorption of fluctuating power on the coating/mirror changes the temperature, which eventually changes the effective length of the cavity as seen by the laser).

      

      The plan is to modulate the power of the beam going into the cavity; the absorption of the AC part will induce frequency noise, which is what we want to see.

Since the transmitted power of the cavity is proportional to the power inside the cavity, fluctuations from other factors (for example, gain settings) will limit our measurement.

That's why we are concerned about the stability of the transmitted beam and made this measurement.
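
For bookkeeping, the relative intensity referred to here is just the measured voltage spectral density of the transmitted-light PD divided by its DC level; a minimal sketch using the DC value quoted above (the spectrum array is only a placeholder):

import numpy as np

V_dc = 3.83                   # V, transmitted-light DC level after realignment (quoted above)
asd_V = 1e-5 * np.ones(1000)  # V/rtHz, placeholder for the measured spectrum
rin = asd_V / V_dc            # 1/rtHz, relative intensity noise of the transmitted beam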


 

Attachment 1: RIN_rftrans.png
Attachment 2: tara.png
  3164   Wed Jul 7 10:42:29 2010  Koji  Summary  PSL  power spectral density from RefCav transmitted beam

How do you calibrate this to Hz/rtHz?

Quote:

I measured the RC transmitted light signals here at the 40m. I made all connections through the PSL patch panel. No optics/PD were touched.

I measured the spectral density of the signal of the transmitted beam behind RefCav in both time and frequency domain.

This will be compared with the result from PSL lab later, so I can see how stable the signal should be.

We re-aligned the beam into the cavity (the DC level increased from 2 V to 3.83V)

and the reflected beam to the center of the RFPD.

 

 

  9506   Fri Dec 20 20:04:01 2013  Steve  Update  VAC  power supply replaced with a short vent

Quote:

Quote:

Instrument rack power supplies checked and labeled at present loads.

The vacuum rack Sorensen is running HOT! There is only a 0.3A load at 24V. There is plenty of space around it.

It is alarming to me because all vacuum valve positions are controlled by this 24V

 The temperature went down to room temp with temporary fan in the back. Voltage and current are stable.

Regardless, it will be replaced early next week.

Koji, Steve

 It was a bad experience again with our vacuum system.  The valves went crazy as we rebooted the computer. This was required for the swap in of a good 24V power supply.

The IFO was vented to 27 Torr through the annuli, VA6, V7, Maglev, VM2 and VM1 (VC2 was open too).

I just opened the PSL shutter after a 4-hour pumpdown.

Condition: the annuli are not pumped; the IFO and the RGA are pumped, as Atm2 shows.

I will be here tomorrow morning to switch over to vacuum normal. 

More details later

 

 

Attachment 1: 4hrPumpdown.png
Attachment 2: pumpdownAfterHickup.png
Attachment 3: PSpumpdown.png
  9508   Fri Dec 20 23:00:41 2013  Koji  Update  VAC  power supply replaced with a short vent

I'm leaving the 40m now. The IFO is aligned. Everything looks good.

- The main volume P1=5e-4, CC1=1.4e-5 is still pumped by TP1 and TP2

- RGA P4<0e-4, CC4 2.1e-7, is pumped by TP3

- The annuluses are isolated.

- RP1/2/3 are off.

  9516   Fri Jan 3 11:18:41 2014  Steve  Summary  VAC  power supply replaced with a short vent

Quote:

Quote:

 

 The temperature went down to room temp with temporary fan in the back. Voltage and current are stable.

Regardless, it will be replaced early next week.

Koji, Steve

 It was a bad experience again with our vacuum system.  The valves went crazy as we rebooted the computer. This was required for the swap in of a good 24V power supply.

The IFO was vented to 27 Torr through the annuli, VA6, V7, Maglev, VM2 and VM1 (VC2 was open too).

I just opened the PSL shutter after a 4-hour pumpdown.

Condition: the annuli are not pumped; the IFO and the RGA are pumped, as Atm2 shows.

I will be here tomorrow morning to switch over to vacuum normal. 

More details later

 

 

 Events of the power supply swap:

1, Tested 24V DC ps from Todd

2, Closed V1, VM1 and all annulos valves to create safety net for the reboot. Turbo pumps left on running.

3, Turned computer off

4, Swap power supplies and turned it on

5, Turning on the power of c1vac2 created chaotic switching of valves. This resulted in an air vent as shown below.

6, VM1 was jammed and unable to close. The IOO beam shutter closed and the IFO was venting with air for a few minutes. The Maglev did an emergency shutdown. TP2's V4 and TP3's V5 closed. The RP1 and RP3 roughing pumps turned on; their hoses were not connected, as usual. The RGA shut down to protect itself.

7, Closed the annulus valves, stopped the vent at P1 = 27 torr as the vacuum control was manually recovered.

8, The Maglev and the annuli were roughed out to 500 mtorr. The Maglev was restarted.

9, The IFO pump down followed std procedure from 27 torr. VM1 was moving again as the pressure differential was removed from it.

 

 Remember: next time at atm .....rough down the cryo volume from 27 torr !

Attachment 1: rebootVENT.png
  9510   Sat Dec 21 10:53:35 2013  Steve  Update  VAC  power supply replaced with a short vent - pumpdown completed

The recovery pumpdown reached the vacuum normal valve configuration at 20 hours, CC1 = 7.7e-6 Torr.

Lesson learned: turn all pumps off and close all valves before you reboot, as you would to prepare for an AC power shutdown.

 

Attachment 1: 20hrsVacNormal.png
  6860   Sat Jun 23 18:44:15 2012  steve  Update  General  power surge has no effect on the lab

I was notified by CIT Utilities that there was a power surge or short power outage this afternoon.

Lab conditions are normal, except that c1ioo is down. The south arm AC was off... I turned it back on.

  7088   Mon Aug 6 09:46:31 2012  steve  Update  IOO  power outage turns laser off

A power outage turned off the PSL Innolight laser on Sunday afternoon. It was turned back on and locked happily right away. The green lasers were not affected.

 

CALIFORNIA INSTITUTE OF TECHNOLOGY

                 FACILITIES MANAGEMENT

            UTILITY & SERVICE INTERRUPTION

 

**PLEASE POST**

 

Building:         CAMPUS WIDE     

 

Date:             SUNDAY, AUGUST 6, 2012          

 

Time:             3:41 PM          

 

Interruption:     ELECTRICAL POWER DISTRIBUTION

  

Contact:          MIKE ANCHONDO, X-4999, OR TOM BRENNAN, X-4984      

 

* THIS PAST SUNDAY AFTERNOON ABOUT 3:40 PM, PASADENA WATER AND POWER

 EXPERIENCED A FAULT ON THEIR POWER DISTRIBUTION SYSTEM.  THIS CAUSED

  A SEVERE VOLTAGE SAG WHICH AFFECTED THE CALTECH CAMPUS. THE FAULT WAS

  NOT ON A CALTECH CIRCUIT.

 

(If there is a problem with this Interruption, please notify

the Service Center X-4717 or the above Contact as soon as possible.

If no response is received we will proceed with the interruption.)

        

                        Jerry Thompson,

                        Interim Director of Campus Operations & Maintenance

 

 

  7800   Sat Dec 8 04:12:38 2012  Den  Update  LSC  prcl

 Today I wanted to check that the AS and REFL beams are real and contain proper information about the interferometer. For this I locked the YARM using AS55_I and REFL11_I. Then I compared the spectrum with POY11_I locking. Everything is the same. I've also adjusted the phase rotations of AS55 (0.2 -> 24) and REFL11 (-34.150 -> -43).

Then I locked MICH and aligned the ETMs such that ASDC was close to zero. Then I locked PRCL and aligned the PRM. The power buildup was 50.

IMG_0118.JPG

  8446   Fri Apr 12 02:56:34 2013  Den  Update  Locking  prcl angular motion

I compared PRCL and XARM angular motions by misaligning the cavities and measuring power RIN. I calculated the divergence angles of both cavities to be 100 urad.

XARM pointing noise sums contributions from the input steering TTs, the PR2 and PR3 TTs, BS, ITMX, ETMX.

PRCL noise - from the input TTs, PRM, the PR2 and PR3 TTs, BS, ITMX, ITMY.

I would expect these noises to be the same, as the angular motion of the different optics measured by oplevs is similar. We do not have oplevs on the TTs, but they are present in both paths.

I measured RIN and converted it to angle. The sharp 1 Hz resonance in the XARM pointing spectrum is due to ETMX; it is not seen by PRCL. Other than that, XARM is much quieter, especially at 3 - 30 Hz.

As the PRM is the main difference between the two paths, I checked its spectrum. With PRCL locked I excited PRM in pitch and yaw. I could see this excitation in the RIN only when the peak was 100 times higher than the background seismic noise measured by the oplev.

pointing.png

Attachment 2: oplev_exc.pdf
  8447   Fri Apr 12 09:20:32 2013  rana  Update  Locking  prcl angular motion

 How is the cavity g-factor accounted for in this calculation?

  8449   Fri Apr 12 13:21:34 2013  Den  Update  Locking  prcl angular motion

Quote:

 How is the cavity g-factor accounted for in this calculation?

 I assume that pointing noise and DC misalignment couple the 00 mode to the 01 mode by a factor theta / theta_cavity.

Inside the cavity the 01 mode is suppressed by 2/pi*F*sin(arccos(sqrt(g_cav))).

For the XARM this number is 116, taking the g-factor to be 0.32. So all pointing noise couples to power RIN.

The suppression factor inside the PRC is 6.5 for a g-factor of 0.97. This means that 85% of the jitter couples to RIN; I accounted for this factor while converting RIN to angle.

I did not consider translational motion of the beam. But still the PRC RIN cannot be explained by oplev readings, as we can see by exciting optics in pitch and yaw. I suspect this RIN is due to PR3, as it can create stronger motion in yaw than in pitch due to the incident angle and translational motion of the mirror. I do not have a number yet.
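
For reference, the suppression factor used in these two estimates, written out as a small helper (Python; the finesse values are not given in this entry and would have to be filled in):

import numpy as np

def hom_suppression(finesse, g_cav):
    # First-order-mode suppression 2/pi * F * sin(arccos(sqrt(g))), as used above
    return 2.0 / np.pi * finesse * np.sin(np.arccos(np.sqrt(g_cav)))

# e.g. hom_suppression(F_xarm, 0.32) and hom_suppression(F_prc, 0.97) should reproduce
# the factors of 116 and 6.5 quoted above, given the appropriate finesse values.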

  8450   Sat Apr 13 03:45:51 2013  rana  Update  Locking  prcl angular motion

 

 Maybe it's equivalent, but I would have assumed that the input beam is fixed and then calculated the cavity axis rotation and translation. If it's small, then the modal expansion is OK. Otherwise, the overlap integral can be used.

For the ETM motion, it's a purely translational effect, whereas it's tilt for the ITM. For the PRM, it is also a mostly translational effect as calculated at the PRC waist position (ITM face).

  8451   Sat Apr 13 23:11:04 2013  Den  Update  Locking  prcl angular motion

Quote:

For the PRM, it is also a mostly translation effect as calculated at the PRC waist position (ITM face).

I made another estimation assuming that PRCL RIN is caused by translation of the cavity axis:

  • calibrated RIN to translation, beam waist = 4mm
  • measured PRM yaw motion using oplev
  • estimated PR3 TT yaw motion: measured BS yaw spectrum with oplev OFF, divided it by pendulum TF with f0=0.9 Hz, Q=100 (BS TF), multiplied it by pendulum TF with f0 = 1.5 Hz, Q = 2 (TT TF with eddy current damping), accounted for BS local damping that reduces Q down to 10.

I estimated the PRM and TT angular-motion-to-cavity-axis-translation coupling to be 0.11 mm/urad and 0.22 mm/urad, assuming the TTs are flat. We can make a more detailed analysis to account for curvature.

I think the beam motion is caused by PR3 and PR2 TT angular motion. I guess the yaw motion is larger because the horizontal g-factor is closer to unity than the vertical one.
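
The PR3/PR2 TT yaw estimate described in the list above (re-weighting the measured BS spectrum by the two pendulum transfer functions) can be written compactly; a sketch with a placeholder spectrum, using the f0/Q values quoted above:

import numpy as np

def pendulum_tf(f, f0, Q):
    # magnitude of a simple pendulum transfer function
    return 1.0 / np.abs(1 - (f / f0)**2 + 1j * f / (f0 * Q))

f = np.logspace(-1, 2, 1000)   # Hz
bs_yaw = np.ones_like(f)       # placeholder: measured BS yaw spectrum, oplev loop off

# undo the BS pendulum (f0 = 0.9 Hz, Q reduced to ~10 by local damping),
# then apply the TT pendulum (f0 = 1.5 Hz, Q = 2, eddy current damping)
tt_yaw = bs_yaw / pendulum_tf(f, 0.9, 10) * pendulum_tf(f, 1.5, 2)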

Attachment 1: pointing.pdf
  8454   Sun Apr 14 17:56:03 2013  rana  Update  Locking  prcl angular motion

Quote:

Quote:

For the PRM, it is also a mostly translation effect as calculated at the PRC waist position (ITM face).

I made another estimation assuming that PRCL RIN is caused by translation of the cavity axis:

  • calibrated RIN to translation, beam waist = 4mm

 In order to get translation to RIN, we need to know the offset of the input beam from the cavity axis...

This should be possible to calibrate by putting pitch and yaw excitation lines into the PRM and measuring the RIN.

See secret document from Koji.

  8564   Mon May 13 18:44:04 2013  Jenne  Update  Locking  prcl angular motion

I want to redo this estimate of where RIN comes from, since Den did this measurement before I put the lens in front of the POP PD. 

While thinking about his method of estimating the PR3 effect, I realized that we have measured numbers for the pendulum frequencies of the recycling cavity tip tilt suspensions. 

I have been secreting this data away for years.  My bad.  The relevant numbers for Tip Tilts #2 and #3 were posted in elog 3425, and for #4 in elog 3303.  However, the data for #s 1 and 5 were apparently never posted.  In elog 3447, I didn't put in numbers, but rather said that the data was taken.

Anyhow, attached is the data that was taken back in 2010.  Look to elog 7601 for which TT is installed where. 

 

Conclusion for the estimate of TT motion to RIN - the POS pendulum frequency is ~1.75Hz for the tip tilts, with a Q of ~2.

Attachment 1: TT_Q_measurements.pdf
  14437   Wed Feb 6 10:07:23 2019  Chub  Update  pre-construction inspection

The Central Plant building will be undergoing seismic upgrades in the near future.  The adjoining north wall along the Y arm will be the first to have this work done, from inside the Central Plant.  Project manager Eugene Kim has explained the work to me and also noted our concerns.  He assured me that the seismic noise from the construction will be minimized and we will always be contacted when the heaviest construction is to be done.

Tomorrow at 11am, I will bring Mr. Kim and a few others from the construction team to look at the wall from inside the lab.  If you have any questions or concerns that you want to have addressed, please email them to me or contact Mr. Kim directly at x4860 or through email at eugene.kim@caltech.edu . 

  5591   Fri Sep 30 19:12:56 2011  Koji  Update  General  prep for power outage

 

 [Koji Jenne]

The lasers were shut down

The racks were turned off

We could not figure out how to turn off JETSTOR

The control room machines were turned off

Finally we will turn off nodus and linux1 (in this order).

Hope everything comes back with no trouble

(Fingers crossed)

  13383   Tue Oct 17 17:53:25 2017  jamie  Summary  LSC  prep for tests of Gabriele's neural network cavity length reconstruction

I've been preparing for testing Gabriele's deep neural network MICH/PRCL reconstruction.  No changes to the front end have been made yet, this is all just prep/testing work.

Background:

We have been unable to get Gabriele's nn.c code running in kernel space for reasons unknown (see tests described in previous post).  However, Rolf recently added functionality to the RCG that allows front end models to be run in user space, without needing to be loaded into the kernel.  Surprisingly, this seems to work very well, and is much more stable for the overall system (starting/stopping the user space models will not ever crash the front end machine).  The nn.c code has been running fine on a test machine in this configuration.  The RCG version that supports user space models is not that much newer than what the 40m is running now, so we should be able to run user space models on the existing system without upgrading anything at the 40m.  Again, I've tested this on a test machine and it seems to work fine.

The new RCG with user space support compiles and installs both kernel and user-space versions of the model.

Work done:

  • Create 'c1dnn' model for the nn.c code.  This will run on the c1lsc front end machine (on core 6 which is currently empty), and will communicate with the c1lsc model via SHMEM IPC.  It lives at:
    • /opt/rtcds/userapps/release/isc/c1/models/c1dnn.mdl
  • Got latest copy of nn.c code from Gabriele's git, and put it at:
    • /opt/rtcds/userapps/release/isc/c1/src/nn/
  • Checked out the latest version of the RCG (currently SVN trunk r4532):
    • /opt/rtcds/rtscore/test/nn-test
  • Set up the appropriate build area:
    • /opt/rtcds/caltech/c1/rtbuild/test/nn-test
  • Built the model in the new nn-test build directory ("make c1dnn")
  • Installed the model from the nn-test build dir ("make install-c1dnn")

Test:

I tried a manual test of the new user space model.  Since this is a user space process, running it should have no effect on the rest of the front end system (which it didn't):

  • Manually started the c1dnn EPICS IOC:
    • $ (cd /opt/rtcds/caltech/c1/target/c1dnn/c1dnnepics && ./startupC1)
  • Tried running the model user-space process directly:
    • $ taskset -c 6 /opt/rtcds/caltech/c1/target/c1dnn/bin/c1dnn -m  c1dnn

Unfortunately, the process died with an "ADC TIMEOUT" error.  I'm investigating why.

Once we confirm the model runs, we'll add the appropriate SHMEM IPC connections to connect it to the c1lsc model.

Attachment 1: c1dnn.png
  13390   Wed Oct 18 12:14:08 2017  jamie  Summary  LSC  prep for tests of Gabriele's neural network cavity length reconstruction
Quote:

I tried a manual test of the new user space model.  Since this is a user space process, running it should have no effect on the rest of the front end system (which it didn't):

  • Manually started the c1dnn EPICS IOC:
    • $ (cd /opt/rtcds/caltech/c1/target/c1dnn/c1dnnepics && ./startupC1)
  • Tried running the model user-space process directly:
    • $ taskset -c 6 /opt/rtcds/caltech/c1/target/c1dnn/bin/c1dnn -m  c1dnn

Unfortunately, the process died with an "ADC TIMEOUT" error.  I'm investigating why.

Once we confirm the model runs, we'll add the appropriate SHMEM IPC connections to connect it to the c1lsc model.

I tried moving the model to c1ioo, where there are plenty of free cores sitting idle, and the model seems to run fine.  I think the problem was just CPU contention on the c1lsc machine, where there were only two free cores and the kernel was using both for all the rest of the normal user space processes.

So there are two options:

  • Use cpuset on c1lsc to tell the kernel to remove all other processes from CPU6 and save it just for the c1dnn model.  This should not have any impact on the running of c1lsc, since that's exactly what would be happening if we were running the model in kernel space (e.g. isolating the core for the front end model).  The auxiliary support user space processes (epics seq/ioc, awgtpman) should all run fine on CPU0, since that's what usually happens.  Linux is only using the additional core since it's there.  We don't have much experience with cpuset yet, though, so more offline testing will be required first.
  • Run the model on c1ioo and ship the needed signals to/from c1lsc via PCIe dolphin.  This is potentially slightly more invasive of a change, and would put more work on the dolphin network, but it should be able to handle it.

I'm going to start testing cpuset offline to figure out exactly what would need to be done.
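
As a side note on the pinning half of this: a user-space process can already be restricted to a given core with the standard Linux affinity call (what taskset does); the cpuset work is about additionally keeping every other process off that core. A minimal illustration (core number is just an example):

import os

os.sched_setaffinity(0, {6})    # pin this process to CPU core 6, like `taskset -c 6`
print(os.sched_getaffinity(0))  # confirm the allowed-CPU set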

  6892   Fri Jun 29 02:17:40 2012  yuta  Update  IOO  prep for the vent - beam attenuating

[Koji, Jamie, Yuta]

We attenuated the incident beam to the vacuum chamber (1.2 W -> 11 mW) to be ready for the vent.
The beam spot on the MC mirrors didn't change significantly, which means the incident beam was not shifted much.

What we did:
 1. Installed a HWP, PBS(*) and another HWP between the steering mirrors on the PSL table for attenuating the beam. We didn't touch the steering mirrors(**), so the incident beam to the IFO should be recovered easily by just taking the HWPs and PBS away. The power to the MC was reduced from 1.2 W to 11 mW.

(*) We stole PBSO from the AS AUX laser setup.
(**) Actually, we accidentally touched one of the steering mirrors, but we recovered it. We did the recovery by tweaking the touched knob and minimizing the MC reflection. We confirmed the incident beam was recovered by measuring the MC beam spot positions (below).

 2. Aligned PBS by minimizing MC reflection, adjusted first HWP so that the incident beam will be ~10 mW, and adjusted last HWP to minimize MC reflection (make the incident beam to the MC be p-polarization).

 3. To do the alignment and adjustment, we put a 100% reflective mirror (instead of the 10% BS) in front of the MC reflection PD to increase the power to the PD. That means we don't have MC WFS right now.

 4. Tweaked the MC servo gains so that we can lock the MC in low power mode. It is quite stable right now. We didn't lose lock during the beam spot measurement.

 5. Measured the beam spot positions on the MC mirrors and confirmed that the incident beam was not shifted much (below). They look like they moved ~0.2 mm, but that is within the error of the MC beam spot measurement.

# filename      MC1pit  MC2pit  MC3pit  MC1yaw  MC2yaw  MC3yaw  (spot positions in mm)
./dataMCdecenter/MCdecenter201206281154.dat     3.193965        4.247243        2.386126        -6.639432       -0.574460       4.815078    this noon
./dataMCdecenter/MCdecenter201206282245.dat     3.090762        4.140716        2.459465        -6.792872       -0.651146       4.868740    after recovered steering mirrors
./dataMCdecenter/MCdecenter201206290135.dat     2.914584        4.240889        2.149244        -7.117336       -1.494540       4.955329    after beam attenuation

 6. Rewrote the matlab code sensemcass.m as the python script sensemcass.py. This script calculates the beam spot positions from the measurement data (see elog #6727). I think we should make the senseMCdecenter script better, too, since it takes so much time and can't stop and resume the measurement if the MC is unlocked.
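
To make the "~0.2 mm" statement in item 5 concrete, the rows of the table above can be differenced directly (values copied from the table; order MC1/MC2/MC3 pitch then yaw, in mm):

import numpy as np

noon       = np.array([3.193965, 4.247243, 2.386126, -6.639432, -0.574460, 4.815078])
recovered  = np.array([3.090762, 4.140716, 2.459465, -6.792872, -0.651146, 4.868740])
attenuated = np.array([2.914584, 4.240889, 2.149244, -7.117336, -1.494540, 4.955329])

print("after recovering steering mirrors:", recovered - noon)
print("after beam attenuation:           ", attenuated - noon)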

  6893   Fri Jun 29 03:21:32 2012  yuta  Update  General  prep for the vent - others

1. Turned off high voltage power supplies for PZT1/2 (input PZTs) and OMC stage 1/2. They live in 1Y3 rack and AUX_OMC_NORTH rack.

2. Restored all IFO optics alignment to the positions I aligned this afternoon (for the SRM, I didn't align it; it was restored to the value saved on May 26).

3. Centered all the oplevs. They can be used for a reference for alignment change before and after the vent.

I will leave PSL mechanical shutter and green shutters closed just in case.

Some MEDM screenshots below.
MEDMscreenshotswithCOW_20120629.png

  14022   Tue Jun 26 20:59:36 2018  aaron  Update  OMC  prep for vent in a couple weeks

I checked out the elog from the vent in October 2016 when the OMC was removed from the path. In the vent in a couple weeks, we'd like to get the beam going through the OMC again. I wasn't really there for this last vent and don't have a great sense for how things go at the 40m, but this is how I think the procedure for this work should approximately go. The main points are that we'll need to slightly translate and rotate OM5, rotate OM6, replace one mirror that was removed last time, and add some beam dumps. Please let me know what I've got wrong or am missing.

[side note, I want to make some markup on the optics layouts that I see as pdfs elsewhere in the log and wiki, but haven't done it and didn't much want to dig around random drawing software, if there's a canonical way this is done please let me know.]

Steps to return the OMC to the IFO output:

  1. Complete non-Steve portions of the pre-vent checklist (https://wiki-40m.ligo.caltech.edu/vent/checklist)
  2. Steve needs to complete his portions of the checklist (as in https://nodus.ligo.caltech.edu:8081/40m/12557)
  3. Need to lock some things before making changes I think—but I’m not really sure about these, just going from what I can glean from the elogs around the last vent
    1. Lock the IMC at low power
    2. Align the arms to green
    3. Lock the arms
    4. Center op lev spots on QPDs
    5. Is there a separate checklist for these things? Seems this locking process happens every time there is a realignment or we start any work, which makes sense, so I expect it is standardized.
  4. Turn/add optics in the reverse order that Gautam did
    1. Check table leveling first?
    2. Rotate OM5 to send the beam to the partially transmissive mirror that goes to the OMC; currently OM5 is sent directly to OM6. OM5 also likely needs to be translated forward slightly; Gautam tried to maintain 45 deg AOI on OM5/6.
    3. A razor beam dump was also removed, which should be replaced (see attachment 1 on https://nodus.ligo.caltech.edu:8081/40m/12568)
    4. May need to rotate OM6 to extract AS beam again, since it was rotated last time
    5. Replace the mirror just prior to the window on the AP table, mentioned here in attachment 3: https://nodus.ligo.caltech.edu:8081/40m/12566
      1. There is currently a rectangular weight on the table where the mirror was, for leveling
  5. Since Gautam had initially made this change to avoid some backscattered beams and get a little extra power, we may need to add some beam dumps to kill ghosts
    1. This is also mentioned in 12566 linked above, the dumps are for back-reflection off the windows of the OMC
  6. Center beam in new path
  7. Check OMC table leveling
  8. AS beam should be round on the camera, with no evidence of clipping on any optics in the path (especially check downstream of any changes)
  4574   Wed Apr 27 18:14:48 2011  kiwamu  Update  LSC  preparation for DRMI locking : RF status

RF_Work_Status.png

POX11 (see this entry) is now listed as REFL11 (on the very top row).

We will rename POY11 to POP11 for DRMI locking.

The files are on https://nodus.ligo.caltech.edu:30889/svn/trunk/suresh/40m_RF_upgrade/.

  2644   Fri Feb 26 15:32:13 2010  steve  Configuration  VAC  preparation for power outage: vacuum all off

There is a planned power outage tomorrow, Saturday from 7am till midnight.

I vented all annuli and switched to the ALL OFF configuration. The small region of the RGA is still under vacuum.

The vac-rack: gauges, c1vac1 and UPS turned off.

Attachment 1: ventd3.jpg
  13806   Wed May 2 10:03:58 2018  Steve  HowTo  SEI  preparation of load cell measurement at ETMX

Gautam and Steve,

We have calibrated the load cells. The support beam height monitoring is almost ready.

The danger of this measurement is that beam height changes can put shear and torsional forces on these formed (thin-walled) bellows.

They are designed mainly for axial motion.

The plan is to limit the height change to 0.020" max.

0, center oplev at X arm locked

1, check that the jack screws are carrying full loads and set the height indicator dials to zero (meaning: Stacis is bypassed)

2, raise the beam height with the aux leveling wedge by 0.010" on all 3 support points and then raise it another 0.005"

3, replace the leveling wedge with a load cell that is centered and shimmed. Dennis Coyne pointed out that the Stacis foot has to be loaded at the center of the foot and that the formed bellows can shear at their limits.

4, lower the support beam by 0.005" ......now full load on the cells

Note: jack screw heights will not be adjusted or touched.......so the present condition will be recovered

Quote:

We could use similar load cells   to make the actual weight measurement on the Stacis legs. This seems practical in our case.

I have had bad experience with pneumatic Barry isolators.

Our approximate max compression loads are 1500 lbs on 2 feet and 2500 lbs on the 3rd one.

 

 

Attachment 1: loadcellCAL500.pdf
Attachment 2: 3loadcellwcontr.jpg
Attachment 3: loadcellLocation.pdf
Attachment 4: DSC01009.JPG
Attachment 5: jack_screw.jpg
Attachment 6: ETMX_NW_foot_STACIS.pdf
  13809   Thu May 3 09:56:42 2018  Steve  HowTo  SEI  preparation of load cell measurement at ETMX

[ Dennis Coyne's precise answer ]

Differential Height between Isolators

According to a note on the bellows drawing (D990577-x0/A), the design life of the bellows at ± 20 minutes rotational stroke is 10,000 cycles. A 20 minute angular (torsional) rotation of the bellows corresponds to 0.186" differential height change across the 32" span between the chamber support beams (see isolator bracket, D000187-x0/B).

Another consideration regarding the bellows is the lateral shear stress introduced by the vertical translation. The notes on the bellows drawing do not give lateral shear limits. According to MDC's web page for formed bellows in this size range the lateral deflection limit is approximately 10% of the "live length" (aka "active length", or length of the convoluted section). According to the bellows drawing the active length is 3.5", so the maximum allowable lateral deflection should be ~0.35".

Of course when imposing a differential height change both torsional and lateral shear is introduced at the same time. Considering both limits together, the maximum differential height change should be < 0.12".

One final consideration is the initial stress to which the bellows are currently subjected due to a non-centered support beam from tolerances in the assembly and initial installation. Although we do not know this de-centering, we can guess that it may be of the order of ~ 0.04". So the final allowable differential height adjustment from the perspective of bellows stress is < 0.08".   Steve: the accumulated initial stress is unknown. We used to adjust the original jack screws for IFO alignment in the early days of ~1999. This kind of adjustment was stopped when we realized how dangerous it can be. The fact is that there must be an unknown amount of accumulated initial stress. This is my main worry, but I'm confident that a 0.020" change is safe.

So, with regard to bellows stress alone, your procedure to limit the differential height change to <0.020" is safe and prudent.

However, a more stringent consideration is the coplanarity requirement (TMC Stacis 2000 User's Manual, Doc. No. SERV 04-98-1, May 6, 1991, Rev. 1), section 2, "Installation", which stipulates < 0.010"/ft, or < 0.027" differential height across the 32" span between the chamber support beams. Again, your procedure to limit the differential height change to < 0.02" is safe.
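
The small-angle arithmetic behind the two limits quoted above is easy to verify (plain Python, values as quoted):

import numpy as np

span = 32.0                          # in, span between the chamber support beams
print(np.deg2rad(20 / 60.0) * span)  # 20 arcmin of torsion -> ~0.186 in differential height
print(0.010 * span / 12.0)           # 0.010 in/ft coplanarity spec -> ~0.027 in over the span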

Centered Load on the STACIS Isolators

According to the TMC Stacis 2000 User's Manual (Document No. SERV 04-98-1, May 6, 1991, Rev. 1), section 2, "Installation", typical installations (Figure 2-3) are with one payload interface plate which spans the entire set of 3 or 4 STACIS actuators. Our payload interface is unique.

Section 2.3.1, "Installation Steps": "5. Verify that the top of each isolator is fully under the payload/interface plate; this is essential to ensure proper support and leveling. The payload or interface plate should cover the entire top surface of the Isolator or the entire contact area of the optional jack."

section 2.3.2, "Payload/STACIS Interface": "... or if the supporting points do not completely cover the top surface of each Isolator, an interface plate will be needed."

The sketch in Figure 2-2 indicates an optional leveling jack which appears to have a larger contact surface area than the jacks currently installed in the 40m Lab. Of course this is just a non-dimensioned sketch. Are the jacks used by the 40m Lab provided by TMC, or did we (LIGO) choose them? I believe Larry Jones purchased them.

A load centering requirement is not explicitly stated, but I think the stipulation to cover the entire top surface of each actuator is not so much to reduce the contact stress but to ensure a centered load so that the PZT stack does not have a reaction moment.

From one of the photos in the 40m elog entry (specifically jack_screw.jpg), it appears that at least some isolators have the load off center. You should use this measurement of the load as an opportunity to re-center the loads on the Isolators.

In section 2.3.3, "Earthquake Restraints" restraints are suggested to prevent damage from earth tremors. Does the 40m Lab have EQ restraints? Yes, it has

Screw Jack Location

I could not tell where all of the screw jacks will be placed from the sketch included in the 40m elog entry which outlines the proposed procedure.

Load Cell Locations

The sketch indicates that the load cells will be placed on the center of the tops of the Isolators. This is good. However while discussing the procedure with Gautam he said that he was under the impression that the load cell would be placed next to the leveling jack, off-center. This condition may damage the PZT stack. I suggest that the leveling jack be removed and replaced (temporarily) with the load cell, plus any spacer required to make up the height difference. Yes

If you have any further question, just let me know.

    Dennis

 

 

Dennis Coyne
Chief Engineer, LIGO Laboratory
California Institute of Technology
MC 100-36, 1200 E. California Blvd.

 

 

 

  13840   Mon May 14 08:55:40 2018  Dennis Coyne  HowTo  SEI  preparation of load cell measurement at ETMX

Follow-up email from Dennis, 5-13-2018. The last line agrees with the numbers in elog 13821.

Hi Steve & Gautam,

I've made some measurements of the spare (damaged) 40m bellows. Unfortunately neither of our coordinate measurement arms is currently set up (and I couldn't find an appropriate micrometer or caliper), so I could not (yet) directly measure the thickness. However from the other dimensional measurements, a measurement of the axial stiffness (100 lb/in), and calculations (from the Standards of the Expansion Joint Manufacturers Association (EJMA), 6th ed., 1993) I infer a thickness of 0.010 in. This is close to the value of 0.012 in used by MDC Vacuum for bellows of about this size.

I calculate that the maximum allowable torsional rotation is 1.3 mrad. This corresponds to a differential height, across the 32 in span between support points, of 0.041 in.

In addition, using the EJMA formulas I find that one can laterally displace the bellows by 0.50 inch (assuming a simultaneous axial displacement of 0.25 inch, but no torsion), but no more than ~200 times. It might be good to stay well below this limit, say no more than ~0.25 inch (6 mm).

If interested I've uploaded my calculations as a file associated with the bellows drawing at D990577-A/v1.

BTW in some notes that I was given (by either Larry Jones or Alan Weinstein) related to the 40m Stacis units, I see a sketch from Steve dated 3/2000 faxed to TMC which indicates 1200 lbs on each of two Stacis units and 2400 on the third Stacis.

  5089   Tue Aug 2 02:35:23 2011  kiwamu  Update  General  preparation of the vent : status and plan

The vent will take place on Wednesday.

Plan for Tuesday :

  (Morning) Preparation of necessary items for the low power MC (Steve / Jamie)

  (Daytime) Measurement of the MC spot positions (Suresh)

  (Daytime) Arm length measurement (Jenne)

  (Nighttime) Locking of the low power MC (Kiwamu / Volunteers)

 

Plan for Wednesday :

  (Early morning) Final checks on the beam axis, all alignments and green light (Steve / Kiwamu / Volunteers )

  (Morning) Start the vent (Steve)

  (daytime-nighttime) Taking care of the Air/Nitrogen cylinders (Everybody !!)

 

Status of the vent preparation :

 

  (not yet) Low power MC

  (ongoing) Measurement of the arm lengths

  (ongoing) Measurement of the MC spot positions

  (80% done) Estimation of the tolerance of the arm length (#5076)

  (done) Alignment of the Y green beam (#5084)

  (done) Preparation of beam dumps (#5047)

  (done) Health check of shadow sensors and the OSEM damping gain adjustment (#5061)

  (done) Alignment of the incident beam axis (#5073)

  (done) Loss measurement of the arm cavities (#5077)

  5078   Sun Jul 31 22:48:35 2011  kiwamu  Summary  General  preparation of the vent : status update

Status update for the vent preparation:

The punchline is : We can not open the chamber on Monday !

 

##### Task List for the vent preparation #####

  (not yet) Low power MC

  (not yet) Measurement of the arm lengths

  (not yet) Alignment of the Y green beam (#5066)

  (not yet) Measurement of the MC spot positions

  (80% done) Estimation of the tolerance of the arm length (#5076)

  (done) Preparation of beam dumps (#5047)

  (done) Health check of shadow sensors and the OSEM damping gain adjustment (#5061)

  (done) Alignment of the incident beam axis (#5073)

  (done) Loss measurement of the arm cavities (#5077)

Quote from #5048

Quote:

The vent will start from 1 st of August ! 

 
